{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "EPcJ68xtskZC"
      },
      "source": [
        "# 使用自定义数据集训练<a href='https://github.com/clue-ai/ChatYuan'>ChatYuan</a>模型"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "reQIafyis1HP"
      },
      "source": [
        "在这个notebook中我们将使用transformers库结合GPU训练ChatYuan模型，使用的是<a href='https://github.com/CLUEbenchmark/pCLUE'>pCLUE多任务提示学习数据集</a>。\n",
        "\n",
        "它是一个PyTorch实现，从环境准备、数据下载和转化、模型训练、预测到模型效果评估的整个过程。\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "UQc73qTZtfUX",
        "outputId": "e39baf8e-4426-4301-b0d9-e95b37ee41bc"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/\n",
            "Requirement already satisfied: sentencepiece in /usr/local/lib/python3.7/dist-packages (0.1.97)\n",
            "Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/\n",
            "Requirement already satisfied: transformers in /usr/local/lib/python3.7/dist-packages (4.22.2)\n",
            "Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.7/dist-packages (from transformers) (21.3)\n",
            "Requirement already satisfied: filelock in /usr/local/lib/python3.7/dist-packages (from transformers) (3.8.0)\n",
            "Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.7/dist-packages (from transformers) (1.21.6)\n",
            "Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.7/dist-packages (from transformers) (6.0)\n",
            "Requirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from transformers) (2.23.0)\n",
            "Requirement already satisfied: huggingface-hub<1.0,>=0.9.0 in /usr/local/lib/python3.7/dist-packages (from transformers) (0.10.0)\n",
            "Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.7/dist-packages (from transformers) (4.64.1)\n",
            "Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.7/dist-packages (from transformers) (2022.6.2)\n",
            "Requirement already satisfied: tokenizers!=0.11.3,<0.13,>=0.11.1 in /usr/local/lib/python3.7/dist-packages (from transformers) (0.12.1)\n",
            "Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.7/dist-packages (from transformers) (4.12.0)\n",
            "Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.7/dist-packages (from huggingface-hub<1.0,>=0.9.0->transformers) (4.1.1)\n",
            "Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging>=20.0->transformers) (3.0.9)\n",
            "Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata->transformers) (3.8.1)\n",
            "Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (1.24.3)\n",
            "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (2022.6.15)\n",
            "Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (3.0.4)\n",
            "Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->transformers) (2.10)\n",
            "Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/\n",
            "Requirement already satisfied: torch in /usr/local/lib/python3.7/dist-packages (1.12.1+cu113)\n",
            "Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from torch) (4.1.1)\n",
            "Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/\n",
            "Requirement already satisfied: rich[jupyter] in /usr/local/lib/python3.7/dist-packages (12.6.0)\n",
            "Requirement already satisfied: typing-extensions<5.0,>=4.0.0 in /usr/local/lib/python3.7/dist-packages (from rich[jupyter]) (4.1.1)\n",
            "Requirement already satisfied: commonmark<0.10.0,>=0.9.0 in /usr/local/lib/python3.7/dist-packages (from rich[jupyter]) (0.9.1)\n",
            "Requirement already satisfied: pygments<3.0.0,>=2.6.0 in /usr/local/lib/python3.7/dist-packages (from rich[jupyter]) (2.6.1)\n",
            "Requirement already satisfied: ipywidgets<8.0.0,>=7.5.1 in /usr/local/lib/python3.7/dist-packages (from rich[jupyter]) (7.7.1)\n",
            "Requirement already satisfied: widgetsnbextension~=3.6.0 in /usr/local/lib/python3.7/dist-packages (from ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (3.6.1)\n",
            "Requirement already satisfied: ipython-genutils~=0.2.0 in /usr/local/lib/python3.7/dist-packages (from ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (0.2.0)\n",
            "Requirement already satisfied: jupyterlab-widgets>=1.0.0 in /usr/local/lib/python3.7/dist-packages (from ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (3.0.3)\n",
            "Requirement already satisfied: ipython>=4.0.0 in /usr/local/lib/python3.7/dist-packages (from ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (7.9.0)\n",
            "Requirement already satisfied: ipykernel>=4.5.1 in /usr/local/lib/python3.7/dist-packages (from ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (5.3.4)\n",
            "Requirement already satisfied: traitlets>=4.3.1 in /usr/local/lib/python3.7/dist-packages (from ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (5.1.1)\n",
            "Requirement already satisfied: tornado>=4.2 in /usr/local/lib/python3.7/dist-packages (from ipykernel>=4.5.1->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (5.1.1)\n",
            "Requirement already satisfied: jupyter-client in /usr/local/lib/python3.7/dist-packages (from ipykernel>=4.5.1->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (6.1.12)\n",
            "Requirement already satisfied: jedi>=0.10 in /usr/local/lib/python3.7/dist-packages (from ipython>=4.0.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (0.18.1)\n",
            "Requirement already satisfied: decorator in /usr/local/lib/python3.7/dist-packages (from ipython>=4.0.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (4.4.2)\n",
            "Requirement already satisfied: backcall in /usr/local/lib/python3.7/dist-packages (from ipython>=4.0.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (0.2.0)\n",
            "Requirement already satisfied: pexpect in /usr/local/lib/python3.7/dist-packages (from ipython>=4.0.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (4.8.0)\n",
            "Requirement already satisfied: setuptools>=18.5 in /usr/local/lib/python3.7/dist-packages (from ipython>=4.0.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (57.4.0)\n",
            "Requirement already satisfied: prompt-toolkit<2.1.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from ipython>=4.0.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (2.0.10)\n",
            "Requirement already satisfied: pickleshare in /usr/local/lib/python3.7/dist-packages (from ipython>=4.0.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (0.7.5)\n",
            "Requirement already satisfied: parso<0.9.0,>=0.8.0 in /usr/local/lib/python3.7/dist-packages (from jedi>=0.10->ipython>=4.0.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (0.8.3)\n",
            "Requirement already satisfied: wcwidth in /usr/local/lib/python3.7/dist-packages (from prompt-toolkit<2.1.0,>=2.0.0->ipython>=4.0.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (0.2.5)\n",
            "Requirement already satisfied: six>=1.9.0 in /usr/local/lib/python3.7/dist-packages (from prompt-toolkit<2.1.0,>=2.0.0->ipython>=4.0.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (1.15.0)\n",
            "Requirement already satisfied: notebook>=4.4.1 in /usr/local/lib/python3.7/dist-packages (from widgetsnbextension~=3.6.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (5.3.1)\n",
            "Requirement already satisfied: Send2Trash in /usr/local/lib/python3.7/dist-packages (from notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (1.8.0)\n",
            "Requirement already satisfied: terminado>=0.8.1 in /usr/local/lib/python3.7/dist-packages (from notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (0.13.3)\n",
            "Requirement already satisfied: jupyter-core>=4.4.0 in /usr/local/lib/python3.7/dist-packages (from notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (4.11.1)\n",
            "Requirement already satisfied: nbconvert in /usr/local/lib/python3.7/dist-packages (from notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (5.6.1)\n",
            "Requirement already satisfied: nbformat in /usr/local/lib/python3.7/dist-packages (from notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (5.4.0)\n",
            "Requirement already satisfied: jinja2 in /usr/local/lib/python3.7/dist-packages (from notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (2.11.3)\n",
            "Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.7/dist-packages (from jupyter-client->ipykernel>=4.5.1->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (2.8.2)\n",
            "Requirement already satisfied: pyzmq>=13 in /usr/local/lib/python3.7/dist-packages (from jupyter-client->ipykernel>=4.5.1->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (23.2.1)\n",
            "Requirement already satisfied: ptyprocess in /usr/local/lib/python3.7/dist-packages (from terminado>=0.8.1->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (0.7.0)\n",
            "Requirement already satisfied: MarkupSafe>=0.23 in /usr/local/lib/python3.7/dist-packages (from jinja2->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (2.0.1)\n",
            "Requirement already satisfied: pandocfilters>=1.4.1 in /usr/local/lib/python3.7/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (1.5.0)\n",
            "Requirement already satisfied: testpath in /usr/local/lib/python3.7/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (0.6.0)\n",
            "Requirement already satisfied: defusedxml in /usr/local/lib/python3.7/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (0.7.1)\n",
            "Requirement already satisfied: mistune<2,>=0.8.1 in /usr/local/lib/python3.7/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (0.8.4)\n",
            "Requirement already satisfied: entrypoints>=0.2.2 in /usr/local/lib/python3.7/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (0.4)\n",
            "Requirement already satisfied: bleach in /usr/local/lib/python3.7/dist-packages (from nbconvert->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (5.0.1)\n",
            "Requirement already satisfied: fastjsonschema in /usr/local/lib/python3.7/dist-packages (from nbformat->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (2.16.1)\n",
            "Requirement already satisfied: jsonschema>=2.6 in /usr/local/lib/python3.7/dist-packages (from nbformat->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (4.3.3)\n",
            "Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.7/dist-packages (from jsonschema>=2.6->nbformat->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (4.12.0)\n",
            "Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.7/dist-packages (from jsonschema>=2.6->nbformat->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (0.18.1)\n",
            "Requirement already satisfied: attrs>=17.4.0 in /usr/local/lib/python3.7/dist-packages (from jsonschema>=2.6->nbformat->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (22.1.0)\n",
            "Requirement already satisfied: importlib-resources>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from jsonschema>=2.6->nbformat->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (5.9.0)\n",
            "Requirement already satisfied: zipp>=3.1.0 in /usr/local/lib/python3.7/dist-packages (from importlib-resources>=1.4.0->jsonschema>=2.6->nbformat->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (3.8.1)\n",
            "Requirement already satisfied: webencodings in /usr/local/lib/python3.7/dist-packages (from bleach->nbconvert->notebook>=4.4.1->widgetsnbextension~=3.6.0->ipywidgets<8.0.0,>=7.5.1->rich[jupyter]) (0.5.1)\n"
          ]
        }
      ],
      "source": [
        "# 安装需要的包 install libraries\n",
        "!pip install sentencepiece\n",
        "!pip install transformers\n",
        "!pip install torch\n",
        "!pip install rich[jupyter]"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "XAoNiLx3uHsQ",
        "outputId": "2e4a708f-d4ec-4c7f-8001-b7b2e701b13f"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "end2...\n"
          ]
        }
      ],
      "source": [
        "# 引入相应的包 Importing libraries\n",
        "import os,json\n",
        "import numpy as np\n",
        "import pandas as pd\n",
        "import torch\n",
        "import torch.nn.functional as F\n",
        "from torch.utils.data import Dataset, DataLoader, RandomSampler, SequentialSampler\n",
        "import os,time\n",
        "# Importing the T5 modules from huggingface/transformers\n",
        "from transformers import T5Tokenizer, T5ForConditionalGeneration\n",
        "\n",
        "# rich: for a better display on terminal\n",
        "from rich.table import Column, Table\n",
        "from rich import box\n",
        "from rich.console import Console\n",
        "print(\"end2...\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "wd6a0o4aLyeA",
        "outputId": "a12ad3a8-80e1-4a87-cd1d-0eb99ec88a74"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Tue Oct  4 02:25:05 2022       \n",
            "+-----------------------------------------------------------------------------+\n",
            "| NVIDIA-SMI 460.32.03    Driver Version: 460.32.03    CUDA Version: 11.2     |\n",
            "|-------------------------------+----------------------+----------------------+\n",
            "| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |\n",
            "| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |\n",
            "|                               |                      |               MIG M. |\n",
            "|===============================+======================+======================|\n",
            "|   0  Tesla V100-SXM2...  Off  | 00000000:00:04.0 Off |                    0 |\n",
            "| N/A   34C    P0    23W / 300W |      0MiB / 16160MiB |      0%      Default |\n",
            "|                               |                      |                  N/A |\n",
            "+-------------------------------+----------------------+----------------------+\n",
            "                                                                               \n",
            "+-----------------------------------------------------------------------------+\n",
            "| Processes:                                                                  |\n",
            "|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |\n",
            "|        ID   ID                                                   Usage      |\n",
            "|=============================================================================|\n",
            "|  No running processes found                                                 |\n",
            "+-----------------------------------------------------------------------------+\n"
          ]
        }
      ],
      "source": [
        "# 查看GPU的信息\n",
        "!nvidia-smi"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "zBWNYTdhWdv-"
      },
      "source": [
        "# 数据准备和转化"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "o1rLCBtBXhPa",
        "outputId": "009c836f-7a1c-4ff0-97fa-1b122d0974dd"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "--2022-10-04 02:25:05--  https://raw.githubusercontent.com/CLUEbenchmark/pCLUE/main/datasets/pCLUE_train_1.json\n",
            "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.109.133, 185.199.110.133, 185.199.111.133, ...\n",
            "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.109.133|:443... connected.\n",
            "HTTP request sent, awaiting response... 200 OK\n",
            "Length: 100662150 (96M) [text/plain]\n",
            "Saving to: ‘pCLUE_train_1.json.7’\n",
            "\n",
            "pCLUE_train_1.json. 100%[===================>]  96.00M   463MB/s    in 0.2s    \n",
            "\n",
            "2022-10-04 02:25:11 (463 MB/s) - ‘pCLUE_train_1.json.7’ saved [100662150/100662150]\n",
            "\n",
            "--2022-10-04 02:25:11--  https://raw.githubusercontent.com/CLUEbenchmark/pCLUE/main/datasets/pCLUE_train_2.json\n",
            "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.111.133, 185.199.110.133, ...\n",
            "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.\n",
            "HTTP request sent, awaiting response... 200 OK\n",
            "Length: 100254394 (96M) [text/plain]\n",
            "Saving to: ‘pCLUE_train_2.json.3’\n",
            "\n",
            "pCLUE_train_2.json. 100%[===================>]  95.61M   330MB/s    in 0.3s    \n",
            "\n",
            "2022-10-04 02:25:17 (330 MB/s) - ‘pCLUE_train_2.json.3’ saved [100254394/100254394]\n",
            "\n",
            "--2022-10-04 02:25:17--  https://raw.githubusercontent.com/CLUEbenchmark/pCLUE/main/datasets/pCLUE_train_3.json\n",
            "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.111.133, 185.199.108.133, ...\n",
            "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.\n",
            "HTTP request sent, awaiting response... 200 OK\n",
            "Length: 100530480 (96M) [text/plain]\n",
            "Saving to: ‘pCLUE_train_3.json.3’\n",
            "\n",
            "pCLUE_train_3.json. 100%[===================>]  95.87M   435MB/s    in 0.2s    \n",
            "\n",
            "2022-10-04 02:25:22 (435 MB/s) - ‘pCLUE_train_3.json.3’ saved [100530480/100530480]\n",
            "\n",
            "--2022-10-04 02:25:22--  https://raw.githubusercontent.com/CLUEbenchmark/pCLUE/main/datasets/pCLUE_train_4.json\n",
            "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...\n",
            "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.\n",
            "HTTP request sent, awaiting response... 200 OK\n",
            "Length: 100107508 (95M) [text/plain]\n",
            "Saving to: ‘pCLUE_train_4.json.3’\n",
            "\n",
            "pCLUE_train_4.json. 100%[===================>]  95.47M   373MB/s    in 0.3s    \n",
            "\n",
            "2022-10-04 02:25:28 (373 MB/s) - ‘pCLUE_train_4.json.3’ saved [100107508/100107508]\n",
            "\n",
            "--2022-10-04 02:25:28--  https://raw.githubusercontent.com/CLUEbenchmark/pCLUE/main/datasets/pCLUE_train_5.json\n",
            "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...\n",
            "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.\n",
            "HTTP request sent, awaiting response... 200 OK\n",
            "Length: 100587971 (96M) [text/plain]\n",
            "Saving to: ‘pCLUE_train_5.json.3’\n",
            "\n",
            "pCLUE_train_5.json. 100%[===================>]  95.93M   363MB/s    in 0.3s    \n",
            "\n",
            "2022-10-04 02:25:34 (363 MB/s) - ‘pCLUE_train_5.json.3’ saved [100587971/100587971]\n",
            "\n",
            "--2022-10-04 02:25:34--  https://raw.githubusercontent.com/CLUEbenchmark/pCLUE/main/datasets/pCLUE_train_6.json\n",
            "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.110.133, 185.199.111.133, ...\n",
            "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.\n",
            "HTTP request sent, awaiting response... 200 OK\n",
            "Length: 100751713 (96M) [text/plain]\n",
            "Saving to: ‘pCLUE_train_6.json.3’\n",
            "\n",
            "pCLUE_train_6.json. 100%[===================>]  96.08M   442MB/s    in 0.2s    \n",
            "\n",
            "2022-10-04 02:25:40 (442 MB/s) - ‘pCLUE_train_6.json.3’ saved [100751713/100751713]\n",
            "\n",
            "--2022-10-04 02:25:40--  https://raw.githubusercontent.com/CLUEbenchmark/pCLUE/main/datasets/pCLUE_train_7.json\n",
            "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.109.133, 185.199.110.133, 185.199.111.133, ...\n",
            "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.109.133|:443... connected.\n",
            "HTTP request sent, awaiting response... 200 OK\n",
            "Length: 99844693 (95M) [text/plain]\n",
            "Saving to: ‘pCLUE_train_7.json.3’\n",
            "\n",
            "pCLUE_train_7.json. 100%[===================>]  95.22M   407MB/s    in 0.2s    \n",
            "\n",
            "2022-10-04 02:25:45 (407 MB/s) - ‘pCLUE_train_7.json.3’ saved [99844693/99844693]\n",
            "\n",
            "--2022-10-04 02:25:45--  https://raw.githubusercontent.com/CLUEbenchmark/pCLUE/main/datasets/pCLUE_train_8.json\n",
            "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...\n",
            "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.\n",
            "HTTP request sent, awaiting response... 200 OK\n",
            "Length: 100346884 (96M) [text/plain]\n",
            "Saving to: ‘pCLUE_train_8.json.3’\n",
            "\n",
            "pCLUE_train_8.json. 100%[===================>]  95.70M   426MB/s    in 0.2s    \n",
            "\n",
            "2022-10-04 02:25:51 (426 MB/s) - ‘pCLUE_train_8.json.3’ saved [100346884/100346884]\n",
            "\n",
            "--2022-10-04 02:25:51--  https://raw.githubusercontent.com/CLUEbenchmark/pCLUE/main/datasets/pCLUE_train_9.json\n",
            "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...\n",
            "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.\n",
            "HTTP request sent, awaiting response... 200 OK\n",
            "Length: 464979 (454K) [text/plain]\n",
            "Saving to: ‘pCLUE_train_9.json.3’\n",
            "\n",
            "pCLUE_train_9.json. 100%[===================>] 454.08K  --.-KB/s    in 0.01s   \n",
            "\n",
            "2022-10-04 02:25:51 (34.9 MB/s) - ‘pCLUE_train_9.json.3’ saved [464979/464979]\n",
            "\n"
          ]
        }
      ],
      "source": [
        "# 下载pCLUE的部分数据（如，pCLUE_train_1.json）到本地\n",
        "!wget https://raw.githubusercontent.com/CLUEbenchmark/pCLUE/main/datasets/pCLUE_train_1.json\n",
        "!wget https://raw.githubusercontent.com/CLUEbenchmark/pCLUE/main/datasets/pCLUE_train_2.json\n",
        "!wget https://raw.githubusercontent.com/CLUEbenchmark/pCLUE/main/datasets/pCLUE_train_3.json\n",
        "!wget https://raw.githubusercontent.com/CLUEbenchmark/pCLUE/main/datasets/pCLUE_train_4.json\n",
        "!wget https://raw.githubusercontent.com/CLUEbenchmark/pCLUE/main/datasets/pCLUE_train_5.json\n",
        "!wget https://raw.githubusercontent.com/CLUEbenchmark/pCLUE/main/datasets/pCLUE_train_6.json\n",
        "!wget https://raw.githubusercontent.com/CLUEbenchmark/pCLUE/main/datasets/pCLUE_train_7.json\n",
        "!wget https://raw.githubusercontent.com/CLUEbenchmark/pCLUE/main/datasets/pCLUE_train_8.json\n",
        "!wget https://raw.githubusercontent.com/CLUEbenchmark/pCLUE/main/datasets/pCLUE_train_9.json"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "zvqJhbIPVkYv"
      },
      "outputs": [],
      "source": [
        "# 合并多个训练集，得到一个全量的训练集（如果需要全量数据训练；否则以下只使用部分数据进行训练）\n",
        "!rm -rf pCLUE_train.json\n",
        "!cat pCLUE_train_1.json pCLUE_train_2.json pCLUE_train_3.json pCLUE_train_4.json pCLUE_train_5.json pCLUE_train_6.json pCLUE_train_7.json pCLUE_train_8.json pCLUE_train_9.json >> pCLUE_train.json"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "BKcEoUpWyUYN",
        "outputId": "ed7c617d-0269-4a0b-c2fc-c1770361ef66"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "1200705 pCLUE_train.json\n"
          ]
        }
      ],
      "source": [
        "# 查看数据量\n",
        "!wc -l pCLUE_train.json"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "4-2uA_5jP5zk",
        "outputId": "9815adee-2e43-4828-853c-2578b5b39938"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "length of lines: 1200705\n",
            "0 input: 这是关于哪方面的新闻： 故事,文化,娱乐,体育,财经,房产,汽车,教育,科技,军事,旅游,国际,股票,农业,游戏?崔万军合同到期 广州龙狮主教练离职_答案： ;output: 体育\n",
            "1 input: 这是一个完型填空任务。候选的词语有这些：针锋相对，牵肠挂肚，心急如焚，望眼欲穿，不翼而飞，黯然神伤，金石为开，归心似箭，艰苦卓绝，触景伤情。文章内容为：_既然没有了姚明，我们也没有了那么多可以__的东西。不妨放开心思，好好的欣赏一下姚明之外的东西，也许，乐趣就在其中。(嘟嘟)_ 请问：下划线处应该选择哪个词语？_答案： ;output: 牵肠挂肚\n",
            "2 input: 哪个类别最好的描述了这篇新闻？汶川地震10周年丨航拍新北川 楼房拔地起 旧貌换新颜_选项：故事，文化，娱乐，体育，财经，房产，汽车，教育，科技，军事，旅游，国际，股票，农业，游戏_答案： ;output: 国际\n",
            "3 input: “现在买不是很好的时机了”我们这样说有道理吗“现在能以历史最低价买到”？是的,不是,或也许？_答案： ;output: 不是\n",
            "4 input: 假定下面是真的“他想起方才王琦瑶关于指纹的话,就找一块抹布将所有的家什抹了一遍”因此,“他做了亏心事”是必然的,可能的,或不可能？_答案： ;output: 可能的\n",
            "5 input: 哪个类别最好的描述了这个APP应用程序？平台简介有信钱包&mdash;&mdash;您的随身银行，为您提供专业的借钱借款快贷款金融平台。有信钱包为您提供低息借贷产品、征信查询、信用卡办理和理财资讯。平台利用大数据技术，为用户匹配推荐最适合的低费率、放款快的贷款产品，是一款满足个人和中小企业各种现金借贷需求的贷款APP。产品特点1.操作简单仅需身份证，1分钟即可完成申请；2.额度灵活贷款金额100元100万不等，任你借；3.极速贷款30秒审批，3分钟到账；4.超低月息灵活分期，让你贷款无忧；5.征信查询一键查询网贷报告，快速了解您的信用情况；6.安全保障信息安全，息费透明，保护您的隐私安全_选项：银行，社区，电商，支付，经营，卡牌，借贷，驾校，理财，职考，新闻，旅游，交通，魔幻，医疗，影像，动作，工具，体育，小说，运动，相机，工具，快递，教育，股票，菜谱，行车，仙侠，亲子，购物，射击，漫画，小学，同城，成人，求职，电子，艺术，赚钱，约会，经营，兼职，视频，音乐，英语，棋牌，摄影，养生，办公，政务，视频，论坛，彩票，直播，其他，休闲，策略，通讯，买车，违章，地图，民航，电台，语言，搞笑，婚恋，超市，养车，杂志，在线，家政，影视，装修，资讯，社交，餐饮，美颜，挂号，飞行，预定，票务，笔记，买房，外卖，母婴，打车，情侣，日程，租车，博客，百科，绘画，铁路，生活，租房，酒店，保险，问答，收款，竞技，唱歌，技术，减肥，工作，团购，记账，女性，公务，二手，美妆，汽车，行程，免费，教辅，两性，出国，婚庆，民宿_答案： ;output: 借贷\n",
            "6 input: 联合制碱法又称侯氏制碱法，用于在工业上制取纯碱（NaCO），由化学家侯德榜于1939年发明，是世界上广泛采用的制纯碱法。具体过程为：在饱和氨盐水中（氨气，氯化钠都达到饱和的溶液）通入二氧化碳从而发生如下反应：反应中的碳酸氢钠由于溶解度低而析出，可以进一步煅烧分解为碳酸钠，水和二氧化碳，其中二氧化碳可以再次进入反应重复利用。为了获取存留在溶液中的氯化铵，在废液中加入氯化钠，并在30－40℃下向废液中通入氨气，然后降温到10℃以下，由于氯化铵在30℃时的溶解度比氯化钠大，而在10℃下溶解度比氯化钠小，以及同离子效应，使氯化铵从母液析出，其母液又可作为下一次制碱的原料，重复利用。所谓“联合制碱法”中的“联合”，指该法将合成氨工业与制碱工业组合在一起，利用了生产氨时的副产品二氧化碳，革除了用石灰石加热分解来生产的氨碱法，简化了生产设备。此外，联合制碱法也避免了生产氨碱法中用处不大的副产物氯化钙，而用可作化肥的氯化铵来回收，相比于氨碱法更环保。联合制碱法也存在不足。较氨碱法而言，它的用氨量较大，在有些情况下不适用。_问题：联合制碱法存在不足吗？_回答： ;output: 较氨碱法而言，它的用氨量较大，在有些情况下不适用。\n",
            "7 input: 根据文章的意思来回答问题：_段落：我跟你们这些大学生不一样，我必须一边工作一边学习汉语，白天上班，晚上上课，虽然很累，但是觉得很有意思。 _问：他不想学了，因为上着班学汉语太累。 选项：正确，错误_答案： ;output: 错误\n",
            "8 input: “发号施令，有生力量，丧魂失魄，熟视无睹，蠢蠢欲动，指手划脚，言听计从，不闻不问，不以为奇，虎视眈眈” 中，最适合放在段落_ “张豪龙透露，张柏芝在怀上Quintus之前，曾经怀上另一胎，当时谢霆锋忙于拍戏，对张柏芝__。之后张柏芝因独力照顾Lucas，饱受压力，最终保不住腹中胎儿，流产了。张豪龙称：“当时姐姐（张柏芝）哭着打电话给霆锋，谁知霆锋不但没有安慰她，更冷冷抛下一句‘关我屁事’，然后挂线！”” _中的下划线处的是： ;output: 不闻不问\n",
            "9 input: 对话：男：今天的嘉宾是一个在全国范围内家喻户晓的人，她曾经做着特别平凡的工作，但是获得了巨大的声誉，我刚才在后台见到她，还是那么漂亮，她就是李素丽。掌声欢迎李素丽大姐。女：谢谢，大家好。男：听说，你有好多生活中的小窍门，比如大姐的那个皮肤，保养得特别好，听说你有一个窍门，特别逗，每天用醋洗脸，是吧？女：对，白醋，在座的不用，都是年轻人。男：以后，以后需要。女：你们是天生丽质，我今年都46岁了，也就这几年才开始，要不就太显老了。女人都爱美，我一直比较注重我自己的形象，后来我就跟那些美容专家什么的学，每天用洗面奶把脸洗净，完了倒一点温水，倒那么一小盖醋洗，洗完了之后，你涂点儿爽肤水，抹点儿晚霜什么的就没事了。男：您现在还会去坐公共汽车吗？女：我就是靠公共汽车。男：靠公共汽车？女：我上下班是走着去，我每天要走将近一个小时，完了再坐几站车，挺好的。男：你在车上，那个职业病会突然出现吗？就是本能地要帮人。女：你甭说在公交车上，就是我坐电梯、坐地铁什么的，只要看见岁数大的，就想说：您慢点儿，您慢点儿，我这儿扶着呢，您甭着急。已经习惯了。男：可是您现在再坐地铁、坐车什么的方便吗？还是会有很多坐车的人知道你是李素丽。女：对对对，所以我现在上班的打扮，有可能就是牛仔衣，戴个墨镜，把头发一散，就这样。我每天6点多钟就从家出来，一个是锻炼，再一个那会儿人少。男：对，那您从来没赶上过高峰的时候？女：没有。问题：女的曾经是做什么的？选项：售货员,电梯小姐,美容专家,公车售票员_答案： ;output: 公车售票员\n"
          ]
        }
      ],
      "source": [
        "# 数据准备：将json文件转化为csv形式的文件。\n",
        "def convert_json_to_csv(source_file, target_file):\n",
        "    \"\"\"将json文件转化为csv形式的文件。\n",
        "       source_file:输入文件；\n",
        "       target_file：转化后的文件\n",
        "    \"\"\"\n",
        "    lines=open(source_file,'r').readlines()\n",
        "    print(\"length of lines:\",len(lines))\n",
        "    input_list=[]\n",
        "    output_list=[]\n",
        "    answer_choices_list=[]\n",
        "    type_list=[]\n",
        "    for i, line in enumerate(lines):\n",
        "        # {\"input\": \"以下内容为真：“滁县地区专员张友道说:大都架到高处了”那么下面的陈述：“张友道对身边的官员说了话。”是真的,假的,或未知？\\n答案：\", \"target\": \"未知\", \"answer_choices\": [\"真的\", \"假的\", \"未知\"], \"type\": \"nli\"}\n",
        "        # 1)获得字段值\n",
        "        json_string=json.loads(line.strip())\n",
        "        input_=json_string[\"input\"].replace(\"\\n\", \"_\")\n",
        "        output_=json_string[\"target\"]\n",
        "        answer_choices_=json_string.get(\"answer_choices\",[])\n",
        "        type_=json_string[\"type\"]\n",
        "        if i<10:print(i,\"input:\",input_,\";output:\",output_)\n",
        "        # 2)添加到列表中\n",
        "        input_list.append(input_)\n",
        "        output_list.append(output_)\n",
        "        answer_choices_list.append(answer_choices_)\n",
        "        type_list.append(type_)\n",
        "\n",
        "    # 3) build a pandas DataFrame and save it as CSV\n",
        "    df = pd.DataFrame({'input': input_list,\n",
        "                       'target':output_list,\n",
        "                       'answer_choices': answer_choices_list,\n",
        "                       'type': type_list,\n",
        "                       })\n",
        "    df.to_csv(target_file,index=False)\n",
        "\n",
        "# Run the following three lines to do the format conversion if you want to train on the full dataset.\n",
        "# By default, only a subset of the online sample data is used for training.\n",
        "source_file='pCLUE_train.json'\n",
        "target_file='pCLUE_train.csv'\n",
        "convert_json_to_csv(source_file, target_file)"
      ]
    },
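    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "To illustrate the per-line conversion above, here is a minimal sketch (using a hypothetical sample record in the same schema as pCLUE) showing how one JSON line is parsed and how its newlines are replaced so each example stays on a single CSV row:\n",
        "\n",
        "```python\n",
        "import json\n",
        "\n",
        "# Hypothetical pCLUE-style record, in the schema handled by convert_json_to_csv\n",
        "line = '{\"input\": \"前提。\\\\n答案：\", \"target\": \"未知\", \"answer_choices\": [\"真的\", \"假的\", \"未知\"], \"type\": \"nli\"}'\n",
        "record = json.loads(line)\n",
        "input_ = record[\"input\"].replace(\"\\n\", \"_\")  # newline -> '_' keeps the CSV one row per example\n",
        "print(input_, record[\"target\"], record[\"type\"])  # 前提。_答案： 未知 nli\n",
        "```"
      ]
    },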
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "5fOWFNvkuX1R",
        "outputId": "c592e1ef-9a30-4c0e-a076-7f4951914088"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "end...\n"
          ]
        }
      ],
      "source": [
        "# Basic configuration (console logging; GPU device setup)\n",
        "# define a rich console logger\n",
        "console = Console(record=True)\n",
        "\n",
        "# to display dataframe in ASCII format\n",
        "def display_df(df):\n",
        "    \"\"\"display dataframe in ASCII format\"\"\"\n",
        "\n",
        "    console = Console()\n",
        "    table = Table(\n",
        "        Column(\"source_text\", justify=\"center\"),\n",
        "        Column(\"target_text\", justify=\"center\"),\n",
        "        title=\"Sample Data\",\n",
        "        pad_edge=False,\n",
        "        box=box.ASCII,\n",
        "    )\n",
        "\n",
        "    for i, row in enumerate(df.values.tolist()):\n",
        "        table.add_row(row[0], row[1])\n",
        "\n",
        "    # console.print(table)  # uncomment to print the sample table\n",
        "\n",
        "# training logger to log training progress\n",
        "training_logger = Table(\n",
        "    Column(\"Epoch\", justify=\"center\"),\n",
        "    Column(\"Steps\", justify=\"center\"),\n",
        "    Column(\"Loss\", justify=\"center\"),\n",
        "    title=\"Training Status\",\n",
        "    pad_edge=False,\n",
        "    box=box.ASCII,\n",
        ")\n",
        "\n",
        "# Setting up the device for GPU usage\n",
        "from torch import cuda\n",
        "device = 'cuda' if cuda.is_available() else 'cpu'\n",
        "print(\"end...\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "9eRD96N_vF0b"
      },
      "source": [
        "# Custom Dataset Class"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "A9zEqFXgvI0L",
        "outputId": "60a7a740-7377-4709-ebbf-639742d8469d"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "end...\n"
          ]
        }
      ],
      "source": [
        "class YourDataSetClass(Dataset):\n",
        "    \"\"\"\n",
        "    A custom dataset for training; it must contain two fields: an input (e.g. source_text)\n",
        "    and an output (e.g. target_text). It is read by a DataLoader, which feeds batches to the\n",
        "    neural network for fine-tuning the model.\n",
        "\n",
        "    \"\"\"\n",
        "\n",
        "    def __init__(\n",
        "        self, dataframe, tokenizer, source_len, target_len, source_text, target_text\n",
        "    ):\n",
        "        \"\"\"\n",
        "        Initializes a Dataset class\n",
        "\n",
        "        Args:\n",
        "            dataframe (pandas.DataFrame): Input dataframe\n",
        "            tokenizer (transformers.tokenizer): Transformers tokenizer\n",
        "            source_len (int): Max length of source text\n",
        "            target_len (int): Max length of target text\n",
        "            source_text (str): column name of source text\n",
        "            target_text (str): column name of target text\n",
        "        \"\"\"\n",
        "        self.tokenizer = tokenizer\n",
        "        self.data = dataframe\n",
        "        self.source_len = source_len\n",
        "        self.summ_len = target_len\n",
        "        self.target_text = self.data[target_text]\n",
        "        self.source_text = self.data[source_text]\n",
        "\n",
        "    def __len__(self):\n",
        "        \"\"\"returns the length of dataframe\"\"\"\n",
        "\n",
        "        return len(self.target_text)\n",
        "\n",
        "    def __getitem__(self, index):\n",
        "        \"\"\"return the input ids, attention masks and target ids\"\"\"\n",
        "\n",
        "        source_text = str(self.source_text[index])\n",
        "        target_text = str(self.target_text[index])\n",
        "\n",
        "        # cleaning data so as to ensure data is in string type\n",
        "        source_text = \" \".join(source_text.split())\n",
        "        target_text = \" \".join(target_text.split())\n",
        "\n",
        "        source = self.tokenizer.batch_encode_plus(\n",
        "            [source_text],\n",
        "            max_length=self.source_len,\n",
        "            truncation=True,\n",
        "            padding=\"max_length\",\n",
        "            return_tensors=\"pt\",\n",
        "        )\n",
        "        target = self.tokenizer.batch_encode_plus(\n",
        "            [target_text],\n",
        "            max_length=self.summ_len,\n",
        "            truncation=True,\n",
        "            padding=\"max_length\",\n",
        "            return_tensors=\"pt\",\n",
        "        )\n",
        "\n",
        "        source_ids = source[\"input_ids\"].squeeze()\n",
        "        source_mask = source[\"attention_mask\"].squeeze()\n",
        "        target_ids = target[\"input_ids\"].squeeze()\n",
        "        target_mask = target[\"attention_mask\"].squeeze()\n",
        "\n",
        "        return {\n",
        "            \"source_ids\": source_ids.to(dtype=torch.long),\n",
        "            \"source_mask\": source_mask.to(dtype=torch.long),\n",
        "            \"target_ids\": target_ids.to(dtype=torch.long),\n",
        "            \"target_ids_y\": target_ids.to(dtype=torch.long),\n",
        "        }\n",
        "print(\"end...\")"
      ]
    },
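    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "A quick sanity check of `YourDataSetClass` without downloading a real tokenizer: `ToyTokenizer` below is a hypothetical stand-in that mimics `batch_encode_plus` (padding to `max_length`, returning PyTorch tensors), just to show the shapes that `__getitem__` produces:\n",
        "\n",
        "```python\n",
        "import torch\n",
        "import pandas as pd\n",
        "\n",
        "class ToyTokenizer:  # hypothetical stand-in, not a real transformers tokenizer\n",
        "    pad_token_id = 0\n",
        "    def batch_encode_plus(self, texts, max_length=None, **kwargs):\n",
        "        # map each character to a fake id, then truncate and pad to max_length\n",
        "        ids = [[ord(c) % 100 + 1 for c in t][:max_length] for t in texts]\n",
        "        ids = [seq + [self.pad_token_id] * (max_length - len(seq)) for seq in ids]\n",
        "        mask = [[1 if tok != self.pad_token_id else 0 for tok in seq] for seq in ids]\n",
        "        return {\"input_ids\": torch.tensor(ids), \"attention_mask\": torch.tensor(mask)}\n",
        "\n",
        "df = pd.DataFrame({\"input\": [\"你好吗\"], \"target\": [\"很好\"]})\n",
        "ds = YourDataSetClass(df, ToyTokenizer(), 8, 4, \"input\", \"target\")\n",
        "item = ds[0]\n",
        "print(item[\"source_ids\"].shape, item[\"target_ids\"].shape)  # torch.Size([8]) torch.Size([4])\n",
        "```"
      ]
    },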
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "XpPy84YfwOCL"
      },
      "source": [
        "# Training Function"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "XYrZKcX1wR3t",
        "outputId": "2e5113be-ebe2-4ecf-fd41-46e196a5b45f"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "end...\n"
          ]
        }
      ],
      "source": [
        "def train(epoch, tokenizer, model, device, loader, optimizer):\n",
        "\n",
        "    \"\"\"\n",
        "    Training routine, called once per epoch with the parameters passed from the main function.\n",
        "\n",
        "    \"\"\"\n",
        "\n",
        "    model.train()\n",
        "    time1=time.time()\n",
        "    for _, data in enumerate(loader, 0):\n",
        "        y = data[\"target_ids\"].to(device, dtype=torch.long)\n",
        "        y_ids = y[:, :-1].contiguous() # decoder input: the target from the start up to, but excluding, the last token. e.g. \"你好吗？\"\n",
        "        lm_labels = y[:, 1:].clone().detach() # labels: the target from the second token to the end. e.g. \"好吗？<EOS>\"\n",
        "        lm_labels[y[:, 1:] == tokenizer.pad_token_id] = -100 # mask pad tokens so the loss ignores them; for details see: https://github.com/Shivanandroy/T5-Finetuning-PyTorch/issues/3\n",
        "        ids = data[\"source_ids\"].to(device, dtype=torch.long) # input. e.g. \"how are you?\"\n",
        "        mask = data[\"source_mask\"].to(device, dtype=torch.long)\n",
        "\n",
        "        outputs = model(\n",
        "            input_ids=ids,\n",
        "            attention_mask=mask,\n",
        "            decoder_input_ids=y_ids,\n",
        "            labels=lm_labels,\n",
        "        )\n",
        "        loss = outputs[0]\n",
        "        # log every 100 steps\n",
        "        if _ % 100 == 0 and _!=0:\n",
        "            time2=time.time()\n",
        "            print(_,\"epoch:\"+str(epoch)+\"-loss:\"+str(loss)+\";each step's time spent:\"+str(float(time2-time1)/float(_+0.0001)))\n",
        "            # training_logger.add_row(str(epoch), str(_), str(loss))\n",
        "            # console.print(training_logger)\n",
        "\n",
        "        optimizer.zero_grad()\n",
        "        loss.backward()\n",
        "        optimizer.step()\n",
        "print(\"end...\")"
      ]
    },
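    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The teacher-forcing shift inside `train()` can be illustrated on a toy tensor (hypothetical token ids; 0 is assumed to be the pad id): the decoder input drops the last position, the labels drop the first, and padded positions are set to -100 so the cross-entropy loss ignores them:\n",
        "\n",
        "```python\n",
        "import torch\n",
        "\n",
        "pad_id = 0  # assumed pad token id\n",
        "y = torch.tensor([[101, 7, 8, 9, 102, 0, 0]])  # <BOS> tokens <EOS> pad pad (hypothetical ids)\n",
        "y_ids = y[:, :-1].contiguous()        # decoder input: all but the last position\n",
        "lm_labels = y[:, 1:].clone()          # labels: all but the first position\n",
        "lm_labels[y[:, 1:] == pad_id] = -100  # -100 positions are ignored by the loss\n",
        "print(y_ids.tolist())      # [[101, 7, 8, 9, 102, 0]]\n",
        "print(lm_labels.tolist())  # [[7, 8, 9, 102, -100, -100]]\n",
        "```"
      ]
    },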
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "TC5HTcoJwtbd"
      },
      "source": [
        "# Validation Function"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "GQPOEv-rwqdT",
        "outputId": "e2042d74-e832-4a0a-f976-6c9c1aa77a37"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "end...\n"
          ]
        }
      ],
      "source": [
        "def validate(epoch, tokenizer, model, device, loader, max_length):\n",
        "\n",
        "  \"\"\"\n",
        "  Validation routine: runs generation on the validation data and returns the model's\n",
        "  predictions together with the ground-truth labels.\n",
        "\n",
        "  \"\"\"\n",
        "  model.eval()\n",
        "  predictions = []\n",
        "  actuals = []\n",
        "  with torch.no_grad():\n",
        "      for _, data in enumerate(loader, 0):\n",
        "          y = data['target_ids'].to(device, dtype = torch.long)\n",
        "          ids = data['source_ids'].to(device, dtype = torch.long)\n",
        "          mask = data['source_mask'].to(device, dtype = torch.long)\n",
        "\n",
        "          generated_ids = model.generate(\n",
        "              input_ids = ids,\n",
        "              attention_mask = mask,\n",
        "              max_length=max_length,\n",
        "              num_beams=2,\n",
        "              repetition_penalty=2.5,\n",
        "              length_penalty=1.0,\n",
        "              early_stopping=True\n",
        "              )\n",
        "          preds = [tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=True) for g in generated_ids]\n",
        "          target = [tokenizer.decode(t, skip_special_tokens=True, clean_up_tokenization_spaces=True)for t in y]\n",
        "          if _%1000==0:\n",
        "              console.print(f'Completed {_}')\n",
        "\n",
        "          predictions.extend(preds)\n",
        "          actuals.extend(target)\n",
        "  return predictions, actuals\n",
        "print(\"end...\")"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "vh5J0fdVx1sd"
      },
      "source": [
        "# Trainer\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "xZTAz5qkx8TJ",
        "outputId": "6952c23f-3120-4f58-8b46-df5818f722cf"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "end...\n"
          ]
        }
      ],
      "source": [
        "# Trainer: ties together the dataset class and the training/validation routines; it loads the data, trains the model and evaluates it along the way\n",
        "def T5Trainer(\n",
        "    dataframe, source_text, target_text, model_params, output_dir=\"./outputs/\"\n",
        "):\n",
        "    \"\"\"\n",
        "    T5 trainer\n",
        "    \"\"\"\n",
        "    # Set random seeds and deterministic pytorch for reproducibility\n",
        "    torch.manual_seed(model_params[\"SEED\"])  # pytorch random seed\n",
        "    np.random.seed(model_params[\"SEED\"])  # numpy random seed\n",
        "    torch.backends.cudnn.deterministic = True\n",
        "\n",
        "    # logging\n",
        "    console.log(f\"\"\"[Model]: Loading {model_params[\"MODEL\"]}...\\n\"\"\")\n",
        "\n",
        "    # tokenizer for encoding the text\n",
        "    tokenizer = T5Tokenizer.from_pretrained(model_params[\"MODEL\"])\n",
        "\n",
        "    # Defining the model. We use the ChatYuan model, which includes a language-modeling head for generating predictions.\n",
        "    # The model is then moved to the device (GPU/CPU) to make use of the hardware.\n",
        "    model = T5ForConditionalGeneration.from_pretrained(model_params[\"MODEL\"])\n",
        "    model = model.to(device)\n",
        "\n",
        "    # logging\n",
        "    console.log(f\"[Data]: Reading data...\\n\")\n",
        "\n",
        "    # Importing the raw dataset\n",
        "    dataframe = dataframe[[source_text, target_text]]\n",
        "    # display_df(dataframe.head(2))\n",
        "\n",
        "    # Creation of Dataset and Dataloader\n",
        "    # Defining the train size: 94% of the data will be used for training and the rest for validation.\n",
        "    train_size = 0.94\n",
        "    train_dataset = dataframe.sample(frac=train_size, random_state=model_params[\"SEED\"])\n",
        "    val_dataset = dataframe.drop(train_dataset.index).reset_index(drop=True)\n",
        "    train_dataset = train_dataset.reset_index(drop=True)\n",
        "\n",
        "    # log dataset statistics: number of examples and total training steps\n",
        "    console.print(f\"FULL Dataset: {dataframe.shape}\")\n",
        "    console.print(f\"TRAIN Dataset: {train_dataset.shape}\")\n",
        "    console.print(f\"TEST Dataset: {val_dataset.shape}\\n\")\n",
        "    total_train_steps=int((train_dataset.shape[0] * model_params[\"TRAIN_EPOCHS\"])/model_params[\"TRAIN_BATCH_SIZE\"])\n",
        "    console.print(f\"Total Train Steps: {total_train_steps}\\n\")\n",
        "\n",
        "    # Creating the Training and Validation dataset for further creation of Dataloader\n",
        "    training_set = YourDataSetClass(\n",
        "        train_dataset,\n",
        "        tokenizer,\n",
        "        model_params[\"MAX_SOURCE_TEXT_LENGTH\"],\n",
        "        model_params[\"MAX_TARGET_TEXT_LENGTH\"],\n",
        "        source_text,\n",
        "        target_text,\n",
        "    )\n",
        "    val_set = YourDataSetClass(\n",
        "        val_dataset,\n",
        "        tokenizer,\n",
        "        model_params[\"MAX_SOURCE_TEXT_LENGTH\"],\n",
        "        model_params[\"MAX_TARGET_TEXT_LENGTH\"],\n",
        "        source_text,\n",
        "        target_text,\n",
        "    )\n",
        "\n",
        "    # Defining the parameters for creation of dataloaders\n",
        "    train_params = {\n",
        "        \"batch_size\": model_params[\"TRAIN_BATCH_SIZE\"],\n",
        "        \"shuffle\": True,\n",
        "        \"num_workers\": 0,\n",
        "    }\n",
        "\n",
        "    val_params = {\n",
        "        \"batch_size\": model_params[\"VALID_BATCH_SIZE\"],\n",
        "        \"shuffle\": False,\n",
        "        \"num_workers\": 0,\n",
        "    }\n",
        "\n",
        "    # Creation of DataLoaders, used in the training and validation stages of the model.\n",
        "    training_loader = DataLoader(training_set, **train_params)\n",
        "    val_loader = DataLoader(val_set, **val_params)\n",
        "\n",
        "    # Defining the optimizer that will be used to tune the weights of the network in the training session.\n",
        "    optimizer = torch.optim.Adam(\n",
        "        params=model.parameters(), lr=model_params[\"LEARNING_RATE\"]\n",
        "    )\n",
        "\n",
        "    # Training loop\n",
        "    console.log(f\"[Initiating Fine Tuning]...\\n\")\n",
        "\n",
        "    for epoch in range(model_params[\"TRAIN_EPOCHS\"]):\n",
        "        # 1) train for one epoch\n",
        "        train(epoch, tokenizer, model, device, training_loader, optimizer)\n",
        "\n",
        "        # 2) save model for each epoch\n",
        "        console.log(f\"[Saving Model]...\\n\")\n",
        "        path = os.path.join(output_dir, \"model_files\")\n",
        "        model.save_pretrained(path)\n",
        "        tokenizer.save_pretrained(path)\n",
        "\n",
        "        # 3) evaluating test dataset\n",
        "        console.log(f\"[Initiating Validation]...\\n\")\n",
        "        with torch.no_grad():\n",
        "          #for epoch in range(model_params[\"VAL_EPOCHS\"]):\n",
        "          predictions, actuals = validate(epoch, tokenizer, model, device, val_loader, model_params[\"MAX_TARGET_TEXT_LENGTH\"])\n",
        "          final_df = pd.DataFrame({\"Generated Text\": predictions, \"Actual Text\": actuals})\n",
        "          final_df.to_csv(os.path.join(output_dir, \"predictions.csv\"))\n",
        "\n",
        "    console.save_text(os.path.join(output_dir, \"logs.txt\"))\n",
        "\n",
        "    console.log(f\"[Validation Completed.]\\n\")\n",
        "    console.print(\n",
        "        f\"\"\"[Model] Model saved @ {os.path.join(output_dir, \"model_files\")}\\n\"\"\"\n",
        "    )\n",
        "    console.print(\n",
        "        f\"\"\"[Validation] Generation on Validation data saved @ {os.path.join(output_dir,'predictions.csv')}\\n\"\"\"\n",
        "    )\n",
        "    console.print(f\"\"\"[Logs] Logs saved @ {os.path.join(output_dir,'logs.txt')}\\n\"\"\")\n",
        "print(\"end...\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "4fJU8ufEz895",
        "outputId": "ff359120-c787-4640-faf4-fc9d8bdf351e"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "end...\n"
          ]
        }
      ],
      "source": [
        "# Define the model parameters specific to T5\n",
        "model_params = {\n",
        "    \"MODEL\": \"ClueAI/ChatYuan-large-v1\",  # model_type\n",
        "    \"TRAIN_BATCH_SIZE\": 8,  # training batch size\n",
        "    \"VALID_BATCH_SIZE\": 8,  # validation batch size\n",
        "    \"TRAIN_EPOCHS\": 1,  # number of training epochs\n",
        "    \"VAL_EPOCHS\": 1,  # number of validation epochs\n",
        "    \"LEARNING_RATE\": 1e-4,  # learning rate\n",
        "    \"MAX_SOURCE_TEXT_LENGTH\": 512,  # max length of source text\n",
        "    \"MAX_TARGET_TEXT_LENGTH\": 64,  # max length of target text\n",
        "    \"SEED\": 42,  # set seed for reproducibility\n",
        "}\n",
        "print(\"end...\")"
      ]
    },
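    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "With these parameters, the number of optimizer steps per epoch is simply the training-set size divided by the batch size; for the run logged below (11287 training examples after the 94% split) this works out to:\n",
        "\n",
        "```python\n",
        "train_examples = 11287  # from the TRAIN Dataset log below\n",
        "steps = int(train_examples * model_params[\"TRAIN_EPOCHS\"] / model_params[\"TRAIN_BATCH_SIZE\"])\n",
        "print(steps)  # 1410\n",
        "```"
      ]
    },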
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 1000
        },
        "id": "3khlf8dY0Mtp",
        "outputId": "d359f31e-6303-41c8-f93b-b0bd1f59d302"
      },
      "outputs": [
        {
          "metadata": {
            "tags": null
          },
          "name": "stdout",
          "output_type": "stream",
          "text": [
            "df.head:                                                      input target  \\\n",
            "1116350  “不登大雅，一无所有，残杯冷炙，家常茶饭，奇珍异宝，山珍海错，步步为营，名垂青史，绫罗绸缎，...   残杯冷炙   \n",
            "593090   给定“有马是神户附近最有名的一个温泉”因此，它必定是真的“神户附近还有别的温泉。”？是的,不...     是的   \n",
            "823073   以下内容为真：“这个贝宁顿就是早期他就是这个嘶吼,但是后来到了中年其实还是变的,听说是有点流...     假的   \n",
            "79862    对话：男：毕业论文还没写完吗？你不是说你周末一定可以完成吗？女：唉，别提了，本来打算周末写完...    写论文   \n",
            "1084238  “这时候放在床上枕头旁边的手机（候选词）响了，我感到奇怪，因为欠费已被停机两个月，现在它(代...     是的   \n",
            "\n",
            "                                            answer_choices  \\\n",
            "1116350  ['不登大雅', '一无所有', '残杯冷炙', '家常茶饭', '奇珍异宝', '山珍海错...   \n",
            "593090                                  ['是的', '不是', '也许']   \n",
            "823073                                  ['真的', '假的', '未知']   \n",
            "79862                          ['写论文', '逛街', '陪妹妹', '看电影']   \n",
            "1084238                                       ['是的', '不是']   \n",
            "\n",
            "                        type  \n",
            "1116350                  mrc  \n",
            "593090                   nli  \n",
            "823073                   nli  \n",
            "79862                    mrc  \n",
            "1084238  anaphora_resolution  \n",
            "df.shape: (12007, 4)\n"
          ]
        },
        {
          "data": {
            "text/html": [
              "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"color: #7fbfbf; text-decoration-color: #7fbfbf\">[02:28:10] </span><span style=\"font-weight: bold\">[</span>Model<span style=\"font-weight: bold\">]</span>: Loading ClueAI/PromptCLUE<span style=\"color: #808000; text-decoration-color: #808000\">...</span>                                 <span style=\"color: #7f7f7f; text-decoration-color: #7f7f7f\">&lt;ipython-input-21-9441cc757a73&gt;:14</span>\n",
              "<span style=\"color: #7fbfbf; text-decoration-color: #7fbfbf\">           </span>                                                                      <span style=\"color: #7f7f7f; text-decoration-color: #7f7f7f\">                                  </span>\n",
              "</pre>\n"
            ],
            "text/plain": [
              "\u001b[2;36m[02:28:10]\u001b[0m\u001b[2;36m \u001b[0m\u001b[1m[\u001b[0mModel\u001b[1m]\u001b[0m: Loading ClueAI/PromptCLUE\u001b[33m...\u001b[0m                                 \u001b[2m<ipython-input-21-9441cc757a73>\u001b[0m\u001b[2m:\u001b[0m\u001b[2m14\u001b[0m\n",
              "\u001b[2;36m           \u001b[0m                                                                      \u001b[2m                                  \u001b[0m\n"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "text/html": [
              "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"color: #7fbfbf; text-decoration-color: #7fbfbf\">[02:28:16] </span><span style=\"font-weight: bold\">[</span>Data<span style=\"font-weight: bold\">]</span>: Reading data<span style=\"color: #808000; text-decoration-color: #808000\">...</span>                                               <span style=\"color: #7f7f7f; text-decoration-color: #7f7f7f\">&lt;ipython-input-21-9441cc757a73&gt;:25</span>\n",
              "<span style=\"color: #7fbfbf; text-decoration-color: #7fbfbf\">           </span>                                                                      <span style=\"color: #7f7f7f; text-decoration-color: #7f7f7f\">                                  </span>\n",
              "</pre>\n"
            ],
            "text/plain": [
              "\u001b[2;36m[02:28:16]\u001b[0m\u001b[2;36m \u001b[0m\u001b[1m[\u001b[0mData\u001b[1m]\u001b[0m: Reading data\u001b[33m...\u001b[0m                                               \u001b[2m<ipython-input-21-9441cc757a73>\u001b[0m\u001b[2m:\u001b[0m\u001b[2m25\u001b[0m\n",
              "\u001b[2;36m           \u001b[0m                                                                      \u001b[2m                                  \u001b[0m\n"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "text/html": [
              "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">FULL Dataset: <span style=\"font-weight: bold\">(</span><span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">12007</span>, <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">2</span><span style=\"font-weight: bold\">)</span>\n",
              "</pre>\n"
            ],
            "text/plain": [
              "FULL Dataset: \u001b[1m(\u001b[0m\u001b[1;36m12007\u001b[0m, \u001b[1;36m2\u001b[0m\u001b[1m)\u001b[0m\n"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "text/html": [
              "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">TRAIN Dataset: <span style=\"font-weight: bold\">(</span><span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">11287</span>, <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">2</span><span style=\"font-weight: bold\">)</span>\n",
              "</pre>\n"
            ],
            "text/plain": [
              "TRAIN Dataset: \u001b[1m(\u001b[0m\u001b[1;36m11287\u001b[0m, \u001b[1;36m2\u001b[0m\u001b[1m)\u001b[0m\n"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "text/html": [
              "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">TEST Dataset: <span style=\"font-weight: bold\">(</span><span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">720</span>, <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">2</span><span style=\"font-weight: bold\">)</span>\n",
              "\n",
              "</pre>\n"
            ],
            "text/plain": [
              "TEST Dataset: \u001b[1m(\u001b[0m\u001b[1;36m720\u001b[0m, \u001b[1;36m2\u001b[0m\u001b[1m)\u001b[0m\n",
              "\n"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "text/html": [
              "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">Total Train Steps: <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">1410</span>\n",
              "\n",
              "</pre>\n"
            ],
            "text/plain": [
              "Total Train Steps: \u001b[1;36m1410\u001b[0m\n",
              "\n"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "data": {
            "text/html": [
              "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"color: #7fbfbf; text-decoration-color: #7fbfbf\">           </span><span style=\"font-weight: bold\">[</span>Initiating Fine Tuning<span style=\"font-weight: bold\">]</span><span style=\"color: #808000; text-decoration-color: #808000\">...</span>                                           <span style=\"color: #7f7f7f; text-decoration-color: #7f7f7f\">&lt;ipython-input-21-9441cc757a73&gt;:86</span>\n",
              "<span style=\"color: #7fbfbf; text-decoration-color: #7fbfbf\">           </span>                                                                      <span style=\"color: #7f7f7f; text-decoration-color: #7f7f7f\">                                  </span>\n",
              "</pre>\n"
            ],
            "text/plain": [
              "\u001b[2;36m          \u001b[0m\u001b[2;36m \u001b[0m\u001b[1m[\u001b[0mInitiating Fine Tuning\u001b[1m]\u001b[0m\u001b[33m...\u001b[0m                                           \u001b[2m<ipython-input-21-9441cc757a73>\u001b[0m\u001b[2m:\u001b[0m\u001b[2m86\u001b[0m\n",
              "\u001b[2;36m           \u001b[0m                                                                      \u001b[2m                                  \u001b[0m\n"
            ]
          },
          "metadata": {},
          "output_type": "display_data"
        },
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "100 epoch:0-loss:tensor(0.7344, device='cuda:0', grad_fn=<NllLossBackward0>);each step's time spent:0.3988712065250642\n",
            "200 epoch:0-loss:tensor(0.8039, device='cuda:0', grad_fn=<NllLossBackward0>);each step's time spent:0.39218965471236256\n",
            "300 epoch:0-loss:tensor(0.3353, device='cuda:0', grad_fn=<NllLossBackward0>);each step's time spent:0.3900473144867411\n",
            "400 epoch:0-loss:tensor(4.0370, device='cuda:0', grad_fn=<NllLossBackward0>);each step's time spent:0.38897510460139295\n",
            "500 epoch:0-loss:tensor(1.5137, device='cuda:0', grad_fn=<NllLossBackward0>);each step's time spent:0.38859843016147855\n",
            "600 epoch:0-loss:tensor(0.4864, device='cuda:0', grad_fn=<NllLossBackward0>);each step's time spent:0.3879960057189783\n",
            "700 epoch:0-loss:tensor(1.6351, device='cuda:0', grad_fn=<NllLossBackward0>);each step's time spent:0.38756365995543995\n",
            "800 epoch:0-loss:tensor(0.5294, device='cuda:0', grad_fn=<NllLossBackward0>);each step's time spent:0.3872689508103219\n",
            "900 epoch:0-loss:tensor(0.7503, device='cuda:0', grad_fn=<NllLossBackward0>);each step's time spent:0.3870034937537961\n",
            "1000 epoch:0-loss:tensor(0.4559, device='cuda:0', grad_fn=<NllLossBackward0>);each step's time spent:0.3868019844921801\n",
            "1100 epoch:0-loss:tensor(0.7827, device='cuda:0', grad_fn=<NllLossBackward0>);each step's time spent:0.3867466520800801\n",
            "1200 epoch:0-loss:tensor(1.4990, device='cuda:0', grad_fn=<NllLossBackward0>);each step's time spent:0.38666041986601857\n",
            "1300 epoch:0-loss:tensor(0.9935, device='cuda:0', grad_fn=<NllLossBackward0>);each step's time spent:0.3865425155478841\n",
            "1400 epoch:0-loss:tensor(0.6494, device='cuda:0', grad_fn=<NllLossBackward0>);each step's time spent:0.38647140739681773\n"
          ]
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "\u001b[2;36m[02:37:21]\u001b[0m\u001b[2;36m \u001b[0m\u001b[1m[\u001b[0mSaving Model\u001b[1m]\u001b[0m\u001b[33m...\u001b[0m                                                     \u001b[2m<ipython-input-21-9441cc757a73>\u001b[0m\u001b[2m:\u001b[0m\u001b[2m93\u001b[0m\n",
              "\u001b[2;36m           \u001b[0m                                                                      \u001b[2m                                  \u001b[0m\n"
            ],
            "text/html": [
              "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"color: #7fbfbf; text-decoration-color: #7fbfbf\">[02:37:21] </span><span style=\"font-weight: bold\">[</span>Saving Model<span style=\"font-weight: bold\">]</span><span style=\"color: #808000; text-decoration-color: #808000\">...</span>                                                     <span style=\"color: #7f7f7f; text-decoration-color: #7f7f7f\">&lt;ipython-input-21-9441cc757a73&gt;:93</span>\n",
              "<span style=\"color: #7fbfbf; text-decoration-color: #7fbfbf\">           </span>                                                                      <span style=\"color: #7f7f7f; text-decoration-color: #7f7f7f\">                                  </span>\n",
              "</pre>\n"
            ]
          },
          "metadata": {}
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "\u001b[2;36m[02:37:24]\u001b[0m\u001b[2;36m \u001b[0m\u001b[1m[\u001b[0mInitiating Validation\u001b[1m]\u001b[0m\u001b[33m...\u001b[0m                                            \u001b[2m<ipython-input-21-9441cc757a73>\u001b[0m\u001b[2m:\u001b[0m\u001b[2m99\u001b[0m\n",
              "\u001b[2;36m           \u001b[0m                                                                      \u001b[2m                                  \u001b[0m\n"
            ],
            "text/html": [
              "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"color: #7fbfbf; text-decoration-color: #7fbfbf\">[02:37:24] </span><span style=\"font-weight: bold\">[</span>Initiating Validation<span style=\"font-weight: bold\">]</span><span style=\"color: #808000; text-decoration-color: #808000\">...</span>                                            <span style=\"color: #7f7f7f; text-decoration-color: #7f7f7f\">&lt;ipython-input-21-9441cc757a73&gt;:99</span>\n",
              "<span style=\"color: #7fbfbf; text-decoration-color: #7fbfbf\">           </span>                                                                      <span style=\"color: #7f7f7f; text-decoration-color: #7f7f7f\">                                  </span>\n",
              "</pre>\n"
            ]
          },
          "metadata": {}
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "Completed \u001b[1;36m0\u001b[0m\n"
            ],
            "text/html": [
              "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\">Completed <span style=\"color: #008080; text-decoration-color: #008080; font-weight: bold\">0</span>\n",
              "</pre>\n"
            ]
          },
          "metadata": {}
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "\u001b[2;36m[02:38:20]\u001b[0m\u001b[2;36m \u001b[0m\u001b[1m[\u001b[0mValidation Completed.\u001b[1m]\u001b[0m                                              \u001b[2m<ipython-input-21-9441cc757a73>\u001b[0m\u001b[2m:\u001b[0m\u001b[2m108\u001b[0m\n",
              "\u001b[2;36m           \u001b[0m                                                                     \u001b[2m                                   \u001b[0m\n"
            ],
            "text/html": [
              "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"color: #7fbfbf; text-decoration-color: #7fbfbf\">[02:38:20] </span><span style=\"font-weight: bold\">[</span>Validation Completed.<span style=\"font-weight: bold\">]</span>                                              <span style=\"color: #7f7f7f; text-decoration-color: #7f7f7f\">&lt;ipython-input-21-9441cc757a73&gt;:108</span>\n",
              "<span style=\"color: #7fbfbf; text-decoration-color: #7fbfbf\">           </span>                                                                     <span style=\"color: #7f7f7f; text-decoration-color: #7f7f7f\">                                   </span>\n",
              "</pre>\n"
            ]
          },
          "metadata": {}
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "\u001b[1m[\u001b[0mModel\u001b[1m]\u001b[0m Model saved @ outputs/model_files\n",
              "\n"
            ],
            "text/html": [
              "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">[</span>Model<span style=\"font-weight: bold\">]</span> Model saved @ outputs/model_files\n",
              "\n",
              "</pre>\n"
            ]
          },
          "metadata": {}
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "\u001b[1m[\u001b[0mValidation\u001b[1m]\u001b[0m Generation on Validation data saved @ outputs/predictions.csv\n",
              "\n"
            ],
            "text/html": [
              "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">[</span>Validation<span style=\"font-weight: bold\">]</span> Generation on Validation data saved @ outputs/predictions.csv\n",
              "\n",
              "</pre>\n"
            ]
          },
          "metadata": {}
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "\u001b[1m[\u001b[0mLogs\u001b[1m]\u001b[0m Logs saved @ outputs/logs.txt\n",
              "\n"
            ],
            "text/html": [
              "<pre style=\"white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace\"><span style=\"font-weight: bold\">[</span>Logs<span style=\"font-weight: bold\">]</span> Logs saved @ outputs/logs.txt\n",
              "\n",
              "</pre>\n"
            ]
          },
          "metadata": {}
        },
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "end..\n"
          ]
        }
      ],
      "source": [
        "# 训练模型\n",
        "# 使用 pCLUE:1200000+多任务提示学习数据集 的部分数据\n",
        "# dataframe必须有2列:\n",
        "#   - input: 文本输入\n",
        "#   - target: 目标输出\n",
        "df = pd.read_csv('/content/pCLUE_train.csv')  # 数据量：1200k数据。\n",
        "df = df.sample(frac=0.01) # TODO  取消本行代码，如果你需要更多数据训练\n",
        "print(\"df.head:\",df.head(n=5))\n",
        "print(\"df.shape:\",df.shape)\n",
        "# 显存占用说明：如果运行现在显存不足，请使用nvidia-smi查看显存；如果显卡多数被占用了，请重启colab程序\n",
        "T5Trainer(\n",
        "    dataframe=df,\n",
        "    source_text=\"input\",\n",
        "    target_text=\"target\",\n",
        "    model_params=model_params,\n",
        "    output_dir=\"outputs\",\n",
        ")\n",
        "print(\"end..\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Rr1GcVqX7utH",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "bc9171c6-4e35-4a3c-e5f0-cc451dc6fd03"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Tue Oct  4 02:38:58 2022       \n",
            "+-----------------------------------------------------------------------------+\n",
            "| NVIDIA-SMI 460.32.03    Driver Version: 460.32.03    CUDA Version: 11.2     |\n",
            "|-------------------------------+----------------------+----------------------+\n",
            "| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |\n",
            "| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |\n",
            "|                               |                      |               MIG M. |\n",
            "|===============================+======================+======================|\n",
            "|   0  Tesla V100-SXM2...  Off  | 00000000:00:04.0 Off |                    0 |\n",
            "| N/A   38C    P0    50W / 300W |   1203MiB / 16160MiB |      0%      Default |\n",
            "|                               |                      |                  N/A |\n",
            "+-------------------------------+----------------------+----------------------+\n",
            "                                                                               \n",
            "+-----------------------------------------------------------------------------+\n",
            "| Processes:                                                                  |\n",
            "|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |\n",
            "|        ID   ID                                                   Usage      |\n",
            "|=============================================================================|\n",
            "+-----------------------------------------------------------------------------+\n"
          ]
        }
      ],
      "source": [
        "# 查看训练后显存占用情况。如果显存被占用，可以kill掉相关的进程\n",
        "!nvidia-smi\n",
        "# !fuser -v /dev/nvidia*"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "Upa5JYdGnkge"
      },
      "outputs": [],
      "source": [
        "# !nvidia-smi -r\n",
        "# 使用以下命令清除训练中残存的GPU显存缓存\n",
        "torch.cuda.empty_cache()\n",
        "torch.cuda.empty_cache()\n",
        "torch.cuda.empty_cache()\n",
        "torch.cuda.empty_cache()\n",
        "torch.cuda.empty_cache()\n",
        "torch.cuda.empty_cache()\n",
        "torch.cuda.empty_cache()\n",
        "torch.cuda.empty_cache()\n",
        "torch.cuda.empty_cache()\n",
        "torch.cuda.empty_cache()\n",
        "torch.cuda.empty_cache()\n",
        "torch.cuda.empty_cache()\n",
        "torch.cuda.empty_cache()\n",
        "torch.cuda.empty_cache()\n",
        "torch.cuda.empty_cache()\n",
        "torch.cuda.empty_cache()\n",
        "torch.cuda.empty_cache()\n",
        "torch.cuda.empty_cache()\n",
        "torch.cuda.empty_cache()\n",
        "torch.cuda.empty_cache()\n",
        "torch.cuda.empty_cache()\n",
        "torch.cuda.empty_cache()\n",
        "torch.cuda.empty_cache()\n",
        "torch.cuda.empty_cache()\n",
        "torch.cuda.empty_cache()\n",
        "torch.cuda.empty_cache()\n",
        "torch.cuda.empty_cache()\n",
        "torch.cuda.empty_cache()"
      ]
    },
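    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Note that `empty_cache()` only returns *unused* cached blocks to the driver; memory still referenced by live tensors is not freed. To actually release what a model holds, drop the Python references first and collect garbage. A minimal sketch (assumes a `model` variable from an earlier cell):\n",
        "\n",
        "```python\n",
        "import gc\n",
        "import torch\n",
        "\n",
        "# del model               # drop references so the cached tensors become unreachable\n",
        "gc.collect()              # reclaim unreferenced Python objects\n",
        "torch.cuda.empty_cache()  # return now-unused cached blocks to the CUDA driver\n",
        "```"
      ]
    },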
    {
      "cell_type": "code",
      "source": [
        "# 定位调占用显存的进程（后面可以kill掉）\n",
        "!fuser -v /dev/nvidia*"
      ],
      "metadata": {
        "id": "5isEzAO2xV9_"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "MZk8WxTv_Jvj"
      },
      "source": [
        "# 加载训练好的模型做预测"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "TfKDGbCe_NQS",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "8129c44d-cb5b-4674-88f5-1aa228a6ea81"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "end...\n"
          ]
        }
      ],
      "source": [
        "# 加载训练后的模型\n",
        "from transformers import AutoTokenizer, AutoModelForSeq2SeqLM\n",
        "tokenizer = AutoTokenizer.from_pretrained(\"ClueAI/ChatYuan-large\")\n",
        "model_trained = AutoModelForSeq2SeqLM.from_pretrained(\"/content/outputs/model_files/\")\n",
        "print(\"end...\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "JhVbqyB-DuI-",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "0371c9db-2fd1-4d3b-bbf7-40eb665b8a90"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "end...\n"
          ]
        }
      ],
      "source": [
        "# import torch\n",
        "# from transformers import AutoTokenizer\n",
        "# 修改colab笔记本设置为gpu，推理更快\n",
        "device = torch.device('cpu') # cuda\n",
        "model_trained.to(device)\n",
        "def preprocess(text):\n",
        "  return text.replace(\"\\n\", \"_\")\n",
        "def postprocess(text):\n",
        "  return text.replace(\"_\", \"\\n\")\n",
        "\n",
        "def answer_fn(text, sample=False, top_p=0.6):\n",
        "  '''sample：是否抽样。生成任务，可以设置为True;\n",
        "     top_p：0-1之间，生成的内容越多样、\n",
        "  '''\n",
        "  text = preprocess(text)\n",
        "  encoding = tokenizer(text=[text], truncation=True, padding=True, max_length=768, return_tensors=\"pt\").to(device)\n",
        "  if not sample: # 不进行采样\n",
        "    out = model_trained.generate(**encoding, return_dict_in_generate=True, output_scores=False, max_length=128, num_beams=4, length_penalty=0.6)\n",
        "  else: # 采样（生成）\n",
        "    out = model_trained.generate(**encoding, return_dict_in_generate=True, output_scores=False, max_length=128, do_sample=True, top_p=top_p)\n",
        "  out_text = tokenizer.batch_decode(out[\"sequences\"], skip_special_tokens=True)\n",
        "  return postprocess(out_text[0])\n",
        "print(\"end...\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "OOzHqV2RD6xX",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "e228d929-6282-469d-95fe-0753d7576797"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "result2: 国际\n"
          ]
        }
      ],
      "source": [
        "text=\"这是关于哪方面的新闻： 故事,文化,娱乐,体育,财经,房产,汽车,教育,科技,军事,旅游,国际,股票,农业,游戏?如果日本沉没，中国会接收日本难民吗？\"\n",
        "result=answer_fn(text, sample=False, top_p=0.6)\n",
        "print(\"result2:\",result)"
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "#  每次预测耗时情况计算\n",
        "time1=time.time()\n",
        "num_times=100\n",
        "for i in range(num_times):\n",
        "  result=answer_fn(text, sample=False, top_p=0.6)\n",
        "time2=time.time()\n",
        "time_spent=float(time2-time1)/float(num_times)\n",
        "print(\"time spent for single input:\"+str(time_spent))\n"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "D4B3nCfP38sr",
        "outputId": "920b1d4e-d703-44aa-a994-6be25608c34f"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "time spent for single input:0.27129202604293823\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "HBaYpTvFZ7hi"
      },
      "source": [
        "# 评估公开测试集的效果"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "d-K9FWNLjMto",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "966ff7ba-2d1b-47d8-8422-387eb19d6e4a"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/\n",
            "Requirement already satisfied: pylcs in /usr/local/lib/python3.7/dist-packages (0.0.7)\n",
            "Requirement already satisfied: pybind11>=2.2 in /usr/local/lib/python3.7/dist-packages (from pylcs) (2.10.0)\n",
            "Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/\n",
            "Requirement already satisfied: Rouge in /usr/local/lib/python3.7/dist-packages (1.0.1)\n",
            "Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from Rouge) (1.15.0)\n"
          ]
        }
      ],
      "source": [
        "!pip install pylcs\n",
        "!pip install Rouge"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "tfbbAzoLjHfm",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "d6958e40-930c-4f5f-98c3-ad101f4f6d76"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "end...\n"
          ]
        }
      ],
      "source": [
        "# 安装包\n",
        "import json,pylcs\n",
        "from rouge import Rouge\n",
        "import numpy as np\n",
        "print(\"end...\")"
      ]
    },
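    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The evaluation below scores free-form answers with an LCS-based F1: twice the longest-common-subsequence length divided by the summed lengths of the two strings. A quick character-level illustration with pylcs:\n",
        "\n",
        "```python\n",
        "import pylcs\n",
        "\n",
        "a, b = \"abcde\", \"abde\"\n",
        "lcs = pylcs.lcs(a, b)             # longest common subsequence length -> 4 (\"abde\")\n",
        "f1 = 2 * lcs / (len(a) + len(b))  # 8 / 9\n",
        "print(round(f1, 3))               # 0.889\n",
        "```"
      ]
    },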
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "M0z-sxm4DzLf",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "c96bb506-47d3-40ca-96e1-f96a23993223"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "--2022-10-04 02:39:59--  https://raw.githubusercontent.com/CLUEbenchmark/pCLUE/main/datasets/pCLUE_test_public_1.json\n",
            "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.109.133, 185.199.111.133, 185.199.110.133, ...\n",
            "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.109.133|:443... connected.\n",
            "HTTP request sent, awaiting response... 200 OK\n",
            "Length: 61122625 (58M) [text/plain]\n",
            "Saving to: ‘pCLUE_test_public_1.json.1’\n",
            "\n",
            "pCLUE_test_public_1 100%[===================>]  58.29M  --.-KB/s    in 0.1s    \n",
            "\n",
            "2022-10-04 02:40:02 (419 MB/s) - ‘pCLUE_test_public_1.json.1’ saved [61122625/61122625]\n",
            "\n",
            "--2022-10-04 02:40:03--  https://raw.githubusercontent.com/CLUEbenchmark/pCLUE/main/datasets/pCLUE_test_public_2.json\n",
            "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...\n",
            "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.\n",
            "HTTP request sent, awaiting response... 200 OK\n",
            "Length: 52082600 (50M) [text/plain]\n",
            "Saving to: ‘pCLUE_test_public_2.json.1’\n",
            "\n",
            "pCLUE_test_public_2 100%[===================>]  49.67M   320MB/s    in 0.2s    \n",
            "\n",
            "2022-10-04 02:40:06 (320 MB/s) - ‘pCLUE_test_public_2.json.1’ saved [52082600/52082600]\n",
            "\n"
          ]
        }
      ],
      "source": [
        "# 加载公开测试集(test_public.json)\n",
        "!wget https://raw.githubusercontent.com/CLUEbenchmark/pCLUE/main/datasets/pCLUE_test_public_1.json\n",
        "!wget https://raw.githubusercontent.com/CLUEbenchmark/pCLUE/main/datasets/pCLUE_test_public_2.json"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "RuJxKCCNhB5U",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "139905d4-e969-4f11-ebe4-05f7f272dc6c"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "{\"input\": \"哪个类别最好的描述了这篇新闻？五月一定要去一次江南——吃喝玩乐全攻略\\n选项：故事，文化，娱乐，体育，财经，房产，汽车，教育，科技，军事，旅游，国际，股票，农业，游戏\\n答案：\", \"target\": \"旅游\", \"answer_choices\": [\"故事\", \"文化\", \"娱乐\", \"体育\", \"财经\", \"房产\", \"汽车\", \"教育\", \"科技\", \"军事\", \"旅游\", \"国际\", \"股票\", \"农业\", \"游戏\"], \"type\": \"classify\"}\n",
            "{\"input\": \"给定下面的问题。\\n大学生就业形势严峻的根本原因是什么？\\n以及下面的答案：工作岗位不足。 写一段可能发生的对话。\\n答案：\", \"target\": \"男：亲爱的网友们，大家好，欢迎收看我们的节目。又到了大学生毕业的季节，大学生们要走出校园进入社会，开始新的人生里程了。我们今天节目的主题就是大家关心的大学生就业问题。我们有幸请到了李教授。您好，李教授。女：您好。男：李教授，您能不能先给我们介绍一下最近几年大学生就业的基本情况？女：好的，最近几年我们国家各高校不断扩招，毕业生数量也逐年增加。这当然使年轻人有了更多的学习机会，对提高国民素质有非常积极的促进作用，但是同时也带来一些消极影响。因为社会上的工作岗位有限，需求已趋于饱和，大量的毕业生涌入社会，没有足够的岗位提供给他们，这就从根本上造成了今天大学生就业压力大、就业形势严峻的局面。男：面对这样的形势，有没有什么应对或者解决的措施？女：从政府方面来说，要做的工作是帮助学校和用人单位搭建一个互动的平台，制定相应的保障机制，应对大学生毕业后马上失业的状况；从学校方面来说，应该给学生提供相应的就业指导，包括求职准备、求职技巧、求职礼仪等，帮助学生搜集就业信息，对学生进行心理辅导等等；从学生自身来说，应该认清就业形势，正确给自己定位，提高自身的综合能力和素质，积极主动地寻找和把握工作机会。男：其实很多用人单位还是有招聘需要的，有很多岗位是需要人才的，一方面大学生找不到工作，而另一方面企业招不到人。女：没错，不同的岗位对人才的要求是不同的。比如专业，某些岗位就是需要特定专业的人，其他专业的毕业生不能胜任。还有经验，工作经验是非常重要的，比如一些管理岗位，没有工作经验的人是不可能得到这样的职位的。另外一个重要的原因，是毕业生本身的原因，也就是刚才我们提到的对自己的定位问题。大学生总觉得自己应该赚多少钱，应该进什么样的公司，达不到心理预期宁可不就业。有的人甚至不惜花费几年的时间去考研究生，拿到更高的学位。我并不是反对年轻人多学习多读书，只是建议年轻人不要为逃避就业而读书。男：很多大学生毕业后自己创业，对此您有什么看法呢？女：这是解决就业问题的一个途径，正是因为工作岗位相对饱和，那么就要想办法创造岗位，而创业就是自己给自己提供工作岗位，这是值得鼓励和提倡的。国家现在为大学生创业提供了很多的优惠政策，包括贷款利率优惠、管理培训等等。大学生可以充分利用这些优惠政策，发挥自己的聪明才智，在工作中不断学习。\", \"type\": \"mrc\"}\n",
            "{\"input\": \"摘要：采用热重分析法对南宁无烟煤在加入催化剂ZnO、NaClO4、Na2Cr2O7、Fe2Os和MnO2前后的燃烧动力学特性进行研究.结果表明,添加的5种催化剂都具有催化效果,但对煤燃烧动力学特性影响程度有所不同；催化剂Na2Cr2O7能改变燃烧反应机理,提高煤的燃烧速率,更有利于煤的完全燃烧；催化剂能够不同程度地降低煤燃烧的表现活化能,使煤的着火点降低；5种催化剂的催化效果依次为:Na2Cr2O7＞NaClO4＞ZnO＞Fe2O3＞MnO2. \\n关键词：热分析，劣质煤燃烧，水泥窑用煤，催化剂。请问：上面的关键词都是这篇摘要合适的关键词吗？\\n选项：是的，不是\\n答案：\", \"target\": \"是的\", \"answer_choices\": [\"是的\", \"不是\"], \"type\": \"classify\"}\n",
            "{\"input\": \"“那就属于防卫过当”问题：“那属于防卫过当”真的,假的,或未知？\\n答案：\", \"target\": \"真的\", \"answer_choices\": [\"真的\", \"假的\", \"未知\"], \"type\": \"nli\"}\n",
            "{\"input\": \"这是关于哪方面的新闻？全美 “最毒” 河流在南加，整治无效！\\n选项：故事，文化，娱乐，体育，财经，房产，汽车，教育，科技，军事，旅游，国际，股票，农业，游戏\\n答案：\", \"target\": \"国际\", \"answer_choices\": [\"故事\", \"文化\", \"娱乐\", \"体育\", \"财经\", \"房产\", \"汽车\", \"教育\", \"科技\", \"军事\", \"旅游\", \"国际\", \"股票\", \"农业\", \"游戏\"], \"type\": \"classify\"}\n",
            "{\"input\": \"这篇新闻会出现在哪个栏目？为什么在北上广这种一线城市人情味会这么淡？\\n选项：故事，文化，娱乐，体育，财经，房产，汽车，教育，科技，军事，旅游，国际，股票，农业，游戏\\n答案：\", \"target\": \"房产\", \"answer_choices\": [\"故事\", \"文化\", \"娱乐\", \"体育\", \"财经\", \"房产\", \"汽车\", \"教育\", \"科技\", \"军事\", \"旅游\", \"国际\", \"股票\", \"农业\", \"游戏\"], \"type\": \"classify\"}\n",
            "{\"input\": \"阅读短文：\\n 主持人：恩，看来日本公务员的情况和中国是__，接下来我们来连线本台驻印度记者王超，了解一下在这个经常被拿来同中国做比较的国家，公务员的选拔录用有什么特点。记者：印度公务员考试号称是全世界最难的考试，由印度联邦公务员委员会组织，每年一次，凡是年龄在21岁到30岁之间、拥有国家承认的本科学历的人都可以报名，但每人一生最多只能参加4次考试。每年录取的名额大约只有300到600名左右，但报考的人数却常常达到几十万，竞争非常激烈。比如2006年的时候，一共录取500名左右，报考考生却达到35万，录取率只有千分之一点四。 \\n 从候选成语“立竿见影，文房四宝，盘根错节，双管齐下，燃眉之急，杯水车薪，大同小异，如出一辙，千千万万，名不副实”中选出最适合填在下划线处的成语。正确答案是：\", \"target\": \"大同小异\", \"answer_choices\": [\"立竿见影\", \"文房四宝\", \"盘根错节\", \"双管齐下\", \"燃眉之急\", \"杯水车薪\", \"大同小异\", \"如出一辙\", \"千千万万\", \"名不副实\"], \"type\": \"mrc\"}\n",
            "{\"input\": \"什么类别最好的描述了这段话？寻找老战友\\n选项：故事，文化，娱乐，体育，财经，房产，汽车，教育，科技，军事，旅游，国际，股票，农业，游戏\\n答案：\", \"target\": \"军事\", \"answer_choices\": [\"故事\", \"文化\", \"娱乐\", \"体育\", \"财经\", \"房产\", \"汽车\", \"教育\", \"科技\", \"军事\", \"旅游\", \"国际\", \"股票\", \"农业\", \"游戏\"], \"type\": \"classify\"}\n",
            "{\"input\": \"这些关键词“本体论，实践，实践批判，辩证法”代表了这篇论文的摘要：“本体范畴的绝对性决定了它的惟一性，因此它是逻辑自明的，既不需要也不能够被追问。作为本体论预设，“对象化”不具有自身的合法性，它只能作为实践展开的结果，被历史地建构起来。“对象化本体论”不能取代实践本体论成为恰当的本体论形态。“实践”作为“大全”，并不导致“自我封闭”，而是为内在的开放性提供了绝对前提。马克思哲学所特有的现实性和批判性，只能植根于实践的本体论奠基。”。这是正确的吗？\\n选项：是的，不是\\n答案：\", \"target\": \"是的\", \"answer_choices\": [\"是的\", \"不是\"], \"type\": \"classify\"}\n",
            "{\"input\": \"我可以用以下的句子：“我问什么打不开花呗”，来替换这个句子：“打开不了花呗怎么办”，并且它们有相同的意思？。选项：是的，不是。答案：\", \"target\": \"不是\", \"answer_choices\": [\"是的\", \"不是\"], \"type\": \"classify\"}\n",
            "^C\n"
          ]
        }
      ],
      "source": [
        "#!tail -f pCLUE_test_public_2.json"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "a3azYfj9eV-e",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "38e6e2fa-81ce-4600-be8c-78e3c38b8694"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "129556 pCLUE_test_public.json\n"
          ]
        }
      ],
      "source": [
        "# 合并公开测试集\n",
        "!rm -rf pCLUE_test_public.json\n",
        "!cat pCLUE_test_public_1.json pCLUE_test_public_2.json >> pCLUE_test_public.json\n",
        "!wc -l pCLUE_test_public.json"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "06WKM7SiemLM",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "130f20f2-e8b4-45e7-aea8-056e275a8522"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "length of lines: 1000\n",
            "0 input_string: 哪个类别最好的描述了这篇新闻？扣篮王拉文：精彩暴扣表演！炸\n",
            "选项：故事，文化，娱乐，体育，财经，房产，汽车，教育，科技，军事，旅游，国际，股票，农业，游戏\n",
            "答案： ;predict: 体育\n",
            "100 input_string: 哪个类别最好的描述了这篇新闻？新泽西寄宿高中推荐！好位置！好学校！一共就6所，抓紧时间！\n",
            "选项：故事，文化，娱乐，体育，财经，房产，汽车，教育，科技，军事，旅游，国际，股票，农业，游戏\n",
            "答案： ;predict: 教育\n",
            "200 input_string: 下面两个句子语义是“相同”或“不同”？“点开花呗进去显示系统繁忙”，“点花呗 显示系统繁忙”。选项：相同，不同。答案： ;predict: 不同\n",
            "300 input_string: 给定“抓好传染病、地方病、青少年近视防治”是否遵循“已经不会有人再得传染病、地方病和近视了”是的,不是,或也许？\n",
            "答案： ;predict: 是的\n",
            "400 input_string: “哎,她在完全没道理,后来.”问题：“我听到了她所说的东西”真的,假的,或未知？\n",
            "答案： ;predict: 真的\n",
            "500 input_string: 这是一个完型填空任务。候选的词语有这些：旷日持久，顺理成章，遥遥无期，作如是观，白日做梦，拳打脚踢，一模一样，势均力敌，指日可待，不了了之。文章内容为：\n",
            "小凯说他4岁的时候，父母离婚了，他一直跟着母亲过。从小母亲对他要求十分严格，放学后必须回家，很少有机会和别的同学玩耍、沟通，去超市买东西、去操场打篮球、健身，母亲也要陪护在身边。从小学到初中，母亲一直这样呵护自己长大。因为长的白净，性格又腼腆，男孩离他远远的，女孩都愿意和他交朋友。到了技校后，班里的女生王某经常约他出去打篮球。后来王某提出要和他“交朋友”，还要带他到她家里玩儿，他没答应。一天晚上10点多了，其他班级三四个男生把他拖到操场上__，其中一名男生拽着他的衣领说：“王*是我们的姊妹儿，她能看好你，是你的荣幸，如果不答应，我们看见你就揍！”后来他们又打过他三四次，小凯没敢告诉老师，他偷偷告诉了舅舅，让舅舅找人去惩罚那几个“野蛮男生”。舅舅和母亲去找过学校，校方表示将调查落实，后来#idiom578603#。今年6月，小凯再次被打，贺女士把此事反映给了辖区派出所，民警调查时，那几名男生都不承认“动过手”。\n",
            " 请问：下划线处应该选择哪个词语？\n",
            "答案： ;predict: 拳打脚踢\n",
            "600 input_string: 什么类别最好的描述了这段话？《第五人格》中，用什么方式可以获得最新监管者红蝶以及时装？\n",
            "选项：故事，文化，娱乐，体育，财经，房产，汽车，教育，科技，军事，旅游，国际，股票，农业，游戏\n",
            "答案： ;predict: 娱乐\n",
            "700 input_string: 给定下面的问题。\n",
            "女的是什么意思？\n",
            "以及下面的答案：他不会给我面子。 写一段可能发生的对话。\n",
            "答案： ;predict: 女：他不会给我面子。\n",
            "800 input_string: 什么类别最好的描述了这段话？神兵出世，获奖作品，素铁汉剑欣赏\n",
            "选项：故事，文化，娱乐，体育，财经，房产，汽车，教育，科技，军事，旅游，国际，股票，农业，游戏\n",
            "答案： ;predict: 军事\n",
            "900 input_string: \n",
            "段落：有一个觉得自己很聪明、很能干的年轻人，毕业以后一直找不到喜欢的工作，他对社会感到非常失望。有一天，他来到大海边，打算结束自己的生命。正在这个时候，一位老人从附近走过。老人问了他的情况，就从脚下的沙滩上捡起一粒沙子，让年轻人看了看，然后随便地扔在了地上，对年轻人说：“请你把我刚才扔在地上的那粒沙子捡起来。”“这根本不可能!”年轻人说。\n",
            "老人没有说话，从自己的口袋里拿出一颗大大的珍珠，也随便地扔到了地上，然后对年轻人说：“你能不能把这颗珍珠捡起来呢?”“当然可以!”“那你就应该明白是为什么了吧?你应该知道，现在你还不是一颗珍珠，所以你不能要求别人立即承认你。如果要别人承认，那你就要想办法让自己成为一颗珍珠才行。你如果接受不了失败，承受不了别人对你冷淡的态度，就很难取得成功。”  \n",
            "问：年轻人为什么感到失望?  选项：找不到好工作，找不到合适的女朋友，不能完成计划，不能通过考试 \n",
            "答案： ;predict: 找不到好工作\n"
          ]
        }
      ],
      "source": [
        "# 在公开测试集上做预测，并写入到文件\n",
        "def predict_on_test(source_file,target_file,select_top):\n",
        "  lines=open(source_file,'r').readlines()\n",
        "  if select_top!=-1: # select_top==-1 -->全量预测；其他值，则选取top值进行预测\n",
        "    lines=lines[0:select_top]\n",
        "  print(\"length of lines:\",len(lines))\n",
        "  target_object=open(target_file,'w')\n",
        "  for i,line in enumerate(lines):\n",
        "    # print(i,line)\n",
        "    json_string_right=json.loads(line)\n",
        "    input_string=json_string_right[\"input\"]\n",
        "    target_answer=json_string_right[\"target\"]\n",
        "    type=json_string_right[\"type\"]\n",
        "\n",
        "    predict_answer=answer_fn(input_string)\n",
        "    json_string_predict={\"target\":predict_answer.strip(),\"type\":type}\n",
        "    json_string_predict=json.dumps(json_string_predict,ensure_ascii=False)\n",
        "    target_object.write(json_string_predict+\"\\n\")\n",
        "    if i%100==0:\n",
        "      print(i,\"input_string:\",input_string,\";predict:\",predict_answer)\n",
        "\n",
        "select_top=1000 # TODO 改变select_top的值，使得用一个大的数量，或全量数据\n",
        "source_file='pCLUE_test_public.json'\n",
        "target_file='pCLUE_test_public_predict.json'\n",
        "predict_on_test(source_file,target_file,select_top)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "xa7_UwDsinhF",
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "outputId": "f07e5468-11a7-4148-89af-dc59c89df5b7"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "0 target_answer: 电竞 ;predict_answer: 体育 length of predict_answer: 2\n",
            "0 target_answer: 电竞 ;predict_answer: 体育\n",
            "1 target_answer: 休闲益智 ;predict_answer: 休闲益智\n",
            "2 target_answer: 孤立无援 ;predict_answer: 孤立无援\n",
            "3 target_answer: 军事 ;predict_answer: 教育\n",
            "4 target_answer: 约会社交 ;predict_answer: 视频\n",
            "5 target_answer: 怕丢脸 ;predict_answer: 奖励他\n",
            "6 target_answer: 是的 ;predict_answer: 是的\n",
            "7 target_answer: 也许 ;predict_answer: 是的\n",
            "8 target_answer: 工具 ;predict_answer: 办公\n",
            "9 target_answer: 足三两是哪个品牌的招牌食品之一？ ;predict_answer: 麦当劳的餐牌上足-{}-三两及double足-{}-三两都会以小字体加上「烹调前」标签,以符合香港海关《商品说明条例》的规定。\n",
            "100 target_answer: 教育 ;predict_answer: 教育 length of predict_answer: 2\n",
            "200 target_answer: 相同 ;predict_answer: 不同 length of predict_answer: 2\n",
            "300 target_answer: 不是 ;predict_answer: 是的 length of predict_answer: 2\n",
            "400 target_answer: 未知 ;predict_answer: 真的 length of predict_answer: 2\n",
            "500 target_answer: 拳打脚踢 ;predict_answer: 拳打脚踢 length of predict_answer: 4\n",
            "600 target_answer: 电竞 ;predict_answer: 娱乐 length of predict_answer: 2\n",
            "700 target_answer: 男：这么个小单生意,你去找他就行了。女：你当我是谁呀？他根本就不买我的账。 ;predict_answer: 女：他不会给我面子。 length of predict_answer: 10\n",
            "800 target_answer: 文化 ;predict_answer: 军事 length of predict_answer: 2\n",
            "900 target_answer: 找不到好工作 ;predict_answer: 找不到好工作 length of predict_answer: 6\n",
            "result: {'score': 0.3886296911019823, 'classify_score': 0.55, 'nli_score': 0.28342245989304815, 'generate_score': 0.30497830382404156, 'mrc_em_score': 0.364741641337386, 'mrc_f1_score': 0.46749436004429273}\n"
          ]
        }
      ],
      "source": [
        "# 使用评估脚本进行评估\n",
        "\"\"\"\n",
        "脚本见：https://github.com/CLUEbenchmark/pCLUE/blob/main/evaluate_pclue.py\n",
        "计算pCLUE任务总分，及子分数\n",
        "\"\"\"\n",
        "def f1_sim(text_a, text_b):\n",
        "    \"\"\"F1相似度\n",
        "    说明：算出两个文本的最长公共子序列长度，然后乘2并处以两者\n",
        "    长度之和。推荐用pylcs算，速度较快。\n",
        "    \"\"\"\n",
        "    if not text_a and not text_b:\n",
        "        return 0.\n",
        "    else:\n",
        "        lcs = pylcs.lcs(text_a, text_b)\n",
        "        return 2. * lcs / (len(text_a) + len(text_b))\n",
        "\n",
        "def rouge_l_zh(target, pred):\n",
        "    \"\"\"计算Rouge-l得分，Rouge-l指标常用于评估自动文本摘要及翻译任务\n",
        "    target: 真实标签\n",
        "    pred: 预测标签\"\"\"\n",
        "    if not(isinstance(target, str) or isinstance(pred, str)):\n",
        "        logger.info(\"target或pred为非字符串！请检查!\")\n",
        "        return\n",
        "    else:\n",
        "        rouge = Rouge()\n",
        "        scores = rouge.get_scores(\" \".join(list(pred)), \" \".join(list(target)))\n",
        "        score = scores[0][\"rouge-l\"]\n",
        "        return score[\"f\"]\n",
        "\n",
        "def normalize(text):\n",
        "    \"\"\"简单的文本标准化\n",
        "    \"\"\"\n",
        "    return ' '.join(text.lower().split())\n",
        "\n",
        "def evaluate_pclue_fn(predict_file, target_file, select_top):\n",
        "    \"\"\"\n",
        "    Compute the pCLUE scores.\n",
        "    :param predict_file: file with the predictions\n",
        "    :param target_file: file with the ground-truth labels\n",
        "    :param select_top: evaluate only the first select_top lines\n",
        "    :return: a dict with the overall score plus the sub-scores (mrc, generate, classify, nli)\n",
        "    \"\"\"\n",
        "    with open(predict_file, 'r') as f:\n",
        "        predict_lines = f.readlines()\n",
        "    with open(target_file, 'r') as f:\n",
        "        target_lines = f.readlines()\n",
        "\n",
        "    predict_lines = predict_lines[0:select_top]\n",
        "    target_lines = target_lines[0:select_top]\n",
        "    # 1. Collect per-example results\n",
        "    classify_list = []\n",
        "    mrc_list = []\n",
        "    generate_list = []\n",
        "    nli_list = []\n",
        "    for i, target_line in enumerate(target_lines):\n",
        "        # e.g. target_line = {\"target\": \"不同\"}\n",
        "        predict_line = predict_lines[i]\n",
        "        # Replace full-width commas so the line can be parsed as JSON\n",
        "        target_answer = json.loads(target_line.replace(\"，\", \",\"))[\"target\"]  # ground-truth label\n",
        "        if isinstance(target_answer, list):  # join lists into a string, e.g. for keyword generation\n",
        "            target_answer = \"，\".join(target_answer)\n",
        "        target_answer = normalize(target_answer)\n",
        "        predict_answer = json.loads(predict_line)[\"target\"]  # predicted label\n",
        "        predict_answer = normalize(predict_answer)\n",
        "        if len(predict_answer) == 0:\n",
        "            predict_answer = \"无答案\"\n",
        "        if i % 100 == 0:\n",
        "            print(i, \"target_answer:\", target_answer, \";predict_answer:\", predict_answer, \"length of predict_answer:\", len(predict_answer))\n",
        "\n",
        "        task_type = json.loads(target_line.replace(\"，\", \",\"))[\"type\"]\n",
        "        if task_type == 'classify' or task_type == 'anaphora_resolution':  # classification\n",
        "            classify_list.append(target_answer == predict_answer)\n",
        "        elif task_type == 'mrc':  # machine reading comprehension\n",
        "            em = 1 if target_answer == predict_answer else 0\n",
        "            f1 = f1_sim(predict_answer, target_answer)\n",
        "            mrc_list.append((em, f1))\n",
        "        elif task_type == 'generate':  # generation\n",
        "            rouge_l = rouge_l_zh(target_answer, predict_answer)\n",
        "            generate_list.append(rouge_l)\n",
        "        elif task_type == 'nli':  # natural language inference\n",
        "            nli_list.append(target_answer == predict_answer)\n",
        "        else:\n",
        "            print(\"error...predict_line:\", predict_line, \";target_line:\", target_line)\n",
        "            break  # stop on an unknown task type\n",
        "        if i < 10:\n",
        "            print(i, 'target_answer:', target_answer, \";predict_answer:\", predict_answer)  # show a few examples\n",
        "\n",
        "    # 2. Aggregate the final scores\n",
        "    classify_score = np.average(classify_list)\n",
        "    nli_score = np.average(nli_list)\n",
        "    generate_score = np.average(generate_list)\n",
        "    mrc_em_score = np.average([x[0] for x in mrc_list])\n",
        "    mrc_f1_score = np.average([x[1] for x in mrc_list])\n",
        "    mrc_score = np.average([mrc_em_score, mrc_f1_score])\n",
        "    # Overall score: unweighted average over the four task groups\n",
        "    score = np.average([classify_score, nli_score, generate_score, mrc_score])\n",
        "    # Collect the scores\n",
        "    result_dict = {\"score\": score, \"classify_score\": classify_score, \"nli_score\": nli_score, \"generate_score\": generate_score,\n",
        "                   \"mrc_em_score\": mrc_em_score, \"mrc_f1_score\": mrc_f1_score}\n",
        "    return result_dict\n",
        "\n",
        "# The prediction file and the ground-truth file\n",
        "target_file = 'pCLUE_test_public.json'\n",
        "predict_file = 'pCLUE_test_public_predict.json'\n",
        "select_top = 1000  # number of lines to evaluate; should match the number predicted earlier\n",
        "result = evaluate_pclue_fn(predict_file, target_file, select_top)\n",
        "print(\"result:\", result)"
      ]
    },
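    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "If the compiled `pylcs` package is unavailable, the `f1_sim` metric above can be reproduced with a small pure-Python longest-common-subsequence routine. This is an illustrative sketch only; the helper `lcs_len` is our own stand-in, not part of the evaluation script:\n",
        "\n",
        "```python\n",
        "def lcs_len(a, b):\n",
        "    # Dynamic-programming longest-common-subsequence length\n",
        "    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]\n",
        "    for i, ca in enumerate(a):\n",
        "        for j, cb in enumerate(b):\n",
        "            dp[i + 1][j + 1] = dp[i][j] + 1 if ca == cb else max(dp[i][j + 1], dp[i + 1][j])\n",
        "    return dp[len(a)][len(b)]\n",
        "\n",
        "def f1_sim(text_a, text_b):\n",
        "    # 2 * LCS / (len_a + len_b), as in the evaluation script\n",
        "    if not text_a and not text_b:\n",
        "        return 0.\n",
        "    return 2. * lcs_len(text_a, text_b) / (len(text_a) + len(text_b))\n",
        "\n",
        "print(f1_sim(\"花呗记录\", \"花呗账单\"))  # LCS is \"花呗\" (length 2), so 2*2/(4+4) = 0.5\n",
        "```\n"
      ]
    },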
    {
      "cell_type": "markdown",
      "source": [
        "# Generate predictions on the test set for submission"
      ],
      "metadata": {
        "id": "AsCDQCWVDQKC"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "# Download the two test-set shards and merge them into pCLUE_test.json\n",
        "!wget https://raw.githubusercontent.com/CLUEbenchmark/pCLUE/main/datasets/pCLUE_test_1.json\n",
        "!wget https://raw.githubusercontent.com/CLUEbenchmark/pCLUE/main/datasets/pCLUE_test_2.json\n",
        "!rm -rf pCLUE_test.json\n",
        "!cat pCLUE_test_1.json pCLUE_test_2.json >> pCLUE_test.json\n",
        "!wc -l pCLUE_test.json"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "59m8OdndDzDu",
        "outputId": "7379a3e5-81e9-435b-d32e-c17a050a29ce"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "--2022-10-04 03:36:36--  https://raw.githubusercontent.com/CLUEbenchmark/pCLUE/main/datasets/pCLUE_test_1.json\n",
            "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ...\n",
            "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.\n",
            "HTTP request sent, awaiting response... 200 OK\n",
            "Length: 89403305 (85M) [text/plain]\n",
            "Saving to: ‘pCLUE_test_1.json’\n",
            "\n",
            "pCLUE_test_1.json   100%[===================>]  85.26M   445MB/s    in 0.2s    \n",
            "\n",
            "2022-10-04 03:36:40 (445 MB/s) - ‘pCLUE_test_1.json’ saved [89403305/89403305]\n",
            "\n",
            "--2022-10-04 03:36:40--  https://raw.githubusercontent.com/CLUEbenchmark/pCLUE/main/datasets/pCLUE_test_2.json\n",
            "Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.111.133, 185.199.109.133, 185.199.108.133, ...\n",
            "Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.111.133|:443... connected.\n",
            "HTTP request sent, awaiting response... 200 OK\n",
            "Length: 97299757 (93M) [text/plain]\n",
            "Saving to: ‘pCLUE_test_2.json’\n",
            "\n",
            "pCLUE_test_2.json   100%[===================>]  92.79M   486MB/s    in 0.2s    \n",
            "\n",
            "2022-10-04 03:36:45 (486 MB/s) - ‘pCLUE_test_2.json’ saved [97299757/97299757]\n",
            "\n",
            "250461 pCLUE_test.json\n"
          ]
        }
      ]
    },
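    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "Beyond checking the line count with `wc -l`, it can be useful to sanity-check the merged file's task-type distribution. The sketch below runs on a few synthetic lines so it is self-contained; each real line in the jsonl file carries a `type` field, as used by the prediction and evaluation code in this notebook:\n",
        "\n",
        "```python\n",
        "import json\n",
        "from collections import Counter\n",
        "\n",
        "# Synthetic stand-ins for lines of pCLUE_test.json\n",
        "sample_lines = [\n",
        "    '{\"target\": \"不同\", \"type\": \"classify\"}',\n",
        "    '{\"target\": \"月球和行星\", \"type\": \"mrc\"}',\n",
        "    '{\"target\": \"是的\", \"type\": \"nli\"}',\n",
        "    '{\"target\": \"不同\", \"type\": \"classify\"}',\n",
        "]\n",
        "type_counts = Counter(json.loads(line)[\"type\"] for line in sample_lines)\n",
        "print(type_counts)  # Counter({'classify': 2, 'mrc': 1, 'nli': 1})\n",
        "```\n",
        "\n",
        "On the real file, replace `sample_lines` with `open('pCLUE_test.json')`.\n"
      ]
    },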
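    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "The shape of the `predict_on_test` loop used below can be sketched as follows. The field names (`input`, `target`, `type`) follow the pCLUE jsonl format, and `answer_fn` (the generation helper defined earlier in this notebook) is replaced by a trivial stub so the sketch is self-contained:\n",
        "\n",
        "```python\n",
        "import json\n",
        "\n",
        "def answer_fn(input_string):\n",
        "    return \"无答案\"  # stub; the real helper calls model_trained.generate(...)\n",
        "\n",
        "def predict_on_test(source_file, target_file, select_top):\n",
        "    with open(source_file) as f:\n",
        "        lines = f.readlines()\n",
        "    if select_top > 0:  # select_top=-1 means predict on every line\n",
        "        lines = lines[:select_top]\n",
        "    with open(target_file, \"w\") as out:\n",
        "        for line in lines:\n",
        "            example = json.loads(line)\n",
        "            predict_answer = answer_fn(example[\"input\"])\n",
        "            record = {\"target\": predict_answer.strip(), \"type\": example[\"type\"]}\n",
        "            out.write(json.dumps(record, ensure_ascii=False) + \"\\n\")\n",
        "```\n"
      ]
    },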
    {
      "cell_type": "code",
      "source": [
        "# Run prediction on the test set and write the prediction file\n",
        "source_file = 'pCLUE_test.json'\n",
        "target_file = 'pCLUE_predict.json'\n",
        "select_top = -1  # -1 means predict on the full test set\n",
        "predict_on_test(source_file, target_file, select_top)"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 1000
        },
        "id": "BAo6_3l2Dp2e",
        "outputId": "37073993-c16b-4abf-ee96-102974e1de35"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "length of lines: 250461\n",
            "0 input_string: 下面两个句子语义是“相同”或“不同”？“我的蚂蚁借呗和花呗用不了啦”，“花呗借呗现在还不能用” 选项：相同，不同。答案： ;predict: 不同\n",
            "100 input_string: 阅读短文：\n",
            "柳宗悦也是如此这般对比美术与工艺的，他说:如果只有美是美之通途，这样的希望就过于渺茫，因为美术是少数天才所胜任的工作。给予__以美之通途，只有工艺之道。即使没有文化的人，与神的邂逅机缘也是相同的。  \n",
            " 从候选成语“凡夫俗子，举目无亲，同心同德，拉帮结派，数一数二，情不自禁，大惊失色，盛气凌人，微不足道，明哲保身”中选出最适合填在下划线处的成语。正确答案是： ;predict: 举目无亲\n",
            "200 input_string: 我想知道下面两句话的意思是否相同。“美团为什么不能使用花呗付款“，”为什么在美团外卖上用不了花呗”是相同的吗？选项：相同，不同。答案： ;predict: 不同\n",
            "300 input_string: \n",
            "段落：生日那天，保罗的哥哥送给他一辆新车。当保罗离开办公室时，一个男孩儿看着那辆新车，很羡慕地问：“先生，这是您的车？”保罗点点头：“这是我哥哥送给我的生日礼物。”男孩儿吃惊地说：“你是说这是你哥哥送的礼物？„„我也好希望能„„”保罗以为他是希望能有个送他车子的哥哥，但那男孩儿却说，“我希望自己能成为送车给弟弟的哥哥。”保罗对他说：“你要不要坐我的车去兜风？”男孩儿高兴地坐上车，车子开了一会儿以后，那男孩儿小心地说：“先生，你能不能把车开到我家门前？”保罗心想那男孩儿一定是想要告诉他认识的人，他坐了一辆新车子回家。没想到保罗这次又猜错了。男孩儿下了车，过了一会儿保罗听到他回来的声音，但是动作有些缓慢。原来他扶着脚有毛病的弟弟出来了，他扶着弟弟在台阶上坐下，指着那辆新车。只听那男孩儿告诉弟弟：“你看，这就是保罗的哥哥送给他的新车。将来我也会送给你一辆这样的车，到那时你就可以不用每天都呆在家里了。”那个生日，保罗才真正体会到“给予比接受更幸福”的道理。 \n",
            "问：这个故事主要告诉我们什么道理 选项：应该关心别人，“给”更让人幸福，保罗是一个好人，应该送给亲人礼物。答案： ;predict: “给”更让人幸福\n",
            "400 input_string: 天蝎座ν (ν Sco / ν Scorpii，键闭)是在天蝎座的一个恒星系统，它传统的名称是Jabbah，在阿拉伯语的原意是“额头”。它至少是由靠得很近，相距41弧秒的两群恒星组成的五重星。较亮的一群称为天蝎座νA和天蝎座νB，分离角为1.3弧秒，属于B2的次巨星。较暗的一对是天蝎座νC和天蝎座νD，分离角为2.4弧秒，光谱类型分别是B8和B9的主序星。天蝎座νA本身是一颗半接触的光谱联星，较暗的伴星是B型，距离大约只有0.0003弧杪。由于靠近黄道，天蝎座ν会被月球和行星遮蔽。在1821年12月14日曾被水星遮蔽，但下次遮蔽要等到2031年12月2日；1852年12月27日金星也曾遮蔽过天蝎座ν，但下次要等到2095年的12月30日；1808年7月29日天王星也曾经遮蔽过它。IC 4592是反射天蝎座ν光辉的星云。通常反射星云是由看上去黑暗但极细的尘土组成，当反射附近高温恒星的光时会呈现蓝色。\n",
            "问题：由于靠近黄道，天蝎座ν会被什么天体遮蔽？\n",
            "答案： ;predict: 月球和行星\n",
            "500 input_string: 摘要：当下我国科技法制定中存在三种偏差,即注重制定主体的精英化而忽视了大众主体的话语权、追求科技的进步而导致了科技伦理价值的旁落,以及向往科技的经济效益而冷漠了科技的生态效益.偏差缘由唯生产力论、科技价值中立说与功利主义科技观的三重影响.矫治我国科技法制定中的偏差必须注意三个基本理路,即引入可持续发展的科技立法原则、形成公众参与的科技立法机制与构筑优良的科技进步评价制度.\n",
            "关键词：立法机制，立法原则，科技法 。请问：上面的关键词都是这篇摘要合适的关键词吗？选项：是的，不是。答案是：\n",
            "关键词： ;predict: 是的\n",
            "600 input_string: 段落描述：张恪与晚晴（候选词）在植物园前下了车，让傅俊先送周游回市里，公司下班前来接他们（代词）就可以了。问题：在上述的描述中，代词“他们”指代的是“张恪与晚晴”吗？选项：是的，不是。答案： ;predict: 不是\n",
            "700 input_string: 赤峰市智慧教育云是内蒙古一款带给赤峰\n",
            "这个App应用程序的描述会出现在哪个栏目？\n",
            "选项:银行,社区,电商,支付,经营,卡牌,借贷,驾校,理财,职考,新闻,旅游,交通,魔幻,医疗,影像,动作,工具,体育,小说,运动,相机,工具,快递,教育,股票,菜谱,行车,仙侠,亲子,购物,射击,漫画,小学,同城,成人,求职,电子,艺术,赚钱,约会,经营,兼职,视频,音乐,英语,棋牌,摄影,养生,办公,政务,视频,论坛,彩票,直播,其他,休闲,策略,通讯,买车,违章,地图,民航,电台,语言,搞笑,婚恋,超市,养车,杂志,在线,家政,影视,装修,资讯,社交,餐饮,美颜,挂号,飞行,预定,票务,笔记,买房,外卖,母婴,打车,情侣,日程,租车,博客,百科,绘画,铁路,生活,租房,酒店,保险,问答,收款,竞技,唱歌,技术,减肥,工作,团购,记账,女性,公务,二手,美妆,汽车,行程,免费,教辅,两性,出国,婚庆,民宿\n",
            "答案： ;predict: 中小学\n",
            "800 input_string: 什么类别最好的描述了这段话？网宿科技(300017.SZ)拟斥1079.36万元购买CDN-VIDEO 18%股权\n",
            "选项：故事，文化，娱乐，体育，财经，房产，汽车，教育，科技，军事，旅游，国际，股票，农业，游戏\n",
            "答案： ;predict: 财经\n",
            "900 input_string: 假设它是真的“程先生却是有点增了,半天回不过神来,等渐渐明白,看清了眼前的人,不由的往事回到眼前”那么“程先生与眼前的人曾经是情侣关系。”是真的吗？选项：总是,有时,绝非。答案： ;predict: 有时\n",
            "1000 input_string: 阅读下列论文摘要，然后判断下面的这些关键词是否都是论文摘要合适的关键词？\n",
            "摘要：目的:评价不同层厚重建、不同医师对64层螺旋CT冠状动脉钙化积分计算结果的影响.方法:40例行冠状动脉钙化积分扫描且冠状动脉有钙化的患者,对每一位患者的数据分别进行1.5mm、2.0mm、3.0mm的重建,由两名医师分别获得不同层厚冠状动脉各段斑块钙化的Agatston积分、体积积分、钙质量积分,比较不同医师、不同层厚钙化积分值及图像质量的差异.结果:相同患者不同层厚重建各段冠状动脉得到不同的Agatston积分、体积积分、钙质量积分,两组皆以1.5mm层厚重建获得的钙化积分最高,2.0mm次之,3.0mm最低,但差异无统计学意义(P>0.05).两位医师对相同患者进行相同的层厚重建,冠状动脉各段得到不完全一致的Agatston积分、体积积分、钙质量积分,两位医师获得的积分值一致性较好(r≈1).不同层厚重建的图像噪声差异具有统计学意义(P<0.05),相同层厚重建不同感兴趣区测得的图像噪声之间差异无统计学意义(P>0.05).结论:64层螺旋CT钙化积分计算时,重建层厚3mm即可获得较满意的结果.64层螺旋CT冠状动脉钙化积分软件较为稳定,不受操作者熟练程度的影响.\n",
            "关键词：积分，质量，不同，影响。答案是：选项：是的，不是。答案是：\n",
            "关键词： ;predict: 是的\n",
            "1100 input_string: 什么类别最好的描述了这段话？秋日西藏迎客来 大美雪域进入最佳观赏季\n",
            "选项：故事，文化，娱乐，体育，财经，房产，汽车，教育，科技，军事，旅游，国际，股票，农业，游戏\n",
            "答案： ;predict: 旅游\n",
            "1200 input_string: 给定“如今似乎到了动它们的时候,夜深人静,王琦瑶从五斗橱的抽屉里取出它来,放在桌上”我们应该假定“它之前放在五斗橱的抽屉里”是真的吗？选项：是的,不是,也许。答案： ;predict: 是的\n",
            "1300 input_string: 阅读下列论文的摘要，然后生成这篇摘要的多个关键词。摘要：我国网络管理法制化建设应当实现四个转变:在立法理念上,从秩序维持向权利保障转变;在管理理念上,从消极防范、积极管制型向强化沟通、积极合作型转变;在管理手段上,从事前审批管制向事中(后)监管制转变;在管理机制上,从政府主导型管理向社会参与型管理转变.。摘要的关键词有这些：\n",
            "关键词： ;predict: \n",
            "1400 input_string: 这是关于哪方面的新闻：故事，文化，娱乐，体育，财经，房产，汽车，教育，科技，军事，旅游，国际，股票，农业，游戏？\n",
            "有何深意？哈登更新推特晒个人照，没有配文 \n",
            "答案： ;predict: 体育\n",
            "1500 input_string: 来到云南红河，有中国最美的山岭雕刻，还有小巴黎之称的碧色寨\n",
            " 哪个类别最好的描述了这篇新闻？\n",
            "选项：故事，文化，娱乐，体育，财经，房产，汽车，教育，科技，军事，旅游，国际，股票，农业，游戏\n",
            "答案： ;predict: 旅游\n",
            "1600 input_string: 什么类别最好的描述了这段话？0-3惨遭恒大女排横扫，江苏女排问题出在哪？张常宁做出了解答\n",
            "选项：故事，文化，娱乐，体育，财经，房产，汽车，教育，科技，军事，旅游，国际，股票，农业，游戏\n",
            "答案： ;predict: 体育\n",
            "1700 input_string: 阅读上下文给出问题的答案：\n",
            "段落：\n",
            "墨西哥街头乐队 (西班牙语：) 是一种墨西哥式的乐队，通常由至少3个小提琴手、2个小号手、1个墨西哥吉他手、1个vihuela琴手与1个guitarrón琴手组成，但在一些餐厅的走唱团体可能只有三到四人；乐队成员通常身著华丽的墨西哥服饰「Charro」，头戴宽边的墨西哥帽。乐队通常在婚礼、节庆等正式场合上表演。Mariachi指的是「墨西哥街头乐队」或有人称「流浪乐手」是一种墨西哥式的乐队，。有关Mariachi的起源是众说纷纭，有理论认为，Mariachi是来自法语「婚礼」-mariage一词，因为在婚礼中常出现各类型的音乐，但问题是在1864年法国人到达墨西哥之前Mariachi就已经存在。但也有人认为Mariachi这个名字来自一种歌颂圣母玛丽亚（mah-ree-ah AH-chay）的节庆与音乐表演。但一般相信Mariachi应该是19世纪起源于墨西哥南部的哈利斯科州（Estado Jalisco）。虽然墨西哥的土著部落的音乐元素如笛子，鼓和口哨声，和Mariachi并没有没有明确的关联。但mariachi的乐器和曲风明确是受到了西班牙殖民时期的深远影响，他的乐器编制包括一个guitarrón（低音吉他），一把vihuela（比维拉琴，一种5弦的高音域吉他），吉他，小提琴和小号。有些团体可能会加上一把竖琴或长笛，其中小号是重要的领奏乐器。70年代的一些歌手有时会加上其他乐器如手风琴，电子琴，键盘，口琴，萨克斯风，甚至鼓。他们的歌唱内容包罗万象，包括大男子主义，爱情，背叛，死亡，政治，革命英雄，甚至动物（特别是一首著名歌曲是“La Cucaracha”，意为“蟑螂”）等等。有人认为Mariachi是墨西哥除了龙舌兰酒以外最显著的象征。\n",
            "问题：墨西哥乐队成员通常身穿什么地方的服饰？\n",
            "答案： ;predict: 身著华丽的墨西哥服饰「Charro」,头戴宽边的墨西哥帽。\n",
            "1800 input_string: 八十年代昆明街拍，往昔的生活记忆 \n",
            "这篇新闻会出现在哪个栏目？\n",
            "选项：故事，文化，娱乐，体育，财经，房产，汽车，教育，科技，军事，旅游，国际，股票，农业，游戏\n",
            "答案： ;predict: 旅游\n",
            "1900 input_string: ““能治瘟疫的宝物啊。”贾诩（候选词）端起茶杯抿了两口，没什么特殊的感觉，“陶恭祖看起来真的快不行了，而且看现在的形势，他（代词）是在给主公铺路。””。在前文描述中，可以用代词“他”来替换实体“贾诩”吗？选项：是的，不是。答案： ;predict: 不是\n",
            "2000 input_string: 这个是关于哪方面的App应用程序的描述？\n",
            "选项：银行,社区,电商,支付,经营,卡牌,借贷,驾校,理财,职考,新闻,旅游,交通,魔幻,医疗,影像,动作,工具,体育,小说,运动,相机,工具,快递,教育,股票,菜谱,行车,仙侠,亲子,购物,射击,漫画,小学,同城,成人,求职,电子,艺术,赚钱,约会,经营,兼职,视频,音乐,英语,棋牌,摄影,养生,办公,政务,视频,论坛,彩票,直播,其他,休闲,策略,通讯,买车,违章,地图,民航,电台,语言,搞笑,婚恋,超市,养车,杂志,在线,家政,影视,装修,资讯,社交,餐饮,美颜,挂号,飞行,预定,票务,笔记,买房,外卖,母婴,打车,情侣,日程,租车,博客,百科,绘画,铁路,生活,租房,酒店,保险,问答,收款,竞技,唱歌,技术,减肥,工作,团购,记账,女性,公务,二手,美妆,汽车,行程,免费,教辅,两性,出国,婚庆,民宿?\n",
            "美术宝学素描手绘、画动漫、玩艺术的学习助手美术艺考生和美术兴趣爱好者,学画画过程中,可以把自己的作品发布在美术宝和同学们一起互相交流和帮助。美术宝专注美术生作品点评,指导美术生联考校考和绘画过程的疑难问题,绘画遇到瓶颈,在美术宝发布可以让老师帮你指导学习。老师是来自中央美院、清华美院、中国美院、鲁迅美术学院等高校的美术老师,还可以在线语音点评指导,让美术高考和画画更简单。美院名师高清直播绘画知道,学艺术的道路上,让美术宝陪你。产品功能艺信在美术宝可以快速跟学友、老师即时沟通,还可以建群一起大声说话,互相倾诉学美术之苦,享受斗图之乐,美术生专属表情包,拿来PK一下作品点评刚画完的画有问题,发布到美术宝让大家瞅瞅,美院的专业老师给你做出评价和指导,画的好不想炫耀一下吗美术作品库我给你讲啊,你想找的绘画素材,搜一搜都能出来,不信,不信就试试美术院校你的老师有可能都不知道哪些院校校考不考素描吗你知道哪些院校用联考成绩就能报名更新内容注册流程优化调整体验优化与缺陷修复 \n",
            "答案：  ;predict: 中小学\n",
            "2100 input_string: “他赶紧顺着大皇子的话笑着说道：“陛下，郭铮（候选词）此人，老臣不怕言语无状，也要多言一句。此人好大喜功，多行妄涎之举，去年才被陛下贬去江南，难保他（代词）不会因为与小范大人宿怨的关系，刻意夸大其事，构陷害人。”” 在上面的描述中代词“他”指代的是“郭铮”吗？选项：是的，不是。答案： ;predict: 不是\n",
            "2200 input_string: 这个句子“开通了花呗，不能付款”转述(意思是一样的)了下面这句话？“我想开通这个手机的花呗，另外一个不要了” 选项：相同，不同。答案： ;predict: 不同\n",
            "2300 input_string: “王洛和（候选词）揽着小美的腰，得意洋洋地看我的衰样，笑，他（代词）说你睁大眼睛，再看一看。” 在上面的描述中代词“他”指代的是“王洛和”吗？选项：是的，不是。答案： ;predict: 不是\n",
            "2400 input_string: “既然公孙将军的志向是如此，那敢问将军可曾动摇过，在这苦寒的北疆将军可曾动摇过？”审配并没有去游说公孙瓒（候选词），反而继续和公孙瓒谈他（代词）的志向，这仿佛是两人共同的语言。上面的句子中，代词“他”指代的是“公孙瓒”吗？选项：是的，不是。答案： ;predict: 不是\n",
            "2500 input_string: 什么类别最好的描述了这段话？西伯利亚渔民：用天然冰窖冷藏渔获，送到市里一趟赚3万\n",
            "选项：故事，文化，娱乐，体育，财经，房产，汽车，教育，科技，军事，旅游，国际，股票，农业，游戏\n",
            "答案： ;predict: 农业\n",
            "2600 input_string: 这个是关于哪方面的App应用程序的描述？\n",
            "选项：银行,社区,电商,支付,经营,卡牌,借贷,驾校,理财,职考,新闻,旅游,交通,魔幻,医疗,影像,动作,工具,体育,小说,运动,相机,工具,快递,教育,股票,菜谱,行车,仙侠,亲子,购物,射击,漫画,小学,同城,成人,求职,电子,艺术,赚钱,约会,经营,兼职,视频,音乐,英语,棋牌,摄影,养生,办公,政务,视频,论坛,彩票,直播,其他,休闲,策略,通讯,买车,违章,地图,民航,电台,语言,搞笑,婚恋,超市,养车,杂志,在线,家政,影视,装修,资讯,社交,餐饮,美颜,挂号,飞行,预定,票务,笔记,买房,外卖,母婴,打车,情侣,日程,租车,博客,百科,绘画,铁路,生活,租房,酒店,保险,问答,收款,竞技,唱歌,技术,减肥,工作,团购,记账,女性,公务,二手,美妆,汽车,行程,免费,教辅,两性,出国,婚庆,民宿?\n",
            "苏州日报报业集团倾力打造,苏州新闻发布权威移动门户。即时发布苏报集团全媒体采编团队采集的新闻资讯,汇集苏报集团旗下和各区县主要报纸,引力播,引领前行的方向,传播精神的力量。获取最新资讯、进行激情互动,随时掌握新闻时空,知性触摸时代脉搏。更新内容稳定性改进和错误修正。 \n",
            "答案：  ;predict: 新闻\n",
            "2700 input_string: 文化视点·听非遗讲故事丨在黄土地上狂飙的安塞腰鼓\n",
            "这是关于哪方面的新闻：故事，文化，娱乐，体育，财经，房产，汽车，教育，科技，军事，旅游，国际，股票，农业，游戏？\n",
            "答案： ;predict: 文化\n",
            "2800 input_string: “许思想坐在那里等，想了想，又站起来陪张恪（候选词）走到出口处，张恪要牵她（代词）的手，她没有让，还站得离张恪稍远一些。” 在上面的描述中代词“她”指代的是“张恪”吗？选项：是的，不是。答案： ;predict: 不是\n",
            "2900 input_string: 阅读文章：\n",
            "开国功臣算什么，当年刘邦能杀韩信跟彭越，李世民也能拿自己开刀。 自此之后，尉迟恭算是彻底老实了，再也不敢__，四处惹是生非了。后来干脆回家养老，整日待在家中修道，不与外界往来，算是平平安安的终老了。\n",
            "其中下划线的地方需要填写成语，有以下候选的成语：久别重逢，坐拥百城，借刀杀人，了若指掌，居功自傲，碌碌无为，铁打心肠，心心相印，一般无二，八仙过海。正确的成语是： ;predict: 借刀杀人\n",
            "3000 input_string: 这个句子“我的蚂蚁借呗现在无法使用”转述(意思是一样的)了下面这句话？“我以前蚂蚁借呗以前可以借现在为啥不能” 选项：相同，不同。答案： ;predict: 不同\n",
            "3100 input_string: 给定“尽管是这么南北通风,还是有一股无法散去的葱蒜味保证是真实的吗“这里的空气有一股味道。”？选项：是的,不是,也许。答案： ;predict: 是的\n",
            "3200 input_string: 以下两句话的意思相同的吗？“花呗记录怎么删掉”，“这花呗账单，为什么删不掉” 选项：相同，不同。答案： ;predict: 不同\n",
            "3300 input_string: 立即下载香港迪士尼乐园的官方手机应用程序享受超凡的流动,缔造奇妙旅程。查看等候时间即时查看游乐设施预计等候时间,让你尽情玩乐。轻松导航地图设有GPS定位功能,让您迅速寻找身处乐园内的位置及邻近的游乐设施丶与迪士尼朋友见面的地点丶餐厅和商店等。为您提供所需资讯查看乐园开放时间丶与迪士尼朋友见面及娱乐表演的时间丶各项游乐设施的介绍以及其他资料。预订行程一键拨通餐厅订座热线。注意下载本应用程序前,请留意部分应用程序功能需使用定位服务数据,并需连接WiFi或移动服务供应商的数据网络。此外,本应用程序将会连接用户装置的外部存储以储存调试日志,并将连接用户装置内的各个账户,以安全方式获取公共令牌Token以便访问服务器的内容。此应用程序提供简体中文丶繁体中文及英文版本可供使用。此应用程序所使用之地图为百度地图。您可点按以下连结查阅百度通过使用而所收集丶使用及共享您的资料的隐私权保护声明http//www.baidu.com/duty/yinsiquan.html。透过下载和/或使用本应用程序时,您将会被视作接受以上条款。\n",
            "这个App应用程序的描述会出现在哪个栏目？\n",
            "选项:银行,社区,电商,支付,经营,卡牌,借贷,驾校,理财,职考,新闻,旅游,交通,魔幻,医疗,影像,动作,工具,体育,小说,运动,相机,工具,快递,教育,股票,菜谱,行车,仙侠,亲子,购物,射击,漫画,小学,同城,成人,求职,电子,艺术,赚钱,约会,经营,兼职,视频,音乐,英语,棋牌,摄影,养生,办公,政务,视频,论坛,彩票,直播,其他,休闲,策略,通讯,买车,违章,地图,民航,电台,语言,搞笑,婚恋,超市,养车,杂志,在线,家政,影视,装修,资讯,社交,餐饮,美颜,挂号,飞行,预定,票务,笔记,买房,外卖,母婴,打车,情侣,日程,租车,博客,百科,绘画,铁路,生活,租房,酒店,保险,问答,收款,竞技,唱歌,技术,减肥,工作,团购,记账,女性,公务,二手,美妆,汽车,行程,免费,教辅,两性,出国,婚庆,民宿\n",
            "答案： ;predict: 休闲益智\n",
            "3400 input_string: 摘要：为利用旋挖钻机在回转阶段的制动能量,提出一种基于二次调节技术和液压蓄能器的能量回收系统.通过分析上车回转的工况特点,运用功率键合图理论建立回转动能回收利用的数学模型.针对系统参数的不确定性及存在的外干扰,设计自适应模糊滑模控制器对回转速度进行跟踪控制,并利用李雅普诺夫函数证明控制系统的稳定性和收敛性.为对系统进行优化设计,仿真分析液压蓄能器容积、充气压力及回转制动时间这3个主要因素对系统工作性能的影响规律.研究结果表明:所提出的回转系统在制动时能有效地完成能量回收,其中,回转制动时间对系统工作压力和能量回收效率影响最大,而液压蓄能器容积和充气压力对能量回收效率影响较小,但对恒压网络压力波动影响较大.\n",
            " 以下的关键词都是这篇摘要合适的关键词吗？关键词：二次调节技术，旋挖钻机，能量回收，自适应模糊控制。选项：是的，不是。答案是：\n",
            "关键词： ;predict: 是的\n",
            "3500 input_string: 千人一起在长桌上吃饭，是一种什么样的体验？来感受舌尖上的瑶族\n",
            " 哪个类别最好的描述了这篇新闻？\n",
            "选项：故事，文化，娱乐，体育，财经，房产，汽车，教育，科技，军事，旅游，国际，股票，农业，游戏\n",
            "答案： ;predict: 旅游\n",
            "3600 input_string: 阅读短文：\n",
            "阳春四月，__，2019辽宁(营口)春季赏花观鸟季启动仪式正式拉开序幕。营口市文化和旅游广播电视局局长王丽接受采访表示，营口市将文化、体育和旅游等要素进行融合，通过融合会更加丰富营口旅游的业态和项目。未来营口还会开展房车宿营基地等... \n",
            " 从候选成语“玉石俱焚，无米之炊，地大物博，巧立名目，依然故我，草长莺飞，开源节流，口不择言，饥不择食，遍地开花”中选出最适合填在下划线处的成语。正确答案是： ;predict: 依然故我\n",
            "3700 input_string: 信安世纪科创板闯关，仍存专利侵权的未决诉讼\n",
            " 哪个类别最好的描述了这篇新闻？\n",
            "选项：故事，文化，娱乐，体育，财经，房产，汽车，教育，科技，军事，旅游，国际，股票，农业，游戏\n",
            "答案： ;predict: 科技\n",
            "3800 input_string: 给定“那里的人生是凡夫俗子无法设想的,是前边大马路的喧哗与繁荣不可比拟的”因此，它必定是真的“那里的人生会很艰苦”？选项：是的,不是,也许。答案： ;predict: 是的\n",
            "3900 input_string: 你会把这个新闻推荐给关注哪方面的人：故事，文化，娱乐，体育，财经，房产，汽车，教育，科技，军事，旅游，国际，股票，农业，游戏？大气！这部贵州取景的剧，被封“神作”\n",
            "答案： ;predict: 娱乐\n",
            "4000 input_string: 蓝天航空（Blue 1），是一家总部设在芬兰赫尔辛基的区域性航空公司，为星空联盟的区域成员，由北欧航空（SAS）集团完全控股。蓝天航空以芬兰赫尔辛基－万塔机场为枢纽，主要经营芬兰国内航线以及芬兰至斯德哥尔摩、奥斯陆和哥本哈根等北欧城市的航线，此外还从赫尔辛基飞往欧洲其他国家的一些主要城市或旅游目的地。蓝天航空的前身是成立于1988年的波的尼亚航空（Air Botnia），经营芬兰国内航线。1998年1月，波的尼亚航空被北欧航空集团并购。因此，北欧航空的枢纽斯德哥尔摩、奥斯陆和哥本哈根也很快成为波的尼亚航空的经营重点。2000年，波的尼亚航空对机队进行重组，主要引进Avro RJ85。同年，波的尼亚航空加入国际航空运输协会。2004年1月1日，波的尼亚航空更名为蓝天航空（Blue 1），随后开通了赫尔辛基－奥卢、赫尔辛基－库奥皮奥两条国内航线。10月31日，蓝天航空成为星空联盟首家区域成员航空公司。同年，蓝天航空成为芬兰首家通过国际航空运输协会IOSA认证的航空公司。2005年，蓝天航空进一步扩展其芬兰国内航线，开通了由赫尔辛基到瓦萨以及罗瓦涅米的航班。2006年，蓝天航空的航点网络经历了大规模的扩展，开通了11条由赫尔辛基到欧洲各城市的航线。2009年，蓝天航空又新增伊瓦洛和库萨莫两个芬兰国内目的地，以及杜布罗夫尼克、斯普利特和比亚里茨三个国际目的地。2012年11月1日，蓝天航空将营运转由母公司北欧航空负责，并转型为维修公司，遂于11月29日退出星空联盟。截至2009年9月19日，蓝天航空通达下列13个国家的27个目的地。截至2012年12月，蓝天航空机队平均机龄10.7年，组成如下：\n",
            "从上面的段落中产生一个问题： ;predict: 2012年11月1日,蓝天航空将营运转由母公司北欧航空负责,并转型为维修公司,遂于11月29日退出星空联盟。\n",
            "4100 input_string: 阅读以下文章，并选择一个合适的成语。文章：\n",
            "这天老仆在街上买东西的时候，遇到一老道士，老道士对他说，听说你主人死了，但我可以__，不知道你愿不愿意。老人一听，特别高兴，赶紧把老道士带到府上，喊七位夫人过来商量。七个人一听自己的男人还能活过来，都非常高兴，说着无论多... \n",
            "候选成语：远见卓识，居高临下，冷暖自知，冷血动物，艰苦卓绝，借尸还魂，相濡以沫，始作俑者，成千成万，日落西山 答案是： ;predict: 居高临下\n",
            "4200 input_string: 阅读下列论文的摘要，然后生成这篇摘要的多个关键词。摘要：全国普法办下发《关于开展法治城市、法治县（市、区）创建活动的意见》以来,法治区县创建工作在全国各省市全面铺开。创建法治区县,对于加快区域法治化进程,保障和促进科学发展具有十分重要的现实意义,不仅是贯彻落实依法治国基本方略,实践科学发展观的必然要求,也是加快推进法治天津建设的基础性工作。一、天津市法治区县创建工作总体情况2008年,天津市全面启动法治区县创建工作。依法治市领导小组办公室（以下简称治市办）转发了全国普法办《关于开展法治城市、法治县（市、区）创建活动的意见》,。摘要的关键词有这些：\n",
            "关键词： ;predict: 法治区县，天津市，法治区县，天津市\n",
            "4300 input_string: 给定“我们以前以为可以有种族融合什么大熔炉,根本没有、没有”是否遵循“清楚地认识了种族之间的关系” 选项：是的,不是,也许。答案： ;predict: 不是\n",
            "4400 input_string: 8日机构强推买入 6股极度低估\n",
            " 哪个类别最好的描述了这篇新闻？\n",
            "选项：故事，文化，娱乐，体育，财经，房产，汽车，教育，科技，军事，旅游，国际，股票，农业，游戏\n",
            "答案： ;predict: 股票\n",
            "4500 input_string: 走出一条把不可能变成可能的人生道路 \n",
            "这篇新闻会出现在下列哪个栏目：故事，文化，娱乐，体育，财经，房产，汽车，教育，科技，军事，旅游，国际，股票，农业，游戏？\n",
            "答案： ;predict: 教育\n",
            "4600 input_string: 淮网huinet.com是淮安地区领先的网络媒体和综合性门户网站淮水安澜bbs.huinet.com是其旗下的网上社区和已注册商标。淮网倾力关注经济社会发展、政风政纪、民生民情,为淮安、淮阴、清江浦、洪泽、盱眙、金湖、涟水及周边的泗洪、泗阳、沭阳等地人民,提供阳光纪检、招聘求职、婚恋交友、房产家居、亲子教育、美食旅游、户外运动、寻人寻物、拼车公益、论坛求助等信息服务平台。更新内容版块详情页改造支持顶部轮播图、支持多模块设置,页面元素更丰满新增圈子置顶优秀内容,置顶显示,帮助用户更好的展现自己报名帖优化支持展示报名帖,且可以在APP中报名活动帖子支持取消点赞手速太快点错了,想取消点赞我们支持啦分享至通讯录碰到好的文章,可以一键分享给APP中的好友和群组中其他更新帖子回复支持小视频、发帖支持先选择版块\n",
            "这个是关于哪方面的App应用程序的描述？\n",
            "选项：银行,社区,电商,支付,经营,卡牌,借贷,驾校,理财,职考,新闻,旅游,交通,魔幻,医疗,影像,动作,工具,体育,小说,运动,相机,工具,快递,教育,股票,菜谱,行车,仙侠,亲子,购物,射击,漫画,小学,同城,成人,求职,电子,艺术,赚钱,约会,经营,兼职,视频,音乐,英语,棋牌,摄影,养生,办公,政务,视频,论坛,彩票,直播,其他,休闲,策略,通讯,买车,违章,地图,民航,电台,语言,搞笑,婚恋,超市,养车,杂志,在线,家政,影视,装修,资讯,社交,餐饮,美颜,挂号,飞行,预定,票务,笔记,买房,外卖,母婴,打车,情侣,日程,租车,博客,百科,绘画,铁路,生活,租房,酒店,保险,问答,收款,竞技,唱歌,技术,减肥,工作,团购,记账,女性,公务,二手,美妆,汽车,行程,免费,教辅,两性,出国,婚庆,民宿。\n",
            "答案： ;predict: 论坛圈子\n",
            "4700 input_string: 你会把这个新闻推荐给关注哪方面的人：故事，文化，娱乐，体育，财经，房产，汽车，教育，科技，军事，旅游，国际，股票，农业，游戏？缺席上市仪式，“接盘侠”孙宏斌去哪了？\n",
            "答案： ;predict: 财经\n",
            "4800 input_string: “但是石昊（候选词）还是听到了，他（代词）的神觉何其敏锐，当下惊讶，难怪随便遇到一个少女就这么美丽，原来是天仙书院绝色榜上的名人。”。在上面的描述中，代词“他”指代的是“石昊”吗？选项：是的，不是。答案： ;predict: 不是\n",
            "4900 input_string: 这是关于哪方面的新闻：故事，文化，娱乐，体育，财经，房产，汽车，教育，科技，军事，旅游，国际，股票，农业，游戏？\n",
            "美国针对个人的制裁，会有哪些影响？ \n",
            "答案： ;predict: 国际\n",
            "5000 input_string: 这个句子“可以锁屏借呗”转述(意思是一样的)了下面这句话？“借呗可以一年后再换么” 选项：相同，不同。答案： ;predict: 不同\n",
            "5100 input_string: 生成一个与这个意思相同的句子:“押金退了，花呗还是还一样”\n",
            "答案： ;predict: 押金退了,花呗还一样\n",
            "5200 input_string: 以下两句话的意思相同的吗？“余额宝不能转入怎么还借呗”，“余额转不进余额宝、蚂蚁借呗怎么还款” 选项：相同，不同。答案： ;predict: 不同\n",
            "5300 input_string: 带着问题来阅读文章并回答问题：\n",
            "问：那只老狐狸是怎么进入葡萄园的？ \n",
            "选项：找到一条小路，从围墙上爬过去，从小洞里钻进去，和另外两只狐狸一起进去\n",
            "段落：一天，一只老狐狸无意间经过一个被围墙围住的葡萄园。凭着经验，它闻出了这个园子里的葡萄是自己从未吃过的极品。这只老狐狸曾吃过无数种好葡萄，它曾向自己的同伴吹嘘过：“这世上还不曾有我没吃过的葡萄呢!”面对这一园自己没有品尝过的葡萄，它的食欲和好胜心都被挑逗起来了。它对自己说：“吃不到葡萄说葡萄酸的狐狸，就像不想当元帅的士兵，是最没出息的。”于是，它发誓一定要吃到这里的葡萄。可当它在四周转了两圈之后才发现：围墙太高，它跳不上去。又经过一番搜寻，它终于找到了一个可以进入葡萄园的小洞。可是，这个洞口太小，它无法通过。思索片刻，它做出一个决定：绝食减肥。经过三天绝食，这只老狐狸真的瘦了下来，它可以进入葡萄园了。如它所料，这里的葡萄是迄今为止它所吃过的最好的。于是，它放开肚子，整整吃了三天。这时，问题出现了：由于吃了太多葡萄，它又胖了，无法再从那个小洞出去。无奈，它只好再次绝食，这次比上次花的时间还多了一天。等身体终于变得和刚进来时一样瘦小，它又从那个小洞钻了出去。回来后，它把这次经历告诉了另外两只老狐狸，并问它们：“这事儿做得值不值？”其中一只说：“你胖了多少又瘦了多少，等于什么都没吃，还冒着性命之忧，当然不值。”另一只则说：“虽然你担了不少风险，但你吃到了从未吃过的葡萄，当然值。 。答案： ;predict: 找到一条小路\n",
            "5400 input_string: 这个句子“借呗选择任一期都可以提前还款”转述(意思是一样的)了下面这句话？“借呗能提前还一部分么” 选项：相同，不同。答案： ;predict: 不同\n",
            "5500 input_string: 水与山的遐想\n",
            " 哪个类别最好的描述了这篇新闻？\n",
            "选项：故事，文化，娱乐，体育，财经，房产，汽车，教育，科技，军事，旅游，国际，股票，农业，游戏\n",
            "答案： ;predict: 旅游\n",
            "5600 input_string: “金南勇到下午才知道李馨予（候选词）给张恪拐跑的事情，他（代词）找到李在洙：“馨予小姐的事情，是不是跟李会长说一声？””。在上面的描述中，代词“他”指代的是“李馨予”吗？选项：是的，不是。答案： ;predict: 不是\n",
            "5700 input_string: 给定“可他生性柔和,从来不善驳人面子,只得敷衍”因此，它必定是真的“生性柔和的人都不善于驳人面子”？选项：是的,不是,也许。答案： ;predict: 不是\n",
            "5800 input_string: 摘要：单因素实验中考察多种表面活性剂的浓度、提取温度、提取时间、料液比(体积与质量比)以及提取次数对玉竹多糖产率的影响.在此基础上,利用表面响应分析法优化表面活性剂协助提取玉竹多糖的条件,研究提取温度、提取时间及料液比3个自变量之间的交互作用对多糖得率的影响,并得到最佳提取条件为:提取温度92℃,提取时间1.61h,料液比19.93mL/g,此条件下玉竹根茎中多糖产率预测达到11.11％(质量分数).\n",
            " 以下的关键词都是这篇摘要合适的关键词吗？关键词：表面活性剂，提取条件，产率，表面响应法。选项：是的，不是。答案是：\n",
            "关键词： ;predict: 表面活性剂，表面响应法\n",
            "5900 input_string: 假定下面是真的“但是,贵州家用电器厂的优质服务是建立在过硬的产品质量基础上的”因此,“贵州家用电器厂全国闻名。”怎么样？选项：必然的,可能的,不可能。答案： ;predict: 必然的\n",
            "6000 input_string: 阅读文章：\n",
            "据了解，在试点中，我市将开展天价彩礼、__、低俗婚闹、随礼攀比等不正之风的整治，着力规范婚礼仪式和操办模式，限制大操大办和盲目攀比，同时加大优秀婚俗文化产品和服务的供给。版权所有:重庆市人民政府网站 主办:重庆市人民政府办公...\n",
            "其中下划线的地方需要填写成语，有以下候选的成语：铺张浪费，老羞成怒，名副其实，马首是瞻，当头棒喝，灰头土脸，挥金如土，显山露水，一盘散沙，水滴石穿。正确的成语是： ;predict: 铺张浪费\n",
            "6100 input_string: “支持工会、共青团、妇联等群团组织更好发挥作用”问题：“群团组织获得了社会各界的支持。” 选项：真的,假的,未知。答案： ;predict: 真的\n",
            "6200 input_string: 民生银行：公司建立“以岗定价、岗变薪变、按绩取酬”的薪酬分配制度\n",
            "这是关于哪方面的新闻：故事，文化，娱乐，体育，财经，房产，汽车，教育，科技，军事，旅游，国际，股票，农业，游戏？\n",
            "答案： ;predict: 财经\n",
            "6300 input_string: 3号型蒸汽机车是全台铁路商务总局购入的饱合式蒸汽机车，其特征是披覆在车体上的水柜，如同马鞍般。台湾清治时期的全台铁路商务总局向英国(Hawthorn Leslie and Company)，订购马鞍型水柜式机车。1889年与1893年各制造3部，总共6部。1895年甲午战争清朝战败后日本成立临时台湾铁道队来代管台湾铁路，最初将3号型全配北部线。1899年台湾总督府交通局铁道部成立后于1904年将2部机车转配彰化段。进入大正时代后又集合北部、在基隆段1部、台北段5部。1918年为了宜兰线的工程和营运而将2部机车海运至宜兰段、1920年全数转配宜兰段。随著机车逐渐老化与过时，至1926年3号机车报废。1927年在台北段2部宜兰段3部，1929年全部停止运用，1931年报废。今已无一部保存。3号-5号无另取名。\n",
            "参考上述上下文，3号型蒸汽机车什么时候全部被停用？\n",
            "答案： ;predict: 1929年\n",
            "6400 input_string: 看购影豆原影豆是看购电影集团旗下的一个集在线购票、电影资讯、互动社区及影迷福利等服务于一体的一站式电影平台。我们致力于打造好玩的电影APP,让更多人享受电影带来的乐趣。影片资讯抢鲜看电影导读、电影解析、热映电影精彩预告片,为您提供更多精彩的电影资讯。影迷圈看有意思的内容影迷圈为您提供影迷精选内容、影迷动态,看看他们都在看什么会员享特权积分兑好礼升级会员,享受专属特权,购票更优惠。每天做任务,积分好礼随心换支付便捷看购卡购票更简单红包账户、看购卡余额、第三方支付,用户可随心组合购买影票。持有看购卡用户可直接绑卡购买,也可以使用多种支付形式组合购买影票。联系我们看购电影客服热线每天90021004006776501看购影豆热线工作日830173001057228847看购影豆APP新版开通了自助客服功能,欢迎点击我的在线客服体验小秘书服务。官方微信订阅号影豆生活官方微信服务号看购电影更新内容更新日志1.修改部分Bug\n",
            "这个是关于哪方面的App应用程序的描述？\n",
            "选项：银行,社区,电商,支付,经营,卡牌,借贷,驾校,理财,职考,新闻,旅游,交通,魔幻,医疗,影像,动作,工具,体育,小说,运动,相机,工具,快递,教育,股票,菜谱,行车,仙侠,亲子,购物,射击,漫画,小学,同城,成人,求职,电子,艺术,赚钱,约会,经营,兼职,视频,音乐,英语,棋牌,摄影,养生,办公,政务,视频,论坛,彩票,直播,其他,休闲,策略,通讯,买车,违章,地图,民航,电台,语言,搞笑,婚恋,超市,养车,杂志,在线,家政,影视,装修,资讯,社交,餐饮,美颜,挂号,飞行,预定,票务,笔记,买房,外卖,母婴,打车,情侣,日程,租车,博客,百科,绘画,铁路,生活,租房,酒店,保险,问答,收款,竞技,唱歌,技术,减肥,工作,团购,记账,女性,公务,二手,美妆,汽车,行程,免费,教辅,两性,出国,婚庆,民宿。\n",
            "答案： ;predict: 电影资讯\n",
            "6500 input_string: 你会把这个新闻推荐给关注哪方面的人：故事，文化，娱乐，体育，财经，房产，汽车，教育，科技，军事，旅游，国际，股票，农业，游戏？疫情下我国高校应届毕业生创业现状调查\n",
            "答案： ;predict: 教育\n"
          ]
        },
        {
          "output_type": "error",
          "ename": "KeyboardInterrupt",
          "evalue": "ignored",
          "traceback": [
            "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
            "\u001b[0;31mKeyboardInterrupt\u001b[0m                         Traceback (most recent call last)",
            "\u001b[0;32m<ipython-input-53-5e1e3a9aa7af>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[1;32m      3\u001b[0m \u001b[0mtarget_file\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m'pCLUE_predict.json'\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m      4\u001b[0m \u001b[0mselect_top\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m-\u001b[0m\u001b[0;36m1\u001b[0m \u001b[0;31m# 全量预测\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 5\u001b[0;31m \u001b[0mpredict_on_test\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0msource_file\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0mtarget_file\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0mselect_top\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m",
            "\u001b[0;32m<ipython-input-50-61d1f473029e>\u001b[0m in \u001b[0;36mpredict_on_test\u001b[0;34m(source_file, target_file, select_top)\u001b[0m\n\u001b[1;32m     13\u001b[0m     \u001b[0mtype\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mjson_string_right\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;34m\"type\"\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     14\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 15\u001b[0;31m     \u001b[0mpredict_answer\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0manswer_fn\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0minput_string\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m     16\u001b[0m     \u001b[0mjson_string_predict\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m{\u001b[0m\u001b[0;34m\"target\"\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0mpredict_answer\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mstrip\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\"type\"\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0mtype\u001b[0m\u001b[0;34m}\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     17\u001b[0m     \u001b[0mjson_string_predict\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mjson\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mdumps\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mjson_string_predict\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0mensure_ascii\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mFalse\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m<ipython-input-30-349366666625>\u001b[0m in \u001b[0;36manswer_fn\u001b[0;34m(text, sample, top_p)\u001b[0m\n\u001b[1;32m     16\u001b[0m   \u001b[0mencoding\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mtokenizer\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mtext\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mtext\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mtruncation\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mTrue\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mpadding\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mTrue\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mmax_length\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;36m768\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mreturn_tensors\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m\"pt\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mto\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mdevice\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     17\u001b[0m   \u001b[0;32mif\u001b[0m \u001b[0;32mnot\u001b[0m \u001b[0msample\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0;31m# 不进行采样\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 18\u001b[0;31m     \u001b[0mout\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mmodel_trained\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mgenerate\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m**\u001b[0m\u001b[0mencoding\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mreturn_dict_in_generate\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mTrue\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0moutput_scores\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mFalse\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mmax_length\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;36m128\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mnum_beams\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;36m4\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mlength_penalty\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;36m0.6\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m     19\u001b[0m   \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0;31m# 采样（生成）\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     20\u001b[0m     \u001b[0mout\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mmodel_trained\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mgenerate\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m**\u001b[0m\u001b[0mencoding\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mreturn_dict_in_generate\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mTrue\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0moutput_scores\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mFalse\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mmax_length\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;36m128\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mdo_sample\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mTrue\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mtop_p\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mtop_p\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py\u001b[0m in \u001b[0;36mdecorate_context\u001b[0;34m(*args, **kwargs)\u001b[0m\n\u001b[1;32m     25\u001b[0m         \u001b[0;32mdef\u001b[0m \u001b[0mdecorate_context\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     26\u001b[0m             \u001b[0;32mwith\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mclone\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 27\u001b[0;31m                 \u001b[0;32mreturn\u001b[0m \u001b[0mfunc\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0margs\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m     28\u001b[0m         \u001b[0;32mreturn\u001b[0m \u001b[0mcast\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mF\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mdecorate_context\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     29\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.7/dist-packages/transformers/generation_utils.py\u001b[0m in \u001b[0;36mgenerate\u001b[0;34m(self, inputs, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, typical_p, repetition_penalty, bad_words_ids, force_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, num_return_sequences, max_time, max_new_tokens, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, logits_processor, renormalize_logits, stopping_criteria, constraints, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, remove_invalid_values, synced_gpus, exponential_decay_length_penalty, **model_kwargs)\u001b[0m\n\u001b[1;32m   1393\u001b[0m                 \u001b[0mreturn_dict_in_generate\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mreturn_dict_in_generate\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   1394\u001b[0m                 \u001b[0msynced_gpus\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0msynced_gpus\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1395\u001b[0;31m                 \u001b[0;34m**\u001b[0m\u001b[0mmodel_kwargs\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m   1396\u001b[0m             )\n\u001b[1;32m   1397\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.7/dist-packages/transformers/generation_utils.py\u001b[0m in \u001b[0;36mbeam_search\u001b[0;34m(self, input_ids, beam_scorer, logits_processor, stopping_criteria, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, **model_kwargs)\u001b[0m\n\u001b[1;32m   2247\u001b[0m             \u001b[0mnext_token_logits\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0madjust_logits_during_generation\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mnext_token_logits\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mcur_len\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mcur_len\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   2248\u001b[0m             next_token_scores = nn.functional.log_softmax(\n\u001b[0;32m-> 2249\u001b[0;31m                 \u001b[0mnext_token_logits\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mdim\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m-\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m   2250\u001b[0m             )  # (batch_size * num_beams, vocab_size)\n\u001b[1;32m   2251\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;32m/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py\u001b[0m in \u001b[0;36mlog_softmax\u001b[0;34m(input, dim, _stacklevel, dtype)\u001b[0m\n\u001b[1;32m   1921\u001b[0m         \u001b[0mdim\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0m_get_softmax_dim\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m\"log_softmax\"\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0minput\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mdim\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0m_stacklevel\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   1922\u001b[0m     \u001b[0;32mif\u001b[0m \u001b[0mdtype\u001b[0m \u001b[0;32mis\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1923\u001b[0;31m         \u001b[0mret\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0minput\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mlog_softmax\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mdim\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m   1924\u001b[0m     \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m   1925\u001b[0m         \u001b[0mret\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0minput\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mlog_softmax\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mdim\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mdtype\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mdtype\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
            "\u001b[0;31mKeyboardInterrupt\u001b[0m: "
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "802MvFlmwqEU"
      },
      "source": [
        "This notebook is based on the following project, adapted for the ChatYuan model and the pCLUE dataset: https://github.com/Shivanandroy/T5-Finetuning-PyTorch"
      ]
    }
  ],
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "machine_shape": "hm",
      "provenance": [],
      "gpuClass": "premium"
    },
    "gpuClass": "premium",
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}