{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": [],
      "gpuType": "T4",
      "authorship_tag": "ABX9TyMwRQ7xj34z+2kbK80aXlKN",
      "include_colab_link": true
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    },
    "accelerator": "GPU",
    "gpuClass": "standard"
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/zhaochuninhefei/ipynbs/blob/master/ChatGLM_6B.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "# 1. Setup\n",
        "\n",
        "## 1.1 Select a GPU\n",
        "In the `Runtime` menu, choose `Change runtime type` and select GPU (the free tier provides a T4).\n",
        "\n",
        "Then check the current NVIDIA GPU configuration:"
      ],
      "metadata": {
        "id": "e6jDWFk5PCjv"
      }
    },
    {
      "cell_type": "code",
      "execution_count": 1,
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "P6156BPxJqtp",
        "outputId": "8504c1a2-3c3b-4066-ba0e-e041e118f5c9"
      },
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Mon May 29 00:34:57 2023       \n",
            "+-----------------------------------------------------------------------------+\n",
            "| NVIDIA-SMI 525.85.12    Driver Version: 525.85.12    CUDA Version: 12.0     |\n",
            "|-------------------------------+----------------------+----------------------+\n",
            "| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |\n",
            "| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |\n",
            "|                               |                      |               MIG M. |\n",
            "|===============================+======================+======================|\n",
            "|   0  Tesla T4            Off  | 00000000:00:04.0 Off |                    0 |\n",
            "| N/A   62C    P8    11W /  70W |      0MiB / 15360MiB |      0%      Default |\n",
            "|                               |                      |                  N/A |\n",
            "+-------------------------------+----------------------+----------------------+\n",
            "                                                                               \n",
            "+-----------------------------------------------------------------------------+\n",
            "| Processes:                                                                  |\n",
            "|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |\n",
            "|        ID   ID                                                   Usage      |\n",
            "|=============================================================================|\n",
            "|  No running processes found                                                 |\n",
            "+-----------------------------------------------------------------------------+\n"
          ]
        }
      ],
      "source": [
        "!nvidia-smi"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "As shown above, the free T4 GPU has 15 GB of VRAM."
      ],
      "metadata": {
        "id": "H_XYcak8j30F"
      }
    },
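    {
      "cell_type": "markdown",
      "source": [
        "The same information can also be read from Python with `torch`, which is preinstalled on Colab GPU runtimes. This is a minimal sketch that only reports what it finds:"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "import torch\n",
        "\n",
        "# Report the visible CUDA device and its total VRAM, if any\n",
        "if torch.cuda.is_available():\n",
        "    props = torch.cuda.get_device_properties(0)\n",
        "    print(f\"{props.name}: {props.total_memory / 1024**3:.1f} GiB\")\n",
        "else:\n",
        "    print(\"No CUDA device visible; select a GPU runtime first\")"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },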
    {
      "cell_type": "markdown",
      "source": [
        "## 1.2 Clone the ChatGLM-6B project\n",
        "By default it is cloned into the `/content/ChatGLM-6B` directory:"
      ],
      "metadata": {
        "id": "pbELj_2_PaEZ"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "!git clone https://github.com/THUDM/ChatGLM-6B.git"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "G3fBsFprKKtO",
        "outputId": "f1d7482a-321d-4c14-da19-194eb12a5067"
      },
      "execution_count": 2,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Cloning into 'ChatGLM-6B'...\n",
            "remote: Enumerating objects: 1187, done.\u001b[K\n",
            "remote: Counting objects: 100% (623/623), done.\u001b[K\n",
            "remote: Compressing objects: 100% (119/119), done.\u001b[K\n",
            "remote: Total 1187 (delta 532), reused 545 (delta 504), pack-reused 564\u001b[K\n",
            "Receiving objects: 100% (1187/1187), 9.01 MiB | 26.75 MiB/s, done.\n",
            "Resolving deltas: 100% (701/701), done.\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "## 1.3 Install dependencies\n",
        "Install the packages listed in the `requirements.txt` file in the ChatGLM-6B project directory."
      ],
      "metadata": {
        "id": "n5kWUAjPi15g"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "!pip install -r /content/ChatGLM-6B/requirements.txt"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "jbgut1sqKQYb",
        "outputId": "60a76f18-402f-4986-c8c9-d877e7967398"
      },
      "execution_count": 3,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/\n",
            "Requirement already satisfied: protobuf in /usr/local/lib/python3.10/dist-packages (from -r /content/ChatGLM-6B/requirements.txt (line 1)) (3.20.3)\n",
            "Collecting transformers==4.27.1 (from -r /content/ChatGLM-6B/requirements.txt (line 2))\n",
            "  Downloading transformers-4.27.1-py3-none-any.whl (6.7 MB)\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m6.7/6.7 MB\u001b[0m \u001b[31m74.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hCollecting cpm_kernels (from -r /content/ChatGLM-6B/requirements.txt (line 3))\n",
            "  Downloading cpm_kernels-1.0.11-py3-none-any.whl (416 kB)\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m416.6/416.6 kB\u001b[0m \u001b[31m49.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hRequirement already satisfied: torch>=1.10 in /usr/local/lib/python3.10/dist-packages (from -r /content/ChatGLM-6B/requirements.txt (line 4)) (2.0.1+cu118)\n",
            "Collecting gradio (from -r /content/ChatGLM-6B/requirements.txt (line 5))\n",
            "  Downloading gradio-3.32.0-py3-none-any.whl (19.9 MB)\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m19.9/19.9 MB\u001b[0m \u001b[31m84.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hCollecting mdtex2html (from -r /content/ChatGLM-6B/requirements.txt (line 6))\n",
            "  Downloading mdtex2html-1.2.0-py3-none-any.whl (13 kB)\n",
            "Collecting sentencepiece (from -r /content/ChatGLM-6B/requirements.txt (line 7))\n",
            "  Downloading sentencepiece-0.1.99-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.3 MB)\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.3/1.3 MB\u001b[0m \u001b[31m84.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hCollecting accelerate (from -r /content/ChatGLM-6B/requirements.txt (line 8))\n",
            "  Downloading accelerate-0.19.0-py3-none-any.whl (219 kB)\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m219.1/219.1 kB\u001b[0m \u001b[31m22.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hRequirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from transformers==4.27.1->-r /content/ChatGLM-6B/requirements.txt (line 2)) (3.12.0)\n",
            "Collecting huggingface-hub<1.0,>=0.11.0 (from transformers==4.27.1->-r /content/ChatGLM-6B/requirements.txt (line 2))\n",
            "  Downloading huggingface_hub-0.14.1-py3-none-any.whl (224 kB)\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m224.5/224.5 kB\u001b[0m \u001b[31m30.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hRequirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.10/dist-packages (from transformers==4.27.1->-r /content/ChatGLM-6B/requirements.txt (line 2)) (1.22.4)\n",
            "Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.10/dist-packages (from transformers==4.27.1->-r /content/ChatGLM-6B/requirements.txt (line 2)) (23.1)\n",
            "Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.10/dist-packages (from transformers==4.27.1->-r /content/ChatGLM-6B/requirements.txt (line 2)) (6.0)\n",
            "Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.10/dist-packages (from transformers==4.27.1->-r /content/ChatGLM-6B/requirements.txt (line 2)) (2022.10.31)\n",
            "Requirement already satisfied: requests in /usr/local/lib/python3.10/dist-packages (from transformers==4.27.1->-r /content/ChatGLM-6B/requirements.txt (line 2)) (2.27.1)\n",
            "Collecting tokenizers!=0.11.3,<0.14,>=0.11.1 (from transformers==4.27.1->-r /content/ChatGLM-6B/requirements.txt (line 2))\n",
            "  Downloading tokenizers-0.13.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (7.8 MB)\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m7.8/7.8 MB\u001b[0m \u001b[31m88.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hRequirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.10/dist-packages (from transformers==4.27.1->-r /content/ChatGLM-6B/requirements.txt (line 2)) (4.65.0)\n",
            "Requirement already satisfied: typing-extensions in /usr/local/lib/python3.10/dist-packages (from torch>=1.10->-r /content/ChatGLM-6B/requirements.txt (line 4)) (4.5.0)\n",
            "Requirement already satisfied: sympy in /usr/local/lib/python3.10/dist-packages (from torch>=1.10->-r /content/ChatGLM-6B/requirements.txt (line 4)) (1.11.1)\n",
            "Requirement already satisfied: networkx in /usr/local/lib/python3.10/dist-packages (from torch>=1.10->-r /content/ChatGLM-6B/requirements.txt (line 4)) (3.1)\n",
            "Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/dist-packages (from torch>=1.10->-r /content/ChatGLM-6B/requirements.txt (line 4)) (3.1.2)\n",
            "Requirement already satisfied: triton==2.0.0 in /usr/local/lib/python3.10/dist-packages (from torch>=1.10->-r /content/ChatGLM-6B/requirements.txt (line 4)) (2.0.0)\n",
            "Requirement already satisfied: cmake in /usr/local/lib/python3.10/dist-packages (from triton==2.0.0->torch>=1.10->-r /content/ChatGLM-6B/requirements.txt (line 4)) (3.25.2)\n",
            "Requirement already satisfied: lit in /usr/local/lib/python3.10/dist-packages (from triton==2.0.0->torch>=1.10->-r /content/ChatGLM-6B/requirements.txt (line 4)) (16.0.5)\n",
            "Collecting aiofiles (from gradio->-r /content/ChatGLM-6B/requirements.txt (line 5))\n",
            "  Downloading aiofiles-23.1.0-py3-none-any.whl (14 kB)\n",
            "Collecting aiohttp (from gradio->-r /content/ChatGLM-6B/requirements.txt (line 5))\n",
            "  Downloading aiohttp-3.8.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.0 MB)\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.0/1.0 MB\u001b[0m \u001b[31m8.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hRequirement already satisfied: altair>=4.2.0 in /usr/local/lib/python3.10/dist-packages (from gradio->-r /content/ChatGLM-6B/requirements.txt (line 5)) (4.2.2)\n",
            "Collecting fastapi (from gradio->-r /content/ChatGLM-6B/requirements.txt (line 5))\n",
            "  Downloading fastapi-0.95.2-py3-none-any.whl (56 kB)\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m57.0/57.0 kB\u001b[0m \u001b[31m7.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hCollecting ffmpy (from gradio->-r /content/ChatGLM-6B/requirements.txt (line 5))\n",
            "  Downloading ffmpy-0.3.0.tar.gz (4.8 kB)\n",
            "  Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
            "Collecting gradio-client>=0.2.4 (from gradio->-r /content/ChatGLM-6B/requirements.txt (line 5))\n",
            "  Downloading gradio_client-0.2.5-py3-none-any.whl (288 kB)\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m288.1/288.1 kB\u001b[0m \u001b[31m38.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hCollecting httpx (from gradio->-r /content/ChatGLM-6B/requirements.txt (line 5))\n",
            "  Downloading httpx-0.24.1-py3-none-any.whl (75 kB)\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m75.4/75.4 kB\u001b[0m \u001b[31m11.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hRequirement already satisfied: markdown-it-py[linkify]>=2.0.0 in /usr/local/lib/python3.10/dist-packages (from gradio->-r /content/ChatGLM-6B/requirements.txt (line 5)) (2.2.0)\n",
            "Requirement already satisfied: markupsafe in /usr/local/lib/python3.10/dist-packages (from gradio->-r /content/ChatGLM-6B/requirements.txt (line 5)) (2.1.2)\n",
            "Requirement already satisfied: matplotlib in /usr/local/lib/python3.10/dist-packages (from gradio->-r /content/ChatGLM-6B/requirements.txt (line 5)) (3.7.1)\n",
            "Collecting mdit-py-plugins<=0.3.3 (from gradio->-r /content/ChatGLM-6B/requirements.txt (line 5))\n",
            "  Downloading mdit_py_plugins-0.3.3-py3-none-any.whl (50 kB)\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m50.5/50.5 kB\u001b[0m \u001b[31m7.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hCollecting orjson (from gradio->-r /content/ChatGLM-6B/requirements.txt (line 5))\n",
            "  Downloading orjson-3.8.14-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (136 kB)\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m136.6/136.6 kB\u001b[0m \u001b[31m19.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hRequirement already satisfied: pandas in /usr/local/lib/python3.10/dist-packages (from gradio->-r /content/ChatGLM-6B/requirements.txt (line 5)) (1.5.3)\n",
            "Requirement already satisfied: pillow in /usr/local/lib/python3.10/dist-packages (from gradio->-r /content/ChatGLM-6B/requirements.txt (line 5)) (8.4.0)\n",
            "Requirement already satisfied: pydantic in /usr/local/lib/python3.10/dist-packages (from gradio->-r /content/ChatGLM-6B/requirements.txt (line 5)) (1.10.7)\n",
            "Collecting pydub (from gradio->-r /content/ChatGLM-6B/requirements.txt (line 5))\n",
            "  Downloading pydub-0.25.1-py2.py3-none-any.whl (32 kB)\n",
            "Requirement already satisfied: pygments>=2.12.0 in /usr/local/lib/python3.10/dist-packages (from gradio->-r /content/ChatGLM-6B/requirements.txt (line 5)) (2.14.0)\n",
            "Collecting python-multipart (from gradio->-r /content/ChatGLM-6B/requirements.txt (line 5))\n",
            "  Downloading python_multipart-0.0.6-py3-none-any.whl (45 kB)\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m45.7/45.7 kB\u001b[0m \u001b[31m6.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hCollecting semantic-version (from gradio->-r /content/ChatGLM-6B/requirements.txt (line 5))\n",
            "  Downloading semantic_version-2.10.0-py2.py3-none-any.whl (15 kB)\n",
            "Collecting uvicorn>=0.14.0 (from gradio->-r /content/ChatGLM-6B/requirements.txt (line 5))\n",
            "  Downloading uvicorn-0.22.0-py3-none-any.whl (58 kB)\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m58.3/58.3 kB\u001b[0m \u001b[31m5.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hCollecting websockets>=10.0 (from gradio->-r /content/ChatGLM-6B/requirements.txt (line 5))\n",
            "  Downloading websockets-11.0.3-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (129 kB)\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m129.9/129.9 kB\u001b[0m \u001b[31m19.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hRequirement already satisfied: markdown in /usr/local/lib/python3.10/dist-packages (from mdtex2html->-r /content/ChatGLM-6B/requirements.txt (line 6)) (3.4.3)\n",
            "Collecting latex2mathml (from mdtex2html->-r /content/ChatGLM-6B/requirements.txt (line 6))\n",
            "  Downloading latex2mathml-3.76.0-py3-none-any.whl (73 kB)\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m73.4/73.4 kB\u001b[0m \u001b[31m10.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hRequirement already satisfied: psutil in /usr/local/lib/python3.10/dist-packages (from accelerate->-r /content/ChatGLM-6B/requirements.txt (line 8)) (5.9.5)\n",
            "Requirement already satisfied: entrypoints in /usr/local/lib/python3.10/dist-packages (from altair>=4.2.0->gradio->-r /content/ChatGLM-6B/requirements.txt (line 5)) (0.4)\n",
            "Requirement already satisfied: jsonschema>=3.0 in /usr/local/lib/python3.10/dist-packages (from altair>=4.2.0->gradio->-r /content/ChatGLM-6B/requirements.txt (line 5)) (4.3.3)\n",
            "Requirement already satisfied: toolz in /usr/local/lib/python3.10/dist-packages (from altair>=4.2.0->gradio->-r /content/ChatGLM-6B/requirements.txt (line 5)) (0.12.0)\n",
            "Requirement already satisfied: fsspec in /usr/local/lib/python3.10/dist-packages (from gradio-client>=0.2.4->gradio->-r /content/ChatGLM-6B/requirements.txt (line 5)) (2023.4.0)\n",
            "Requirement already satisfied: mdurl~=0.1 in /usr/local/lib/python3.10/dist-packages (from markdown-it-py[linkify]>=2.0.0->gradio->-r /content/ChatGLM-6B/requirements.txt (line 5)) (0.1.2)\n",
            "Collecting linkify-it-py<3,>=1 (from markdown-it-py[linkify]>=2.0.0->gradio->-r /content/ChatGLM-6B/requirements.txt (line 5))\n",
            "  Downloading linkify_it_py-2.0.2-py3-none-any.whl (19 kB)\n",
            "Requirement already satisfied: python-dateutil>=2.8.1 in /usr/local/lib/python3.10/dist-packages (from pandas->gradio->-r /content/ChatGLM-6B/requirements.txt (line 5)) (2.8.2)\n",
            "Requirement already satisfied: pytz>=2020.1 in /usr/local/lib/python3.10/dist-packages (from pandas->gradio->-r /content/ChatGLM-6B/requirements.txt (line 5)) (2022.7.1)\n",
            "Requirement already satisfied: click>=7.0 in /usr/local/lib/python3.10/dist-packages (from uvicorn>=0.14.0->gradio->-r /content/ChatGLM-6B/requirements.txt (line 5)) (8.1.3)\n",
            "Collecting h11>=0.8 (from uvicorn>=0.14.0->gradio->-r /content/ChatGLM-6B/requirements.txt (line 5))\n",
            "  Downloading h11-0.14.0-py3-none-any.whl (58 kB)\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m58.3/58.3 kB\u001b[0m \u001b[31m8.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hRequirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->gradio->-r /content/ChatGLM-6B/requirements.txt (line 5)) (23.1.0)\n",
            "Requirement already satisfied: charset-normalizer<4.0,>=2.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->gradio->-r /content/ChatGLM-6B/requirements.txt (line 5)) (2.0.12)\n",
            "Collecting multidict<7.0,>=4.5 (from aiohttp->gradio->-r /content/ChatGLM-6B/requirements.txt (line 5))\n",
            "  Downloading multidict-6.0.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (114 kB)\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m114.5/114.5 kB\u001b[0m \u001b[31m16.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hCollecting async-timeout<5.0,>=4.0.0a3 (from aiohttp->gradio->-r /content/ChatGLM-6B/requirements.txt (line 5))\n",
            "  Downloading async_timeout-4.0.2-py3-none-any.whl (5.8 kB)\n",
            "Collecting yarl<2.0,>=1.0 (from aiohttp->gradio->-r /content/ChatGLM-6B/requirements.txt (line 5))\n",
            "  Downloading yarl-1.9.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (268 kB)\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m268.8/268.8 kB\u001b[0m \u001b[31m33.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hCollecting frozenlist>=1.1.1 (from aiohttp->gradio->-r /content/ChatGLM-6B/requirements.txt (line 5))\n",
            "  Downloading frozenlist-1.3.3-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (149 kB)\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m149.6/149.6 kB\u001b[0m \u001b[31m18.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hCollecting aiosignal>=1.1.2 (from aiohttp->gradio->-r /content/ChatGLM-6B/requirements.txt (line 5))\n",
            "  Downloading aiosignal-1.3.1-py3-none-any.whl (7.6 kB)\n",
            "Collecting starlette<0.28.0,>=0.27.0 (from fastapi->gradio->-r /content/ChatGLM-6B/requirements.txt (line 5))\n",
            "  Downloading starlette-0.27.0-py3-none-any.whl (66 kB)\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m67.0/67.0 kB\u001b[0m \u001b[31m9.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hRequirement already satisfied: certifi in /usr/local/lib/python3.10/dist-packages (from httpx->gradio->-r /content/ChatGLM-6B/requirements.txt (line 5)) (2022.12.7)\n",
            "Collecting httpcore<0.18.0,>=0.15.0 (from httpx->gradio->-r /content/ChatGLM-6B/requirements.txt (line 5))\n",
            "  Downloading httpcore-0.17.2-py3-none-any.whl (72 kB)\n",
            "\u001b[2K     \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m72.5/72.5 kB\u001b[0m \u001b[31m10.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
            "\u001b[?25hRequirement already satisfied: idna in /usr/local/lib/python3.10/dist-packages (from httpx->gradio->-r /content/ChatGLM-6B/requirements.txt (line 5)) (3.4)\n",
            "Requirement already satisfied: sniffio in /usr/local/lib/python3.10/dist-packages (from httpx->gradio->-r /content/ChatGLM-6B/requirements.txt (line 5)) (1.3.0)\n",
            "Requirement already satisfied: contourpy>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib->gradio->-r /content/ChatGLM-6B/requirements.txt (line 5)) (1.0.7)\n",
            "Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.10/dist-packages (from matplotlib->gradio->-r /content/ChatGLM-6B/requirements.txt (line 5)) (0.11.0)\n",
            "Requirement already satisfied: fonttools>=4.22.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib->gradio->-r /content/ChatGLM-6B/requirements.txt (line 5)) (4.39.3)\n",
            "Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib->gradio->-r /content/ChatGLM-6B/requirements.txt (line 5)) (1.4.4)\n",
            "Requirement already satisfied: pyparsing>=2.3.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib->gradio->-r /content/ChatGLM-6B/requirements.txt (line 5)) (3.0.9)\n",
            "Requirement already satisfied: urllib3<1.27,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests->transformers==4.27.1->-r /content/ChatGLM-6B/requirements.txt (line 2)) (1.26.15)\n",
            "Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.10/dist-packages (from sympy->torch>=1.10->-r /content/ChatGLM-6B/requirements.txt (line 4)) (1.3.0)\n",
            "Requirement already satisfied: anyio<5.0,>=3.0 in /usr/local/lib/python3.10/dist-packages (from httpcore<0.18.0,>=0.15.0->httpx->gradio->-r /content/ChatGLM-6B/requirements.txt (line 5)) (3.6.2)\n",
            "Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in /usr/local/lib/python3.10/dist-packages (from jsonschema>=3.0->altair>=4.2.0->gradio->-r /content/ChatGLM-6B/requirements.txt (line 5)) (0.19.3)\n",
            "Collecting uc-micro-py (from linkify-it-py<3,>=1->markdown-it-py[linkify]>=2.0.0->gradio->-r /content/ChatGLM-6B/requirements.txt (line 5))\n",
            "  Downloading uc_micro_py-1.0.2-py3-none-any.whl (6.2 kB)\n",
            "Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.10/dist-packages (from python-dateutil>=2.8.1->pandas->gradio->-r /content/ChatGLM-6B/requirements.txt (line 5)) (1.16.0)\n",
            "Building wheels for collected packages: ffmpy\n",
            "  Building wheel for ffmpy (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
            "  Created wheel for ffmpy: filename=ffmpy-0.3.0-py3-none-any.whl size=4694 sha256=bffcbb27720d3fa54deccd4ee7e4931fb186747a5b3872dcbe9a2a44e63b9f5f\n",
            "  Stored in directory: /root/.cache/pip/wheels/0c/c2/0e/3b9c6845c6a4e35beb90910cc70d9ac9ab5d47402bd62af0df\n",
            "Successfully built ffmpy\n",
            "Installing collected packages: tokenizers, sentencepiece, pydub, ffmpy, cpm_kernels, websockets, uc-micro-py, semantic-version, python-multipart, orjson, multidict, latex2mathml, h11, frozenlist, async-timeout, aiofiles, yarl, uvicorn, starlette, mdtex2html, mdit-py-plugins, linkify-it-py, huggingface-hub, httpcore, aiosignal, transformers, httpx, fastapi, aiohttp, gradio-client, gradio, accelerate\n",
            "Successfully installed accelerate-0.19.0 aiofiles-23.1.0 aiohttp-3.8.4 aiosignal-1.3.1 async-timeout-4.0.2 cpm_kernels-1.0.11 fastapi-0.95.2 ffmpy-0.3.0 frozenlist-1.3.3 gradio-3.32.0 gradio-client-0.2.5 h11-0.14.0 httpcore-0.17.2 httpx-0.24.1 huggingface-hub-0.14.1 latex2mathml-3.76.0 linkify-it-py-2.0.2 mdit-py-plugins-0.3.3 mdtex2html-1.2.0 multidict-6.0.4 orjson-3.8.14 pydub-0.25.1 python-multipart-0.0.6 semantic-version-2.10.0 sentencepiece-0.1.99 starlette-0.27.0 tokenizers-0.13.3 transformers-4.27.1 uc-micro-py-1.0.2 uvicorn-0.22.0 websockets-11.0.3 yarl-1.9.2\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "# 2. Load the model\n",
        "Here we use the `chatglm-6b-int4` quantized model.\n",
        "> Both the default `chatglm-6b` model (unquantized FP16) and the int8 quantized model fail to load here because the runtime has only 12.7 GB of RAM."
      ],
      "metadata": {
        "id": "jwaWJTfvjYYu"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from transformers import AutoTokenizer, AutoModel\n",
        "tokenizer = AutoTokenizer.from_pretrained(\"THUDM/chatglm-6b-int4\", trust_remote_code=True)\n",
        "model = AutoModel.from_pretrained(\"THUDM/chatglm-6b-int4\", trust_remote_code=True).half().cuda()\n",
        "# The int8 model fails to load even without half(); although 12.7 GB of RAM is allocated,\n",
        "# actual usage seems to be capped at around 6 GB.\n",
        "# tokenizer = AutoTokenizer.from_pretrained(\"THUDM/chatglm-6b-int8\", trust_remote_code=True)\n",
        "# model = AutoModel.from_pretrained(\"THUDM/chatglm-6b-int8\", trust_remote_code=True).cuda()"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "5FxsMKT0LnTM",
        "outputId": "6ef2dcb7-0e08-49f9-e6a7-ad4793177792"
      },
      "execution_count": 1,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.\n",
            "Explicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.\n",
            "Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.\n"
          ]
        },
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "No compiled kernel found.\n",
            "Compiling kernels : /root/.cache/huggingface/modules/transformers_modules/THUDM/chatglm-6b-int4/02a065cf2797029c036a02cac30f1da1a9bc49a3/quantization_kernels.c\n",
            "Compiling gcc -O3 -fPIC -std=c99 /root/.cache/huggingface/modules/transformers_modules/THUDM/chatglm-6b-int4/02a065cf2797029c036a02cac30f1da1a9bc49a3/quantization_kernels.c -shared -o /root/.cache/huggingface/modules/transformers_modules/THUDM/chatglm-6b-int4/02a065cf2797029c036a02cac30f1da1a9bc49a3/quantization_kernels.so\n",
            "Load kernel : /root/.cache/huggingface/modules/transformers_modules/THUDM/chatglm-6b-int4/02a065cf2797029c036a02cac30f1da1a9bc49a3/quantization_kernels.so\n",
            "Using quantization cache\n",
            "Applying quantization to glm layers\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "# 3. Run inference\n",
        "## 3.1 First inference\n",
        "The first inference is a bit slow. The prompt is as simple as it gets: `你好` (\"Hello\")."
      ],
      "metadata": {
        "id": "rSpEttOImxJl"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "%%time\n",
        "model = model.eval()\n",
        "response, history = model.chat(tokenizer, \"你好\", history=[])\n",
        "print(response)"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "zlRKX43lM2uE",
        "outputId": "732f93cd-53a1-4a1a-a4c3-c56066d02b97"
      },
      "execution_count": 2,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "WARNING:transformers_modules.THUDM.chatglm-6b-int4.02a065cf2797029c036a02cac30f1da1a9bc49a3.modeling_chatglm:The dtype of attention mask (torch.int64) is not bool\n"
          ]
        },
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "你好👋！我是人工智能助手 ChatGLM-6B，很高兴见到你，欢迎问我任何问题。\n",
            "CPU times: user 8.11 s, sys: 843 ms, total: 8.95 s\n",
            "Wall time: 14.8 s\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "可以看到，首次运行推理花了大约15秒。\n",
        "\n",
        "## 3.2 再次推理\n",
        "然后我们再次运行相同提示词的推理："
      ],
      "metadata": {
        "id": "4vnR7bAsnG9T"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "%%time\n",
        "response, history = model.chat(tokenizer, \"你好\", history=[])\n",
        "print(response)"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "zmu38DL9NOIz",
        "outputId": "d0677265-1fe6-4564-bf16-530787a751a6"
      },
      "execution_count": 3,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "你好👋！我是人工智能助手 ChatGLM-6B，很高兴见到你，欢迎问我任何问题。\n",
            "CPU times: user 4.89 s, sys: 0 ns, total: 4.89 s\n",
            "Wall time: 4.92 s\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "快了不少，只要大约5秒，这个其实还是很慢。\n",
        "\n",
        "## 3.3 其他推理\n",
        "在这里我们尝试一下其他稍微复杂点的提示词:"
      ],
      "metadata": {
        "id": "PpobeDD8nVYK"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "%%time\n",
        "response, history = model.chat(tokenizer, \"桌子上有4个苹果, 小红吃了1个, 小刚拿走了2个, 还剩下几个苹果?\", history=[])\n",
        "print(response)"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "Mv1Yp_Zpnhp5",
        "outputId": "4570bd44-954b-47fa-88ed-54ac1aa9567b"
      },
      "execution_count": 5,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "还剩下3个苹果。\n",
            "CPU times: user 1.38 s, sys: 1.13 ms, total: 1.38 s\n",
            "Wall time: 1.5 s\n"
          ]
        }
      ]
    },
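    {
      "cell_type": "markdown",
      "source": [
        "The expected answer is easy to verify directly. A trivial deterministic check (added for comparison; not part of the original run):"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# 4 apples on the table, 1 eaten by 小红, 2 taken away by 小刚\n",
        "remaining = 4 - 1 - 2\n",
        "print(remaining)  # expected answer: 1"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },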
    {
      "cell_type": "markdown",
      "source": [
        "这种计算推理问题，ChatGLM-6B的表现不是很好。\n",
        "\n",
        "再尝试一个常识类问题:"
      ],
      "metadata": {
        "id": "CzUAhzPlnwYA"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "%%time\n",
        "response, history = model.chat(tokenizer, \"中国四大名著是哪四本书?\", history=[])\n",
        "print(response)"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "BS20qJB3n502",
        "outputId": "d43462db-f7b3-45a5-c6b7-9cd37f49eb45"
      },
      "execution_count": 6,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "中国四大名著是指《红楼梦》、《西游记》、《水浒传》和《三国演义》。这四本书都是中国文学史上的经典之作，被广泛传诵和阅读。\n",
            "CPU times: user 5.37 s, sys: 2.13 ms, total: 5.38 s\n",
            "Wall time: 5.36 s\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "这个还行，但有时也有说多错多的问题。\n",
        "\n",
        "## 3.4 信息提取\n",
        "再尝试一个做信息提取的例子:"
      ],
      "metadata": {
        "id": "KmUUOnY-oDvG"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "%%time\n",
        "content=\"\"\"本人是毕业于南七技校，性格开朗，待人真诚，对待工作认真负责，善于沟通、协调有较强的组织能力与团队精神;\n",
        "活泼开朗、乐观上进、有爱心并善于施教并行;上进心强、勤于学习能不断提高自身的能力与综合素质。\n",
        "在未来的工作中，我将以充沛的精力，刻苦钻研的精神来努力工作，稳定地提高自己的工作能力，与公司同步发展。\n",
        "目前我居住在合肥市云深路234号奋斗小区，我的手机号码是: 18012345678。也可以用邮箱联系我: testtest@test.com。\n",
        "\"\"\"\n",
        "prompt=\"\"\"从上文中，提取\"信息\"(keyword:content)，包括:\"手机号\"、\"邮箱\"、\"毕业院校\"、\"住址\"、\"自我评价\"等keyword及其content，输出json格式内容。\n",
        "\"\"\"\n",
        "input ='{}\\n\\n{}'.format(content,prompt)\n",
        "print(input)\n",
        "response, history = model.chat(tokenizer, input, history=[])\n",
        "print(response)"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "e6m_B56ONg1a",
        "outputId": "6142fc7f-9f99-4c91-8c01-9bff2f95498e"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "本人是毕业于南七技校，性格开朗，待人真诚，对待工作认真负责，善于沟通、协调有较强的组织能力与团队精神;\n",
            "活泼开朗、乐观上进、有爱心并善于施教并行;上进心强、勤于学习能不断提高自身的能力与综合素质。\n",
            "在未来的工作中，我将以充沛的精力，刻苦钻研的精神来努力工作，稳定地提高自己的工作能力，与公司同步发展。\n",
            "目前我居住在合肥市云深路234号奋斗小区，我的手机号码是: 18012345678。也可以用邮箱联系我: testtest@test.com。\n",
            "\n",
            "\n",
            "从上文中，提取\"信息\"(keyword:content)，包括:\"手机号\"、\"邮箱\"、\"毕业院校\"、\"住址\"、\"自我评价\"等keyword及其content，输出json格式内容。\n",
            "\n",
            "{  \n",
            " \"手机号\": 18012345678,  \n",
            " \"邮箱\": testtest@test.com,  \n",
            " \"毕业院校\": 南七技校，  \n",
            " \"住址\": 合肥市云深路234号奋斗小区，  \n",
            " \"自我评价\": 活泼开朗、乐观上进、有爱心并善于施教并行；上进心强、勤于学习能不断提高自身的能力与综合素质  \n",
            "}\n",
            "CPU times: user 17.6 s, sys: 2.12 ms, total: 17.6 s\n",
            "Wall time: 17.7 s\n"
          ]
        }
      ]
    },
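    {
      "cell_type": "markdown",
      "source": [
        "For rigid fields such as the phone number and email, a plain regex pass gives a deterministic cross-check on the model's extraction. A minimal sketch (the patterns below are illustrative assumptions, not production-grade validators):"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "import re\n",
        "\n",
        "# Cross-check the model's extraction against simple regexes over `content`\n",
        "phone = re.search(r\"1\\d{10}\", content).group()  # 11-digit CN mobile number\n",
        "email = re.search(r\"[\\w.+-]+@[\\w-]+(?:\\.[\\w-]+)+\", content).group()\n",
        "print(phone, email)  # 18012345678 testtest@test.com"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },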
    {
      "cell_type": "markdown",
      "source": [
        "这个效果还是不错的，这里可以看出来很明显的语境内学习能力。\n",
        "\n",
        "## 3.5 信息统计\n",
        "### 先来试一个信息收集统计"
      ],
      "metadata": {
        "id": "pf-HeVlXpr5l"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "%%time\n",
        "content=\"\"\"河马: 啦啦啦啦啦啦啦\n",
        "\n",
        "张三: 怎么回事？突然断电了？\n",
        "\n",
        "李四: 我家也断电了\n",
        "\n",
        "王五: +1\n",
        "\n",
        "傻大个: 没有啊，我家正常的啊\n",
        "\n",
        "胖君子: 啊这，好累。\n",
        "\n",
        "张三: @管家，什么情况？什么时候能来电? \n",
        "\n",
        "王五: 同问\n",
        "\n",
        "彼岸花: 我家也停电了\n",
        "\n",
        "兔子: +1\n",
        "\n",
        "毛熊: +1\n",
        "\n",
        "小鹰: 没停电的飘过。。。\n",
        "\n",
        "管家: 正在联系供电局，请大家少安勿躁。\n",
        "\n",
        "李四: 最近经常停电啊，世界末日要来了，大家出来嗨吧～～～\n",
        "\n",
        "屁屁: 还好我家没停。\n",
        "\n",
        "谁家大爷: 又停电了\n",
        "\n",
        "胡萝卜: +1\n",
        "\n",
        "管家: 正在联系供电局，请大家稍安勿躁。\n",
        "\n",
        "\"\"\"\n",
        "prompt='阅读上面的对话，请问有几家停电或断电了?'\n",
        "input ='{}\\n\\n{}'.format(content,prompt)\n",
        "print(input)\n",
        "response, history = model.chat(tokenizer, input, history=[])\n",
        "print()\n",
        "print(\"回答:\")\n",
        "print(response)"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "brQq-75O2i8Q",
        "outputId": "584f5bb3-77a3-4209-efc6-c21fff4368da"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "河马: 啦啦啦啦啦啦啦\n",
            "\n",
            "张三: 怎么回事？突然断电了？\n",
            "\n",
            "李四: 我家也断电了\n",
            "\n",
            "王五: +1\n",
            "\n",
            "傻大个: 没有啊，我家正常的啊\n",
            "\n",
            "胖君子: 啊这，好累。\n",
            "\n",
            "张三: @管家，什么情况？什么时候能来电? \n",
            "\n",
            "王五: 同问\n",
            "\n",
            "彼岸花: 我家也停电了\n",
            "\n",
            "兔子: +1\n",
            "\n",
            "毛熊: +1\n",
            "\n",
            "小鹰: 没停电的飘过。。。\n",
            "\n",
            "管家: 正在联系供电局，请大家少安勿躁。\n",
            "\n",
            "李四: 最近经常停电啊，世界末日要来了，大家出来嗨吧～～～\n",
            "\n",
            "屁屁: 还好我家没停。\n",
            "\n",
            "谁家大爷: 又停电了\n",
            "\n",
            "胡萝卜: +1\n",
            "\n",
            "管家: 正在联系供电局，请大家稍安勿躁。\n",
            "\n",
            "\n",
            "\n",
            "阅读上面的对话，请问有几家停电或断电了?\n",
            "\n",
            "回答:\n",
            "在这段对话中，有5个人提到了他们家里最近停电了。\n",
            "CPU times: user 2.93 s, sys: 11.1 ms, total: 2.95 s\n",
            "Wall time: 4.01 s\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "很遗憾，统计结果不正确，实际上应该是8家停电。多试几次会发现，这种统计对chatGLM来说还是太难了，偶尔能蒙对一次。\n",
        "\n",
        "### 再试一下只有一家停电的场景"
      ],
      "metadata": {
        "id": "WWgQiYEx3GTW"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "%%time\n",
        "content=\"\"\"河马: 啦啦啦啦啦啦啦\n",
        "\n",
        "张三: 怎么回事？突然断电了？\n",
        "\n",
        "李四: 我家正常\n",
        "\n",
        "王五: 正常+1\n",
        "\n",
        "傻大个: 没有啊，我家正常的啊\n",
        "\n",
        "胖君子: 啊这，好累。\n",
        "\n",
        "张三: @管家，什么情况？什么时候能来电? \n",
        "\n",
        "王五: 正常+1\n",
        "\n",
        "彼岸花: 我家正常\n",
        "\n",
        "兔子: 正常+1\n",
        "\n",
        "毛熊: 正常+1\n",
        "\n",
        "小鹰: 没停电的飘过。。。\n",
        "\n",
        "管家: 正在联系供电局，请大家少安勿躁。\n",
        "\n",
        "李四: 世界末日要来了，大家出来嗨吧～～～\n",
        "\n",
        "屁屁: 还好我家没停。\n",
        "\n",
        "谁家大爷: 我家没停电\n",
        "\n",
        "胡萝卜: 没停+1\n",
        "\n",
        "管家: 正在联系供电局，请大家稍安勿躁。\n",
        "\n",
        "\"\"\"\n",
        "prompt='阅读上面的对话，请问有几家停电或断电了?'\n",
        "input ='{}\\n\\n{}'.format(content,prompt)\n",
        "print(input)\n",
        "response, history = model.chat(tokenizer, input, history=[])\n",
        "print()\n",
        "print(\"回答:\")\n",
        "print(response)"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "VpX8Peef-Ekj",
        "outputId": "7c84b146-fcfb-4ba3-ed5c-cbf238cd6f55"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "河马: 啦啦啦啦啦啦啦\n",
            "\n",
            "张三: 怎么回事？突然断电了？\n",
            "\n",
            "李四: 我家正常\n",
            "\n",
            "王五: 正常+1\n",
            "\n",
            "傻大个: 没有啊，我家正常的啊\n",
            "\n",
            "胖君子: 啊这，好累。\n",
            "\n",
            "张三: @管家，什么情况？什么时候能来电? \n",
            "\n",
            "王五: 正常+1\n",
            "\n",
            "彼岸花: 我家正常\n",
            "\n",
            "兔子: 正常+1\n",
            "\n",
            "毛熊: 正常+1\n",
            "\n",
            "小鹰: 没停电的飘过。。。\n",
            "\n",
            "管家: 正在联系供电局，请大家少安勿躁。\n",
            "\n",
            "李四: 世界末日要来了，大家出来嗨吧～～～\n",
            "\n",
            "屁屁: 还好我家没停。\n",
            "\n",
            "谁家大爷: 我家没停电\n",
            "\n",
            "胡萝卜: 没停+1\n",
            "\n",
            "管家: 正在联系供电局，请大家稍安勿躁。\n",
            "\n",
            "\n",
            "\n",
            "阅读上面的对话，请问有几家停电或断电了?\n",
            "\n",
            "回答:\n",
            "根据对话中的信息，有几家停电或断电了：\n",
            "\n",
            "- 张三： 突然断电了？\n",
            "- 李四： 我家正常\n",
            "- 王五： 正常+1\n",
            "- 傻大个： 没有啊，我家正常的啊\n",
            "- 胖君子： 啊这，好累\n",
            "- 彼岸花： 我家正常\n",
            "- 兔子： 正常+1\n",
            "- 毛熊： 正常+1\n",
            "- 小鹰： 没停电的飘过\n",
            "- 管家： 正在联系供电局，请大家少安勿躁\n",
            "- 李四： 世界末日要来了，大家出来嗨吧～～～\n",
            "- 屁屁： 还好我家没停\n",
            "- 谁家大爷： 我家没停电\n",
            "- 胡萝卜： 没停+1\n",
            "\n",
            "因此，有4家停电或断电了。\n",
            "CPU times: user 33.6 s, sys: 88.8 ms, total: 33.7 s\n",
            "Wall time: 33.9 s\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "这个就更难了，结果实在离谱。\n",
        "\n",
        "### 如果提示词加入示例，再来看一下结果"
      ],
      "metadata": {
        "id": "g13S7wZf_Z_k"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "%%time\n",
        "content=\"\"\"请统计一段对话中停电的户数有多少。\n",
        "例1:\n",
        "```\n",
        "张三: 怎么回事？突然断电了？\n",
        "李四: 我家也断电了\n",
        "王五: +1\n",
        "傻大个: 没有啊，我家正常的啊\n",
        "胖君子: 啊这，好累。\n",
        "```\n",
        "应该统计出停电户数:3。\n",
        "\n",
        "例2：\n",
        "```\n",
        "张三: 怎么回事？突然断电了？\n",
        "李四: 我家没断电啊\n",
        "王五: 没断电+1\n",
        "傻大个: 我家也正常的啊\n",
        "胖君子: 正常+1\n",
        "```\n",
        "应该统计出停电户数:1。\n",
        "\n",
        "请按照同样的规则统计下面这段对话的停电户数:\n",
        "```\n",
        "河马: 啦啦啦啦啦啦啦\n",
        "小明: 怎么回事？突然断电了？\n",
        "小芳: 我家也断电了\n",
        "小华: +1\n",
        "阿达: 没有啊，我家正常的啊\n",
        "君莫笑: 啊这，好累。\n",
        "小明: @管家，什么情况？什么时候能来电? \n",
        "小华: 同问\n",
        "彼岸花: 我家也停电了\n",
        "鸽子: +1\n",
        "大熊: +1\n",
        "老鹰: 没停电的飘过。。。\n",
        "管家: 正在联系供电局，请大家少安勿躁。\n",
        "小芳: 最近经常停电啊，世界末日要来了，大家出来嗨吧～～～\n",
        "屁屁: 还好我家没停。\n",
        "谁家大爷: 没停电+1\n",
        "胡萝卜: 没停电+1\n",
        "管家: 正在联系供电局，请大家稍安勿躁。\n",
        "```\n",
        "\"\"\"\n",
        "print(content)\n",
        "response, history = model.chat(tokenizer, content, history=[])\n",
        "print()\n",
        "print(\"回答:\")\n",
        "print(response)"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "xOUlfnu7HECn",
        "outputId": "095d4c4b-d15d-434c-a690-18f5e4298ef1"
      },
      "execution_count": null,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stdout",
          "text": [
            "请统计一段对话中停电的户数有多少。\n",
            "例1:\n",
            "```\n",
            "张三: 怎么回事？突然断电了？\n",
            "李四: 我家也断电了\n",
            "王五: +1\n",
            "傻大个: 没有啊，我家正常的啊\n",
            "胖君子: 啊这，好累。\n",
            "```\n",
            "应该统计出停电户数:3。\n",
            "\n",
            "例2：\n",
            "```\n",
            "张三: 怎么回事？突然断电了？\n",
            "李四: 我家没断电啊\n",
            "王五: 没断电+1\n",
            "傻大个: 我家也正常的啊\n",
            "胖君子: 正常+1\n",
            "```\n",
            "应该统计出停电户数:1。\n",
            "\n",
            "请按照同样的规则统计下面这段对话的停电户数:\n",
            "```\n",
            "河马: 啦啦啦啦啦啦啦\n",
            "小明: 怎么回事？突然断电了？\n",
            "小芳: 我家也断电了\n",
            "小华: +1\n",
            "阿达: 没有啊，我家正常的啊\n",
            "君莫笑: 啊这，好累。\n",
            "小明: @管家，什么情况？什么时候能来电? \n",
            "小华: 同问\n",
            "彼岸花: 我家也停电了\n",
            "鸽子: +1\n",
            "大熊: +1\n",
            "老鹰: 没停电的飘过。。。\n",
            "管家: 正在联系供电局，请大家少安勿躁。\n",
            "小芳: 最近经常停电啊，世界末日要来了，大家出来嗨吧～～～\n",
            "屁屁: 还好我家没停。\n",
            "谁家大爷: 没停电+1\n",
            "胡萝卜: 没停电+1\n",
            "管家: 正在联系供电局，请大家稍安勿躁。\n",
            "```\n",
            "\n",
            "\n",
            "回答:\n",
            "根据上面的例子，我们可以得到以下统计结果：\n",
            "\n",
            "- 河马： 啦啦啦啦啦啦啦  \n",
            "- 小明： 怎么回事？突然断电了？  \n",
            "- 小芳： 我家也断电了  \n",
            "- 小华： +1  \n",
            "- 阿达： 没有啊，我家正常的啊  \n",
            "- 君莫笑： 啊这，好累  \n",
            "- 小明： @管家，什么情况？什么时候能来电？  \n",
            "- 小华： 同问  \n",
            "- 彼岸花： 我家也停电了  \n",
            "- 鸽子： +1  \n",
            "- 大熊： +1  \n",
            "- 老鹰： 没停电的飘过。。。  \n",
            "- 管家： 正在联系供电局，请大家少安勿躁。  \n",
            "- 小芳： 最近经常停电啊，世界末日要来了，大家出来嗨吧～～～  \n",
            "- 屁屁： 还好我家没停。  \n",
            "- 谁家大爷： 没停电+1  \n",
            "- 胡萝卜： 没停电+1  \n",
            "\n",
            "因此，这段对话中的停电户数为：3。\n",
            "CPU times: user 44.6 s, sys: 73.4 ms, total: 44.7 s\n",
            "Wall time: 44.9 s\n"
          ]
        }
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "最后这个依然不靠谱。\n",
        "但同样的提示词，最后这种带示例的，在new bing那里能得到正确结果，并且new bing会给出统计过程。"
      ],
      "metadata": {
        "id": "l-JGjNnoSFza"
      }
    }
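,
    {
      "cell_type": "markdown",
      "source": [
        "For comparison, this particular counting task is easy to solve deterministically in Python. A minimal sketch, assuming the rules implied by the prompt's examples: a line containing 停电/断电 without a negation (没/正常) is an outage report, a bare `+1` seconds the previous report, and each speaker is counted once:"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# Deterministic tally for the final example dialogue\n",
        "dialogue = \"\"\"河马: 啦啦啦啦啦啦啦\n",
        "小明: 怎么回事？突然断电了？\n",
        "小芳: 我家也断电了\n",
        "小华: +1\n",
        "阿达: 没有啊，我家正常的啊\n",
        "君莫笑: 啊这，好累。\n",
        "小明: @管家，什么情况？什么时候能来电?\n",
        "小华: 同问\n",
        "彼岸花: 我家也停电了\n",
        "鸽子: +1\n",
        "大熊: +1\n",
        "老鹰: 没停电的飘过。。。\n",
        "管家: 正在联系供电局，请大家少安勿躁。\n",
        "小芳: 最近经常停电啊，世界末日要来了，大家出来嗨吧～～～\n",
        "屁屁: 还好我家没停。\n",
        "谁家大爷: 没停电+1\n",
        "胡萝卜: 没停电+1\n",
        "管家: 正在联系供电局，请大家稍安勿躁。\"\"\"\n",
        "\n",
        "reported = set()\n",
        "last_was_outage = False\n",
        "for line in dialogue.splitlines():\n",
        "    speaker, _, msg = line.partition(\": \")\n",
        "    # Assumed rule: 停电/断电 without 没/正常 counts as an outage report\n",
        "    outage = (\"停电\" in msg or \"断电\" in msg) and \"没\" not in msg and \"正常\" not in msg\n",
        "    if outage:\n",
        "        reported.add(speaker)\n",
        "        last_was_outage = True\n",
        "    elif msg.strip() == \"+1\":\n",
        "        # A bare +1 seconds the previous outage report\n",
        "        if last_was_outage:\n",
        "            reported.add(speaker)\n",
        "    else:\n",
        "        last_was_outage = False\n",
        "\n",
        "print(len(reported))  # 6 households: 小明 小芳 小华 彼岸花 鸽子 大熊"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    }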
  ]
}