{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": [],
      "authorship_tag": "ABX9TyPPGaF0reGIbSvQS2PdBWeS",
      "include_colab_link": true
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    },
    "widgets": {
      "application/vnd.jupyter.widget-state+json": {
        "39f49d91903c4316b24202a606652a35": {
          "model_module": "@jupyter-widgets/controls",
          "model_name": "HBoxModel",
          "model_module_version": "1.5.0",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HBoxModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HBoxView",
            "box_style": "",
            "children": [
              "IPY_MODEL_61936c29935d4e24b708b3359d924567",
              "IPY_MODEL_0c3ddf85df4648869cb47648138f6f87",
              "IPY_MODEL_33f592beb9c14a0a9f763a64be0be333"
            ],
            "layout": "IPY_MODEL_f8414b1e20f44c7da14d8f4ab0885c68"
          }
        },
        "45decf2444ca43a7812950f8d7e47a86": {
          "model_module": "@jupyter-widgets/controls",
          "model_name": "HBoxModel",
          "model_module_version": "1.5.0",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HBoxModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HBoxView",
            "box_style": "",
            "children": [
              "IPY_MODEL_19de1f52ac5b4c4eb2cbb90cbfa7e36e",
              "IPY_MODEL_54722aaa4d914d6ab5fa1b39e49e553d",
              "IPY_MODEL_ce237c918e1d4cd189d5a2e682a7ccc8"
            ],
            "layout": "IPY_MODEL_7104467741014743a254d2c01c79af2e"
          }
        },
        "9fa0080d6720497dbf1413ab892c3fcb": {
          "model_module": "@jupyter-widgets/controls",
          "model_name": "HBoxModel",
          "model_module_version": "1.5.0",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HBoxModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HBoxView",
            "box_style": "",
            "children": [
              "IPY_MODEL_524e310f832b4443952615e0c3625698",
              "IPY_MODEL_c87a2dd8b8ab44eb9bf9df173bc78677",
              "IPY_MODEL_abced2aa2cd446a8b024b4a27ac7aefa"
            ],
            "layout": "IPY_MODEL_336dd257666a433382e5452355f1ae55"
          }
        },
        "1616ed9277af4202af0ee830641237d4": {
          "model_module": "@jupyter-widgets/controls",
          "model_name": "HBoxModel",
          "model_module_version": "1.5.0",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HBoxModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HBoxView",
            "box_style": "",
            "children": [
              "IPY_MODEL_59ed81bce89145239941cdd3a2233dc9",
              "IPY_MODEL_e6c068a6119645de8cd48d0387c0c8f8",
              "IPY_MODEL_ce22e02d05c84374a6fe2110e5a2dd7c"
            ],
            "layout": "IPY_MODEL_5503496378fe450a9983b00a3274f859"
          }
        },
        "d986fbfb12a84f69ae8c86f95c54f63f": {
          "model_module": "@jupyter-widgets/controls",
          "model_name": "HBoxModel",
          "model_module_version": "1.5.0",
          "state": {
            "_dom_classes": [],
            "_model_module": "@jupyter-widgets/controls",
            "_model_module_version": "1.5.0",
            "_model_name": "HBoxModel",
            "_view_count": null,
            "_view_module": "@jupyter-widgets/controls",
            "_view_module_version": "1.5.0",
            "_view_name": "HBoxView",
            "box_style": "",
            "children": [
              "IPY_MODEL_34dff38251da4906b06756d50c3d83bd",
              "IPY_MODEL_422087983b054cf699f78d71b5111519",
              "IPY_MODEL_810c59c209f54a1a84b4176ad0f88298"
            ],
            "layout": "IPY_MODEL_e2f9c4b063e34b2cac6c0f70ebaf854c"
          }
        }
      }
    }
  },
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {
        "id": "view-in-github",
        "colab_type": "text"
      },
      "source": [
        "<a href=\"https://colab.research.google.com/github/chenyu313/Colaboratory_note/blob/main/Bert_Note.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
      ]
    },
    {
      "cell_type": "markdown",
      "source": [
        "## Model description\n",
        "BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on raw texts only, with no humans labeling them in any way (which is why it can use so much publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:\n",
        "* Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.\n",
        "\n",
        "* Next sentence prediction (NSP): during pretraining, the model concatenates two masked sentences as input. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict whether the two sentences followed each other or not.\n",
        "\n",
        "In this way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: for instance, if you have a dataset of labeled sentences, you can train a standard classifier using the features produced by the BERT model as inputs."
      ],
      "metadata": {
        "id": "fPHTkVxj-nva"
      }
    },
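    {
      "cell_type": "markdown",
      "source": [
        "A minimal sketch of how NSP input pairs could be built, in pure Python. This is illustrative only, not the actual BERT preprocessing code; the toy `corpus` and the `make_nsp_pair` helper are hypothetical:"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "import random\n",
        "\n",
        "# Hypothetical toy corpus: a list of consecutive \"sentences\".\n",
        "corpus = [\"the cat sat\", \"on the mat\", \"dogs bark loudly\", \"birds can fly\"]\n",
        "\n",
        "def make_nsp_pair(corpus, i, rng):\n",
        "    \"\"\"Build one [CLS] A [SEP] B [SEP] input for next sentence prediction.\n",
        "    With probability 0.5, B is the sentence that actually follows A\n",
        "    (label IsNext); otherwise B is another random sentence (NotNext).\"\"\"\n",
        "    a = corpus[i]\n",
        "    if rng.random() < 0.5:\n",
        "        b, label = corpus[i + 1], \"IsNext\"\n",
        "    else:\n",
        "        b = rng.choice([s for j, s in enumerate(corpus) if j not in (i, i + 1)])\n",
        "        label = \"NotNext\"\n",
        "    return f\"[CLS] {a} [SEP] {b} [SEP]\", label\n",
        "\n",
        "rng = random.Random(0)\n",
        "text, label = make_nsp_pair(corpus, 0, rng)\n",
        "print(text, \"->\", label)"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },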
    {
      "cell_type": "markdown",
      "source": [
        "## Training data\n",
        "The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038 unpublished books, and on English Wikipedia (excluding lists, tables, and headers).\n",
        "\n",
        "### Training procedure\n",
        "* Preprocessing: the texts are tokenized using WordPiece with a vocabulary size of 30,000. The inputs of the model take the form:\n",
        "\n",
        "[CLS] Sentence A [SEP] Sentence B [SEP]  \n",
        "\n",
        "With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; otherwise, sentence B is another random sentence from the corpus. Note that what counts as a \"sentence\" here is a consecutive span of text, usually longer than a single sentence. The only constraint is that the combined length of the two \"sentences\" is less than 512 tokens.  \n",
        "\n",
        "The details of the masking procedure for each sentence are as follows:  \n",
        "1) 15% of the tokens are masked.  \n",
        "2) In 80% of the cases, the masked tokens are replaced by [MASK].  \n",
        "3) In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).  \n",
        "4) In the remaining 10% of the cases, the masked tokens are left as is.\n",
        "\n",
        "\n",
        "* Pretraining: the model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips in total) for one million steps with a batch size of 256. The sequence length was limited to 128 tokens for 90% of the steps and to 512 tokens for the remaining 10%. The optimizer used is Adam with a learning rate of 1e-4, β1 = 0.9 and β2 = 0.999, a weight decay of 0.01, learning rate warmup for 10,000 steps, and linear decay of the learning rate afterwards.\n",
        "\n",
        "* Evaluation results: when fine-tuned on downstream tasks, this model achieves the following results\n",
        "![image.png]()\n"
      ],
      "metadata": {
        "id": "lrn5bqlwIP93"
      }
    },
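    {
      "cell_type": "markdown",
      "source": [
        "The 80/10/10 masking procedure above can be sketched in pure Python. This is an illustrative approximation, not the actual BERT preprocessing code; the toy `VOCAB` and the `mask_tokens` helper are hypothetical:"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "import random\n",
        "\n",
        "VOCAB = [\"the\", \"cat\", \"sat\", \"on\", \"mat\", \"dog\", \"ran\"]  # toy vocabulary\n",
        "\n",
        "def mask_tokens(tokens, rng, mask_prob=0.15):\n",
        "    \"\"\"Select ~15% of positions; of those, replace 80% with [MASK],\n",
        "    10% with a random different token, and leave 10% unchanged.\"\"\"\n",
        "    out = list(tokens)\n",
        "    labels = {}  # position -> original token the model must predict\n",
        "    for i, tok in enumerate(tokens):\n",
        "        if rng.random() < mask_prob:\n",
        "            labels[i] = tok\n",
        "            r = rng.random()\n",
        "            if r < 0.8:  # 80% of masked positions become [MASK]\n",
        "                out[i] = \"[MASK]\"\n",
        "            elif r < 0.9:  # 10% become a random different token\n",
        "                out[i] = rng.choice([t for t in VOCAB if t != tok])\n",
        "            # remaining 10%: the original token is kept as is\n",
        "    return out, labels\n",
        "\n",
        "rng = random.Random(0)\n",
        "masked, labels = mask_tokens([\"the\", \"cat\", \"sat\", \"on\", \"the\", \"mat\"] * 10, rng)\n",
        "print(sum(1 for t in masked if t == \"[MASK]\"), \"positions replaced by [MASK]\")"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },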
    {
      "cell_type": "markdown",
      "source": [
        "## How to use"
      ],
      "metadata": {
        "id": "7DLGewPnE5Ov"
      }
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "i0k8XTJf-gnW"
      },
      "outputs": [],
      "source": [
        "!pip install transformers"
      ]
    },
    {
      "cell_type": "code",
      "source": [
        "from transformers import pipeline\n",
        "unmasker = pipeline('fill-mask', model='bert-base-cased')\n",
        "unmasker(\"Hello I'm a [MASK] model.\")\n"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/",
          "height": 459,
          "referenced_widgets": [
            "39f49d91903c4316b24202a606652a35",
            "45decf2444ca43a7812950f8d7e47a86",
            "9fa0080d6720497dbf1413ab892c3fcb",
            "1616ed9277af4202af0ee830641237d4",
            "d986fbfb12a84f69ae8c86f95c54f63f"
          ]
        },
        "id": "BpEqNZ3tFM1f",
        "outputId": "c2e6caf9-3eb2-4210-a59b-e8e7392ee5e9"
      },
      "execution_count": 2,
      "outputs": [
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "Downloading (…)lve/main/config.json:   0%|          | 0.00/570 [00:00<?, ?B/s]"
            ],
            "application/vnd.jupyter.widget-view+json": {
              "version_major": 2,
              "version_minor": 0,
              "model_id": "39f49d91903c4316b24202a606652a35"
            }
          },
          "metadata": {}
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "Downloading pytorch_model.bin:   0%|          | 0.00/436M [00:00<?, ?B/s]"
            ],
            "application/vnd.jupyter.widget-view+json": {
              "version_major": 2,
              "version_minor": 0,
              "model_id": "45decf2444ca43a7812950f8d7e47a86"
            }
          },
          "metadata": {}
        },
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias']\n",
            "- This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n",
            "- This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n"
          ]
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "Downloading (…)okenizer_config.json:   0%|          | 0.00/29.0 [00:00<?, ?B/s]"
            ],
            "application/vnd.jupyter.widget-view+json": {
              "version_major": 2,
              "version_minor": 0,
              "model_id": "9fa0080d6720497dbf1413ab892c3fcb"
            }
          },
          "metadata": {}
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "Downloading (…)solve/main/vocab.txt:   0%|          | 0.00/213k [00:00<?, ?B/s]"
            ],
            "application/vnd.jupyter.widget-view+json": {
              "version_major": 2,
              "version_minor": 0,
              "model_id": "1616ed9277af4202af0ee830641237d4"
            }
          },
          "metadata": {}
        },
        {
          "output_type": "display_data",
          "data": {
            "text/plain": [
              "Downloading (…)/main/tokenizer.json:   0%|          | 0.00/436k [00:00<?, ?B/s]"
            ],
            "application/vnd.jupyter.widget-view+json": {
              "version_major": 2,
              "version_minor": 0,
              "model_id": "d986fbfb12a84f69ae8c86f95c54f63f"
            }
          },
          "metadata": {}
        },
        {
          "output_type": "execute_result",
          "data": {
            "text/plain": [
              "[{'score': 0.09019182622432709,\n",
              "  'token': 4633,\n",
              "  'token_str': 'fashion',\n",
              "  'sequence': \"Hello I'm a fashion model.\"},\n",
              " {'score': 0.0635000690817833,\n",
              "  'token': 1207,\n",
              "  'token_str': 'new',\n",
              "  'sequence': \"Hello I'm a new model.\"},\n",
              " {'score': 0.06228196248412132,\n",
              "  'token': 2581,\n",
              "  'token_str': 'male',\n",
              "  'sequence': \"Hello I'm a male model.\"},\n",
              " {'score': 0.0441727377474308,\n",
              "  'token': 1848,\n",
              "  'token_str': 'professional',\n",
              "  'sequence': \"Hello I'm a professional model.\"},\n",
              " {'score': 0.03326152265071869,\n",
              "  'token': 7688,\n",
              "  'token_str': 'super',\n",
              "  'sequence': \"Hello I'm a super model.\"}]"
            ]
          },
          "metadata": {},
          "execution_count": 2
        }
      ]
    },
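    {
      "cell_type": "markdown",
      "source": [
        "The pipeline returns a list of candidate fills sorted by score, each a dict with `score`, `token`, `token_str`, and `sequence` keys. A small sketch of post-processing such a result (the abridged `predictions` list below mirrors the output above; it is sample data, not a live model call):"
      ],
      "metadata": {}
    },
    {
      "cell_type": "code",
      "source": [
        "# Abridged sample data shaped like the fill-mask pipeline output above.\n",
        "predictions = [\n",
        "    {\"score\": 0.0902, \"token_str\": \"fashion\", \"sequence\": \"Hello I'm a fashion model.\"},\n",
        "    {\"score\": 0.0635, \"token_str\": \"new\", \"sequence\": \"Hello I'm a new model.\"},\n",
        "    {\"score\": 0.0623, \"token_str\": \"male\", \"sequence\": \"Hello I'm a male model.\"},\n",
        "]\n",
        "\n",
        "# Pick the highest-scoring fill, and keep only fills above a threshold.\n",
        "best = max(predictions, key=lambda p: p[\"score\"])\n",
        "confident = [p[\"token_str\"] for p in predictions if p[\"score\"] > 0.063]\n",
        "print(best[\"sequence\"], confident)"
      ],
      "metadata": {},
      "execution_count": null,
      "outputs": []
    },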
    {
      "cell_type": "markdown",
      "source": [
        "Here is how to use this model to get the features of a given text in PyTorch:"
      ],
      "metadata": {
        "id": "3xLHGWbwGXcP"
      }
    },
    {
      "cell_type": "code",
      "source": [
        "from transformers import BertTokenizer, BertModel\n",
        "tokenizer = BertTokenizer.from_pretrained('bert-base-cased')\n",
        "model = BertModel.from_pretrained(\"bert-base-cased\")\n",
        "text = \"Replace me by any text you'd like.\"\n",
        "encoded_input = tokenizer(text, return_tensors='pt')\n",
        "output = model(**encoded_input)\n"
      ],
      "metadata": {
        "colab": {
          "base_uri": "https://localhost:8080/"
        },
        "id": "TXeUwyovGcWA",
        "outputId": "fc716152-79b6-45fa-b3f8-a2812d4871b2"
      },
      "execution_count": 6,
      "outputs": [
        {
          "output_type": "stream",
          "name": "stderr",
          "text": [
            "Some weights of the model checkpoint at bert-base-cased were not used when initializing BertModel: ['cls.predictions.transform.LayerNorm.bias', 'cls.seq_relationship.bias', 'cls.predictions.decoder.weight', 'cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.seq_relationship.weight', 'cls.predictions.transform.dense.bias']\n",
            "- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n",
            "- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n"
          ]
        }
      ]
    }
  ]
}