{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "08e09d48",
   "metadata": {},
   "source": [
    "# BERT Based NER Tutorial\n",
    "> Tutorial author: Xin Xu (<xxucs@zju.edu.cn>)\n",
    "\n",
    "In this tutorial, we use `BERT` to recognize named entities. We hope this tutorial can help you understand the process of named entity recognition.\n",
    "\n",
    "This tutorial uses `Python3`."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "44cc3a6d",
   "metadata": {},
   "source": [
    "## NER\n",
    "**Named-entity recognition** (also known as named entity identification, entity chunking, and entity extraction) is a subtask of information extraction that seeks to locate and classify named entities mentioned in unstructured text into pre-defined categories such as person names, organizations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f13b1128",
   "metadata": {},
   "source": [
    "## Dataset\n",
    "In this demo, we use the [**People's Daily (人民日报) dataset**](https://github.com/OYE93/Chinese-NLP-Corpus/tree/master/NER/People's%20Daily), an NER dataset annotated with three types of named entities: persons (PER), locations (LOC), and organizations (ORG). Tags follow the BIO scheme: `B-X` marks the first character of an entity of type X, `I-X` a character inside it, and `O` a character outside any entity.\n",
    "\n",
    "| Word | Named entity tag |\n",
    "| :--: | :--------------: |\n",
    "|  早  |        O         |\n",
    "|  在  |        O         |\n",
    "|  1   |        O         |\n",
    "|  9   |        O         |\n",
    "|  7   |        O         |\n",
    "|  5   |        O         |\n",
    "|  年  |        O         |\n",
    "|  ，  |        O         |\n",
    "|  张  |      B-PER       |\n",
    "|  鸿  |      I-PER       |\n",
    "|  飞  |      I-PER       |\n",
    "|  就  |        O         |\n",
    "|  有  |        O         |\n",
    "|  《  |        O         |\n",
    "|  草  |        O         |\n",
    "|  原  |        O         |\n",
    "|  新  |        O         |\n",
    "|  医  |        O         |\n",
    "|  》  |        O         |\n",
    "|  赴  |        O         |\n",
    "|  法  |      B-LOC       |\n",
    "|  展  |        O         |\n",
    "|  览  |        O         |\n",
    "|  ，  |        O         |\n",
    "|  为  |        O         |\n",
    "|  我  |        O         |\n",
    "|  国  |        O         |\n",
    "|  驻  |      B-ORG       |\n",
    "|  法  |      I-ORG       |\n",
    "|  使  |      I-ORG       |\n",
    "|  馆  |      I-ORG       |\n",
    "|  收  |        O         |\n",
    "|  藏  |        O         |\n",
    "|  。  |        O         |\n",
    "\n",
    "\n",
    "- train.txt: It contains 20,864 sentences, including 979,180 named entity tags.\n",
    "- valid.txt: It contains 2,318 sentences, including 109,870 named entity tags.\n",
    "- test.txt: It contains 4,636 sentences, including 219,197 named entity tags."
   ]
  },
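  {
   "cell_type": "markdown",
   "id": "a1f2e3d4",
   "metadata": {},
   "source": [
    "The BIO tags in the table can be decoded back into entity spans. A minimal pure-Python sketch (the helper `bio_to_spans` is our own illustration, not part of DeepKE):\n",
    "\n",
    "```python\n",
    "def bio_to_spans(chars, tags):\n",
    "    # Collect (entity_type, text) pairs from character-level BIO tags.\n",
    "    spans, buf, cur = [], [], None\n",
    "    for ch, tag in zip(chars, tags):\n",
    "        if tag.startswith('B-'):\n",
    "            if cur:\n",
    "                spans.append((cur, ''.join(buf)))\n",
    "            cur, buf = tag[2:], [ch]\n",
    "        elif tag.startswith('I-') and cur == tag[2:]:\n",
    "            buf.append(ch)\n",
    "        else:\n",
    "            if cur:\n",
    "                spans.append((cur, ''.join(buf)))\n",
    "            cur, buf = None, []\n",
    "    if cur:\n",
    "        spans.append((cur, ''.join(buf)))\n",
    "    return spans\n",
    "\n",
    "chars = list('张鸿飞赴法')\n",
    "tags = ['B-PER', 'I-PER', 'I-PER', 'O', 'B-LOC']\n",
    "print(bio_to_spans(chars, tags))  # [('PER', '张鸿飞'), ('LOC', '法')]\n",
    "```"
   ]
  },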
  {
   "cell_type": "markdown",
   "id": "47715483",
   "metadata": {},
   "source": [
    "## BERT\n",
    "[**Bidirectional Encoder Representations from Transformers (BERT)**](https://github.com/google-research/bert) is a Transformer-encoder language model pre-trained with masked language modeling and next-sentence prediction. Fine-tuning it with a token-classification head on top of the final hidden states, as we do below, is a strong baseline for NER.\n",
    "\n",
    "![BERT](img/BERT.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c3b0cf3f",
   "metadata": {},
   "source": [
    "## Prepare the runtime environment"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ddb0f3e4",
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install deepke\n",
    "!wget 120.27.214.45/Data/ner/standard/data.tar.gz\n",
    "!tar -xzvf data.tar.gz"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "83a346b5",
   "metadata": {},
   "source": [
    "## Import modules"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e21416ae",
   "metadata": {},
   "outputs": [],
   "source": [
    "import json\n",
    "import logging\n",
    "import os\n",
    "\n",
    "import random\n",
    "import numpy as np\n",
    "import torch\n",
    "import torch.nn.functional as F\n",
    "from transformers import AdamW, BertConfig, BertForTokenClassification, BertTokenizer, get_linear_schedule_with_warmup\n",
    "from torch import nn\n",
    "from torch.utils.data import DataLoader, RandomSampler, SequentialSampler, TensorDataset\n",
    "from tqdm import tqdm, trange\n",
    "from seqeval.metrics import classification_report\n",
    "import hydra\n",
    "from hydra import utils\n",
    "from deepke.name_entity_re.standard import *\n",
    "\n",
    "import wandb\n",
    "\n",
    "\n",
    "logging.basicConfig(format='%(asctime)s - %(levelname)s - %(name)s - %(message)s',\n",
    "                    datefmt='%m/%d/%Y %H:%M:%S',\n",
    "                    level=logging.INFO)\n",
    "logger = logging.getLogger(__name__)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "029e661b",
   "metadata": {},
   "source": [
    "## Configure model parameters"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2178ffdf",
   "metadata": {},
   "outputs": [],
   "source": [
    "class Config(object):\n",
    "    adam_epsilon = 1e-8\n",
    "    bert_model = \"bert-base-chinese\"    # pre-trained weights; any BERT checkpoint path also works\n",
    "    data_dir = \"data\"\n",
    "    do_eval = True\n",
    "    do_lower_case = True            # lower-case the input when tokenizing\n",
    "    do_train = True\n",
    "    eval_batch_size = 8\n",
    "    eval_on = \"dev\"\n",
    "    gpu_id = 1\n",
    "    gradient_accumulation_steps = 1\n",
    "    learning_rate = 1e-3\n",
    "    max_grad_norm = 1.0             # gradient clipping threshold\n",
    "    max_seq_length = 128            # maximum input length after tokenization\n",
    "    num_train_epochs = 3            # the number of training epochs\n",
    "    output_dir = \"checkpoints\"\n",
    "    seed = 42\n",
    "    task_name = \"ner\"\n",
    "    train_batch_size = 128\n",
    "    use_gpu = True                  # use gpu or not\n",
    "    warmup_proportion = 0.1\n",
    "    weight_decay = 0.01\n",
    "\n",
    "    # For StepLR Optimizer\n",
    "    lr_step = 5\n",
    "    lr_gamma = 0.8\n",
    "    beta1 = 0.9\n",
    "    beta2 = 0.999\n",
    "\n",
    "    # NER entity types (replace with your own tag set for a custom dataset)\n",
    "    labels = ['LOC', 'ORG', 'PER']\n",
    "    # labels=['YAS','TOJ', 'NGS', 'QCV', 'OKB', 'BQF', 'CAR', 'ZFM', 'EMT', 'UER', 'QEE', 'UFT', 'GJS', 'SVA', 'ANO', 'KEJ', 'ZDI', 'CAT', 'GCK', 'FQK', 'BAK', 'RET', 'QZP', 'QAQ', 'ZRE', 'TDZ', 'CVC', 'PMN']\n",
    "\n",
    "    use_multi_gpu = False\n",
    "    text = \"秦始皇兵马俑位于陕西省西安市，1961年被国务院公布为第一批全国重点文物保护单位，是世界八大奇迹之一。\"\n",
    "\n",
    "cfg = Config()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5a67afe9",
   "metadata": {},
   "source": [
    "## Prepare the model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f45cc648",
   "metadata": {},
   "outputs": [],
   "source": [
    "class TrainNer(BertForTokenClassification):\n",
    "\n",
    "    def forward(\n",
    "        self,\n",
    "        input_ids,\n",
    "        token_type_ids=None,\n",
    "        attention_mask=None,\n",
    "        labels=None,\n",
    "        valid_ids=None,\n",
    "        attention_mask_label=None,\n",
    "        device=None\n",
    "    ):\n",
    "        # Parameter order matches the call sites below: model(input_ids, segment_ids, input_mask, ...)\n",
    "        sequence_output = self.bert(input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)[0]\n",
    "        batch_size, max_len, feat_dim = sequence_output.shape\n",
    "        valid_output = torch.zeros(batch_size, max_len, feat_dim, dtype=torch.float32, device=device)\n",
    "        # Compact the hidden states of valid positions (first sub-token of each input token) to the front\n",
    "        for i in range(batch_size):\n",
    "            jj = -1\n",
    "            for j in range(max_len):\n",
    "                if valid_ids[i][j].item() == 1:\n",
    "                    jj += 1\n",
    "                    valid_output[i][jj] = sequence_output[i][j]\n",
    "        sequence_output = self.dropout(valid_output)\n",
    "        logits = self.classifier(sequence_output)\n",
    "\n",
    "        if labels is not None:\n",
    "            loss_fct = nn.CrossEntropyLoss(ignore_index=0)\n",
    "            if attention_mask_label is not None:\n",
    "                active_loss = attention_mask_label.view(-1) == 1\n",
    "                active_logits = logits.view(-1, self.num_labels)[active_loss]\n",
    "                active_labels = labels.view(-1)[active_loss]\n",
    "                loss = loss_fct(active_logits, active_labels)\n",
    "            else:\n",
    "                loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))\n",
    "            return loss\n",
    "        else:\n",
    "            return logits"
   ]
  },
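  {
   "cell_type": "markdown",
   "id": "b2c3d4e5",
   "metadata": {},
   "source": [
    "WordPiece tokenization can split one input token into several sub-tokens; `valid_ids` marks the first sub-token of each original token, and the nested loop in `forward` copies only those hidden states to the front of `valid_output`, so the classifier sees one vector per original token. A toy illustration of that compaction with plain lists (shapes and values made up):\n",
    "\n",
    "```python\n",
    "# One sequence, 5 positions, 2-dim 'hidden states'; positions 0, 1 and 3 are valid.\n",
    "sequence_output = [[1, 1], [2, 2], [3, 3], [4, 4], [5, 5]]\n",
    "valid_ids = [1, 1, 0, 1, 0]\n",
    "\n",
    "valid_output = [[0, 0] for _ in sequence_output]\n",
    "jj = -1\n",
    "for j, v in enumerate(valid_ids):\n",
    "    if v == 1:\n",
    "        jj += 1\n",
    "        valid_output[jj] = sequence_output[j]  # valid states move to the front\n",
    "\n",
    "print(valid_output)  # [[1, 1], [2, 2], [4, 4], [0, 0], [0, 0]]\n",
    "```"
   ]
  },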
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9ed1c3dd",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Use gpu or not\n",
    "USE_MULTI_GPU = cfg.use_multi_gpu\n",
    "if USE_MULTI_GPU and torch.cuda.device_count() > 1:\n",
    "    MULTI_GPU = True\n",
    "    device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
    "    n_gpu = torch.cuda.device_count()\n",
    "else:\n",
    "    MULTI_GPU = False\n",
    "if not MULTI_GPU:\n",
    "    n_gpu = 1\n",
    "    if cfg.use_gpu and torch.cuda.is_available():\n",
    "        device = torch.device('cuda', cfg.gpu_id)\n",
    "    else:\n",
    "        device = torch.device('cpu')\n",
    "\n",
    "if cfg.gradient_accumulation_steps < 1:\n",
    "    raise ValueError(\"Invalid gradient_accumulation_steps parameter: {}, should be >= 1\".format(cfg.gradient_accumulation_steps))\n",
    "\n",
    "cfg.train_batch_size = cfg.train_batch_size // cfg.gradient_accumulation_steps\n",
    "\n",
    "random.seed(cfg.seed)\n",
    "np.random.seed(cfg.seed)\n",
    "torch.manual_seed(cfg.seed)\n",
    "\n",
    "if not cfg.do_train and not cfg.do_eval:\n",
    "    raise ValueError(\"At least one of `do_train` or `do_eval` must be True.\")\n",
    "\n",
    "# Checkpoints\n",
    "if os.path.exists(os.path.join(utils.get_original_cwd(), cfg.output_dir)) and os.listdir(os.path.join(utils.get_original_cwd(), cfg.output_dir)) and cfg.do_train:\n",
    "    raise ValueError(\"Output directory ({}) already exists and is not empty.\".format(os.path.join(utils.get_original_cwd(), cfg.output_dir)))\n",
    "if not os.path.exists(os.path.join(utils.get_original_cwd(), cfg.output_dir)):\n",
    "    os.makedirs(os.path.join(utils.get_original_cwd(), cfg.output_dir))\n",
    "\n",
    "# Preprocess the input dataset\n",
    "processor = NerProcessor()\n",
    "label_list = processor.get_labels(cfg)\n",
    "num_labels = len(label_list) + 1\n",
    "\n",
    "# Prepare the model\n",
    "tokenizer = BertTokenizer.from_pretrained(cfg.bert_model, do_lower_case=cfg.do_lower_case)\n",
    "\n",
    "train_examples = None\n",
    "num_train_optimization_steps = 0\n",
    "if cfg.do_train:\n",
    "    train_examples = processor.get_train_examples(os.path.join(utils.get_original_cwd(), cfg.data_dir))\n",
    "    num_train_optimization_steps = int(len(train_examples) / cfg.train_batch_size / cfg.gradient_accumulation_steps) * cfg.num_train_epochs\n",
    "\n",
    "config = BertConfig.from_pretrained(cfg.bert_model, num_labels=num_labels, finetuning_task=cfg.task_name)\n",
    "model = TrainNer.from_pretrained(cfg.bert_model, from_tf=False, config=config)\n",
    "if n_gpu > 1:\n",
    "    model = torch.nn.DataParallel(model)\n",
    "model.to(device)\n",
    "\n",
    "param_optimizer = list(model.named_parameters())\n",
    "no_decay = ['bias','LayerNorm.weight']\n",
    "optimizer_grouped_parameters = [\n",
    "    {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay': cfg.weight_decay},\n",
    "    {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}\n",
    "    ]\n",
    "warmup_steps = int(cfg.warmup_proportion * num_train_optimization_steps)\n",
    "optimizer = AdamW(optimizer_grouped_parameters, lr=cfg.learning_rate, eps=cfg.adam_epsilon)\n",
    "scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=warmup_steps, num_training_steps=num_train_optimization_steps)\n",
    "global_step = 0\n",
    "nb_tr_steps = 0\n",
    "tr_loss = 0\n",
    "label_map = {i : label for i, label in enumerate(label_list,1)}"
   ]
  },
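  {
   "cell_type": "markdown",
   "id": "c3d4e5f6",
   "metadata": {},
   "source": [
    "`get_linear_schedule_with_warmup` ramps the learning rate linearly from 0 up to `cfg.learning_rate` over the first `warmup_steps` updates, then decays it linearly back to 0 at `num_training_steps`. The multiplier it applies can be reproduced in a few lines (our own re-implementation for illustration, not the `transformers` source):\n",
    "\n",
    "```python\n",
    "def linear_warmup_factor(step, warmup_steps, total_steps):\n",
    "    # Fraction of the base learning rate used at a given optimizer step.\n",
    "    if step < warmup_steps:\n",
    "        return step / max(1, warmup_steps)\n",
    "    return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))\n",
    "\n",
    "total, warmup = 100, 10\n",
    "print(linear_warmup_factor(5, warmup, total))   # 0.5  (halfway through warmup)\n",
    "print(linear_warmup_factor(10, warmup, total))  # 1.0  (warmup finished)\n",
    "print(linear_warmup_factor(55, warmup, total))  # 0.5  (halfway through decay)\n",
    "```"
   ]
  },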
  {
   "cell_type": "markdown",
   "id": "4bd6d3c3",
   "metadata": {},
   "source": [
    "## Train"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "95e37ce2",
   "metadata": {},
   "outputs": [],
   "source": [
    "if cfg.do_train:\n",
    "    wandb.init(project='bert-ner-tutorial')  # wandb.log below requires an active run; the project name is arbitrary\n",
    "    train_features = convert_examples_to_features(train_examples, label_list, cfg.max_seq_length, tokenizer)\n",
    "    all_input_ids = torch.tensor([f.input_ids for f in train_features], dtype=torch.long)\n",
    "    all_input_mask = torch.tensor([f.input_mask for f in train_features], dtype=torch.long)\n",
    "    all_segment_ids = torch.tensor([f.segment_ids for f in train_features], dtype=torch.long)\n",
    "    all_label_ids = torch.tensor([f.label_id for f in train_features], dtype=torch.long)\n",
    "    all_valid_ids = torch.tensor([f.valid_ids for f in train_features], dtype=torch.long)\n",
    "    all_lmask_ids = torch.tensor([f.label_mask for f in train_features], dtype=torch.long)\n",
    "    train_data = TensorDataset(all_input_ids, all_input_mask, all_segment_ids, all_label_ids,all_valid_ids,all_lmask_ids)\n",
    "    train_sampler = RandomSampler(train_data)\n",
    "    \n",
    "    train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=cfg.train_batch_size * n_gpu)\n",
    "\n",
    "    model.train()\n",
    "\n",
    "    for _ in trange(int(cfg.num_train_epochs), desc=\"Epoch\"):\n",
    "        tr_loss = 0\n",
    "        nb_tr_examples, nb_tr_steps = 0, 0\n",
    "        for step, batch in enumerate(tqdm(train_dataloader, desc=\"Iteration\")):\n",
    "            batch = tuple(t.to(device) for t in batch)\n",
    "            input_ids, input_mask, segment_ids, label_ids, valid_ids,l_mask = batch\n",
    "            loss = model(input_ids, segment_ids, input_mask, label_ids,valid_ids,l_mask,device)\n",
    "            if n_gpu > 1:\n",
    "                loss = loss.mean()\n",
    "            \n",
    "            if cfg.gradient_accumulation_steps > 1:\n",
    "                loss = loss / cfg.gradient_accumulation_steps\n",
    "\n",
    "            loss.backward()\n",
    "            torch.nn.utils.clip_grad_norm_(model.parameters(), cfg.max_grad_norm)\n",
    "\n",
    "            tr_loss += loss.item()\n",
    "            nb_tr_examples += input_ids.size(0)\n",
    "            nb_tr_steps += 1\n",
    "            if (step + 1) % cfg.gradient_accumulation_steps == 0:\n",
    "                optimizer.step()\n",
    "                scheduler.step()  # Update learning rate schedule\n",
    "                model.zero_grad()\n",
    "                global_step += 1\n",
    "        wandb.log({\n",
    "            \"train_loss\":tr_loss/nb_tr_steps\n",
    "        })\n",
    "    # Save a trained model and the associated configuration\n",
    "    model_to_save = model.module if hasattr(model, 'module') else model  # Unwrap DataParallel so only the model itself is saved\n",
    "    model_to_save.save_pretrained(os.path.join(utils.get_original_cwd(), cfg.output_dir))\n",
    "    tokenizer.save_pretrained(os.path.join(utils.get_original_cwd(), cfg.output_dir))\n",
    "    label_map = {i : label for i, label in enumerate(label_list,1)}\n",
    "    model_config = {\"bert_model\":cfg.bert_model,\"do_lower\":cfg.do_lower_case,\"max_seq_length\":cfg.max_seq_length,\"num_labels\":len(label_list)+1,\"label_map\":label_map}\n",
    "    json.dump(model_config,open(os.path.join(utils.get_original_cwd(), cfg.output_dir,\"model_config.json\"),\"w\"))\n",
    "    # Load a trained model and config that you have fine-tuned\n",
    "else:\n",
    "    # Load a trained model and vocabulary that you have fine-tuned\n",
    "    model = TrainNer.from_pretrained(os.path.join(utils.get_original_cwd(), cfg.output_dir))\n",
    "    tokenizer = BertTokenizer.from_pretrained(os.path.join(utils.get_original_cwd(), cfg.output_dir), do_lower_case=cfg.do_lower_case)\n",
    "\n",
    "model.to(device)"
   ]
  },
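  {
   "cell_type": "markdown",
   "id": "d4e5f6a7",
   "metadata": {},
   "source": [
    "With `gradient_accumulation_steps > 1`, each mini-batch loss is divided by the number of accumulation steps and `optimizer.step()` runs only every N mini-batches, so the summed gradients match a single pass over a batch N times larger. The arithmetic behind that scaling (loss values made up):\n",
    "\n",
    "```python\n",
    "micro_losses = [0.9, 1.1, 1.3, 0.7]  # per-micro-batch mean losses (made-up numbers)\n",
    "accum = len(micro_losses)\n",
    "\n",
    "# What the loop does: scale each loss by 1/accum before backward(),\n",
    "# so the accumulated total equals the mean loss of the combined batch.\n",
    "accumulated = sum(loss / accum for loss in micro_losses)\n",
    "combined = sum(micro_losses) / len(micro_losses)\n",
    "print(abs(accumulated - combined) < 1e-12)  # True\n",
    "```"
   ]
  },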
  {
   "cell_type": "markdown",
   "id": "8a5853c4",
   "metadata": {},
   "source": [
    "## Evaluate"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5cf1972a",
   "metadata": {},
   "outputs": [],
   "source": [
    "if cfg.do_eval:\n",
    "    if cfg.eval_on == \"dev\":\n",
    "        eval_examples = processor.get_dev_examples(os.path.join(utils.get_original_cwd(), cfg.data_dir))\n",
    "    elif cfg.eval_on == \"test\":\n",
    "        eval_examples = processor.get_test_examples(os.path.join(utils.get_original_cwd(), cfg.data_dir))\n",
    "    else:\n",
    "        raise ValueError(\"eval on dev or test set only\")\n",
    "    eval_features = convert_examples_to_features(eval_examples, label_list, cfg.max_seq_length, tokenizer)\n",
    "    all_input_ids = torch.tensor([f.input_ids for f in eval_features], dtype=torch.long)\n",
    "    all_input_mask = torch.tensor([f.input_mask for f in eval_features], dtype=torch.long)\n",
    "    all_segment_ids = torch.tensor([f.segment_ids for f in eval_features], dtype=torch.long)\n",
    "    all_label_ids = torch.tensor([f.label_id for f in eval_features], dtype=torch.long)\n",
    "    all_valid_ids = torch.tensor([f.valid_ids for f in eval_features], dtype=torch.long)\n",
    "    all_lmask_ids = torch.tensor([f.label_mask for f in eval_features], dtype=torch.long)\n",
    "    eval_data = TensorDataset(all_input_ids, all_input_mask, all_segment_ids, all_label_ids,all_valid_ids,all_lmask_ids)\n",
    "    # Run prediction for full data\n",
    "    eval_sampler = SequentialSampler(eval_data)\n",
    "    eval_dataloader = DataLoader(eval_data, sampler=eval_sampler, batch_size=cfg.eval_batch_size * n_gpu)\n",
    "    model.eval()\n",
    "    eval_loss, eval_accuracy = 0, 0\n",
    "    nb_eval_steps, nb_eval_examples = 0, 0\n",
    "    y_true = []\n",
    "    y_pred = []\n",
    "    label_map = {i : label for i, label in enumerate(label_list,1)}\n",
    "    for input_ids, input_mask, segment_ids, label_ids,valid_ids,l_mask in tqdm(eval_dataloader, desc=\"Evaluating\"):\n",
    "        input_ids = input_ids.to(device)\n",
    "        input_mask = input_mask.to(device)\n",
    "        segment_ids = segment_ids.to(device)\n",
    "        valid_ids = valid_ids.to(device)\n",
    "        label_ids = label_ids.to(device)\n",
    "        l_mask = l_mask.to(device)\n",
    "\n",
    "        with torch.no_grad():\n",
    "            logits = model(input_ids, segment_ids, input_mask,valid_ids=valid_ids,attention_mask_label=l_mask,device=device)\n",
    "\n",
    "        logits = torch.argmax(F.log_softmax(logits,dim=2),dim=2)\n",
    "        logits = logits.detach().cpu().numpy()\n",
    "        label_ids = label_ids.to('cpu').numpy()\n",
    "        input_mask = input_mask.to('cpu').numpy()\n",
    "\n",
    "        for i, label in enumerate(label_ids):\n",
    "            temp_1 = []\n",
    "            temp_2 = []\n",
    "            for j,m in enumerate(label):\n",
    "                if j == 0:\n",
    "                    continue\n",
    "                elif label_ids[i][j] == len(label_map):\n",
    "                    y_true.append(temp_1)\n",
    "                    y_pred.append(temp_2)\n",
    "                    break\n",
    "                else:\n",
    "                    temp_1.append(label_map[label_ids[i][j]])\n",
    "                    \n",
    "                    if logits[i][j] != 0:\n",
    "                        temp_2.append(label_map[logits[i][j]])\n",
    "                    else:\n",
    "                        temp_2.append('O')  # pad prediction: use the non-entity tag so seqeval receives strings\n",
    "\n",
    "\n",
    "    report = classification_report(y_true, y_pred)\n",
    "    logger.info(\"\\n%s\", report)\n",
    "    output_eval_file = os.path.join(os.path.join(utils.get_original_cwd(), cfg.output_dir), \"eval_results.txt\")\n",
    "    with open(output_eval_file, \"w\") as writer:\n",
    "        logger.info(\"***** Eval results *****\")\n",
    "        logger.info(\"\\n%s\", report)\n",
    "        writer.write(report)"
   ]
  },
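  {
   "cell_type": "markdown",
   "id": "e5f6a7b8",
   "metadata": {},
   "source": [
    "`classification_report` from `seqeval` scores at the entity level rather than the tag level: a prediction counts as correct only when the entity type and the full span both match. A minimal pure-Python version of entity-level F1 (helper names are ours):\n",
    "\n",
    "```python\n",
    "def entities(tags):\n",
    "    # Extract (type, start, end) spans from one BIO tag sequence.\n",
    "    spans, cur, start = set(), None, None\n",
    "    for i, tag in enumerate(tags + ['O']):  # sentinel flushes the last span\n",
    "        if tag.startswith('I-') and cur == tag[2:]:\n",
    "            continue\n",
    "        if cur is not None:\n",
    "            spans.add((cur, start, i))\n",
    "        cur, start = (tag[2:], i) if tag.startswith('B-') else (None, None)\n",
    "    return spans\n",
    "\n",
    "def entity_f1(y_true, y_pred):\n",
    "    gold = {(i,) + s for i, t in enumerate(y_true) for s in entities(t)}\n",
    "    pred = {(i,) + s for i, t in enumerate(y_pred) for s in entities(t)}\n",
    "    tp = len(gold & pred)\n",
    "    p = tp / len(pred) if pred else 0.0\n",
    "    r = tp / len(gold) if gold else 0.0\n",
    "    return 2 * p * r / (p + r) if p + r else 0.0\n",
    "\n",
    "y_true = [['B-PER', 'I-PER', 'O', 'B-LOC']]\n",
    "y_pred = [['B-PER', 'I-PER', 'O', 'O']]\n",
    "print(entity_f1(y_true, y_pred))  # precision 1.0, recall 0.5 -> F1 = 2/3\n",
    "```"
   ]
  },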
  {
   "cell_type": "markdown",
   "id": "6c0f79a8",
   "metadata": {},
   "source": [
    "## Predict"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b0e33a75",
   "metadata": {},
   "outputs": [],
   "source": [
    "model = InferNer(os.path.join(utils.get_original_cwd(), \"checkpoints\"), cfg)\n",
    "text = cfg.text\n",
    "\n",
    "print('Input sentence:')\n",
    "print(text)\n",
    "print('NER result:')\n",
    "\n",
    "result = model.predict(text)\n",
    "print(result)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
