{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# NLP in Practice: Named Entity Recognition\n",
    "BERT (Bidirectional Encoder Representations from Transformers) was released by Google in October 2018. It achieved striking results on SQuAD 1.1, the leading machine reading comprehension benchmark, surpassing human performance on both of its metrics, and set new state-of-the-art results on 11 different NLP tasks, including pushing the GLUE benchmark to 80.4% (an absolute improvement of 7.6%) and MultiNLI accuracy to 86.7% (an absolute improvement of 5.6%). BERT is widely regarded as the start of a new era in NLP: it finally gave the field a way to do transfer learning the way computer vision does. Anyone building a language-processing model can use this powerful pretrained model as an off-the-shelf component, saving the time, effort, expertise, and resources needed to train a model from scratch. Concretely, BERT can be applied to NLP tasks such as:\n",
    "- Question answering\n",
    "- Named entity recognition\n",
    "- Document clustering\n",
    "- Email filtering and classification\n",
    "- Sentiment analysis\n",
    "\n",
    "This case study walks through using BERT for named entity recognition.\n",
    "\n",
    "### Getting started with ModelArts\n",
    "\n",
    "Open https://www.huaweicloud.com/product/modelarts.html to reach the ModelArts home page. Click the \"Try Now\" button, enter your username and password, and log in to the ModelArts console.\n",
    "\n",
    "### Create a ModelArts notebook\n",
    "\n",
    "Next, we create a notebook development environment in ModelArts. ModelArts notebooks provide a web-based Python environment where you can conveniently write and run code and view the results.\n",
    "\n",
    "Step 1: On the ModelArts console, click \"DevEnviron\", then \"Create\".\n",
    "\n",
    "![create_nb_create_button](./img/create_nb_create_button.png)\n",
    "\n",
    "Step 2: Fill in the parameters required for the notebook:\n",
    "\n",
    "![jupyter](./img/notebook1.png)\n",
    "\n",
    "Step 3: After configuring the parameters, click \"Next\" to preview the notebook settings. Confirm they are correct, then click \"Create Now\".\n",
    "![jupyter](./img/notebook2.png)\n",
    "\n",
    "Step 4: Return to the DevEnviron page, wait for the notebook to finish creating, then open it to continue.\n",
    "![modelarts_notebook_index](./img/modelarts_notebook_index.png)\n",
    "\n",
    "### Create a development environment in ModelArts\n",
    "\n",
    "Next, we set up the actual development environment used in the remaining steps.\n",
    "\n",
    "Step 1: Click the \"Start\" button shown below. When the \"Open\" button turns from gray to blue, click \"Open\" to enter the notebook you just created.\n",
    "![jupyter](./img/notebook3.png)\n",
    "![jupyter](./img/notebook4.png)\n",
    "\n",
    "\n",
    "Step 2: Create a notebook with a Python 3 environment. Click \"New\" in the upper-right corner and select the TensorFlow 1.13.1 environment.\n",
    "\n",
    "Step 3: Click the file name \"Untitled\" at the top left and enter a name related to this experiment, such as \"bert_ner\".\n",
    "![notebook_untitled_filename](./img/notebook_untitled_filename.png)\n",
    "![notebook_name_the_ipynb](./img/notebook_name_the_ipynb.png)\n",
    "\n",
    "\n",
    "### Write and run code in the notebook\n",
    "\n",
    "In the notebook, enter a simple print statement, then click the Run button at the top to view the result:\n",
    "![run_helloworld](./img/run_helloworld.png)\n",
    "\n",
    "\n",
    "The development environment is ready; now we can start writing code!\n",
    "\n",
    "\n",
    "\n",
    "### Prepare the source code and data\n",
    "\n",
    "Download the source code and data needed for this case study. The resources are stored in OBS; we use MoXing to copy them to the local environment."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "INFO:root:Using MoXing-v1.17.3-\n",
      "INFO:root:Using OBS-Python-SDK-3.20.7\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Downloading datasets and code ...\n",
      "Download success\n"
     ]
    }
   ],
   "source": [
    "import os\n",
    "import subprocess\n",
    "import moxing as mox\n",
    "\n",
    "print('Downloading datasets and code ...')\n",
    "if not os.path.exists('./ner'):\n",
    "    mox.file.copy('obs://modelarts-labs-bj4/notebook/DL_nlp_ner/ner.tar.gz', './ner.tar.gz')\n",
    "    subprocess.run('tar xf ./ner.tar.gz && rm ./ner.tar.gz', shell=True, check=True)\n",
    "    if os.path.exists('./ner'):\n",
    "        print('Download success')\n",
    "    else:\n",
    "        raise Exception('Download failed')\n",
    "else:\n",
    "    print('Download success')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The archive downloaded from OBS is extracted, and the tarball is deleted after extraction."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Import Python libraries"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/home/ma-user/anaconda3/envs/TensorFlow-1.13.1/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n",
      "  _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n",
      "/home/ma-user/anaconda3/envs/TensorFlow-1.13.1/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n",
      "  _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\n",
      "/home/ma-user/anaconda3/envs/TensorFlow-1.13.1/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:528: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n",
      "  _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\n",
      "/home/ma-user/anaconda3/envs/TensorFlow-1.13.1/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:529: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n",
      "  _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\n",
      "/home/ma-user/anaconda3/envs/TensorFlow-1.13.1/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:530: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n",
      "  _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\n",
      "/home/ma-user/anaconda3/envs/TensorFlow-1.13.1/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:535: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n",
      "  np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\n"
     ]
    }
   ],
   "source": [
    "import os\n",
    "import json\n",
    "import numpy as np\n",
    "import tensorflow as tf\n",
    "import codecs\n",
    "import pickle\n",
    "import collections\n",
    "from ner.bert import modeling, optimization, tokenization"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Define paths and parameters"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "data_dir = \"./ner/data\"  # training and validation data\n",
    "output_dir = \"./ner/output\"  # checkpoints and exported files\n",
    "vocab_file = \"./ner/chinese_L-12_H-768_A-12/vocab.txt\"  # BERT vocabulary\n",
    "data_config_path = \"./ner/chinese_L-12_H-768_A-12/bert_config.json\"  # BERT model config\n",
    "init_checkpoint = \"./ner/chinese_L-12_H-768_A-12/bert_model.ckpt\"  # pretrained BERT checkpoint\n",
    "max_seq_length = 128  # maximum token sequence length\n",
    "batch_size = 64\n",
    "num_train_epochs = 5.0"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Define the processor class to load the data and print the label set"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "labels: ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'X', '[CLS]', '[SEP]']\n"
     ]
    }
   ],
   "source": [
    "tf.logging.set_verbosity(tf.logging.INFO)\n",
    "from ner.src.models import InputFeatures, InputExample, DataProcessor, NerProcessor\n",
    "\n",
    "processors = {\"ner\": NerProcessor }\n",
    "processor = processors[\"ner\"](output_dir)\n",
    "\n",
    "label_list = processor.get_labels()\n",
    "print(\"labels:\", label_list)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The labels above mean:\n",
    "\n",
    "- O: outside any entity\n",
    "- B-PER: first character of a person name\n",
    "- I-PER: subsequent character of a person name\n",
    "- B-ORG: first character of an organization name\n",
    "- I-ORG: subsequent character of an organization name\n",
    "- B-LOC: first character of a location name\n",
    "- I-LOC: subsequent character of a location name\n",
    "- X: extra sub-token produced by wordpiece tokenization\n",
    "- [CLS]: start of sentence\n",
    "- [SEP]: end of sentence\n",
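    "\n",
    "As an illustration (a made-up sentence, not drawn from the dataset), the sentence 李明在北京工作 ('Li Ming works in Beijing') would be tagged character by character as:\n",
    "\n",
    "```\n",
    "李      明      在  北      京      工  作\n",
    "B-PER   I-PER   O   B-LOC   I-LOC   O   O\n",
    "```\n",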
    "\n",
    "#### Load the pretrained parameters"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Configuration info:\n",
      "attention_probs_dropout_prob:0.1\n",
      "directionality:bidi\n",
      "hidden_act:gelu\n",
      "hidden_dropout_prob:0.1\n",
      "hidden_size:768\n",
      "initializer_range:0.02\n",
      "intermediate_size:3072\n",
      "max_position_embeddings:512\n",
      "num_attention_heads:12\n",
      "num_hidden_layers:12\n",
      "pooler_fc_size:768\n",
      "pooler_num_attention_heads:12\n",
      "pooler_num_fc_layers:3\n",
      "pooler_size_per_head:128\n",
      "pooler_type:first_token_transform\n",
      "type_vocab_size:2\n",
      "vocab_size:21128\n",
      "num_train_steps:1630\n",
      "num_warmup_steps:163\n",
      "num_train_size:20864\n"
     ]
    }
   ],
   "source": [
    "data_config = json.load(codecs.open(data_config_path))\n",
    "train_examples = processor.get_train_examples(data_dir)    \n",
    "num_train_steps = int(len(train_examples) / batch_size * num_train_epochs)    \n",
    "num_warmup_steps = int(num_train_steps * 0.1)   \n",
    "data_config['num_train_steps'] = num_train_steps\n",
    "data_config['num_warmup_steps'] = num_warmup_steps\n",
    "data_config['num_train_size'] = len(train_examples)\n",
    "\n",
    "print(\"Configuration info:\")\n",
    "for key,value in data_config.items():\n",
    "    print('{key}:{value}'.format(key = key, value = value))\n",
    "\n",
    "bert_config = modeling.BertConfig.from_json_file(data_config_path)\n",
    "tokenizer = tokenization.FullTokenizer(vocab_file=vocab_file, do_lower_case=True)\n",
    "\n",
    "# tf.estimator run configuration\n",
    "run_config = tf.estimator.RunConfig(\n",
    "    model_dir=output_dir,\n",
    "    save_summary_steps=1000,\n",
    "    save_checkpoints_steps=1000,\n",
    "    session_config=tf.ConfigProto(\n",
    "        log_device_placement=False,\n",
    "        inter_op_parallelism_threads=0,\n",
    "        intra_op_parallelism_threads=0,\n",
    "        allow_soft_placement=True\n",
    "    )\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Read the data and convert it to input features"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "INFO:tensorflow:Writing example 0 of 20864\n",
      "INFO:tensorflow:Writing example 5000 of 20864\n",
      "INFO:tensorflow:Writing example 10000 of 20864\n",
      "INFO:tensorflow:Writing example 15000 of 20864\n",
      "INFO:tensorflow:Writing example 20000 of 20864\n"
     ]
    }
   ],
   "source": [
    "def convert_single_example(ex_index, example, label_list, max_seq_length, \n",
    "                           tokenizer, output_dir, mode):\n",
    "    label_map = {}\n",
    "    for (i, label) in enumerate(label_list, 1):\n",
    "        label_map[label] = i\n",
    "    if not os.path.exists(os.path.join(output_dir, 'label2id.pkl')):\n",
    "        with codecs.open(os.path.join(output_dir, 'label2id.pkl'), 'wb') as w:\n",
    "            pickle.dump(label_map, w)\n",
    "\n",
    "    textlist = example.text.split(' ')\n",
    "    labellist = example.label.split(' ')\n",
    "    tokens = []\n",
    "    labels = []\n",
    "    for i, word in enumerate(textlist):\n",
    "        token = tokenizer.tokenize(word)\n",
    "        tokens.extend(token)\n",
    "        label_1 = labellist[i]\n",
    "        for m in range(len(token)):\n",
    "            if m == 0:\n",
    "                labels.append(label_1)\n",
    "            else:  \n",
    "                labels.append(\"X\")\n",
    "    if len(tokens) >= max_seq_length - 1:\n",
    "        tokens = tokens[0:(max_seq_length - 2)]\n",
    "        labels = labels[0:(max_seq_length - 2)]\n",
    "    ntokens = []\n",
    "    segment_ids = []\n",
    "    label_ids = []\n",
    "    ntokens.append(\"[CLS]\")  # mark the start of the sentence with [CLS]\n",
    "    segment_ids.append(0)\n",
    "    label_ids.append(label_map[\"[CLS]\"])  \n",
    "    for i, token in enumerate(tokens):\n",
    "        ntokens.append(token)\n",
    "        segment_ids.append(0)\n",
    "        label_ids.append(label_map[labels[i]])\n",
    "    ntokens.append(\"[SEP]\")  # mark the end of the sentence with [SEP]\n",
    "    segment_ids.append(0)\n",
    "    label_ids.append(label_map[\"[SEP]\"])\n",
    "    input_ids = tokenizer.convert_tokens_to_ids(ntokens)  \n",
    "    input_mask = [1] * len(input_ids)\n",
    "\n",
    "    while len(input_ids) < max_seq_length:\n",
    "        input_ids.append(0)\n",
    "        input_mask.append(0)\n",
    "        segment_ids.append(0)\n",
    "        label_ids.append(0)\n",
    "        ntokens.append(\"**NULL**\")\n",
    "\n",
    "    assert len(input_ids) == max_seq_length\n",
    "    assert len(input_mask) == max_seq_length\n",
    "    assert len(segment_ids) == max_seq_length\n",
    "    assert len(label_ids) == max_seq_length\n",
    "\n",
    "    feature = InputFeatures(\n",
    "        input_ids=input_ids,\n",
    "        input_mask=input_mask,\n",
    "        segment_ids=segment_ids,\n",
    "        label_ids=label_ids,\n",
    "    )\n",
    "   \n",
    "    return feature\n",
    "\n",
    "def filed_based_convert_examples_to_features(\n",
    "        examples, label_list, max_seq_length, tokenizer, output_file, mode=None):\n",
    "    writer = tf.python_io.TFRecordWriter(output_file)\n",
    "    for (ex_index, example) in enumerate(examples):\n",
    "        if ex_index % 5000 == 0:\n",
    "            tf.logging.info(\"Writing example %d of %d\" % (ex_index, len(examples)))\n",
    "        feature = convert_single_example(ex_index, example, label_list, max_seq_length, tokenizer, output_dir, mode)\n",
    "\n",
    "        def create_int_feature(values):\n",
    "            f = tf.train.Feature(int64_list=tf.train.Int64List(value=list(values)))\n",
    "            return f\n",
    "\n",
    "        features = collections.OrderedDict()\n",
    "        features[\"input_ids\"] = create_int_feature(feature.input_ids)\n",
    "        features[\"input_mask\"] = create_int_feature(feature.input_mask)\n",
    "        features[\"segment_ids\"] = create_int_feature(feature.segment_ids)\n",
    "        features[\"label_ids\"] = create_int_feature(feature.label_ids)\n",
    "        tf_example = tf.train.Example(features=tf.train.Features(feature=features))\n",
    "        writer.write(tf_example.SerializeToString())\n",
    "\n",
    "train_file = os.path.join(output_dir, \"train.tf_record\")\n",
    "\n",
    "# convert the characters in the training set into features used as training input\n",
    "filed_based_convert_examples_to_features(\n",
    "            train_examples, label_list, max_seq_length, tokenizer, output_file=train_file)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Add a BiLSTM+CRF layer as the downstream model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "learning_rate = 5e-5\n",
    "dropout_rate = 1.0\n",
    "lstm_size = 1\n",
    "cell = 'lstm'\n",
    "num_layers = 1\n",
    "\n",
    "from ner.src.models import BLSTM_CRF\n",
    "from tensorflow.contrib.layers.python.layers import initializers\n",
    "\n",
    "def create_model(bert_config, is_training, input_ids, input_mask,\n",
    "                 segment_ids, labels, num_labels, use_one_hot_embeddings,\n",
    "                 dropout_rate=dropout_rate, lstm_size=1, cell='lstm', num_layers=1):\n",
    "    model = modeling.BertModel(\n",
    "        config=bert_config,\n",
    "        is_training=is_training,\n",
    "        input_ids=input_ids,\n",
    "        input_mask=input_mask,\n",
    "        token_type_ids=segment_ids,\n",
    "        use_one_hot_embeddings=use_one_hot_embeddings\n",
    "    )\n",
    "    embedding = model.get_sequence_output()\n",
    "    max_seq_length = embedding.shape[1].value\n",
    "    used = tf.sign(tf.abs(input_ids))\n",
    "    lengths = tf.reduce_sum(used, reduction_indices=1)  \n",
    "    blstm_crf = BLSTM_CRF(embedded_chars=embedding, hidden_unit=1, cell_type='lstm', num_layers=1,\n",
    "                          dropout_rate=dropout_rate, initializers=initializers, num_labels=num_labels,\n",
    "                          seq_length=max_seq_length, labels=labels, lengths=lengths, is_training=is_training)\n",
    "    rst = blstm_crf.add_blstm_crf_layer(crf_only=True)\n",
    "    return rst\n",
    "\n",
    "def model_fn_builder(bert_config, num_labels, init_checkpoint, learning_rate,\n",
    "                     num_train_steps, num_warmup_steps,use_one_hot_embeddings=False):\n",
    "    # build the model function\n",
    "    def model_fn(features, labels, mode, params):\n",
    "        tf.logging.info(\"*** Features ***\")\n",
    "        for name in sorted(features.keys()):\n",
    "            tf.logging.info(\"  name = %s, shape = %s\" % (name, features[name].shape))\n",
    "        input_ids = features[\"input_ids\"]\n",
    "        input_mask = features[\"input_mask\"]\n",
    "        segment_ids = features[\"segment_ids\"]\n",
    "        label_ids = features[\"label_ids\"]\n",
    "\n",
    "        print('shape of input_ids', input_ids.shape)\n",
    "        is_training = (mode == tf.estimator.ModeKeys.TRAIN)\n",
    "\n",
    "        total_loss, logits, trans, pred_ids = create_model(\n",
    "            bert_config, is_training, input_ids, input_mask, segment_ids, label_ids,\n",
    "            num_labels, False, dropout_rate, lstm_size, cell, num_layers)\n",
    "\n",
    "        tvars = tf.trainable_variables()\n",
    "\n",
    "        if init_checkpoint:\n",
    "            (assignment_map, initialized_variable_names) = \\\n",
    "                 modeling.get_assignment_map_from_checkpoint(tvars, init_checkpoint)\n",
    "            tf.train.init_from_checkpoint(init_checkpoint, assignment_map)\n",
    "        \n",
    "        output_spec = None\n",
    "        if mode == tf.estimator.ModeKeys.TRAIN:\n",
    "            train_op = optimization.create_optimizer(\n",
    "                 total_loss, learning_rate, num_train_steps, num_warmup_steps, False)\n",
    "            hook_dict = {}\n",
    "            hook_dict['loss'] = total_loss\n",
    "            hook_dict['global_steps'] = tf.train.get_or_create_global_step()\n",
    "            logging_hook = tf.train.LoggingTensorHook(\n",
    "                hook_dict, every_n_iter=100)\n",
    "\n",
    "            output_spec = tf.estimator.EstimatorSpec(\n",
    "                mode=mode,\n",
    "                loss=total_loss,\n",
    "                train_op=train_op,\n",
    "                training_hooks=[logging_hook])\n",
    "\n",
    "        elif mode == tf.estimator.ModeKeys.EVAL:\n",
    "            def metric_fn(label_ids, pred_ids):\n",
    "\n",
    "                return {\n",
    "                    \"eval_loss\": tf.metrics.mean_squared_error(labels=label_ids, predictions=pred_ids),   }\n",
    "            \n",
    "            eval_metrics = metric_fn(label_ids, pred_ids)\n",
    "            output_spec = tf.estimator.EstimatorSpec(\n",
    "                mode=mode,\n",
    "                loss=total_loss,\n",
    "                eval_metric_ops=eval_metrics\n",
    "            )\n",
    "        else:\n",
    "            output_spec = tf.estimator.EstimatorSpec(\n",
    "                mode=mode,\n",
    "                predictions=pred_ids\n",
    "            )\n",
    "        return output_spec\n",
    "\n",
    "    return model_fn\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Create the model and start training\n",
    "This takes about 15 minutes."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "INFO:tensorflow:***** Running training *****\n",
      "INFO:tensorflow:  Num examples = 20864\n",
      "INFO:tensorflow:  Batch size = 64\n",
      "INFO:tensorflow:  Num steps = 1630\n",
      "INFO:tensorflow:Using config: {'_model_dir': './ner/output', '_tf_random_seed': None, '_save_summary_steps': 1000, '_save_checkpoints_steps': 1000, '_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true\n",
      ", '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f5d8e54d3c8>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}\n",
      "WARNING:tensorflow:From /home/ma-user/anaconda3/envs/TensorFlow-1.13.1/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Colocations handled automatically by placer.\n",
      "WARNING:tensorflow:From <ipython-input-8-ff8f40149364>:37: map_and_batch (from tensorflow.contrib.data.python.ops.batching) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Use `tf.data.experimental.map_and_batch(...)`.\n",
      "WARNING:tensorflow:From <ipython-input-8-ff8f40149364>:23: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Use tf.cast instead.\n",
      "INFO:tensorflow:Calling model_fn.\n",
      "INFO:tensorflow:*** Features ***\n",
      "INFO:tensorflow:  name = input_ids, shape = (32, 128)\n",
      "INFO:tensorflow:  name = input_mask, shape = (32, 128)\n",
      "INFO:tensorflow:  name = label_ids, shape = (32, 128)\n",
      "INFO:tensorflow:  name = segment_ids, shape = (32, 128)\n",
      "WARNING:tensorflow:From /home/ma-user/work/DL_nlp_bert_ner/ner/bert/modeling.py:358: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.\n",
      "WARNING:tensorflow:From /home/ma-user/work/DL_nlp_bert_ner/ner/bert/modeling.py:671: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Use keras.layers.dense instead.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "shape of input_ids (32, 128)\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:From /home/ma-user/anaconda3/envs/TensorFlow-1.13.1/lib/python3.6/site-packages/tensorflow/contrib/crf/python/ops/crf.py:213: dynamic_rnn (from tensorflow.python.ops.rnn) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use `keras.layers.RNN(cell)`, which is equivalent to this API\n",
      "WARNING:tensorflow:From /home/ma-user/anaconda3/envs/TensorFlow-1.13.1/lib/python3.6/site-packages/tensorflow/python/training/learning_rate_decay_v2.py:321: div (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Deprecated in favor of operator or tf.math.divide.\n",
      "INFO:tensorflow:Done calling model_fn.\n",
      "INFO:tensorflow:Create CheckpointSaverHook.\n",
      "INFO:tensorflow:Graph was finalized.\n",
      "INFO:tensorflow:Running local_init_op.\n",
      "INFO:tensorflow:Done running local_init_op.\n",
      "INFO:tensorflow:Saving checkpoints for 0 into ./ner/output/model.ckpt.\n",
      "INFO:tensorflow:loss = 101.12476, step = 0\n",
      "INFO:tensorflow:global_steps = 0, loss = 101.12476\n",
      "INFO:tensorflow:global_step/sec: 2.11756\n",
      "INFO:tensorflow:loss = 44.08002, step = 100 (47.225 sec)\n",
      "INFO:tensorflow:global_steps = 100, loss = 44.08002 (47.224 sec)\n",
      "INFO:tensorflow:global_step/sec: 2.81244\n",
      "INFO:tensorflow:loss = 37.894424, step = 200 (35.556 sec)\n",
      "INFO:tensorflow:global_steps = 200, loss = 37.894424 (35.556 sec)\n",
      "INFO:tensorflow:global_step/sec: 2.82339\n",
      "INFO:tensorflow:loss = 41.6473, step = 300 (35.418 sec)\n",
      "INFO:tensorflow:global_steps = 300, loss = 41.6473 (35.418 sec)\n",
      "INFO:tensorflow:global_step/sec: 2.81859\n",
      "INFO:tensorflow:loss = 43.790276, step = 400 (35.479 sec)\n",
      "INFO:tensorflow:global_steps = 400, loss = 43.790276 (35.479 sec)\n",
      "INFO:tensorflow:global_step/sec: 2.81616\n",
      "INFO:tensorflow:loss = 39.31551, step = 500 (35.509 sec)\n",
      "INFO:tensorflow:global_steps = 500, loss = 39.31551 (35.509 sec)\n",
      "INFO:tensorflow:global_step/sec: 2.81752\n",
      "INFO:tensorflow:loss = 40.44882, step = 600 (35.492 sec)\n",
      "INFO:tensorflow:global_steps = 600, loss = 40.44882 (35.492 sec)\n",
      "INFO:tensorflow:global_step/sec: 2.82517\n",
      "INFO:tensorflow:loss = 40.503124, step = 700 (35.396 sec)\n",
      "INFO:tensorflow:global_steps = 700, loss = 40.503124 (35.396 sec)\n",
      "INFO:tensorflow:global_step/sec: 2.81547\n",
      "INFO:tensorflow:loss = 38.893593, step = 800 (35.518 sec)\n",
      "INFO:tensorflow:global_steps = 800, loss = 38.893593 (35.518 sec)\n",
      "INFO:tensorflow:global_step/sec: 2.81628\n",
      "INFO:tensorflow:loss = 43.374577, step = 900 (35.508 sec)\n",
      "INFO:tensorflow:global_steps = 900, loss = 43.374577 (35.508 sec)\n",
      "INFO:tensorflow:Saving checkpoints for 1000 into ./ner/output/model.ckpt.\n",
      "INFO:tensorflow:global_step/sec: 2.30028\n",
      "INFO:tensorflow:loss = 44.156574, step = 1000 (43.473 sec)\n",
      "INFO:tensorflow:global_steps = 1000, loss = 44.156574 (43.473 sec)\n",
      "INFO:tensorflow:global_step/sec: 2.8118\n",
      "INFO:tensorflow:loss = 41.555115, step = 1100 (35.565 sec)\n",
      "INFO:tensorflow:global_steps = 1100, loss = 41.555115 (35.565 sec)\n",
      "INFO:tensorflow:global_step/sec: 2.8125\n",
      "INFO:tensorflow:loss = 42.96095, step = 1200 (35.555 sec)\n",
      "INFO:tensorflow:global_steps = 1200, loss = 42.96095 (35.556 sec)\n",
      "INFO:tensorflow:global_step/sec: 2.81514\n",
      "INFO:tensorflow:loss = 42.284943, step = 1300 (35.522 sec)\n",
      "INFO:tensorflow:global_steps = 1300, loss = 42.284943 (35.522 sec)\n",
      "INFO:tensorflow:global_step/sec: 2.81755\n",
      "INFO:tensorflow:loss = 41.85956, step = 1400 (35.492 sec)\n",
      "INFO:tensorflow:global_steps = 1400, loss = 41.85956 (35.492 sec)\n",
      "INFO:tensorflow:global_step/sec: 2.81149\n",
      "INFO:tensorflow:loss = 39.010487, step = 1500 (35.568 sec)\n",
      "INFO:tensorflow:global_steps = 1500, loss = 39.010487 (35.568 sec)\n",
      "INFO:tensorflow:global_step/sec: 2.81946\n",
      "INFO:tensorflow:loss = 39.25674, step = 1600 (35.468 sec)\n",
      "INFO:tensorflow:global_steps = 1600, loss = 39.25674 (35.468 sec)\n",
      "INFO:tensorflow:Saving checkpoints for 1630 into ./ner/output/model.ckpt.\n",
      "INFO:tensorflow:Loss for final step: 45.543602.\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<tensorflow_estimator.python.estimator.estimator.Estimator at 0x7f5d8e42b780>"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "model_fn = model_fn_builder(\n",
    "        bert_config=bert_config,\n",
    "        num_labels=len(label_list) + 1,\n",
    "        init_checkpoint=init_checkpoint,\n",
    "        learning_rate=learning_rate,\n",
    "        num_train_steps=num_train_steps,\n",
    "        num_warmup_steps=num_warmup_steps,\n",
    "        use_one_hot_embeddings=False)\n",
    "\n",
    "def file_based_input_fn_builder(input_file, seq_length, is_training, drop_remainder):\n",
    "    name_to_features = {\n",
    "        \"input_ids\": tf.FixedLenFeature([seq_length], tf.int64),\n",
    "        \"input_mask\": tf.FixedLenFeature([seq_length], tf.int64),\n",
    "        \"segment_ids\": tf.FixedLenFeature([seq_length], tf.int64),\n",
    "        \"label_ids\": tf.FixedLenFeature([seq_length], tf.int64),\n",
    "    }\n",
    "\n",
    "    def _decode_record(record, name_to_features):\n",
    "        example = tf.parse_single_example(record, name_to_features)\n",
    "        for name in list(example.keys()):\n",
    "            t = example[name]\n",
    "            if t.dtype == tf.int64:\n",
    "                t = tf.to_int32(t)\n",
    "            example[name] = t\n",
    "        return example\n",
    "\n",
    "    def input_fn(params):\n",
    "        params[\"batch_size\"] = 32  # override: the actual training batch size is 32\n",
    "        batch_size = params[\"batch_size\"]\n",
    "        d = tf.data.TFRecordDataset(input_file)\n",
    "        if is_training:\n",
    "            d = d.repeat()\n",
    "            d = d.shuffle(buffer_size=300)\n",
    "        d = d.apply(tf.contrib.data.map_and_batch(\n",
    "            lambda record: _decode_record(record, name_to_features),\n",
    "            batch_size=batch_size,\n",
    "            drop_remainder=drop_remainder\n",
    "        ))\n",
    "        return d\n",
    "\n",
    "    return input_fn\n",
    "\n",
    "# training input_fn\n",
    "train_input_fn = file_based_input_fn_builder(\n",
    "            input_file=train_file,\n",
    "            seq_length=max_seq_length,\n",
    "            is_training=True,\n",
    "            drop_remainder=True)\n",
    "\n",
    "num_train_size = len(train_examples)\n",
    "\n",
    "tf.logging.info(\"***** Running training *****\")\n",
    "tf.logging.info(\"  Num examples = %d\", num_train_size)\n",
    "tf.logging.info(\"  Batch size = %d\", batch_size)\n",
    "tf.logging.info(\"  Num steps = %d\", num_train_steps)\n",
    "\n",
    "# estimator used for training and prediction\n",
    "estimator = tf.estimator.Estimator(\n",
    "        model_fn=model_fn,\n",
    "        config=run_config,\n",
    "        params={\n",
    "        'batch_size':batch_size\n",
    "    })\n",
    "\n",
    "estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Evaluate the model on the validation set"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "INFO:tensorflow:Writing example 0 of 4631\n",
      "INFO:tensorflow:***** Running evaluation *****\n",
      "INFO:tensorflow:  Num examples = 4631\n",
      "INFO:tensorflow:  Batch size = 64\n",
      "INFO:tensorflow:Calling model_fn.\n",
      "INFO:tensorflow:*** Features ***\n",
      "INFO:tensorflow:  name = input_ids, shape = (?, 128)\n",
      "INFO:tensorflow:  name = input_mask, shape = (?, 128)\n",
      "INFO:tensorflow:  name = label_ids, shape = (?, 128)\n",
      "INFO:tensorflow:  name = segment_ids, shape = (?, 128)\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "shape of input_ids (?, 128)\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "INFO:tensorflow:Done calling model_fn.\n",
      "INFO:tensorflow:Starting evaluation at 2020-11-19T09:46:43Z\n",
      "INFO:tensorflow:Graph was finalized.\n",
      "INFO:tensorflow:Restoring parameters from ./ner/output/model.ckpt-1630\n",
      "INFO:tensorflow:Running local_init_op.\n",
      "INFO:tensorflow:Done running local_init_op.\n",
      "INFO:tensorflow:Finished evaluation at 2020-11-19-09:47:09\n",
      "INFO:tensorflow:Saving dict for global step 1630: eval_loss = 0.1202376, global_step = 1630, loss = 34.488083\n",
      "INFO:tensorflow:Saving 'checkpoint_path' summary for global step 1630: ./ner/output/model.ckpt-1630\n",
      "INFO:tensorflow:***** Eval results *****\n",
      "INFO:tensorflow:  eval_loss = 0.1202376\n",
      "INFO:tensorflow:  global_step = 1630\n",
      "INFO:tensorflow:  loss = 34.488083\n"
     ]
    }
   ],
   "source": [
    "eval_examples = processor.get_dev_examples(data_dir)\n",
    "eval_file = os.path.join(output_dir, \"eval.tf_record\")\n",
    "filed_based_convert_examples_to_features(\n",
    "                eval_examples, label_list, max_seq_length, tokenizer, eval_file)\n",
    "data_config['eval.tf_record_path'] = eval_file\n",
    "data_config['num_eval_size'] = len(eval_examples)\n",
    "num_eval_size = data_config.get('num_eval_size', 0)\n",
    "\n",
    "tf.logging.info(\"***** Running evaluation *****\")\n",
    "tf.logging.info(\"  Num examples = %d\", num_eval_size)\n",
    "tf.logging.info(\"  Batch size = %d\", batch_size)\n",
    "\n",
    "eval_steps = None\n",
    "eval_drop_remainder = False\n",
    "eval_input_fn = file_based_input_fn_builder(\n",
    "            input_file=eval_file,\n",
    "            seq_length=max_seq_length,\n",
    "            is_training=False,\n",
    "            drop_remainder=eval_drop_remainder)\n",
    "\n",
    "result = estimator.evaluate(input_fn=eval_input_fn, steps=eval_steps)\n",
    "output_eval_file = os.path.join(output_dir, \"eval_results.txt\")\n",
    "with codecs.open(output_eval_file, \"w\", encoding='utf-8') as writer:\n",
    "    tf.logging.info(\"***** Eval results *****\")\n",
    "    for key in sorted(result.keys()):\n",
    "        tf.logging.info(\"  %s = %s\", key, str(result[key]))\n",
    "        writer.write(\"%s = %s\\n\" % (key, str(result[key])))\n",
    "\n",
    "if not os.path.exists(data_config_path):\n",
    "    with codecs.open(data_config_path, 'w', encoding='utf-8') as fd:\n",
    "        json.dump(data_config, fd)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Evaluate on the test set"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "INFO:tensorflow:Writing example 0 of 68\n",
      "INFO:tensorflow:***** Running prediction*****\n",
      "INFO:tensorflow:  Num examples = 68\n",
      "INFO:tensorflow:  Batch size = 64\n",
      "INFO:tensorflow:Calling model_fn.\n",
      "INFO:tensorflow:*** Features ***\n",
      "INFO:tensorflow:  name = input_ids, shape = (?, 128)\n",
      "INFO:tensorflow:  name = input_mask, shape = (?, 128)\n",
      "INFO:tensorflow:  name = label_ids, shape = (?, 128)\n",
      "INFO:tensorflow:  name = segment_ids, shape = (?, 128)\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "shape of input_ids (?, 128)\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "INFO:tensorflow:Done calling model_fn.\n",
      "INFO:tensorflow:Starting evaluation at 2020-11-19T09:48:19Z\n",
      "INFO:tensorflow:Graph was finalized.\n",
      "INFO:tensorflow:Restoring parameters from ./ner/output/model.ckpt-1630\n",
      "INFO:tensorflow:Running local_init_op.\n",
      "INFO:tensorflow:Done running local_init_op.\n",
      "INFO:tensorflow:Finished evaluation at 2020-11-19-09:48:22\n",
      "INFO:tensorflow:Saving dict for global step 1630: eval_loss = 0.053423714, global_step = 1630, loss = 33.130444\n",
      "INFO:tensorflow:Saving 'checkpoint_path' summary for global step 1630: ./ner/output/model.ckpt-1630\n",
      "INFO:tensorflow:***** Predict results *****\n",
      "INFO:tensorflow:  eval_loss = 0.053423714\n",
      "INFO:tensorflow:  global_step = 1630\n",
      "INFO:tensorflow:  loss = 33.130444\n",
      "INFO:tensorflow:Calling model_fn.\n",
      "INFO:tensorflow:*** Features ***\n",
      "INFO:tensorflow:  name = input_ids, shape = (?, 128)\n",
      "INFO:tensorflow:  name = input_mask, shape = (?, 128)\n",
      "INFO:tensorflow:  name = label_ids, shape = (?, 128)\n",
      "INFO:tensorflow:  name = segment_ids, shape = (?, 128)\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "shape of input_ids (?, 128)\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "INFO:tensorflow:Done calling model_fn.\n",
      "INFO:tensorflow:Graph was finalized.\n",
      "INFO:tensorflow:Restoring parameters from ./ner/output/model.ckpt-1630\n",
      "INFO:tensorflow:Running local_init_op.\n",
      "INFO:tensorflow:Done running local_init_op.\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:在 香 港 回 归 前 的 最 后 阶 段 ， 中 共 中 央 举 办 《 “ 一 国 两 制 ” 与 香 港 基 本 法 》 讲 座 ， 中 央 领 导 同 志 认 真 听 讲 ， 虚 心 学 习 ， 很 有 意 义 。\n",
      "INFO:tensorflow:O B-LOC I-LOC O O O O O O O O O B-ORG I-ORG I-ORG I-ORG O O O O O O O O O O B-LOC I-LOC O O O O O O O O O O O O O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:这 表 明 ， 以 江 泽 民 同 志 为 核 心 的 党 中 央 坚 定 不 移 地 贯 彻 邓 小 平 同 志 “ 一 国 两 制 ” 的 伟 大 构 想 ， 不 折 不 扣 地 执 行 基 本 法 。\n",
      "INFO:tensorflow:O O O O O B-PER I-PER I-PER O O O O O O B-ORG I-ORG I-ORG O O O O O O O B-PER I-PER I-PER O O O O O O O O O O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:“ 一 国 两 制 ” 是 邓 小 平 同 志 的 一 个 伟 大 构 想 ， 《 中 华 人 民 共 和 国 香 港 特 别 行 政 区 基 本 法 》 是 贯 彻 落 实 “ 一 国 两 制 ” 伟 大 构 想 的 一 部 全 国 性 法 律 ， 是 一 部 有 鲜 明 中 国 特 色 的 法 律 。\n",
      "INFO:tensorflow:O O O O O O O B-PER I-PER I-PER O O O O O O O O O O O B-LOC I-LOC I-LOC I-LOC I-LOC I-LOC I-LOC B-LOC I-LOC I-LOC I-LOC I-LOC I-LOC I-LOC O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O B-LOC I-LOC O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:看 包 公 断 案 的 戏 ， 看 他 威 风 凛 凛 坐 公 堂 拍 桌 子 动 刑 具 ， 多 少 还 有 一 点 担 心 ， 总 怕 靠 这 一 套 办 法 弄 出 错 案 来 ， 放 过 了 真 正 的 坏 人 ；\n",
      "INFO:tensorflow:O B-PER I-PER O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:可 看 《 包 公 赶 驴 》 这 出 戏 ， 心 里 就 很 踏 实 ： 这 样 是 一 断 一 个 准 的 。\n",
      "INFO:tensorflow:O O O B-PER I-PER O O O O O O O O O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:譬 如 看 《 施 公 案 》 ， 施 大 人 坐 公 堂 问 案 子 不 得 要 领 ， 总 是 扮 成 普 通 百 姓 深 入 民 间 暗 中 查 访 ， 结 果 就 屡 破 奇 案 了 。\n",
      "INFO:tensorflow:O O O O B-PER O O O O B-PER O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:如 果 有 人 问 我 ： “ 你 看 过 许 多 包 公 戏 ， 哪 一 出 最 好 ？ ”\n",
      "INFO:tensorflow:O O O O O O O O O O O O O B-PER I-PER O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:我 要 毫 不 犹 豫 地 回 答 道 ： “ 自 然 是 《 包 公 赶 驴 》 啦 ！\n",
      "INFO:tensorflow:O O O O O O O O O O O O O O O O B-PER I-PER O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:学 习 基 本 法 顺 利 迎 回 归\n",
      "INFO:tensorflow:O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:本 报 评 论 员\n",
      "INFO:tensorflow:O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:再 过 5 5 天 ， 我 国 政 府 将 对 香 港 恢 复 行 使 主 权 。\n",
      "INFO:tensorflow:O O O O O O O O O O O O B-LOC I-LOC O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:它 把 中 央 对 解 决 香 港 问 题 的 基 本 方 针 政 策 具 体 化 、 法 律 化 ， 成 为 国 家 意 志 。\n",
      "INFO:tensorflow:O O O O O O O B-LOC I-LOC O O O O O O O O O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:学 习 基 本 法 ， 顺 利 迎 回 归 ， 是 一 项 迫 切 的 任 务 。\n",
      "INFO:tensorflow:O O O O O O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:要 学 好 基 本 法 ， 首 先 要 认 识 到 基 本 法 的 意 义 。\n",
      "INFO:tensorflow:O O O O O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:说 国 际 意 义 ， 不 只 对 第 三 世 界 ， 而 且 对 全 人 类 都 具 有 长 远 意 义 。\n",
      "INFO:tensorflow:O O O O O O O O O O O O O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:这 是 一 个 具 有 创 造 性 的 杰 作 。\n",
      "INFO:tensorflow:O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:“ 基 本 法 不 仅 为 确 保 香 港 平 稳 过 渡 发 挥 重 要 作 用 ， 也 为 确 保 香 港 长 期 繁 荣 稳 定 发 挥 重 要 作 用 ；\n",
      "INFO:tensorflow:O O O O O O O O O B-LOC I-LOC O O O O O O O O O O O O O O O B-LOC I-LOC O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:不 仅 为 当 前 解 决 香 港 问 题 发 挥 作 用 ， 也 为 在 不 远 的 将 来 解 决 澳 门 问 题 和 最 终 解 决 台 湾 问 题 ， 实 现 祖 国 完 全 统 一 发 挥 重 要 作 用 。\n",
      "INFO:tensorflow:O O O O O O O B-LOC I-LOC O O O O O O O O O O O O O O O O O B-LOC I-LOC O O O O O O O B-LOC I-LOC O O O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:基 本 法 的 主 要 特 征 ， 是 把 “ 一 国 ” 与 “ 两 制 ” 紧 密 结 合 ， 维 护 国 家 的 主 权 、 统 一 和 领 土 完 整 与 授 权 香 港 特 别 行 政 区 实 行 高 度 自 治 紧 密 结 合 。\n",
      "INFO:tensorflow:O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O B-LOC I-LOC I-LOC I-LOC I-LOC I-LOC I-LOC O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:在 一 个 统 一 的 中 华 人 民 共 和 国 ， 可 以 实 行 社 会 主 义 和 资 本 主 义 两 种 制 度 ， 这 是 为 了 民 族 、 国 家 的 根 本 利 益 。\n",
      "INFO:tensorflow:O O O O O O B-LOC I-LOC I-LOC I-LOC I-LOC I-LOC I-LOC O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:只 有 认 真 学 习 ， 才 能 理 解 意 义 ， 认 识 特 征 。\n",
      "INFO:tensorflow:O O O O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:制 定 一 部 好 法 律 ， 很 不 容 易 ；\n",
      "INFO:tensorflow:O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:遵 守 法 律 ， 执 行 法 律 ， 也 很 不 容 易 。\n",
      "INFO:tensorflow:O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:必 须 重 申 ， 有 法 必 依 ， 执 法 必 严 ， 违 法 必 究 。\n",
      "INFO:tensorflow:O O O O O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:基 本 法 作 为 一 部 全 国 性 的 法 律 ， 不 仅 香 港 要 严 格 遵 守 ， 各 省 、 自 治 区 、 直 辖 市 都 要 严 格 遵 守 。\n",
      "INFO:tensorflow:O O O O O O O O O O O O O O O O B-LOC I-LOC O O O O O O O O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:从 中 共 中 央 举 办 这 个 讲 座 ， 可 以 看 出 ， 党 和 政 府 正 在 努 力 加 强 法 制 建 设 ， 坚 持 依 法 治 国 。\n",
      "INFO:tensorflow:O B-ORG I-ORG I-ORG I-ORG O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:有 了 法 律 ， 有 了 制 度 ， 就 有 了 保 证 ， 就 使 “ 一 国 两 制 ” 的 伟 大 构 想 以 法 律 的 形 式 固 定 下 来 。\n",
      "INFO:tensorflow:O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:全 国 人 民 特 别 是 香 港 同 胞 也 从 中 再 一 次 看 到 ， 中 国 共 产 党 和 人 民 政 府 是 高 度 负 责 任 的 党 和 政 府 ， 一 切 从 人 民 的 利 益 出 发 ， 一 切 为 了 祖 国 的 繁 荣 富 强 ， 香 港 的 明 天 将 更 美 好 。\n",
      "INFO:tensorflow:O O O O O O O B-LOC I-LOC O O O O O O O O O O O B-ORG I-ORG I-ORG I-ORG I-ORG O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O B-LOC I-LOC O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:学 习 基 本 法 ， 中 央 领 导 带 了 个 好 头 。\n",
      "INFO:tensorflow:O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:全 党 和 全 国 人 民 特 别 是 各 级 党 政 领 导 干 部 ， 都 要 重 视 学 习 。\n",
      "INFO:tensorflow:O O O O O O O O O O O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:只 有 学 习 好 ， 才 能 贯 彻 好 。\n",
      "INFO:tensorflow:O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:为 了 迎 接 香 港 顺 利 回 归 祖 国 这 一 中 华 民 族 的 盛 事 ， 首 先 要 有 一 个 扎 实 的 思 想 准 备 和 良 好 的 精 神 状 态 。\n",
      "INFO:tensorflow:O O O O B-LOC I-LOC O O O O O O O O B-LOC I-LOC O O O O O O O O O O O O O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:基 本 法 连 着 你 我 他\n",
      "INFO:tensorflow:O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:叶 秋\n",
      "INFO:tensorflow:B-PER I-PER\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:赠 书 想 来 是 香 港 同 胞 的 一 种 文 明 礼 仪 。\n",
      "INFO:tensorflow:O O O O O B-LOC I-LOC O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:抵 港 仅 数 日 ， 就 收 到 厚 厚 几 摞 书 。\n",
      "INFO:tensorflow:O B-LOC O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:匆 匆 翻 阅 一 遍 ， 发 现 各 种 版 本 的 《 中 华 人 民 共 和 国 香 港 特 别 行 政 区 基 本 法 》 竟 有 六 册 之 多 ， 推 介 普 及 基 本 法 的 书 籍 还 要 多 。\n",
      "INFO:tensorflow:O O O O O O O O O O O O O O O B-LOC I-LOC I-LOC I-LOC I-LOC I-LOC I-LOC B-LOC I-LOC I-LOC I-LOC I-LOC I-LOC I-LOC O O O O O O O O O O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:应 约 去 湾 仔 道 谈 事 ， 路 过 一 个 名 为 “ 艺 美 ” 的 书 店 ， 看 到 摆 放 在 最 抢 眼 位 置 的 也 是 基 本 法 及 其 推 介 图 书 。\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "INFO:tensorflow:O O O B-LOC I-LOC I-LOC O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:由 此 可 见 ， 在 法 制 观 念 很 强 的 港 人 心 目 中 ， 基 本 法 具 有 极 大 的 权 威 和 尊 严 。\n",
      "INFO:tensorflow:O O O O O O O O O O O O O B-LOC O O O O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:行 政 官 员 表 示 ： “ 香 港 继 续 繁 荣 稳 定 、 实 现 香 港 梦 的 成 功 要 素 ， 在 基 本 法 中 得 到 了 充 分 保 证 。 ”\n",
      "INFO:tensorflow:O O O O O O O O B-LOC I-LOC O O O O O O O O O B-LOC I-LOC O O O O O O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:法 律 界 人 士 认 为 ： “ 法 治 精 神 能 否 继 续 保 持 ， 基 本 法 已 作 了 明 确 规 定 。\n",
      "INFO:tensorflow:O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:只 要 恪 守 广 大 港 人 认 受 的 香 港 法 律 体 系 中 的 这 个 总 纲 纪 、 总 章 程 ， 香 港 将 健 步 迈 向 新 世 纪 。 ”\n",
      "INFO:tensorflow:O O O O O O B-LOC O O O O B-LOC I-LOC O O O O O O O O O O O O O O O O B-LOC I-LOC O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:劳 工 界 的 成 员 说 ， 涉 及 保 障 劳 工 合 法 权 益 的 条 款 ， “ 香 港 现 在 有 的 ， 基 本 法 都 保 持 了 ；\n",
      "INFO:tensorflow:O O O O O O O O O O O O O O O O O O O O O O O B-LOC I-LOC O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:大 家 因 此 吃 了 定 心 丸 。 ”\n",
      "INFO:tensorflow:O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:基 本 法 受 到 港 人 的 普 遍 欢 迎 和 高 度 重 视 是 势 所 必 然 。\n",
      "INFO:tensorflow:O O O O O B-LOC O O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:历 时 四 年 零 八 个 月 、 凝 聚 了 香 港 和 内 地 无 数 人 的 智 慧 而 制 定 的 基 本 法 ， 将 邓 小 平 同 志 倡 导 的 “ 一 国 两 制 ” 伟 大 构 想 以 法 律 形 式 固 定 下 来 ， 成 为 国 家 和 人 民 的 意 志 。\n",
      "INFO:tensorflow:O O O O O O O O O O O O B-LOC I-LOC O O O O O O O O O O O O O O O O O O B-PER I-PER I-PER O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:基 本 法 既 是 香 港 回 归 后 特 区 一 切 运 作 的 法 律 基 础 ， 更 是 保 持 香 港 长 期 稳 定 繁 荣 的 法 律 保 证 。\n",
      "INFO:tensorflow:O O O O O B-LOC I-LOC O O O B-LOC I-LOC O O O O O O O O O O O O O O B-LOC I-LOC O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:实 践 已 经 并 将 继 续 证 明 这 一 点 。\n",
      "INFO:tensorflow:O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:说 来 也 巧 ， 姬 鹏 飞 同 志 1 9 9 0 年 4 月 在 邓 小 平 同 志 题 写 书 名 的 《 基 本 法 的 诞 生 》 一 书 序 言 中 也 写 了 同 样 的 话 。\n",
      "INFO:tensorflow:O O O O O B-PER I-PER I-PER O O O O O O O O O O B-PER I-PER I-PER O O O O O O O O O O O O O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:真 可 谓 仁 者 智 者 所 见 略 同 。\n",
      "INFO:tensorflow:O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:基 本 法 是 一 部 具 有 普 遍 约 束 力 的 重 要 法 律 。\n",
      "INFO:tensorflow:O O O O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:7 月 1 日 ， 这 部 重 要 法 律 即 开 始 正 式 实 施 。\n",
      "INFO:tensorflow:O O O O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:基 本 法 不 仅 体 现 了 香 港 同 胞 的 意 志 和 利 益 ， 也 体 现 了 全 国 人 民 的 意 志 和 利 益 。\n",
      "INFO:tensorflow:O O O O O O O O B-LOC I-LOC O O O O O O O O O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:正 因 为 如 此 ， 江 泽 民 同 志 强 调 ： 香 港 基 本 法 是 一 部 全 国 性 的 法 律 ， 不 仅 香 港 要 严 格 遵 守 ， 各 省 、 自 治 区 、 直 辖 市 都 要 严 格 遵 守 。\n",
      "INFO:tensorflow:O O O O O O B-PER I-PER I-PER O O O O O B-LOC I-LOC O O O O O O O O O O O O O O O B-LOC I-LOC O O O O O O O O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:还 表 示 ， 不 仅 我 要 遵 守 ， 我 希 望 香 港 同 胞 和 全 国 1 2 亿 人 民 也 要 遵 守 。\n",
      "INFO:tensorflow:O O O O O O O O O O O O O O B-LOC I-LOC O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:学 习 、 贯 彻 基 本 法 的 过 程 ， 无 疑 是 增 强 法 制 观 念 、 推 进 法 制 建 设 的 过 程 ， 无 疑 是 内 地 和 香 港 在 新 的 征 途 上 并 肩 同 行 、 共 创 辉 煌 的 过 程 。\n",
      "INFO:tensorflow:O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O B-LOC I-LOC O O O O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:法 律 一 旦 为 人 民 群 众 所 掌 握 ， 就 会 变 成 伟 大 的 力 量 。\n",
      "INFO:tensorflow:O O O O O O O O O O O O O O O O O O O O O O O\n",
      "INFO:tensorflow:list index out of range\n",
      "INFO:tensorflow:行 文 至 此 ， 我 对 “ 基 本 法 连 着 你 我 他 ” 有 了 更 深 刻 、 更 真 切 的 理 解 。\n",
      "INFO:tensorflow:O O O O O O O O O O O O O O O O O O O O O O O O O O O O O O\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "processed 370 tokens with 19 phrases; found: 18 phrases; correct: 16.\n",
      "\n",
      "accuracy:  98.92%; precision:  88.89%; recall:  84.21%; FB1:  86.49\n",
      "\n",
      "              LOC: precision: 100.00%; recall: 100.00%; FB1: 100.00  4\n",
      "\n",
      "              ORG: precision: 100.00%; recall: 100.00%; FB1: 100.00  4\n",
      "\n",
      "              PER: precision:  80.00%; recall:  72.73%; FB1:  76.19  10\n",
      "\n"
     ]
    }
   ],
   "source": [
    "token_path = os.path.join(output_dir, \"token_test.txt\")\n",
    "if os.path.exists(token_path):\n",
    "    os.remove(token_path)\n",
    "\n",
    "with codecs.open(os.path.join(output_dir, 'label2id.pkl'), 'rb') as rf:\n",
    "    label2id = pickle.load(rf)\n",
    "    id2label = {value: key for key, value in label2id.items()}\n",
    "\n",
    "predict_examples = processor.get_test_examples(data_dir)\n",
    "predict_file = os.path.join(output_dir, \"predict.tf_record\")\n",
    "filed_based_convert_examples_to_features(predict_examples, label_list,\n",
    "                                                 max_seq_length, tokenizer,\n",
    "                                                 predict_file, mode=\"test\")\n",
    "\n",
    "tf.logging.info(\"***** Running prediction*****\")\n",
    "tf.logging.info(\"  Num examples = %d\", len(predict_examples))\n",
    "tf.logging.info(\"  Batch size = %d\", batch_size)\n",
    "    \n",
    "predict_drop_remainder = False\n",
    "predict_input_fn = file_based_input_fn_builder(\n",
    "            input_file=predict_file,\n",
    "            seq_length=max_seq_length,\n",
    "            is_training=False,\n",
    "            drop_remainder=predict_drop_remainder)\n",
    "\n",
    "predicted_result = estimator.evaluate(input_fn=predict_input_fn)\n",
    "output_eval_file = os.path.join(output_dir, \"predicted_results.txt\")\n",
    "with codecs.open(output_eval_file, \"w\", encoding='utf-8') as writer:\n",
    "    tf.logging.info(\"***** Predict results *****\")\n",
    "    for key in sorted(predicted_result.keys()):\n",
    "        tf.logging.info(\"  %s = %s\", key, str(predicted_result[key]))\n",
    "        writer.write(\"%s = %s\\n\" % (key, str(predicted_result[key])))\n",
    "\n",
    "result = estimator.predict(input_fn=predict_input_fn)\n",
    "output_predict_file = os.path.join(output_dir, \"label_test.txt\")\n",
    "\n",
    "def result_to_pair(writer):\n",
    "    for predict_line, prediction in zip(predict_examples, result):\n",
    "        idx = 0\n",
    "        line = ''\n",
    "        line_token = str(predict_line.text).split(' ')\n",
    "        label_token = str(predict_line.label).split(' ')\n",
    "        if len(line_token) != len(label_token):\n",
    "            tf.logging.info(predict_line.text)\n",
    "            tf.logging.info(predict_line.label)\n",
    "        for pred_id in prediction:  # avoid shadowing the built-in id()\n",
    "            if pred_id == 0:\n",
    "                continue\n",
    "            curr_labels = id2label[pred_id]\n",
    "            if curr_labels in ['[CLS]', '[SEP]']:\n",
    "                continue\n",
    "            try:\n",
    "                line += line_token[idx] + ' ' + label_token[idx] + ' ' + curr_labels + '\\n'\n",
    "            except Exception as e:\n",
    "                tf.logging.info(e)\n",
    "                tf.logging.info(predict_line.text)\n",
    "                tf.logging.info(predict_line.label)\n",
    "                line = ''\n",
    "                break\n",
    "            idx += 1\n",
    "        writer.write(line + '\\n')\n",
    "            \n",
    "from ner.src.conlleval import return_report\n",
    "\n",
    "with codecs.open(output_predict_file, 'w', encoding='utf-8') as writer:\n",
    "    result_to_pair(writer)\n",
    "eval_result = return_report(output_predict_file)\n",
    "for line in eval_result:\n",
    "    print(line)"
   ]
  },
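  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The per-type scores above (precision/recall/FB1 for LOC, ORG and PER) are entity-level metrics in the conlleval style: a predicted entity counts as correct only if both its type and its exact span match the gold annotation. The cell below is a simplified sketch of how such scores can be computed, not the actual ner/src/conlleval implementation; the helper names extract_spans and entity_f1 are hypothetical, and stray I- tags are simply dropped here."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Simplified, illustrative sketch of conlleval-style entity-level scoring\n",
    "def extract_spans(tags):\n",
    "    # Collect (type, start, end) entity spans from one BIO tag sequence\n",
    "    spans, start, etype = [], None, None\n",
    "    for i, tag in enumerate(list(tags) + ['O']):  # 'O' sentinel flushes the last span\n",
    "        if tag == 'O' or tag.startswith('B-') or etype != tag[2:]:\n",
    "            if etype is not None:\n",
    "                spans.append((etype, start, i))\n",
    "            etype, start = (tag[2:], i) if tag.startswith('B-') else (None, None)\n",
    "    return spans\n",
    "\n",
    "def entity_f1(gold_seqs, pred_seqs):\n",
    "    # Micro-averaged precision/recall/F1 over exact (type, span) matches\n",
    "    gold, pred = set(), set()\n",
    "    for k, (g, p) in enumerate(zip(gold_seqs, pred_seqs)):\n",
    "        gold |= {(k,) + s for s in extract_spans(g)}\n",
    "        pred |= {(k,) + s for s in extract_spans(p)}\n",
    "    correct = len(gold & pred)\n",
    "    prec = correct / len(pred) if pred else 0.0\n",
    "    rec = correct / len(gold) if gold else 0.0\n",
    "    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0\n",
    "    return prec, rec, f1\n",
    "\n",
    "# One gold entity missed: precision 1.0, recall 0.5\n",
    "print(entity_f1([['B-PER', 'I-PER', 'O', 'B-LOC']], [['B-PER', 'I-PER', 'O', 'O']]))"
   ]
  },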
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Testing Named Entity Recognition: Interactive Mode\n",
    "\n",
    "Next we will test the BERT model on some concrete sentences to see how well it recognizes named entities. We test interactively: you type a sentence, the model immediately predicts its entities, and you can keep entering sentences until you choose to quit.  \n",
    "The test steps are:  \n",
    " \n",
    "1. Find the menu bar at the top of this page and click Kernel -> Restart;  \n",
    "2. Run the \"%run ner/src/terminal_predict.py\" command below;  \n",
    "3. Type a sentence and press Enter to get a prediction;  \n",
    "4. To end the test, enter \"再见\" (goodbye).  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/home/ma-user/anaconda3/envs/TensorFlow-1.13.1/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n",
      "  _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\n",
      "/home/ma-user/anaconda3/envs/TensorFlow-1.13.1/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n",
      "  _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\n",
      "/home/ma-user/anaconda3/envs/TensorFlow-1.13.1/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:528: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n",
      "  _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\n",
      "/home/ma-user/anaconda3/envs/TensorFlow-1.13.1/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:529: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n",
      "  _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\n",
      "/home/ma-user/anaconda3/envs/TensorFlow-1.13.1/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:530: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n",
      "  _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\n",
      "/home/ma-user/anaconda3/envs/TensorFlow-1.13.1/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:535: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n",
      "  np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "checkpoint path:./ner/output/checkpoint\n",
      "going to restore checkpoint\n",
      "WARNING:tensorflow:From /home/ma-user/anaconda3/envs/TensorFlow-1.13.1/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Colocations handled automatically by placer.\n",
      "WARNING:tensorflow:From /home/ma-user/work/DL_nlp_bert_ner/ner/bert/modeling.py:671: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Use keras.layers.dense instead.\n",
      "WARNING:tensorflow:From /home/ma-user/anaconda3/envs/TensorFlow-1.13.1/lib/python3.6/site-packages/tensorflow/contrib/crf/python/ops/crf.py:567: dynamic_rnn (from tensorflow.python.ops.rnn) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Please use `keras.layers.RNN(cell)`, which is equivalent to this API\n",
      "WARNING:tensorflow:From /home/ma-user/anaconda3/envs/TensorFlow-1.13.1/lib/python3.6/site-packages/tensorflow/python/ops/rnn.py:626: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Use tf.cast instead.\n",
      "WARNING:tensorflow:From /home/ma-user/anaconda3/envs/TensorFlow-1.13.1/lib/python3.6/site-packages/tensorflow/python/training/saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.\n",
      "Instructions for updating:\n",
      "Use standard file APIs to check for files with this prefix.\n",
      "INFO:tensorflow:Restoring parameters from ./ner/output/model.ckpt-1630\n",
      "{1: 'O', 2: 'B-PER', 3: 'I-PER', 4: 'B-ORG', 5: 'I-ORG', 6: 'B-LOC', 7: 'I-LOC', 8: 'X', 9: '[CLS]', 10: '[SEP]'}\n",
      "输入句子:\n",
      "周杰伦（Jay Chou），1979年1月18日出生于台湾省新北市，毕业于淡江中学，中国台湾流行乐男歌手。\n",
      "[['B-PER', 'I-PER', 'I-PER', 'O', 'B-PER', 'I-PER', 'I-PER', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-LOC', 'I-LOC', 'I-LOC', 'B-LOC', 'I-LOC', 'I-LOC', 'O', 'O', 'O', 'O', 'B-ORG', 'I-ORG', 'I-ORG', 'I-ORG', 'O', 'B-LOC', 'I-LOC', 'B-LOC', 'I-LOC', 'O', 'O', 'O', 'O', 'O', 'O', 'O']]\n",
      "LOC, 台湾省, 新北市, 中国, 台湾\n",
      "PER, 周杰伦, jaycho##u\n",
      "ORG, 淡江中学\n",
      "time used: 0.826048 sec\n",
      "输入句子:\n",
      "马云，1964年9月10日生于浙江省杭州市，1988年毕业于杭州师范学院外语系，同年担任杭州电子工业学院英文及国际贸易教师。\n",
      "[['B-PER', 'I-PER', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-LOC', 'I-LOC', 'I-LOC', 'B-LOC', 'I-LOC', 'I-LOC', 'O', 'O', 'O', 'O', 'O', 'O', 'B-ORG', 'I-ORG', 'I-ORG', 'I-ORG', 'I-ORG', 'I-ORG', 'I-ORG', 'I-ORG', 'I-ORG', 'O', 'O', 'O', 'O', 'O', 'B-ORG', 'I-ORG', 'I-ORG', 'I-ORG', 'I-ORG', 'I-ORG', 'I-ORG', 'I-ORG', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O']]\n",
      "LOC, 浙江省, 杭州市\n",
      "PER, 马云\n",
      "ORG, 杭州师范学院外语系, 杭州电子工业学院\n",
      "time used: 0.034984 sec\n",
      "输入句子:\n",
      "再见\n",
      "\n",
      "再见\n"
     ]
    }
   ],
   "source": [
    "%run ner/src/terminal_predict.py"
   ]
  },
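  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In the output above, lines such as \"LOC, 台湾省, 新北市\" come from grouping the character-level BIO tags back into entity strings per type. The cell below is a minimal sketch of that post-processing step; group_entities is a hypothetical helper, not the function actually used by terminal_predict.py."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical sketch: turn character-level BIO tags into {type: [entity strings]}\n",
    "def group_entities(chars, tags):\n",
    "    result, cur, etype = {}, '', None\n",
    "    for ch, tag in zip(chars, tags):\n",
    "        if tag.startswith('B-'):  # a new entity begins\n",
    "            if etype:\n",
    "                result.setdefault(etype, []).append(cur)\n",
    "            etype, cur = tag[2:], ch\n",
    "        elif tag.startswith('I-') and etype == tag[2:]:  # continue the current entity\n",
    "            cur += ch\n",
    "        else:  # 'O' or an inconsistent tag ends any open entity\n",
    "            if etype:\n",
    "                result.setdefault(etype, []).append(cur)\n",
    "            etype, cur = None, ''\n",
    "    if etype:  # flush an entity that ends the sentence\n",
    "        result.setdefault(etype, []).append(cur)\n",
    "    return result\n",
    "\n",
    "chars = list('马云生于杭州')\n",
    "tags = ['B-PER', 'I-PER', 'O', 'O', 'B-LOC', 'I-LOC']\n",
    "print(group_entities(chars, tags))  # {'PER': ['马云'], 'LOC': ['杭州']}"
   ]
  },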
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## End of This Case Study"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
