{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 命名实体识别 \n",
    "## 1、数据的处理\n",
    "##### id ner_tags tokens 是原始信息，增加 input_ids labels 信息：\n",
    "{'id': '0',\n",
    " 'tokens': ['海','钓','比','赛','地','点','在','厦','门','与','金','门','之','间','的','海','域','。'],\n",
    " 'ner_tags': [0, 0, 0, 0, 0, 0, 0, 5, 6, 0, 5, 6, 0, 0, 0, 0, 0, 0],\n",
    " 'input_ids': [101,3862,7157,3683,6612,...1,1,1,1],\n",
    " 'labels': [-100, 0, 0, 0, 0, 0, 0, 0, 5, 6, 0, 5, 6, 0, 0, 0, 0, 0, 0, -100]}\n",
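     "\n",
     "The labels above can be built from ner_tags by wrapping them in -100 so the loss skips [CLS], [SEP], and padding. A minimal sketch (align_labels is a hypothetical helper assuming one tag per token, as in the char-level example above; it is not the HF API):\n",
     "```python\n",
     "# Sketch: wrap ner_tags with -100 so special tokens and padding are\n",
     "# ignored by the loss (assumes one tag per token, as in char-level Chinese).\n",
     "def align_labels(ner_tags, input_len):\n",
     "    labels = [-100] + list(ner_tags) + [-100]     # [CLS] ... [SEP]\n",
     "    labels += [-100] * (input_len - len(labels))  # padding positions\n",
     "    return labels\n",
     "\n",
     "tags = [0, 0, 0, 0, 0, 0, 0, 5, 6, 0, 5, 6, 0, 0, 0, 0, 0, 0]\n",
     "print(align_labels(tags, 20))\n",
     "# -> [-100, 0, 0, 0, 0, 0, 0, 0, 5, 6, 0, 5, 6, 0, 0, 0, 0, 0, 0, -100]\n",
     "```\n",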
    "## 2、使用到的类\n",
    "### 2.1 文字处理必有的三个类 AutoTokenizer TrainingArguments Trainer\n",
    "### 2.2 分类器 AutoModelForTokenClassification（BertForTokenClassification是其中一种）\n",
    "####  针对如下带有 input_ids，labels 的数据比较适合使用分类器\n",
    "'input_ids': [101,3862,7157,3683,6612,...1,1,1,1],\n",
    "'labels': [-100, 0, 0, 0, 0, 0, 0, 0, 5, 6, 0, 5, 6, 0, 0, 0, 0, 0, 0, -100]\n",
    "#### 原理：将labels 经过分类器会得到一个 labels 长度 * 隐藏层维度的向量，在计算softmax 。\n",
    "### 2.3 DataCollatorForTokenClassification 用于将数据转化为模型可以接受的格式\n",
    "\n",
    "# 阅读理解\n",
    "## 1、数据的处理\n",
    "##### input_ids start_positions  end_positions 三个信息即可\n",
    "{'input_ids': [101,5745,2455,7563,3221,784,720,3198,952,6158,818,711,712,3136,4638,8043,102,5745,2455,7563,3364,3322,8020,8024,8021,...1,1,1],\n",
    " 'start_positions': 47,\n",
    " 'end_positions': 48}\n",
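     "\n",
     "start_positions and end_positions are derived from the answer's character span via the tokenizer's offset mapping. A minimal sketch (char_span_to_token_span is a hypothetical helper, not the HF API; offsets stands in for the (char_start, char_end) pair the tokenizer reports per token):\n",
     "```python\n",
     "# Map a character-level answer span to token indices using per-token\n",
     "# (char_start, char_end) offsets.\n",
     "def char_span_to_token_span(offsets, ans_start, ans_end):\n",
     "    start_pos = end_pos = 0\n",
     "    for i, (s, e) in enumerate(offsets):\n",
     "        if s <= ans_start < e:\n",
     "            start_pos = i\n",
     "        if s < ans_end <= e:\n",
     "            end_pos = i\n",
     "    return start_pos, end_pos\n",
     "\n",
     "# toy offsets: each token covers exactly one character\n",
     "offsets = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]\n",
     "print(char_span_to_token_span(offsets, 2, 4))\n",
     "# -> (2, 3)\n",
     "```\n",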
    "## 2、使用到的类\n",
    "### 2.1 文字处理必有的三个类 AutoTokenizer TrainingArguments Trainer\n",
    "### 2.2 模型 AutoModelForQuestionAnswering（BertForQuestionAnswering是其中一种）\n",
    "####  针对如下带有 input_ids，labels 的数据比较适合使用分类器\n",
    "'input_ids': [101,3862,7157,3683,6612,...1,1,1,1],\n",
    "'labels': [-100, 0, 0, 0, 0, 0, 0, 0, 5, 6, 0, 5, 6, 0, 0, 0, 0, 0, 0, -100]\n",
    "#### 原理：输出变成两个长度为tokens_length的向量，一个为start_logits，一个为end_logits,然后做loss计算\n",
    "### 2.3 DefaultDataCollator 用于将数据转化为模型可以接受的格式\n",
    "\n",
    "# 多项选择题\n",
    "## 1、数据的处理\n",
    "##### input_ids start_positions  end_positions 三个信息即可\n",
    "{'input_ids': [101,5745,2455,7563,3221,784,720,3198,952,6158,818,711,712,3136,4638,8043,102,5745,2455,7563,3364,3322,8020,8024,8021,...1,1,1],\n",
    " 'start_positions': 47,\n",
    " 'end_positions': 48}\n",
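     "\n",
     "Building one sequence per candidate can be sketched in plain Python (build_choice_inputs is a hypothetical helper for illustration; the real tokenizer call also adds [SEP] and padding):\n",
     "```python\n",
     "# Pair the question ids with each candidate's ids, one sequence per choice.\n",
     "def build_choice_inputs(question_ids, choice_ids_list):\n",
     "    return [question_ids + ids for ids in choice_ids_list]\n",
     "\n",
     "q = [101, 7, 8, 102]\n",
     "choices = [[21, 102], [22, 102], [23, 102]]\n",
     "print(build_choice_inputs(q, choices))\n",
     "```\n",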
    "## 2、使用到的类\n",
    "### 2.1 文字处理必有的三个类 AutoTokenizer TrainingArguments Trainer\n",
    "### 2.2 模型 AutoModelForQuestionAnswering（BertForQuestionAnswering是其中一种）\n",
    "####  针对如下带有 input_ids，labels 的数据比较适合使用分类器\n",
    "'input_ids': [101,3862,7157,3683,6612,...1,1,1,1],\n",
    "'labels': [-100, 0, 0, 0, 0, 0, 0, 0, 5, 6, 0, 5, 6, 0, 0, 0, 0, 0, 0, -100]\n",
    "#### 原理：输出变成两个长度为tokens_length的向量，一个为start_logits，一个为end_logits,然后做loss计算\n",
    "### 2.3 DefaultDataCollator 用于将数据转化为模型可以接受的格式"
   ]
  }
 ],
 "metadata": {
  "language_info": {
   "name": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
