{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# Tasks NLP Needs to Solve\n",
     "- To process text data, first tokenize it (tokenization methods vary; for Chinese, word- or character-level segmentation is common)\n",
     "- Convert the tokens into feature vectors\n",
     "- After embedding, build the model (BERT, GPT, etc.)\n",
     "- To solve a downstream task, it is usually enough to fine-tune a pre-trained model"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## 1. An Out-of-the-Box HuggingFace Example"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Install torch with: pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu121\n",
     "import warnings\n",
     "warnings.filterwarnings(\"ignore\")\n",
     "from transformers import pipeline\n",
     "\n",
     "classifier = pipeline(\"sentiment-analysis\",    # sentiment classification\n",
     "                      model=\"distilbert-base-uncased-finetuned-sst-2-english\")\n",
     "\n",
     "sentences = [\"I've been waiting for a HuggingFace course my whole life!\",\n",
     "             \"I hate this so much!\",\n",
     "             \"I love it so much!\"]\n",
     "\n",
     "result = classifier(sentences)\n",
     "print(result)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## 2. The Basic HuggingFace Workflow\n",
     "\n",
     "### 2.1 Tokenizer\n",
     "\n",
     "- Splits text into words or characters and handles special tokens (start, end, separator, classification tokens, etc., which can be custom-defined)\n",
     "- Maps each token to a unique ID\n",
     "- Produces auxiliary information, e.g. which sentence a token belongs to, the attention mask, etc.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "from transformers import AutoTokenizer\n",
     "\n",
     "model_name = \"distilbert-base-uncased-finetuned-sst-2-english\"\n",
     "tokenizer  = AutoTokenizer.from_pretrained(model_name)\n",
     "\n",
     "raw_inputs = [\n",
     "    \"I've been waiting for a HuggingFace course my whole life.\",\n",
     "    \"I hate this so much.\",\n",
     "]\n",
     "\n",
     "\"\"\"\n",
     "    padding=True:    pad every sequence to the longest one in the batch\n",
     "    truncation=True: truncate sequences that exceed the maximum length (used together with max_length)\n",
     "    return_tensors:  the tensor type to return; \"pt\" means PyTorch tensors (by default, plain Python lists are returned)\n",
     "\"\"\"\n",
     "inputs = tokenizer(raw_inputs, padding=True, truncation=True, return_tensors=\"pt\")\n",
     "\n",
     "# attention_mask marks padding: where it is 0, the other tokens do not attend to that position\n",
     "print(inputs[\"input_ids\"], \"\\n\", inputs[\"attention_mask\"])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "tokenizer.decode([101,  1045,  1005,  2310,  2042,  3403,  2005,  1037, 17662, 12172, 2607,  2026,  2878,  2166,  1012,   102])"
   ]
  },
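  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick check of the token-to-ID mapping described in 2.1, we can run the steps one at a time (a minimal sketch reusing the `tokenizer` loaded above; note that `tokenizer.tokenize` does not add the special `[CLS]`/`[SEP]` tokens):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "tokens = tokenizer.tokenize(\"I love it so much!\")\n",
    "print(tokens)                                  # sub-word tokens\n",
    "ids = tokenizer.convert_tokens_to_ids(tokens)\n",
    "print(ids)                                     # one unique ID per token\n",
    "print(tokenizer.decode(ids))                   # and back to text"
   ]
  },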
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### 2.2 Loading the Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "from transformers import AutoModel    # automatically picks the right model class for the checkpoint\n",
    "\n",
    "model = AutoModel.from_pretrained(model_name)\n",
    "print(model)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### 2.3 Running the Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "outputs = model(**inputs)\n",
    "print(outputs.last_hidden_state.shape)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### 2.4 Sequence Classification"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from transformers import AutoModelForSequenceClassification\n",
    "\n",
    "model = AutoModelForSequenceClassification.from_pretrained(model_name)\n",
    "print(model)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "We can see that only two linear layers (pre_classifier and classifier) were added on top of the base model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "outputs = model(**inputs)\n",
    "print(outputs.logits.shape)\n",
    "\n",
    "\n",
    "import torch\n",
    "pred = torch.nn.functional.softmax(outputs.logits, dim=-1)\n",
    "print(pred)\n",
    "\n",
    "\n",
    "print(model.config.id2label)"
   ]
  },
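  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To turn the probabilities into readable predictions, we can map each row of `pred` to its label through `id2label` (a small sketch using the `pred` and `model` from the cell above):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "for probs in pred:\n",
    "    label_id = int(probs.argmax())\n",
    "    # print the predicted label and its probability\n",
    "    print(model.config.id2label[label_id], round(float(probs[label_id]), 4))"
   ]
  },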
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### 2.5 What Padding Does"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "sequence_ids1 = [[200, 200, 200]]\n",
     "sequence_ids2 = [[200, 200]]\n",
     "\n",
     "batch_ids = [[200, 200, 200],\n",
     "             [200, 200, tokenizer.pad_token_id]]\n",
     "\n",
     "print(model(torch.tensor(sequence_ids1)).logits)\n",
     "print(model(torch.tensor(sequence_ids2)).logits)\n",
     "print(model(torch.tensor(batch_ids)).logits)\n",
     "print(tokenizer.pad_token_id)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "We see that sequence_ids2 and the second row of batch_ids give different results; let's keep testing:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "sequence_ids3 = [[200, 200, 0]]    # 0 is exactly this tokenizer's pad_token_id\n",
     "\n",
     "print(model(torch.tensor(sequence_ids3)).logits)\n"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Note that 0 is exactly this tokenizer's pad_token_id, and sequence_ids3 reproduces the second row of batch_ids. So the discrepancy is not caused by the padding ID itself: the padding token still takes part in the attention computation. To fix this we introduce an attention mask, which blocks attention to the padded positions."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "batch_ids = [[200, 200, 200],\n",
     "             [200, 200, tokenizer.pad_token_id]]\n",
     "\n",
     "attention_mask = [[1, 1, 1],\n",
     "                  [1, 1, 0]]    # 0 masks out the padded position\n",
     "\n",
     "print(model(torch.tensor(batch_ids), attention_mask=torch.tensor(attention_mask)).logits)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Common padding strategies:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "# the tokenizer takes raw strings, not token IDs\n",
     "sentences = [\"I've been waiting for a HuggingFace course my whole life.\",\n",
     "             \"I hate this so much.\"]\n",
     "\n",
     "inputs = tokenizer(sentences, padding='longest')       # pad to the longest sequence in the batch\n",
     "inputs = tokenizer(sentences, padding='max_length')    # pad to the model's maximum input length (tokenizer.model_max_length)\n",
     "\n",
     "inputs = tokenizer(sentences, padding='max_length', max_length=8)                      # pad shorter sequences to 8; longer ones are left as-is\n",
     "inputs = tokenizer(sentences, padding='max_length', max_length=10, truncation=True)    # also truncate sequences longer than 10"
   ]
  },
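  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see what each strategy does, we can print the resulting sequence lengths (a minimal sketch; it reuses the `tokenizer` loaded earlier on two short example sentences):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "examples = [\"I love it so much!\", \"I hate this so much.\"]\n",
    "for kwargs in [dict(padding='longest'),\n",
    "               dict(padding='max_length', max_length=8),\n",
    "               dict(padding='max_length', max_length=10, truncation=True)]:\n",
    "    enc = tokenizer(examples, **kwargs)\n",
    "    # show the padding options used and the length of each padded sequence\n",
    "    print(kwargs, [len(ids) for ids in enc['input_ids']])"
   ]
  },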
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## References\n",
     "1. tokenizer: https://mp.weixin.qq.com/s/mYqnJug2tVT8gieTMKiUaQ"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "base",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.10"
  },
  "orig_nbformat": 4
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
