{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "b3748355-d830-4eb0-b56f-7639ba3a4532",
   "metadata": {},
   "source": [
    "# Customizing the transformers model download path"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "af1f2c08-299b-407e-ba15-2e5d6453ce8b",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "os.environ['HF_HOME'] = '../hf'\n",
    "os.environ['HF_HUB_CACHE'] = '../hf/hub'"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5845f0b8-0c5f-4d68-a114-0e0192f7ccc2",
   "metadata": {},
   "source": [
    "# 1. Text Classification with the Pipeline API\n",
    "\n",
    "Like classification in any other modality, text classification labels a text sequence (a sentence, a paragraph, or a whole document) with one class from a predefined set. Text classification has many practical applications, including:\n",
    "\n",
    "- Sentiment analysis: label text along some polarity (e.g. positive or negative) to support decision-making in fields such as politics, finance, and marketing.\n",
    "- Content classification: label text by topic to help organize and filter information in news and social media feeds (weather, sports, finance, etc.).\n",
    "\n",
    "Below we use sentiment analysis, a text classification task, to demonstrate the Pipeline API.\n",
    "\n",
    "Model page: https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b4e0bd6b-1e79-47e5-9af9-5156e885d997",
   "metadata": {},
   "source": [
    "## 1.1 Default model (distilbert-base-uncased-finetuned-sst-2-english) vs. comparison model (textattack/bert-base-uncased-SST-2) - Chinese input"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "9008aab5-6232-484f-acf0-75255b4a03d4",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "No model was supplied, defaulted to distilbert/distilbert-base-uncased-finetuned-sst-2-english and revision 714eb0f (https://huggingface.co/distilbert/distilbert-base-uncased-finetuned-sst-2-english).\n",
      "Using a pipeline without specifying a model name and revision in production is not recommended.\n",
      "Device set to use cuda:0\n",
      "Device set to use cuda:0\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "✅ Default model (distilbert-base-uncased-finetuned-sst-2-english):\n",
      "  ➤ Label : NEGATIVE\n",
      "  ➤ Score : 0.8957\n",
      "\n",
      "🔁 Comparison model (textattack/bert-base-uncased-SST-2):\n",
      "  ➤ Label : LABEL_1\n",
      "  ➤ Score : 0.5482\n"
     ]
    }
   ],
   "source": [
    "from transformers import pipeline\n",
    "\n",
    "# Text to analyze (Chinese: \"It's really cold in Shanghai today\")\n",
    "text = \"今儿上海可真冷啊\"\n",
    "\n",
    "# Default model\n",
    "pipe_default = pipeline(\"sentiment-analysis\")\n",
    "result_default = pipe_default(text)[0]\n",
    "\n",
    "# Comparison model\n",
    "pipe_compare = pipeline(\"sentiment-analysis\", model=\"textattack/bert-base-uncased-SST-2\")\n",
    "result_compare = pipe_compare(text)[0]\n",
    "\n",
    "# Formatted output\n",
    "print(\"✅ Default model (distilbert-base-uncased-finetuned-sst-2-english):\")\n",
    "print(f\"  ➤ Label : {result_default['label']}\")\n",
    "print(f\"  ➤ Score : {round(result_default['score'], 4)}\\n\")\n",
    "\n",
    "print(\"🔁 Comparison model (textattack/bert-base-uncased-SST-2):\")\n",
    "print(f\"  ➤ Label : {result_compare['label']}\")\n",
    "print(f\"  ➤ Score : {round(result_compare['score'], 4)}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c450215d-1131-4a30-999b-9618dc8cacbe",
   "metadata": {},
   "source": [
    "## 1.2 Default model (distilbert-base-uncased-finetuned-sst-2-english) vs. comparison model (textattack/bert-base-uncased-SST-2) - English input"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "id": "16dc5bc2-0769-471e-ad26-aa8218b8b35f",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "No model was supplied, defaulted to distilbert/distilbert-base-uncased-finetuned-sst-2-english and revision 714eb0f (https://huggingface.co/distilbert/distilbert-base-uncased-finetuned-sst-2-english).\n",
      "Using a pipeline without specifying a model name and revision in production is not recommended.\n",
      "Device set to use cuda:0\n",
      "Device set to use cuda:0\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "✅ Default model (distilbert-base-uncased-finetuned-sst-2-english):\n",
      "  ➤ Label : NEGATIVE\n",
      "  ➤ Score : 0.9995\n",
      "\n",
      "🔁 Comparison model (textattack/bert-base-uncased-SST-2):\n",
      "  ➤ Label : LABEL_0\n",
      "  ➤ Score : 0.9973\n"
     ]
    }
   ],
   "source": [
    "from transformers import pipeline\n",
    "\n",
    "# Text to analyze\n",
    "text = \"Today Shanghai is really cold.\"\n",
    "\n",
    "# Default model\n",
    "pipe_default = pipeline(\"sentiment-analysis\")\n",
    "result_default = pipe_default(text)[0]\n",
    "\n",
    "# Comparison model\n",
    "pipe_compare = pipeline(\"sentiment-analysis\", model=\"textattack/bert-base-uncased-SST-2\")\n",
    "result_compare = pipe_compare(text)[0]\n",
    "\n",
    "# Formatted output\n",
    "print(\"✅ Default model (distilbert-base-uncased-finetuned-sst-2-english):\")\n",
    "print(f\"  ➤ Label : {result_default['label']}\")\n",
    "print(f\"  ➤ Score : {round(result_default['score'], 4)}\\n\")\n",
    "\n",
    "print(\"🔁 Comparison model (textattack/bert-base-uncased-SST-2):\")\n",
    "print(f\"  ➤ Label : {result_compare['label']}\")\n",
    "print(f\"  ➤ Score : {round(result_compare['score'], 4)}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d007ddfc-e73b-4fa0-895d-f5155bc68810",
   "metadata": {},
   "source": [
    "## 1.3 Batched inference"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "77fd2d11-33df-4569-9e5a-c4effb0b3f3e",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'label': 'NEGATIVE', 'score': 0.9995032548904419},\n",
       " {'label': 'NEGATIVE', 'score': 0.9984821677207947},\n",
       " {'label': 'POSITIVE', 'score': 0.9961802959442139}]"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from transformers import pipeline\n",
    "\n",
    "text_list = [\n",
    "    \"Today Shanghai is really cold.\",\n",
    "    \"I think the taste of the garlic mashed pork in this store is average.\",\n",
    "    \"You learn things really quickly. You understand the theory class as soon as it is taught.\"\n",
    "]\n",
    "\n",
    "# Build the pipeline here so the cell runs on its own; passing a list returns one result per text\n",
    "pipe = pipeline(\"sentiment-analysis\")\n",
    "pipe(text_list)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d73f5959-ca1c-4ef8-a5fc-7d2de20fe186",
   "metadata": {},
   "source": [
    "# 2. Natural Language Processing (NLP)\n",
    "\n",
    "NLP tasks are among the most common task types, because text is a natural way for us to communicate. To turn text into a format a model can work with, it must be tokenized: a text sequence is split into individual words or subwords (tokens), which are then converted into numbers. As a result, a text sequence can be represented as a sequence of numbers, and once you have a sequence of numbers, it can be fed into a model to solve all kinds of NLP tasks!\n",
    "\n",
    "The text classification task demonstrated above, as well as the token classification and question answering tasks that follow, all belong to NLP."
   ]
  },
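  {
   "cell_type": "markdown",
   "id": "a1f30c2d-0001-4b6e-9c11-tok000000001",
   "metadata": {},
   "source": [
    "To make the tokenization step above concrete, here is a minimal sketch. The `bert-base-uncased` checkpoint is an illustrative assumption (it is not used elsewhere in this notebook); any tokenizer checkpoint behaves the same way."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a1f30c2d-0002-4b6e-9c11-tok000000002",
   "metadata": {},
   "outputs": [],
   "source": [
    "from transformers import AutoTokenizer\n",
    "\n",
    "# Illustrative checkpoint (assumption)\n",
    "tokenizer = AutoTokenizer.from_pretrained(\"bert-base-uncased\")\n",
    "\n",
    "enc = tokenizer(\"Hugging Face is based in New York City.\")\n",
    "print(tokenizer.convert_ids_to_tokens(enc[\"input_ids\"]))  # subword tokens\n",
    "print(enc[\"input_ids\"])  # the numbers that are actually fed to the model"
   ]
  },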
  {
   "cell_type": "markdown",
   "id": "436a07d1-94f0-44c0-8dde-f00dd803ebd2",
   "metadata": {},
   "source": [
    "## 2.1 Token Classification\n",
    "\n",
    "In any NLP task, text is preprocessed by splitting the text sequence into individual words or subwords, called tokens.\n",
    "\n",
    "Token classification assigns each token a label from a predefined set of classes.\n",
    "\n",
    "Two common types of token classification are:\n",
    "\n",
    "- Named entity recognition (NER): label tokens with an entity category such as organization, person, location, or date. NER is especially popular in biomedical settings, where it can label genes, proteins, and drug names.\n",
    "- Part-of-speech tagging (POS): label tokens with their part of speech, such as noun, verb, or adjective. POS helps translation systems understand how two otherwise identical words differ grammatically (\"bank\" as a noun vs. \"bank\" as a verb).\n",
    "\n",
    "Model page: https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4dd47596-4aa4-472c-993a-964c9338558c",
   "metadata": {},
   "source": [
    "### 2.1.1 NER: default model (bert-large-cased-finetuned-conll03-english) vs. dslim/bert-base-NER"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "id": "d37de434-132d-437d-8f2a-59b2c86bcf68",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "No model was supplied, defaulted to dbmdz/bert-large-cased-finetuned-conll03-english and revision 4c53496 (https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english).\n",
      "Using a pipeline without specifying a model name and revision in production is not recommended.\n",
      "Some weights of the model checkpoint at dbmdz/bert-large-cased-finetuned-conll03-english were not used when initializing BertForTokenClassification: ['bert.pooler.dense.bias', 'bert.pooler.dense.weight']\n",
      "- This IS expected if you are initializing BertForTokenClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n",
      "- This IS NOT expected if you are initializing BertForTokenClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n",
      "Device set to use cuda:1\n",
      "Some weights of the model checkpoint at dslim/bert-base-NER were not used when initializing BertForTokenClassification: ['bert.pooler.dense.bias', 'bert.pooler.dense.weight']\n",
      "- This IS expected if you are initializing BertForTokenClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n",
      "- This IS NOT expected if you are initializing BertForTokenClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n",
      "Device set to use cuda:1\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "Default model predictions:\n",
      "  entity: I-ORG           score: 0.9968000054359436 word: Hu              start: 0   end: 2\n",
      "  entity: I-ORG           score: 0.9293000102043152 word: ##gging         start: 2   end: 7\n",
      "  entity: I-ORG           score: 0.9763000011444092 word: Face            start: 8   end: 12\n",
      "  entity: I-MISC          score: 0.9983000159263611 word: French          start: 18  end: 24\n",
      "  entity: I-LOC           score: 0.9990000128746033 word: New             start: 42  end: 45\n",
      "  entity: I-LOC           score: 0.9987000226974487 word: York            start: 46  end: 50\n",
      "  entity: I-LOC           score: 0.9991999864578247 word: City            start: 51  end: 55\n",
      "\n",
      "dslim/bert-base-NER model predictions:\n",
      "  entity: B-ORG           score: 0.8934999704360962 word: Hu              start: 0   end: 2\n",
      "  entity: I-ORG           score: 0.9150000214576721 word: ##gging         start: 2   end: 7\n",
      "  entity: I-ORG           score: 0.9776999950408936 word: Face            start: 8   end: 12\n",
      "  entity: B-MISC          score: 0.9995999932289124 word: French          start: 18  end: 24\n",
      "  entity: B-LOC           score: 0.9994999766349792 word: New             start: 42  end: 45\n",
      "  entity: I-LOC           score: 0.9994000196456909 word: York            start: 46  end: 50\n",
      "  entity: I-LOC           score: 0.9995999932289124 word: City            start: 51  end: 55\n"
     ]
    }
   ],
   "source": [
    "from transformers import pipeline\n",
    "\n",
    "# Sample text\n",
    "text = \"Hugging Face is a French company based in New York City.\"\n",
    "\n",
    "# Default model (currently `dbmdz/bert-large-cased-finetuned-conll03-english`)\n",
    "default_classifier = pipeline(task=\"ner\", device=1)\n",
    "default_preds = default_classifier(text)\n",
    "\n",
    "# dslim/bert-base-NER model\n",
    "dslim_classifier = pipeline(task=\"ner\", model=\"dslim/bert-base-NER\", device=1)\n",
    "dslim_preds = dslim_classifier(text)\n",
    "\n",
    "# Formatted output helper\n",
    "def format_preds(name, preds):\n",
    "    print(f\"\\n{name} model predictions:\")\n",
    "    for pred in preds:\n",
    "        print(\n",
    "            f\"  entity: {pred['entity']:<15} score: {round(pred['score'], 4):<7} \"\n",
    "            f\"word: {pred['word']:<15} start: {pred['start']:<3} end: {pred['end']}\"\n",
    "        )\n",
    "\n",
    "# Print comparison\n",
    "format_preds(\"Default\", default_preds)\n",
    "format_preds(\"dslim/bert-base-NER\", dslim_preds)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "39015439-c724-4a6d-a3a6-82b43b56fcf9",
   "metadata": {},
   "source": [
    "### 2.1.2 Grouped entities (bert-large-cased-finetuned-conll03-english and dslim/bert-base-NER)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "id": "ac37d322-085f-4a8b-a64b-ead6b792d7b6",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "No model was supplied, defaulted to dbmdz/bert-large-cased-finetuned-conll03-english and revision 4c53496 (https://huggingface.co/dbmdz/bert-large-cased-finetuned-conll03-english).\n",
      "Using a pipeline without specifying a model name and revision in production is not recommended.\n",
      "Some weights of the model checkpoint at dbmdz/bert-large-cased-finetuned-conll03-english were not used when initializing BertForTokenClassification: ['bert.pooler.dense.bias', 'bert.pooler.dense.weight']\n",
      "- This IS expected if you are initializing BertForTokenClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n",
      "- This IS NOT expected if you are initializing BertForTokenClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n",
      "Device set to use cuda:1\n",
      "/opt/conda/lib/python3.11/site-packages/transformers/pipelines/token_classification.py:181: UserWarning: `grouped_entities` is deprecated and will be removed in version v5.0.0, defaulted to `aggregation_strategy=\"AggregationStrategy.SIMPLE\"` instead.\n",
      "  warnings.warn(\n",
      "Some weights of the model checkpoint at dslim/bert-base-NER were not used when initializing BertForTokenClassification: ['bert.pooler.dense.bias', 'bert.pooler.dense.weight']\n",
      "- This IS expected if you are initializing BertForTokenClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n",
      "- This IS NOT expected if you are initializing BertForTokenClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n",
      "Device set to use cuda:1\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "Default model predictions (grouped entities):\n",
      "  entity_group: ORG      score: 0.9674999713897705 word: Hugging Face              start: 0   end: 12\n",
      "  entity_group: MISC     score: 0.9983000159263611 word: French                    start: 18  end: 24\n",
      "  entity_group: LOC      score: 0.9990000128746033 word: New York City             start: 42  end: 55\n",
      "\n",
      "dslim/bert-base-NER model predictions (grouped entities):\n",
      "  entity_group: ORG      score: 0.9286999702453613 word: Hugging Face              start: 0   end: 12\n",
      "  entity_group: MISC     score: 0.9995999932289124 word: French                    start: 18  end: 24\n",
      "  entity_group: LOC      score: 0.9994999766349792 word: New York City             start: 42  end: 55\n"
     ]
    }
   ],
   "source": [
    "from transformers import pipeline\n",
    "\n",
    "# Sample text\n",
    "text = \"Hugging Face is a French company based in New York City.\"\n",
    "\n",
    "# Default model (currently `dbmdz/bert-large-cased-finetuned-conll03-english`)\n",
    "# Note: `grouped_entities=True` is deprecated; newer releases prefer aggregation_strategy=\"simple\" (see the warning above)\n",
    "default_classifier = pipeline(task=\"ner\", grouped_entities=True, device=1)\n",
    "default_preds = default_classifier(text)\n",
    "\n",
    "# Specified model: dslim/bert-base-NER\n",
    "dslim_classifier = pipeline(task=\"ner\", model=\"dslim/bert-base-NER\", grouped_entities=True, device=1)\n",
    "dslim_preds = dslim_classifier(text)\n",
    "\n",
    "# Formatted output helper\n",
    "def format_grouped_preds(name, preds):\n",
    "    print(f\"\\n{name} model predictions (grouped entities):\")\n",
    "    for pred in preds:\n",
    "        print(\n",
    "            f\"  entity_group: {pred['entity_group']:<8} \"\n",
    "            f\"score: {round(pred['score'], 4):<7} \"\n",
    "            f\"word: {pred['word']:<25} \"\n",
    "            f\"start: {pred['start']:<3} end: {pred['end']}\"\n",
    "        )\n",
    "\n",
    "# Print comparison\n",
    "format_grouped_preds(\"Default\", default_preds)\n",
    "format_grouped_preds(\"dslim/bert-base-NER\", dslim_preds)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c3286324-ffc1-49d4-a242-a1b0197b049c",
   "metadata": {},
   "source": [
    "## 2.2 Question Answering\n",
    "\n",
    "Question answering is another token-level task that returns an answer to a question, sometimes with context (open-domain) and sometimes without (closed-domain). It happens whenever we ask a virtual assistant something, such as whether a restaurant is open. It can also power customer or technical support, and help search engines retrieve the relevant information you asked for.\n",
    "\n",
    "There are two common types of question answering:\n",
    "\n",
    "- Extractive: given a question and some context, the model must extract a span of text from the context as the answer.\n",
    "- Abstractive: given a question and some context, the answer is generated from that context; this approach is handled by Text2TextGenerationPipeline rather than the QuestionAnsweringPipeline shown below.\n",
    "\n",
    "Model page: https://huggingface.co/distilbert-base-cased-distilled-squad"
   ]
  },
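  {
   "cell_type": "markdown",
   "id": "a1f30c2d-0003-4b6e-9c11-qag000000001",
   "metadata": {},
   "source": [
    "For contrast with the extractive pipeline tested below, the abstractive route goes through the text2text-generation pipeline. A minimal sketch, assuming the `google/flan-t5-small` checkpoint (an illustrative choice, not used elsewhere in this notebook):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a1f30c2d-0004-4b6e-9c11-qag000000002",
   "metadata": {},
   "outputs": [],
   "source": [
    "from transformers import pipeline\n",
    "\n",
    "# Illustrative checkpoint (assumption); the answer is generated, not extracted as a span\n",
    "generator = pipeline(\"text2text-generation\", model=\"google/flan-t5-small\")\n",
    "\n",
    "prompt = (\n",
    "    \"question: What is the name of the repository? \"\n",
    "    \"context: The name of the repository is huggingface/transformers\"\n",
    ")\n",
    "print(generator(prompt)[0][\"generated_text\"])"
   ]
  },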
  {
   "cell_type": "markdown",
   "id": "bcba1504-51c0-4121-96cb-c08adb6f9e0f",
   "metadata": {},
   "source": [
    "### 2.2.1 Default model vs. deepset/roberta-base-squad2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "4eb800dc-888f-4631-8271-a3416aa5a050",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "No model was supplied, defaulted to distilbert/distilbert-base-cased-distilled-squad and revision 564e9b5 (https://huggingface.co/distilbert/distilbert-base-cased-distilled-squad).\n",
      "Using a pipeline without specifying a model name and revision in production is not recommended.\n",
      "Device set to use cuda:1\n",
      "Device set to use cuda:1\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "✅ Default model (distilbert-base-cased-distilled-squad):\n",
      "  ➤ Score : 0.9327\n",
      "  ➤ Answer: huggingface/transformers\n",
      "  ➤ Start : 30, End: 54\n",
      "\n",
      "🔁 Comparison model (deepset/roberta-base-squad2):\n",
      "  ➤ Score : 0.9068\n",
      "  ➤ Answer: huggingface/transformers\n",
      "  ➤ Start : 30, End: 54\n"
     ]
    }
   ],
   "source": [
    "from transformers import pipeline\n",
    "\n",
    "# Default model\n",
    "question_answerer = pipeline(task=\"question-answering\", device=1)\n",
    "\n",
    "preds_default = question_answerer(\n",
    "    question=\"What is the name of the repository?\",\n",
    "    context=\"The name of the repository is huggingface/transformers\",\n",
    ")\n",
    "\n",
    "# Comparison model: deepset/roberta-base-squad2\n",
    "question_answerer = pipeline(task=\"question-answering\", model=\"deepset/roberta-base-squad2\", device=1)\n",
    "\n",
    "preds_roberta = question_answerer(\n",
    "    question=\"What is the name of the repository?\",\n",
    "    context=\"The name of the repository is huggingface/transformers\",\n",
    ")\n",
    "\n",
    "# Formatted output\n",
    "print(\"✅ Default model (distilbert-base-cased-distilled-squad):\")\n",
    "print(f\"  ➤ Score : {round(preds_default['score'], 4)}\")\n",
    "print(f\"  ➤ Answer: {preds_default['answer']}\")\n",
    "print(f\"  ➤ Start : {preds_default['start']}, End: {preds_default['end']}\\n\")\n",
    "\n",
    "print(\"🔁 Comparison model (deepset/roberta-base-squad2):\")\n",
    "print(f\"  ➤ Score : {round(preds_roberta['score'], 4)}\")\n",
    "print(f\"  ➤ Answer: {preds_roberta['answer']}\")\n",
    "print(f\"  ➤ Start : {preds_roberta['start']}, End: {preds_roberta['end']}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a601a527-afbe-4a8f-b1fb-d48abf3a6837",
   "metadata": {},
   "source": [
    "## 2.3 Summarization\n",
    "\n",
    "Summarization creates a shorter version of a longer text while preserving as much of the original document's meaning as possible. It is a sequence-to-sequence task: the output text sequence is shorter than the input. Summarizing long documents helps readers quickly grasp the main points; bills, legal and financial documents, patents, and scientific papers can all be summarized to save readers time and serve as a reading aid.\n",
    "\n",
    "Like question answering, there are two types of summarization:\n",
    "\n",
    "- Extractive: identify and extract the most important sentences from the original text.\n",
    "- Abstractive: generate a target summary from the original text (possibly including new words that do not appear in the input document); SummarizationPipeline uses the abstractive approach.\n",
    "\n",
    "Model page: https://huggingface.co/t5-base"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7e931f80-eaa6-4d8e-b358-15ca2e52b962",
   "metadata": {},
   "source": [
    "### 2.3.1 t5-base vs. sshleifer/distilbart-cnn-12-6 (relatively lightweight)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "8619ddd8-4cc5-4855-adf2-7eaae021bd82",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Device set to use cuda:1\n",
      "Your max_length is set to 200, but your input_length is only 128. Since this is a summarization task, where outputs shorter than the input are typically wanted, you might consider decreasing max_length manually, e.g. summarizer('...', max_length=64)\n",
      "Device set to use cuda:1\n",
      "Your max_length is set to 142, but your input_length is only 129. Since this is a summarization task, where outputs shorter than the input are typically wanted, you might consider decreasing max_length manually, e.g. summarizer('...', max_length=64)\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "🔹 t5-base summary:\n",
      "the Transformer is the first sequence transduction model based entirely on attention . it replaces recurrent layers commonly used in encoder-decoder architectures with multi-headed self-attention . for translation tasks, the Transformer can be trained significantly faster than architectures based on convolutional layers .\n",
      "\n",
      "🔹 sshleifer/distilbart-cnn-12-6 summary:\n",
      " The Transformer is the first sequence transduction model based entirely on attention . It replaces recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention . For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers .\n"
     ]
    }
   ],
   "source": [
    "from transformers import pipeline\n",
    "\n",
    "# Text to summarize\n",
    "text = \"\"\"\n",
    "In this work, we presented the Transformer, the first sequence transduction model based entirely on attention, \n",
    "replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention. \n",
    "For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers. \n",
    "On both WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, we achieve a new state of the art. \n",
    "In the former task our best model outperforms even all previously reported ensembles.\n",
    "\"\"\"\n",
    "\n",
    "# t5-base summary\n",
    "t5_summarizer = pipeline(\"summarization\", model=\"t5-base\", min_length=8, max_length=64, device=1)\n",
    "t5_summary = t5_summarizer(text)[0]['summary_text']\n",
    "\n",
    "# sshleifer/distilbart-cnn-12-6 summary\n",
    "bart_summarizer = pipeline(\"summarization\", model=\"sshleifer/distilbart-cnn-12-6\", min_length=8, max_length=64, device=1)\n",
    "bart_summary = bart_summarizer(text)[0]['summary_text']\n",
    "\n",
    "# Formatted output\n",
    "print(\"🔹 t5-base summary:\")\n",
    "print(t5_summary)\n",
    "print(\"\\n🔹 sshleifer/distilbart-cnn-12-6 summary:\")\n",
    "print(bart_summary)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a22e4d40-a698-4d9e-b4c6-1459cfc0ad59",
   "metadata": {},
   "source": [
    "# 3. Audio Processing Tasks\n",
    "Audio and speech processing tasks differ slightly from the other modalities, mainly because audio as an input is a continuous signal. Unlike text, a raw audio waveform cannot be neatly split into discrete chunks the way a sentence can be divided into words. To get around this, the raw audio signal is typically sampled at regular time intervals. The more samples taken within an interval, the higher the sampling rate, and the more closely the audio resembles the original source.\n",
    "\n",
    "Previous approaches preprocessed the audio to extract useful features from it. It is now more common to feed the raw audio waveform directly to a feature encoder to extract an audio representation. This simplifies the preprocessing step and lets the model learn the most important features."
   ]
  },
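  {
   "cell_type": "markdown",
   "id": "a1f30c2d-0005-4b6e-9c11-smp000000001",
   "metadata": {},
   "source": [
    "The sampling described above is easy to quantify: the number of samples is just the sampling rate times the duration. As a point of reference, the speech models used below (Whisper, Wav2Vec2, HuBERT) expect 16 kHz input."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a1f30c2d-0006-4b6e-9c11-smp000000002",
   "metadata": {},
   "outputs": [],
   "source": [
    "# One second of audio at common sampling rates\n",
    "for rate_hz in (8_000, 16_000, 44_100):\n",
    "    n_samples = int(rate_hz * 1.0)  # rate (Hz) x duration (s)\n",
    "    print(f\"{rate_hz:>6} Hz -> {n_samples} samples per second\")"
   ]
  },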
  {
   "cell_type": "markdown",
   "id": "00fd8b55-53a6-42f2-b0ae-8d21cc171a72",
   "metadata": {},
   "source": [
    "## 3.1 Prerequisites\n",
    "```\n",
    "$ apt update && apt upgrade\n",
    "$ apt install -y ffmpeg\n",
    "$ pip install ffmpeg-python\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6c1167b1-8025-4199-984f-fc6e094c6676",
   "metadata": {},
   "source": [
    "## 3.2 Audio Classification\n",
    "\n",
    "Audio classification labels audio data with classes from a predefined set. It is a broad category with many concrete applications, some of which include:\n",
    "\n",
    "- Acoustic scene classification: label audio with a scene label (\"office\", \"beach\", \"stadium\").\n",
    "- Acoustic event detection: label audio with a sound-event label (\"car horn\", \"whale call\", \"glass breaking\").\n",
    "- Tagging: label audio that contains multiple sounds (birdsong, speaker identification in a meeting).\n",
    "- Music classification: label music with a genre label (\"metal\", \"hip-hop\", \"country\").\n",
    "\n",
    "Model page: https://huggingface.co/superb/hubert-base-superb-er\n",
    "\n",
    "Dataset page: https://huggingface.co/datasets/superb#er\n",
    "\n",
    "Emotion recognition (ER) predicts an emotion class for each utterance. It adopts IEMOCAP, the most widely used ER dataset, and follows the conventional evaluation protocol: the unbalanced emotion classes are dropped, leaving the four classes with a similar number of data points, and evaluation uses five-fold cross-validation on the standard splits. The evaluation metric is accuracy (ACC)."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e075ebef-ab62-406f-90d7-7c1efc39617f",
   "metadata": {},
   "source": [
    "### 3.2.1 superb/hubert-base-superb-er vs. superb/wav2vec2-base-superb-er"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "68135024-1010-4b29-b04c-a98d74e17b1b",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Device set to use cuda:1\n",
      "/opt/conda/lib/python3.11/site-packages/transformers/configuration_utils.py:309: UserWarning: Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 Transformers. Using `model.gradient_checkpointing_enable()` instead, or if you are using the `Trainer` API, pass `gradient_checkpointing=True` in your `TrainingArguments`.\n",
      "  warnings.warn(\n",
      "Device set to use cuda:1\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "🔹 Hubert-base results:\n",
      "  hap             : 0.4532\n",
      "  sad             : 0.3622\n",
      "  neu             : 0.0943\n",
      "  ang             : 0.0903\n",
      "\n",
      "🔹 Wav2Vec2-base results:\n",
      "  sad             : 0.8078\n",
      "  neu             : 0.1083\n",
      "  hap             : 0.0799\n",
      "  ang             : 0.004\n"
     ]
    }
   ],
   "source": [
    "from transformers import pipeline\n",
    "\n",
    "# Audio to classify\n",
    "audio_url = \"https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac\"\n",
    "\n",
    "# superb/hubert-base-superb-er model\n",
    "hubert_classifier = pipeline(task=\"audio-classification\", model=\"superb/hubert-base-superb-er\", device=1)\n",
    "hubert_preds = hubert_classifier(audio_url)\n",
    "hubert_preds = [{\"label\": pred[\"label\"], \"score\": round(pred[\"score\"], 4)} for pred in hubert_preds]\n",
    "\n",
    "# Comparison model: superb/wav2vec2-base-superb-er\n",
    "wav2vec_classifier = pipeline(task=\"audio-classification\", model=\"superb/wav2vec2-base-superb-er\", device=1)\n",
    "wav2vec_preds = wav2vec_classifier(audio_url)\n",
    "wav2vec_preds = [{\"label\": pred[\"label\"], \"score\": round(pred[\"score\"], 4)} for pred in wav2vec_preds]\n",
    "\n",
    "# Print comparison\n",
    "print(\"🔹 Hubert-base results:\")\n",
    "for p in hubert_preds:\n",
    "    print(f\"  {p['label']:<15} : {p['score']}\")\n",
    "\n",
    "print(\"\\n🔹 Wav2Vec2-base results:\")\n",
    "for p in wav2vec_preds:\n",
    "    print(f\"  {p['label']:<15} : {p['score']}\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e3d402fd-b304-4441-ae71-31e4242c52e9",
   "metadata": {},
   "source": [
    "## 3.3 Automatic Speech Recognition (ASR)\n",
    "\n",
    "Automatic speech recognition transcribes speech into text. It is one of the most common audio tasks, partly because speech is such a natural form of human communication. Today, ASR systems are embedded in smart products such as speakers, phones, and cars, and we can ask virtual assistants to play music, set reminders, and tell us the weather.\n",
    "\n",
    "A key challenge that Transformer architectures have helped with is low-resource languages. Pretraining on large amounts of speech data and then fine-tuning on only one hour of labeled speech in a low-resource language can still produce high-quality results compared with previous ASR systems trained on 100x more labeled data.\n",
    "\n",
    "Model page: https://huggingface.co/openai/whisper-small"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "dc966489-42ca-4107-a307-e317003177dc",
   "metadata": {},
   "outputs": [],
   "source": [
    "### 3.3.1 openai/whisper-small 与 Whisper-base 对比"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "1cd2bb61-0a68-4290-b297-04538b373de0",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Device set to use cuda:1\n",
      "/opt/conda/lib/python3.11/site-packages/transformers/models/whisper/generation_whisper.py:604: FutureWarning: The input name `inputs` is deprecated. Please make sure to use `input_features` instead.\n",
      "  warnings.warn(\n",
      "Device set to use cuda:1\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "📌 Model comparison:\n",
      "[Whisper-small] time: 4.79 s\n",
      "  Transcript:  I have a dream that one day this nation will rise up and live out the true meaning of its creed.\n",
      "[Whisper-base ] time: 3.69 s\n",
      "  Transcript:  I have a dream that one day this nation will rise up and live out the true meaning of its creed.\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/opt/conda/lib/python3.11/site-packages/transformers/models/whisper/generation_whisper.py:604: FutureWarning: The input name `inputs` is deprecated. Please make sure to use `input_features` instead.\n",
      "  warnings.warn(\n"
     ]
    }
   ],
   "source": [
    "import time\n",
    "from transformers import pipeline\n",
    "\n",
    "# Test audio (replace with a local path if needed)\n",
    "audio_url = \"https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac\"\n",
    "\n",
    "# Whisper-small (note: the timings below include model loading, not just inference)\n",
    "start = time.time()\n",
    "whisper_small = pipeline(task=\"automatic-speech-recognition\", model=\"openai/whisper-small\", device=1)\n",
    "small_result = whisper_small(audio_url)\n",
    "small_time = round(time.time() - start, 2)\n",
    "\n",
    "# Whisper-base\n",
    "start = time.time()\n",
    "whisper_base = pipeline(task=\"automatic-speech-recognition\", model=\"openai/whisper-base\", device=1)\n",
    "base_result = whisper_base(audio_url)\n",
    "base_time = round(time.time() - start, 2)\n",
    "\n",
    "# Formatted output\n",
    "print(\"\\n📌 Model comparison:\")\n",
    "print(f\"[Whisper-small] time: {small_time} s\\n  Transcript: {small_result['text']}\")\n",
    "print(f\"[Whisper-base ] time: {base_time} s\\n  Transcript: {base_result['text']}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e5d8bea0-6dca-49b1-ae88-ab829ca824ee",
   "metadata": {},
   "source": [
    "# 4. Computer Vision\n",
    "\n",
    "One of the earliest successes in computer vision was recognizing images of zip-code digits with a convolutional neural network (CNN). An image is composed of pixels, each with a numeric value, which makes it easy to represent an image as a matrix of pixel values. Each particular combination of pixel values describes the colors of an image.\n",
    "\n",
    "Computer vision tasks can generally be solved in one of two ways:\n",
    "\n",
    "- Use convolutions to learn the image's features hierarchically, from low-level features up to high-level abstract ones.\n",
    "- Split the image into patches and use a Transformer to gradually learn how the patches relate to one another to form the image. Unlike the bottom-up approach favored by CNNs, this is a bit like starting with a blurry image and gradually bringing it into focus."
   ]
  },
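  {
   "cell_type": "markdown",
   "id": "a1f30c2d-0007-4b6e-9c11-vit000000001",
   "metadata": {},
   "source": [
    "The patch-based approach can be made concrete with a quick calculation, matching the `vit-base-patch16-224` naming used below (224x224 input, 16x16 patches):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a1f30c2d-0008-4b6e-9c11-vit000000002",
   "metadata": {},
   "outputs": [],
   "source": [
    "# ViT-style patching: how many patches does a 224x224 image yield?\n",
    "image_size, patch_size = 224, 16\n",
    "patches_per_side = image_size // patch_size\n",
    "print(f\"{patches_per_side} x {patches_per_side} = {patches_per_side ** 2} patches\")  # 14 x 14 = 196"
   ]
  },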
  {
   "cell_type": "markdown",
   "id": "7a228d4b-421d-4948-88a1-4d35cda88067",
   "metadata": {},
   "source": [
    "## 4.1 Image Classification\n",
    "Image classification labels an entire image with a class from a predefined set. Like most classification tasks, it has many practical use cases, some of which include:\n",
    "\n",
    "- Healthcare: label medical images to detect disease or monitor patient health.\n",
    "- Environment: label satellite images to monitor deforestation, inform land management, or detect wildfires.\n",
    "- Agriculture: label crop images to monitor plant health, or satellite images for land-use monitoring.\n",
    "- Ecology: label images of animal or plant species to monitor wildlife populations or track endangered species.\n",
    "\n",
    "Model page: https://huggingface.co/google/vit-base-patch16-224"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e44f180d-1031-4b13-a309-61a59015df08",
   "metadata": {},
   "source": [
    "### 4.1.1 google/vit-base-patch16-224 vs. facebook/deit-tiny-patch16-224"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "f2f25a27-7f9b-4e75-a396-04efb2b36f14",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "No model was supplied, defaulted to google/vit-base-patch16-224 and revision 3f49326 (https://huggingface.co/google/vit-base-patch16-224).\n",
      "Using a pipeline without specifying a model name and revision in production is not recommended.\n",
      "Fast image processor class <class 'transformers.models.vit.image_processing_vit_fast.ViTImageProcessorFast'> is available for this model. Using slow image processor class. To use the fast image processor class set `use_fast=True`.\n",
      "Device set to use cuda:1\n",
      "Fast image processor class <class 'transformers.models.vit.image_processing_vit_fast.ViTImageProcessorFast'> is available for this model. Using slow image processor class. To use the fast image processor class set `use_fast=True`.\n",
      "Device set to use cuda:1\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "🧠 默认模型（ViT Base）识别结果:\n",
      "{'score': 0.9962, 'label': 'giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca'}\n",
      "{'score': 0.0018, 'label': 'lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens'}\n",
      "{'score': 0.0002, 'label': 'ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus'}\n",
      "{'score': 0.0001, 'label': 'sloth bear, Melursus ursinus, Ursus ursinus'}\n",
      "{'score': 0.0001, 'label': 'brown bear, bruin, Ursus arctos'}\n",
      "\n",
      "🧠 轻量模型（DEiT Tiny）识别结果:\n",
      "{'score': 0.1611, 'label': 'giant panda, panda, panda bear, coon bear, Ailuropoda melanoleuca'}\n",
      "{'score': 0.0137, 'label': 'lacewing, lacewing fly'}\n",
      "{'score': 0.01, 'label': 'nematode, nematode worm, roundworm'}\n",
      "{'score': 0.0097, 'label': 'leafhopper'}\n",
      "{'score': 0.0081, 'label': 'soccer ball'}\n"
     ]
    }
   ],
   "source": [
    "from transformers import pipeline\n",
    "\n",
    "# 狼猫\n",
    "# image_path = \"./cat-chonk.jpeg\"\n",
    "# 熊猫\n",
    "image_path = \"./panda.jpg\"\n",
    "\n",
    "# 默认模型（ViT Base）\n",
    "default_classifier = pipeline(\"image-classification\", device=1)\n",
    "default_preds = default_classifier(image_path)\n",
    "default_preds = [{\"score\": round(pred[\"score\"], 4), \"label\": pred[\"label\"]} for pred in default_preds]\n",
    "\n",
    "# 轻量模型（DEiT Tiny）\n",
    "tiny_classifier = pipeline(\"image-classification\", model=\"facebook/deit-tiny-patch16-224\", device=1)\n",
    "tiny_preds = tiny_classifier(image_path)\n",
    "tiny_preds = [{\"score\": round(pred[\"score\"], 4), \"label\": pred[\"label\"]} for pred in tiny_preds]\n",
    "\n",
    "# 格式化输出\n",
    "print(\"\\n🧠 默认模型（ViT Base）识别结果:\")\n",
    "print(*default_preds, sep=\"\\n\")\n",
    "\n",
    "print(\"\\n🧠 轻量模型（DEiT Tiny）识别结果:\")\n",
    "print(*tiny_preds, sep=\"\\n\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b330a0e8-7f89-4fa7-8a83-c3d2759772fe",
   "metadata": {},
   "source": [
    "## 4.2 Object Detection\n",
    "\n",
    "与图像分类不同，目标检测在图像中识别多个对象以及这些对象在图像中的位置（由边界框定义）。目标检测的一些示例应用包括：\n",
    "\n",
    "- 自动驾驶车辆：检测日常交通对象，如其他车辆、行人和红绿灯\n",
    "- 遥感：灾害监测、城市规划和天气预报\n",
    "- 缺陷检测：检测建筑物中的裂缝或结构损坏，以及制造业产品缺陷\n",
    "\n",
    "模型主页：https://huggingface.co/facebook/detr-resnet-50"
   ]
  },
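  {
   "cell_type": "markdown",
   "id": "f3a1c9d2-61b4-4e8a-9c0d-1a2b3c4d5e05",
   "metadata": {},
   "source": [
    "The detection pipelines in this section return each bounding box as an {'xmin', 'ymin', 'xmax', 'ymax'} dict. A small, model-agnostic sketch of Intersection-over-Union (IoU), the standard metric for how much two such boxes agree (the sample boxes are made up for illustration):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f3a1c9d2-61b4-4e8a-9c0d-1a2b3c4d5e06",
   "metadata": {},
   "outputs": [],
   "source": [
    "def iou(a, b):\n",
    "    \"\"\"Intersection-over-Union of two boxes given as\n",
    "    {'xmin', 'ymin', 'xmax', 'ymax'} dicts (the pipeline's box format).\"\"\"\n",
    "    ix = max(0, min(a['xmax'], b['xmax']) - max(a['xmin'], b['xmin']))\n",
    "    iy = max(0, min(a['ymax'], b['ymax']) - max(a['ymin'], b['ymin']))\n",
    "    inter = ix * iy\n",
    "    area_a = (a['xmax'] - a['xmin']) * (a['ymax'] - a['ymin'])\n",
    "    area_b = (b['xmax'] - b['xmin']) * (b['ymax'] - b['ymin'])\n",
    "    return inter / (area_a + area_b - inter)\n",
    "\n",
    "# Two hypothetical boxes for the same object from two different detectors\n",
    "box_a = {'xmin': 279, 'ymin': 20, 'xmax': 482, 'ymax': 416}\n",
    "box_b = {'xmin': 280, 'ymin': 18, 'xmax': 479, 'ymax': 416}\n",
    "print(round(iou(box_a, box_b), 3))  # 0.975"
   ]
  },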
  {
   "cell_type": "markdown",
   "id": "d751d26f-7db7-47b8-9d30-e8210af4473e",
   "metadata": {},
   "source": [
    "### 4.2.1 前置依赖包安装"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 63,
   "id": "131c892e-4b07-4010-8639-fc6144e0df9d",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Requirement already satisfied: timm in /opt/conda/lib/python3.11/site-packages (1.0.17)\n",
      "Requirement already satisfied: torch in /opt/conda/lib/python3.11/site-packages (from timm) (2.7.1)\n",
      "Requirement already satisfied: torchvision in /opt/conda/lib/python3.11/site-packages (from timm) (0.22.1)\n",
      "Requirement already satisfied: pyyaml in /opt/conda/lib/python3.11/site-packages (from timm) (6.0.1)\n",
      "Requirement already satisfied: huggingface_hub in /opt/conda/lib/python3.11/site-packages (from timm) (0.33.4)\n",
      "Requirement already satisfied: safetensors in /opt/conda/lib/python3.11/site-packages (from timm) (0.5.3)\n",
      "Requirement already satisfied: filelock in /opt/conda/lib/python3.11/site-packages (from huggingface_hub->timm) (3.18.0)\n",
      "Requirement already satisfied: fsspec>=2023.5.0 in /opt/conda/lib/python3.11/site-packages (from huggingface_hub->timm) (2023.9.2)\n",
      "Requirement already satisfied: packaging>=20.9 in /opt/conda/lib/python3.11/site-packages (from huggingface_hub->timm) (23.2)\n",
      "Requirement already satisfied: requests in /opt/conda/lib/python3.11/site-packages (from huggingface_hub->timm) (2.31.0)\n",
      "Requirement already satisfied: tqdm>=4.42.1 in /opt/conda/lib/python3.11/site-packages (from huggingface_hub->timm) (4.66.1)\n",
      "Requirement already satisfied: typing-extensions>=3.7.4.3 in /opt/conda/lib/python3.11/site-packages (from huggingface_hub->timm) (4.14.1)\n",
      "Requirement already satisfied: hf-xet<2.0.0,>=1.1.2 in /opt/conda/lib/python3.11/site-packages (from huggingface_hub->timm) (1.1.5)\n",
      "Requirement already satisfied: sympy>=1.13.3 in /opt/conda/lib/python3.11/site-packages (from torch->timm) (1.14.0)\n",
      "Requirement already satisfied: networkx in /opt/conda/lib/python3.11/site-packages (from torch->timm) (3.2)\n",
      "Requirement already satisfied: jinja2 in /opt/conda/lib/python3.11/site-packages (from torch->timm) (3.1.2)\n",
      "Requirement already satisfied: nvidia-cuda-nvrtc-cu12==12.6.77 in /opt/conda/lib/python3.11/site-packages (from torch->timm) (12.6.77)\n",
      "Requirement already satisfied: nvidia-cuda-runtime-cu12==12.6.77 in /opt/conda/lib/python3.11/site-packages (from torch->timm) (12.6.77)\n",
      "Requirement already satisfied: nvidia-cuda-cupti-cu12==12.6.80 in /opt/conda/lib/python3.11/site-packages (from torch->timm) (12.6.80)\n",
      "Requirement already satisfied: nvidia-cudnn-cu12==9.5.1.17 in /opt/conda/lib/python3.11/site-packages (from torch->timm) (9.5.1.17)\n",
      "Requirement already satisfied: nvidia-cublas-cu12==12.6.4.1 in /opt/conda/lib/python3.11/site-packages (from torch->timm) (12.6.4.1)\n",
      "Requirement already satisfied: nvidia-cufft-cu12==11.3.0.4 in /opt/conda/lib/python3.11/site-packages (from torch->timm) (11.3.0.4)\n",
      "Requirement already satisfied: nvidia-curand-cu12==10.3.7.77 in /opt/conda/lib/python3.11/site-packages (from torch->timm) (10.3.7.77)\n",
      "Requirement already satisfied: nvidia-cusolver-cu12==11.7.1.2 in /opt/conda/lib/python3.11/site-packages (from torch->timm) (11.7.1.2)\n",
      "Requirement already satisfied: nvidia-cusparse-cu12==12.5.4.2 in /opt/conda/lib/python3.11/site-packages (from torch->timm) (12.5.4.2)\n",
      "Requirement already satisfied: nvidia-cusparselt-cu12==0.6.3 in /opt/conda/lib/python3.11/site-packages (from torch->timm) (0.6.3)\n",
      "Requirement already satisfied: nvidia-nccl-cu12==2.26.2 in /opt/conda/lib/python3.11/site-packages (from torch->timm) (2.26.2)\n",
      "Requirement already satisfied: nvidia-nvtx-cu12==12.6.77 in /opt/conda/lib/python3.11/site-packages (from torch->timm) (12.6.77)\n",
      "Requirement already satisfied: nvidia-nvjitlink-cu12==12.6.85 in /opt/conda/lib/python3.11/site-packages (from torch->timm) (12.6.85)\n",
      "Requirement already satisfied: nvidia-cufile-cu12==1.11.1.6 in /opt/conda/lib/python3.11/site-packages (from torch->timm) (1.11.1.6)\n",
      "Requirement already satisfied: triton==3.3.1 in /opt/conda/lib/python3.11/site-packages (from torch->timm) (3.3.1)\n",
      "Requirement already satisfied: setuptools>=40.8.0 in /opt/conda/lib/python3.11/site-packages (from triton==3.3.1->torch->timm) (68.2.2)\n",
      "Requirement already satisfied: numpy in /opt/conda/lib/python3.11/site-packages (from torchvision->timm) (1.24.4)\n",
      "Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in /opt/conda/lib/python3.11/site-packages (from torchvision->timm) (10.1.0)\n",
      "Requirement already satisfied: mpmath<1.4,>=1.1.0 in /opt/conda/lib/python3.11/site-packages (from sympy>=1.13.3->torch->timm) (1.3.0)\n",
      "Requirement already satisfied: MarkupSafe>=2.0 in /opt/conda/lib/python3.11/site-packages (from jinja2->torch->timm) (2.1.3)\n",
      "Requirement already satisfied: charset-normalizer<4,>=2 in /opt/conda/lib/python3.11/site-packages (from requests->huggingface_hub->timm) (3.3.0)\n",
      "Requirement already satisfied: idna<4,>=2.5 in /opt/conda/lib/python3.11/site-packages (from requests->huggingface_hub->timm) (3.4)\n",
      "Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/conda/lib/python3.11/site-packages (from requests->huggingface_hub->timm) (2.0.7)\n",
      "Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.11/site-packages (from requests->huggingface_hub->timm) (2023.7.22)\n"
     ]
    }
   ],
   "source": [
    "!pip install timm"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b8908256-7d3f-426a-b82c-9caccb13d143",
   "metadata": {},
   "source": [
    "### 4.2.2 detr-resnet-50 模型与 YOLOS-Tiny 模型对比"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "4034a78c-bf99-459b-b9ba-3df94dde01a9",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "No model was supplied, defaulted to facebook/detr-resnet-50 and revision 1d5f47b (https://huggingface.co/facebook/detr-resnet-50).\n",
      "Using a pipeline without specifying a model name and revision in production is not recommended.\n",
      "Some weights of the model checkpoint at facebook/detr-resnet-50 were not used when initializing DetrForObjectDetection: ['model.backbone.conv_encoder.model.layer1.0.downsample.1.num_batches_tracked', 'model.backbone.conv_encoder.model.layer2.0.downsample.1.num_batches_tracked', 'model.backbone.conv_encoder.model.layer3.0.downsample.1.num_batches_tracked', 'model.backbone.conv_encoder.model.layer4.0.downsample.1.num_batches_tracked']\n",
      "- This IS expected if you are initializing DetrForObjectDetection from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\n",
      "- This IS NOT expected if you are initializing DetrForObjectDetection from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\n",
      "Device set to use cuda:1\n",
      "Device set to use cuda:1\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "默认模型 DETR 检测结果:\n",
      "  cat             score: 0.9985 box: {'xmin': 78, 'ymin': 57, 'xmax': 309, 'ymax': 371}\n",
      "  dog             score: 0.9891 box: {'xmin': 279, 'ymin': 20, 'xmax': 482, 'ymax': 416}\n",
      "\n",
      "YOLOS-Tiny 检测结果:\n",
      "  dog             score: 0.6406 box: {'xmin': 258, 'ymin': 18, 'xmax': 479, 'ymax': 415}\n",
      "  cat             score: 0.9946 box: {'xmin': 75, 'ymin': 60, 'xmax': 290, 'ymax': 369}\n",
      "  dog             score: 0.9899 box: {'xmin': 280, 'ymin': 18, 'xmax': 479, 'ymax': 416}\n"
     ]
    }
   ],
   "source": [
    "from transformers import pipeline\n",
    "\n",
    "image_path = \"./cat_dog.jpg\"\n",
    "\n",
    "# 默认模型（DETR）\n",
    "detector_default = pipeline(task=\"object-detection\", device=1)\n",
    "preds_default = detector_default(image_path)\n",
    "preds_default = [{\"score\": round(p[\"score\"],4), \"label\": p[\"label\"], \"box\": p[\"box\"]} for p in preds_default]\n",
    "\n",
    "# 对比模型（YOLOS-Tiny）\n",
    "detector_yolos = pipeline(task=\"object-detection\", model=\"hustvl/yolos-tiny\", device=1)\n",
    "preds_yolos = detector_yolos(image_path)\n",
    "preds_yolos = [{\"score\": round(p[\"score\"],4), \"label\": p[\"label\"], \"box\": p[\"box\"]} for p in preds_yolos]\n",
    "\n",
    "# 输出格式化函数\n",
    "def print_preds(name, preds):\n",
    "    print(f\"\\n{name} 检测结果:\")\n",
    "    for p in preds:\n",
    "        print(f\"  {p['label']:<15} score: {p['score']} box: {p['box']}\")\n",
    "\n",
    "print_preds(\"默认模型 DETR\", preds_default)\n",
    "print_preds(\"YOLOS-Tiny\", preds_yolos)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
