"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"show_random_elements(tokenized_datasets[\"train\"], num_examples=1)"
]
},
{
"cell_type": "markdown",
"id": "1c33d153-f729-4f04-972c-a764c1cbbb8b",
"metadata": {},
"source": [
"### Data Sampling\n",
"\n",
"Use 1,000 samples to demonstrate small-scale training of BERT (with the PyTorch `Trainer`).\n",
"\n",
"The `shuffle()` function randomly rearranges the values in a column. If you want more control over the algorithm used to shuffle the dataset, you can pass a different `numpy.random.Generator` via this function's `generator` argument."
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "a17317d8-3c6a-467f-843d-87491f600db1",
"metadata": {},
"outputs": [],
"source": [
"# small_train_dataset = tokenized_datasets[\"train\"].shuffle(seed=42).select(range(1000))\n",
"# small_eval_dataset = tokenized_datasets[\"test\"].shuffle(seed=42).select(range(1000))"
]
},
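{
"cell_type": "markdown",
"id": "shuffle-generator-sketch",
"metadata": {},
"source": [
"A minimal sketch (illustrative, not part of the original notebook) of the reproducibility point above: a seeded `numpy.random.Generator` always yields the same permutation, which is why passing one to `shuffle()` makes sampling deterministic:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"# Two generators seeded identically produce identical shuffles\n",
"g1 = np.random.default_rng(42)\n",
"g2 = np.random.default_rng(42)\n",
"perm1 = g1.permutation(10)\n",
"perm2 = g2.permutation(10)\n",
"print(bool((perm1 == perm2).all()))  # True\n",
"\n",
"# With a datasets.Dataset this would look like (hypothetical names):\n",
"# small = tokenized_datasets[\"train\"].shuffle(generator=np.random.default_rng(42)).select(range(1000))\n",
"```"
]
},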
{
"cell_type": "markdown",
"id": "d3b65d63-2d3a-4a56-bc31-6e88a29e9dec",
"metadata": {},
"source": [
"## Fine-Tuning Configuration\n",
"\n",
"### Load the BERT Model\n",
"\n",
"The warning tells us that some weights are being discarded (the `vocab_transform` and `vocab_layer_norm` layers) and that others are being randomly initialized (the `pre_classifier` and `classifier` layers). This is perfectly normal when fine-tuning: we are removing the head used for the masked language modeling pretraining task and replacing it with a new head for which we have no pretrained weights. The library therefore warns us to fine-tune the model before using it for inference, which is exactly what we are about to do."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "4d2af4df-abd4-4a4b-94b6-b0e7375304ed",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.weight', 'classifier.bias']\n",
"You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n"
]
}
],
"source": [
"from transformers import AutoModelForSequenceClassification\n",
"\n",
"model = AutoModelForSequenceClassification.from_pretrained(\"bert-base-cased\", num_labels=5)"
]
},
{
"cell_type": "markdown",
"id": "b44014df-b52c-4c72-9e9f-54424725a473",
"metadata": {},
"source": [
"### Training Hyperparameters (TrainingArguments)\n",
"\n",
"Full list of configuration parameters and their defaults: https://huggingface.co/docs/transformers/v4.36.1/en/main_classes/trainer#transformers.TrainingArguments\n",
"\n",
"Source definition: https://github.com/huggingface/transformers/blob/v4.36.1/src/transformers/training_args.py#L161\n",
"\n",
"**Most important setting: the checkpoint output path (`output_dir`)**"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "98c01d5c-de72-4ff0-b11d-e07ac5346888",
"metadata": {},
"outputs": [],
"source": [
"# from transformers import TrainingArguments\n",
"\n",
"# model_dir = \"models/bert-base-cased\"\n",
"\n",
"# # logging_steps defaults to 500; given our training data and step count, set it to 100\n",
"# training_args = TrainingArguments(output_dir=f\"{model_dir}/test_trainer\",\n",
"# logging_dir=f\"{model_dir}/test_trainer/runs\",\n",
"# logging_steps=100)\n",
"# # Full hyperparameter configuration\n",
"# print(training_args)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0ce03480-3aaa-48ea-a0c6-a177b8d8e34f",
"metadata": {
"collapsed": true,
"jupyter": {
"outputs_hidden": true
}
},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"id": "7ebd3365-d359-4ab4-a300-4717590cc240",
"metadata": {},
"source": [
"### Metrics Evaluation During Training (Evaluate)\n",
"\n",
"The **[Hugging Face Evaluate library](https://huggingface.co/docs/evaluate/index)** gives you access to dozens of evaluation methods across different domains (natural language processing, computer vision, reinforcement learning, and more) with a single line of code. Currently supported **metrics: https://huggingface.co/evaluate-metric**\n",
"\n",
"The Trainer does not automatically evaluate model performance during training, so we need to pass it a function that computes and reports metrics.\n",
"\n",
"The Evaluate library provides a simple accuracy function, which you can load with `evaluate.load`."
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "2a8ef138-5bf2-41e5-8c68-df8e11f4e98f",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"import evaluate\n",
"\n",
"metric = evaluate.load(\"accuracy\")"
]
},
{
"cell_type": "markdown",
"id": "70d406c0-56d0-4a54-9c6c-e126ab7f5254",
"metadata": {},
"source": [
"\n",
"Next, call the `compute` function to calculate the accuracy of the predictions.\n",
"\n",
"Before passing the predictions to `compute`, we need to convert the logits into predicted class indices (**all Transformers models return logits**)."
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "f46d2e59-1ebf-43d2-bc86-6b57a4d24d19",
"metadata": {},
"outputs": [],
"source": [
"def compute_metrics(eval_pred):\n",
" logits, labels = eval_pred\n",
" predictions = np.argmax(logits, axis=-1)\n",
" return metric.compute(predictions=predictions, references=labels)"
]
},
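{
"cell_type": "markdown",
"id": "argmax-accuracy-sketch",
"metadata": {},
"source": [
"A minimal sketch (illustrative, not part of the original notebook) of what `compute_metrics` does: `np.argmax` over the last axis turns a `(batch, num_labels)` logits array into class indices, which are then compared against the labels:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"logits = np.array([\n",
"    [0.1, 2.3, -1.0, 0.0, 0.5],   # argmax -> 1\n",
"    [1.8, 0.2, 0.1, -0.3, 0.4],   # argmax -> 0\n",
"    [0.0, 0.1, 0.2, 0.3, 4.0],    # argmax -> 4\n",
"])\n",
"labels = np.array([1, 0, 3])\n",
"\n",
"predictions = np.argmax(logits, axis=-1)\n",
"accuracy = float((predictions == labels).mean())\n",
"print(predictions.tolist(), round(accuracy, 3))  # [1, 0, 4] 0.667\n",
"```"
]
},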
{
"cell_type": "markdown",
"id": "e2feba67-9ca9-4793-9a15-3eaa426df2a1",
"metadata": {},
"source": [
"#### Monitoring Metrics During Training\n",
"\n",
"To monitor how the evaluation metrics change during training, specify the `evaluation_strategy` parameter in `TrainingArguments` so that metrics are reported at the end of each epoch."
]
},
{
"cell_type": "code",
"execution_count": 62,
"id": "afaaee18-4986-4e39-8ad9-b8d413ab4cd1",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": ""
},
"tags": []
},
"outputs": [],
"source": [
"from transformers import TrainingArguments, Trainer\n",
"model_dir = \"models/bert-base-cased\"\n",
"batch_size = 14\n",
"\n",
"training_args = TrainingArguments(\n",
" output_dir=f\"{model_dir}/test_trainer\",\n",
" evaluation_strategy=\"epoch\", \n",
" logging_dir=f\"{model_dir}/test_trainer/runs\",\n",
" logging_steps=500,\n",
" save_total_limit=3,\n",
" per_device_train_batch_size=batch_size,\n",
" per_device_eval_batch_size=batch_size,\n",
")"
]
},
{
"cell_type": "markdown",
"id": "d47d6981-e444-4c0f-a7cb-dd7f2ba8df12",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": ""
},
"tags": []
},
"source": [
"## Start Training\n",
"\n",
"### Instantiate the Trainer\n",
"\n",
"A `kernel version` warning may appear here; it does not affect running this example."
]
},
{
"cell_type": "code",
"execution_count": 63,
"id": "ca1d12ac-89dc-4c30-8282-f859724c0062",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": ""
},
"tags": []
},
"outputs": [],
"source": [
"small_train_dataset = tokenized_datasets[\"train\"].shuffle(seed=42).select(range(1000))\n",
"small_eval_dataset = tokenized_datasets[\"test\"].shuffle(seed=42).select(range(1000))\n",
"\n",
"trainer = Trainer(\n",
" model=model,\n",
" args=training_args,\n",
" train_dataset=tokenized_datasets[\"train\"],\n",
" eval_dataset=small_eval_dataset,\n",
" compute_metrics=compute_metrics,\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 64,
"id": "9b3c069d-a0dc-4f43-aea0-6cb8799643f3",
"metadata": {},
"outputs": [],
"source": [
"# trainer.args"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "449eb845-cff7-40ba-8915-38de79248840",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"id": "a833e0db-1168-4a3c-8b75-bfdcef8c5157",
"metadata": {},
"source": [
"## Monitoring GPU Usage with nvidia-smi\n",
"\n",
"To watch GPU usage in real time, poll with the `watch` command: `watch -n 1 nvidia-smi`:\n",
"\n",
"```shell\n",
"Every 1.0s: nvidia-smi Wed Dec 20 14:37:41 2023\n",
"\n",
"Wed Dec 20 14:37:41 2023\n",
"+---------------------------------------------------------------------------------------+\n",
"| NVIDIA-SMI 535.129.03 Driver Version: 535.129.03 CUDA Version: 12.2 |\n",
"|-----------------------------------------+----------------------+----------------------+\n",
"| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |\n",
"| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |\n",
"| | | MIG M. |\n",
"|=========================================+======================+======================|\n",
"| 0 Tesla T4 Off | 00000000:00:0D.0 Off | 0 |\n",
"| N/A 64C P0 69W / 70W | 6665MiB / 15360MiB | 98% Default |\n",
"| | | N/A |\n",
"+-----------------------------------------+----------------------+----------------------+\n",
"\n",
"+---------------------------------------------------------------------------------------+\n",
"| Processes: |\n",
"| GPU GI CI PID Type Process name GPU Memory |\n",
"| ID ID Usage |\n",
"|=======================================================================================|\n",
"| 0 N/A N/A 18395 C /root/miniconda3/bin/python 6660MiB |\n",
"+---------------------------------------------------------------------------------------+\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 65,
"id": "accfe921-471d-481a-96da-c491cdebad0c",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": ""
},
"tags": []
},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"  [46431/46431 17:33:04, Epoch 3/3]\n",
"</div>\n",
"<table border=\"1\" class=\"dataframe\">\n",
"  <thead>\n",
"    <tr>\n",
"      <th>Epoch</th>\n",
"      <th>Training Loss</th>\n",
"      <th>Validation Loss</th>\n",
"      <th>Accuracy</th>\n",
"    </tr>\n",
"  </thead>\n",
"  <tbody>\n",
"    <tr>\n",
"      <td>1</td>\n",
"      <td>0.727000</td>\n",
"      <td>0.694410</td>\n",
"      <td>0.703000</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <td>2</td>\n",
"      <td>0.633200</td>\n",
"      <td>0.691635</td>\n",
"      <td>0.710000</td>\n",
"    </tr>\n",
"    <tr>\n",
"      <td>3</td>\n",
"      <td>0.528600</td>\n",
"      <td>0.732436</td>\n",
"      <td>0.711000</td>\n",
"    </tr>\n",
"  </tbody>\n",
"</table>"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"/usr/local/lib/python3.9/dist-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.\n",
" warnings.warn('Was asked to gather along dimension 0, but all '\n"
]
},
{
"data": {
"text/plain": [
"TrainOutput(global_step=46431, training_loss=0.6399251460484753, metrics={'train_runtime': 63185.9462, 'train_samples_per_second': 30.861, 'train_steps_per_second': 0.735, 'total_flos': 5.130803778048e+17, 'train_loss': 0.6399251460484753, 'epoch': 3.0})"
]
},
"execution_count": 65,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"trainer.train(False)"
]
},
{
"cell_type": "code",
"execution_count": 66,
"id": "6d581099-37a4-4470-b051-1ada38554089",
"metadata": {
"editable": true,
"slideshow": {
"slide_type": ""
},
"tags": []
},
"outputs": [],
"source": [
"small_test_dataset = tokenized_datasets[\"test\"].shuffle(seed=64).select(range(100))"
]
},
{
"cell_type": "code",
"execution_count": 67,
"id": "ffb47eab-1370-491e-8a84-6d5347a350b2",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/usr/local/lib/python3.9/dist-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.\n",
" warnings.warn('Was asked to gather along dimension 0, but all '\n"
]
},
{
"data": {
"text/html": [
"<div>\n",
"  [3/3 00:00]\n",
"</div>"
],
"text/plain": [
""
]
},
"metadata": {},
"output_type": "display_data"
},
{
"data": {
"text/plain": [
"{'eval_loss': 0.8645618557929993,\n",
" 'eval_accuracy': 0.65,\n",
" 'eval_runtime': 1.3182,\n",
" 'eval_samples_per_second': 75.861,\n",
" 'eval_steps_per_second': 2.276,\n",
" 'epoch': 3.0}"
]
},
"execution_count": 67,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"trainer.evaluate(small_test_dataset)"
]
},
{
"cell_type": "markdown",
"id": "27a55686-7c43-4ab8-a5cd-0e77f14c7c52",
"metadata": {},
"source": [
"### Save the Model and Training State\n",
"\n",
"- Use `trainer.save_model` to save the model; it can be reloaded later with `from_pretrained()`\n",
"- Use `trainer.save_state` to save the trainer state"
]
},
{
"cell_type": "code",
"execution_count": 68,
"id": "ad0cbc14-9ef7-450f-a1a3-4f92b6486f41",
"metadata": {},
"outputs": [],
"source": [
"trainer.save_model(f\"{model_dir}/finetuned-trainer\")"
]
},
{
"cell_type": "code",
"execution_count": 69,
"id": "badf5868-2847-439d-a73e-42d1cca67b5e",
"metadata": {},
"outputs": [],
"source": [
"trainer.save_state()"
]
},
{
"cell_type": "markdown",
"id": "61828934-01da-4fc3-9e75-8d754c25dfbc",
"metadata": {},
"source": [
"## Homework: Train on the full YelpReviewFull dataset and see how high the accuracy can go"
]
},
{
"cell_type": "code",
"execution_count": 74,
"id": "6ee2580a-7a5a-46ae-a28b-b41e9e838eb1",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"model.safetensors: 100%|██████████| 433M/433M [00:15<00:00, 28.0MB/s] \n"
]
},
{
"data": {
"text/plain": [
"CommitInfo(commit_url='https://huggingface.co/yqzhangjx/bert-base-cased-for-yelp/commit/ef8247a2eb2c3e93a70f0198591833256f6d197c', commit_message='Upload BertForSequenceClassification', commit_description='', oid='ef8247a2eb2c3e93a70f0198591833256f6d197c', pr_url=None, pr_revision=None, pr_num=None)"
]
},
"execution_count": 74,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model.push_to_hub(\"yqzhangjx/bert-base-cased-for-yelp\", token=\"XXX\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "478f8d8e-2597-4a6c-a84c-d66e3d231e1d",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"id": "561af3da-d720-4478-99de-b72d7419fb37",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
}
},
"nbformat": 4,
"nbformat_minor": 5
}