{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "717a2045-9b58-4aff-bde5-bc37bcdf561b",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/usr/local/lib/python3.10/dist-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
      "  from .autonotebook import tqdm as notebook_tqdm\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>label</th>\n",
       "      <th>text</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>4 stars</td>\n",
       "      <td>Love, love, love this place! The food is always excellent and the portions are enormous. I want to say their omelets have like 6 eggs in them (or more). Seriously, come hungry or be prepared to take home leftovers. \\n\\nWhen a place is 24 hours and they serve breakfast the whole time, I tend toward breakfast food. That being said, their eggs benedict is one of the best in Las Vegas. Burgers are delish and cooked to order (beware asking for rare...they take it seriously and will produce something pretty much raw on the inside). The service is great. Waitresses are attentive and friendly. There seem to always be cops eating there; take what you will from that. \\n\\nDownsides: The place gets crowded. Really crowded. Weeknights seem to be the best time to go if you want to sit down and eat quickly. Also, the parking situation can be difficult. They have an itty bitty parking lot in front (maybe more in the back, not sure!) right on LV Blvd, so getting out can be tricky. \\n\\nBottom line: if you haven't been, go for the decor. Stay for the food!</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>4 stars</td>\n",
       "      <td>convenient cafe for people who wake up late and still want breakfast (they serve breakfast until 1pm)...the white chocolate french toast was different than any french toast i've tried. it's sweet but i still put syrup on it. the berries were a nice compliment to balance the sweetness.</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>5 stars</td>\n",
       "      <td>Ballo Blanco is fantastic!  Located within the hip Clarendon Hotel near downtown Phoenix, this is one of the best places for brunch / lunch in the $10 - $12 range.  Tacos range from $2 - $3 each with 2-3 composing an adequate amount of food for an entree.  The cochinita (pork) and Ahi (eco-friendly and the current Market Selection) tacos are my favorite but the carne asada is also good if you're in the mood for some beef instead.  The Pico Rico Burger and Naco Torta are also fantastic if you're feeling like a sandwich, the Naco being another one of my favorites.  The coffee is great, the fries are a must for $3 and I while I've yet to order the Chicharron de Queso for an appetizer it looks amazing!  \\n\\nThe service might not be 5-stars depending on the day but they can get very busy and at the least they are always nice.  If you haven't already been to the Clarendon Hotel itself it's very cool and has a beautiful pool that's open to the public on certain days as well!</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>3 stars</td>\n",
       "      <td>Great fish tacos.  But salsa they serve with their chips... kind of not to my liking. And they also serve two bowls of refried beans to dip your chips in too.  That was weird. Or maybe I'm weird. I don't know.  But that was a first for me at a Mexican restaurant.  I mean, I know they serve beans as a side with your meal but as a dip for your chips? Weird. \\n\\nThat's pretty much all I can say about this place. The ambiance though felt like a chain restaurant. \\n\\nThis place was good, but not great.</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>2 star</td>\n",
       "      <td>We went there late Sunday. The food was good. Our server's name is LESLIE B. and she was awful!!  Took forever to come back for us to order. It was late, only two other tables were there. For like a whole 15 minutes we had no idea where she went. \\n\\nLater we asked if they do anything for birthdays, she told us about this dessert but seem very annoyed by it. \\n\\nWe asked for our CHECKS she brought one back. When we asked her to split it. She doesn't even remember what we ordered and made a face to us. \\n\\nThe food was good but the service SUCKED!! You would think they have better service when they want to charge you 200 for 2 a meal? Totally not worth the experience.</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>5</th>\n",
       "      <td>1 star</td>\n",
       "      <td>I am at this casino right now and my boyfriend just picked up $50 on a blackjack table and the dealer had the nerve to say \\\"thanks for being generous\\\" when he didn't receive a tip. We were at the table for 5 mins TOPS. Such a classless comment. We cashed out immediately and took our money elsewhere!!!</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>6</th>\n",
       "      <td>3 stars</td>\n",
       "      <td>What a great sub Jimmy John's makes! By far the best I've found since moving from the East Coast. Unfortunately, this particular JJ's is having a hard time living up to the \\\"Subs So Fast You'll Freak\\\" slogan.\\n\\nIn my opinion, 40-mins to deliver is not fast considering we are about 6-mins away. And this has been the norm for the past few weeks. Yes, at one time they were fast... like 10-15 mins on average. But now.... fuggedaboutit! I've spoken to the manager - Chris - twice about this. But alas... our subs took 42-mins again today. \\n\\nOutside of that; JJ's subs are fantastic! Packed with ingredients, a great quality bread, and the subs are consistently good. If you go into the shop, it's a well-oiled machine. And there's always an employee sweeping or washing something.\\n\\nOverall, if you're hungry right now, I'd suggest driving to the shop. Otherwise, if you choose delivery, you might still be hungry in 40-mins.</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>7</th>\n",
       "      <td>5 stars</td>\n",
       "      <td>YUM!\\n\\nMaybe it's just because we had extra sauce and extra cheese added to our pizza, but it was AMAZING.  Thick crust, flavorful sauce, tons of cheese - and they deliver?!  I'm in heaven.\\n\\n*Update:  We are outside of their delivery area.  : (</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>8</th>\n",
       "      <td>3 stars</td>\n",
       "      <td>It's interesting how different their food is depending on where/when you eat it.\\n\\nI usually get the Italian sub, and while it's great if you eat at the restaurant, it's a very wet sandwich and doesn't travel well.  It's not great if you get it to go and let it sit more than 10 minutes before eating it.  It's soggy and downright bad if you get it delivered.\\n\\nThe buffalo wings are pretty good with a nice kick of heat.  The french fries are crunchy and salty - pretty darn tasty.\\n\\nAs other reviewers have mentioned, the prices are a bit higher than other sub places, but if you get a coupon in the mail or from a previous order, you can save a few bucks.</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>9</th>\n",
       "      <td>3 stars</td>\n",
       "      <td>After hearing so much hype about this place, I eventually made my way to Sahara and Commercial to finally give it a try.  On my first visit we ordered drunken noodles (pad kee mao) with chicken to go. She told us 15 minutes, but it was closer to 30 and the restaurant was empty. The chicken was minced/ground which was a bit unusual from what I am used to. The flavor was fine and the ingredients were quality, but it was not amazing nor did it warrant rushing back there. \\nI decided I would try this place again before I wrote a review. A week or so ago, I went back because I was in the field and was in the nearby area. I met another friend for lunch and we gave their lunch special a try. I had their panang curry lunch special that came with an eggroll, rice and a soup of the day for $7.95. The soup was pretty bland with vegetable and tofu. The curry was fine, but not unlike Panang curry I have tried at plenty of other Thai joints. I had spicy 7/10 and I'd say to me more realistically it was like a 5/10.\\n The service was very spotty and inconsistent. You can tell the wait staff started to get overwhelmed when the lunch rush hour began. But to be fair, they were pretty slow when it was just two tables in the joint.. The restaurant appears clean with interesting traditional Thai decor. The menu's have wooden covers which is a cool touch. I would probably try this place again if in the area and craving Thai, but I wouldn't drive here from home. This is a perfect example of a place being hyped up.</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.bias', 'classifier.weight']\n",
      "You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Full hyperparameter configuration: TrainingArguments(\n",
      "_n_gpu=1,\n",
      "accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None, 'use_configured_state': False},\n",
      "adafactor=False,\n",
      "adam_beta1=0.9,\n",
      "adam_beta2=0.999,\n",
      "adam_epsilon=1e-08,\n",
      "auto_find_batch_size=False,\n",
      "average_tokens_across_devices=False,\n",
      "batch_eval_metrics=False,\n",
      "bf16=False,\n",
      "bf16_full_eval=False,\n",
      "data_seed=None,\n",
      "dataloader_drop_last=False,\n",
      "dataloader_num_workers=0,\n",
      "dataloader_persistent_workers=False,\n",
      "dataloader_pin_memory=True,\n",
      "dataloader_prefetch_factor=None,\n",
      "ddp_backend=None,\n",
      "ddp_broadcast_buffers=None,\n",
      "ddp_bucket_cap_mb=None,\n",
      "ddp_find_unused_parameters=None,\n",
      "ddp_timeout=1800,\n",
      "debug=[],\n",
      "deepspeed=None,\n",
      "disable_tqdm=False,\n",
      "do_eval=True,\n",
      "do_predict=False,\n",
      "do_train=False,\n",
      "eval_accumulation_steps=None,\n",
      "eval_delay=0,\n",
      "eval_do_concat_batches=True,\n",
      "eval_on_start=False,\n",
      "eval_steps=None,\n",
      "eval_strategy=epoch,\n",
      "eval_use_gather_object=False,\n",
      "fp16=False,\n",
      "fp16_backend=auto,\n",
      "fp16_full_eval=False,\n",
      "fp16_opt_level=O1,\n",
      "fsdp=[],\n",
      "fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False},\n",
      "fsdp_min_num_params=0,\n",
      "fsdp_transformer_layer_cls_to_wrap=None,\n",
      "full_determinism=False,\n",
      "gradient_accumulation_steps=1,\n",
      "gradient_checkpointing=False,\n",
      "gradient_checkpointing_kwargs=None,\n",
      "greater_is_better=None,\n",
      "group_by_length=False,\n",
      "half_precision_backend=auto,\n",
      "hub_always_push=False,\n",
      "hub_model_id=None,\n",
      "hub_private_repo=None,\n",
      "hub_revision=None,\n",
      "hub_strategy=every_save,\n",
      "hub_token=<HUB_TOKEN>,\n",
      "ignore_data_skip=False,\n",
      "include_for_metrics=[],\n",
      "include_inputs_for_metrics=False,\n",
      "include_num_input_tokens_seen=False,\n",
      "include_tokens_per_second=False,\n",
      "jit_mode_eval=False,\n",
      "label_names=None,\n",
      "label_smoothing_factor=0.0,\n",
      "learning_rate=5e-05,\n",
      "length_column_name=length,\n",
      "liger_kernel_config=None,\n",
      "load_best_model_at_end=False,\n",
      "local_rank=0,\n",
      "log_level=passive,\n",
      "log_level_replica=warning,\n",
      "log_on_each_node=True,\n",
      "logging_dir=models/bert-base-cased-finetune-yelp/runs/Aug02_21-49-13_iZ6wearaq5de2lchqv8ap2Z,\n",
      "logging_first_step=False,\n",
      "logging_nan_inf_filter=True,\n",
      "logging_steps=30,\n",
      "logging_strategy=steps,\n",
      "lr_scheduler_kwargs={},\n",
      "lr_scheduler_type=linear,\n",
      "max_grad_norm=1.0,\n",
      "max_steps=-1,\n",
      "metric_for_best_model=None,\n",
      "mp_parameters=,\n",
      "neftune_noise_alpha=None,\n",
      "no_cuda=False,\n",
      "num_train_epochs=3,\n",
      "optim=adamw_torch,\n",
      "optim_args=None,\n",
      "optim_target_modules=None,\n",
      "output_dir=models/bert-base-cased-finetune-yelp,\n",
      "overwrite_output_dir=False,\n",
      "past_index=-1,\n",
      "per_device_eval_batch_size=8,\n",
      "per_device_train_batch_size=16,\n",
      "prediction_loss_only=False,\n",
      "push_to_hub=False,\n",
      "push_to_hub_model_id=None,\n",
      "push_to_hub_organization=None,\n",
      "push_to_hub_token=<PUSH_TO_HUB_TOKEN>,\n",
      "ray_scope=last,\n",
      "remove_unused_columns=True,\n",
      "report_to=[],\n",
      "restore_callback_states_from_checkpoint=False,\n",
      "resume_from_checkpoint=None,\n",
      "run_name=models/bert-base-cased-finetune-yelp,\n",
      "save_on_each_node=False,\n",
      "save_only_model=False,\n",
      "save_safetensors=True,\n",
      "save_steps=500,\n",
      "save_strategy=steps,\n",
      "save_total_limit=None,\n",
      "seed=42,\n",
      "skip_memory_metrics=True,\n",
      "tf32=None,\n",
      "torch_compile=False,\n",
      "torch_compile_backend=None,\n",
      "torch_compile_mode=None,\n",
      "torch_empty_cache_steps=None,\n",
      "torchdynamo=None,\n",
      "tpu_metrics_debug=False,\n",
      "tpu_num_cores=None,\n",
      "use_cpu=False,\n",
      "use_ipex=False,\n",
      "use_legacy_prediction_loop=False,\n",
      "use_liger_kernel=False,\n",
      "use_mps_device=False,\n",
      "warmup_ratio=0.0,\n",
      "warmup_steps=0,\n",
      "weight_decay=0.0,\n",
      ")\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1762: FutureWarning: `encoder_attention_mask` is deprecated and will be removed in version 4.55.0 for `BertSdpaSelfAttention.forward`.\n",
      "  return forward_call(*args, **kwargs)\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "\n",
       "    <div>\n",
       "      \n",
       "      <progress value='189' max='189' style='width:300px; height:20px; vertical-align: middle;'></progress>\n",
       "      [189/189 05:37, Epoch 3/3]\n",
       "    </div>\n",
       "    <table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       " <tr style=\"text-align: left;\">\n",
       "      <th>Epoch</th>\n",
       "      <th>Training Loss</th>\n",
       "      <th>Validation Loss</th>\n",
       "      <th>Accuracy</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <td>1</td>\n",
       "      <td>1.391500</td>\n",
       "      <td>1.257495</td>\n",
       "      <td>0.443000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>2</td>\n",
       "      <td>1.023600</td>\n",
       "      <td>1.099798</td>\n",
       "      <td>0.519000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <td>3</td>\n",
       "      <td>0.671200</td>\n",
       "      <td>1.080617</td>\n",
       "      <td>0.553000</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table><p>"
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1762: FutureWarning: `encoder_attention_mask` is deprecated and will be removed in version 4.55.0 for `BertSdpaSelfAttention.forward`.\n",
      "  return forward_call(*args, **kwargs)\n",
      "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1762: FutureWarning: `encoder_attention_mask` is deprecated and will be removed in version 4.55.0 for `BertSdpaSelfAttention.forward`.\n",
      "  return forward_call(*args, **kwargs)\n",
      "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1762: FutureWarning: `encoder_attention_mask` is deprecated and will be removed in version 4.55.0 for `BertSdpaSelfAttention.forward`.\n",
      "  return forward_call(*args, **kwargs)\n",
      "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1762: FutureWarning: `encoder_attention_mask` is deprecated and will be removed in version 4.55.0 for `BertSdpaSelfAttention.forward`.\n",
      "  return forward_call(*args, **kwargs)\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "\n",
       "    <div>\n",
       "      \n",
       "      <progress value='13' max='13' style='width:300px; height:20px; vertical-align: middle;'></progress>\n",
       "      [13/13 00:02]\n",
       "    </div>\n",
       "    "
      ],
      "text/plain": [
       "<IPython.core.display.HTML object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Evaluation results: {'eval_loss': 1.0612053871154785, 'eval_accuracy': 0.58, 'eval_runtime': 2.6973, 'eval_samples_per_second': 37.074, 'eval_steps_per_second': 4.82, 'epoch': 3.0}\n",
      "Final evaluation - accuracy: 0.58\n"
     ]
    }
   ],
   "source": [
    "import random\n",
    "import pandas as pd\n",
    "import numpy as np\n",
    "import evaluate\n",
    "from datasets import Dataset, load_dataset\n",
    "import datasets\n",
    "from IPython.display import display, HTML\n",
    "from transformers import AutoTokenizer, AutoModelForSequenceClassification, TrainingArguments, Trainer\n",
    "\n",
    "def main():\n",
    "    def show_random_elements(dataset, num_examples=10):\n",
    "        assert num_examples <= len(dataset), \"Can't pick more elements than there are in the dataset.\"\n",
    "        picks = []\n",
    "        for _ in range(num_examples):\n",
    "            pick = random.randint(0, len(dataset)-1)\n",
    "            while pick in picks:\n",
    "                pick = random.randint(0, len(dataset)-1)\n",
    "            picks.append(pick)\n",
    "        \n",
    "        df = pd.DataFrame(dataset[picks])\n",
    "        for column, typ in dataset.features.items():\n",
    "            if isinstance(typ, datasets.ClassLabel):\n",
    "                df[column] = df[column].transform(lambda i: typ.names[i])\n",
    "        display(HTML(df.to_html()))\n",
    "\n",
    "    # Load the Yelp Review Full dataset from the Hugging Face Hub\n",
    "    dataset = load_dataset(\"yelp_review_full\")\n",
    "    # Show 10 random examples from the training split\n",
    "    show_random_elements(dataset[\"train\"])\n",
    "\n",
    "    # Load the BERT tokenizer\n",
    "    tokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")\n",
    "    # Load BERT with a 5-way sequence-classification head (one class per star rating)\n",
    "    model = AutoModelForSequenceClassification.from_pretrained(\"bert-base-cased\", num_labels=5)\n",
    "\n",
    "    def tokenize_function(examples):\n",
    "        # padding=\"max_length\" pads every example to a uniform length; truncation caps it at the model maximum (512)\n",
    "        # To cap sequences at 256 tokens instead: tokenizer(examples[\"text\"], padding=\"max_length\", truncation=True, max_length=256)\n",
    "        return tokenizer(examples[\"text\"], padding=\"max_length\", truncation=True)\n",
    "        \n",
    "    tokenized_datasets = dataset.map(tokenize_function, batched=True)\n",
    "    # Use the first 1,000 examples of each split to keep training fast;\n",
    "    # drop the .select(...) calls to train on the full dataset\n",
    "    small_train_dataset = tokenized_datasets[\"train\"].select(range(1000))\n",
    "    small_eval_dataset = tokenized_datasets[\"test\"].select(range(1000))\n",
    "\n",
    "    model_dir = \"models/bert-base-cased-finetune-yelp\"\n",
    "    # The Evaluate library provides a ready-made accuracy metric, loadable via evaluate.load\n",
    "    metric = evaluate.load(\"accuracy\")\n",
    "\n",
    "    \"\"\"The Trainer does not evaluate model performance during training on its own, so we pass it\n",
    "    a function that computes and reports metrics. The Evaluate library provides a simple accuracy\n",
    "    function via evaluate.load; calling its compute method returns the prediction accuracy.\n",
    "    All Transformers models return logits, so we first convert logits to predicted classes.\"\"\"\n",
    "    def compute_metrics(eval_pred):\n",
    "        logits, labels = eval_pred\n",
    "        predictions = np.argmax(logits, axis=-1)\n",
    "        return metric.compute(predictions=predictions, references=labels)\n",
    "        \n",
    "    # logging_steps defaults to 500; with only 189 training steps here, log every 30 steps instead\n",
    "    training_args = TrainingArguments(\n",
    "        output_dir=model_dir,\n",
    "        eval_strategy=\"epoch\",\n",
    "        per_device_train_batch_size=16,\n",
    "        num_train_epochs=3,\n",
    "        logging_steps=30,\n",
    "    )\n",
    "    print(\"Full hyperparameter configuration:\", training_args)\n",
    "\n",
    "    trainer = Trainer(\n",
    "        model=model,\n",
    "        args=training_args,\n",
    "        train_dataset=small_train_dataset,\n",
    "        eval_dataset=small_eval_dataset,\n",
    "        compute_metrics=compute_metrics,\n",
    "    )\n",
    "    trainer.train()  # fine-tune the model\n",
    "\n",
    "    # Evaluate on 100 randomly sampled test examples\n",
    "    tokenized_small_test_dataset = tokenized_datasets[\"test\"].shuffle(seed=64).select(range(100))\n",
    "    eval_results = trainer.evaluate(tokenized_small_test_dataset)\n",
    "    print(\"Evaluation results:\", eval_results)\n",
    "    print(f\"Final evaluation - accuracy: {eval_results['eval_accuracy']:.2f}\")\n",
    "\n",
    "    # trainer.save_model(model_dir)  # save the fine-tuned model\n",
    "    # trainer.save_state()  # save the trainer state\n",
    "\n",
    "if __name__ == '__main__':\n",
    "    main()"
   ]
  },
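  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b1f7c9d2-3e4a-4c6b-8d9e-0a1b2c3d4e5f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hedged sketch of single-review inference with the fine-tuned classifier.\n",
    "# Assumptions: trainer.save_model(model_dir) was uncommented and run above, so the\n",
    "# fine-tuned weights exist under models/bert-base-cased-finetune-yelp; the review\n",
    "# text below is an illustrative example, not taken from the dataset.\n",
    "import torch\n",
    "from transformers import AutoTokenizer, AutoModelForSequenceClassification\n",
    "\n",
    "model_dir = \"models/bert-base-cased-finetune-yelp\"\n",
    "# The tokenizer was not saved alongside the model, so reload it from the base checkpoint\n",
    "tokenizer = AutoTokenizer.from_pretrained(\"bert-base-cased\")\n",
    "model = AutoModelForSequenceClassification.from_pretrained(model_dir)\n",
    "model.eval()\n",
    "\n",
    "text = \"The food was great but the service was painfully slow.\"\n",
    "inputs = tokenizer(text, truncation=True, return_tensors=\"pt\")\n",
    "with torch.no_grad():\n",
    "    logits = model(**inputs).logits\n",
    "# Labels 0-4 correspond to 1-5 stars in yelp_review_full\n",
    "print(f\"Predicted rating: {logits.argmax(dim=-1).item() + 1} stars\")"
   ]
  },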
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ae5bb22c-5736-4089-911f-dd5d4d8fc01d",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
