{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Fine-Tuning and Deploying Large Language Models with Swift via the PAI Python SDK\n",
    "\n",
    "\n",
    "[ModelScope Swift](https://github.com/modelscope/swift) is an efficient fine-tuning and inference framework developed by the ModelScope community. It supports fine-tuning and deployment of a range of state-of-the-art open-source large language models, including `Qwen`, `Mixtral`, `Baichuan`, `ChatGLM`, and `Llama2`, letting developers fine-tune and deploy a large language model with just a few lines of code.\n",
    "\n",
    "In this document, we use the [qwen/Qwen1.5-7B-Chat](https://modelscope.cn/models/qwen/Qwen1.5-7B-Chat/summary) model as an example to show how to fine-tune and deploy a model with ModelScope Swift on the PAI platform through the [PAI Python SDK](https://alipai.readthedocs.io/).\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "## Billing\n",
    "\n",
    "This example uses the following cloud services and will generate corresponding charges:\n",
    "\n",
    "- PAI-DLC: runs the training job; for billing details, see [PAI-DLC billing](https://help.aliyun.com/zh/pai/product-overview/billing-of-dlc)\n",
    "- PAI-EAS: deploys the inference service; for billing details, see [PAI-EAS billing](https://help.aliyun.com/zh/pai/product-overview/billing-of-eas)\n",
    "- OSS: stores the model produced by the training job, the training code, TensorBoard logs, etc.; for billing details, see [OSS billing overview](https://help.aliyun.com/zh/oss/product-overview/billing-overview)\n",
    "\n",
    "\n",
    "> By joining the cloud products' free trial and using the **designated instance types** to submit training jobs or deploy inference services, you can try PAI for free; see [PAI free trial](https://help.aliyun.com/zh/pai/product-overview/free-quota-for-new-users) for details.\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Prerequisites\n",
    "\n",
    "\n",
    "Install the PAI Python SDK, which is used to submit training jobs and deploy inference services."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Install the PAI Python SDK\n",
    "!python -m pip install -U \"pai\""
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "\n",
    "The SDK needs to be configured with the AccessKey used to access Alibaba Cloud services, as well as the workspace and OSS bucket to use. After installing the SDK, run the following command in a **command-line terminal** and follow the prompts to configure the credentials, workspace, and other settings.\n",
    "\n",
    "\n",
    "```shell\n",
    "\n",
    "# Run the following command in a \"command-line terminal\".\n",
    "\n",
    "python -m pai.toolkit.config\n",
    "\n",
    "```\n",
    "\n",
    "Run the code below to verify that the configuration succeeded."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pai\n",
    "from pai.session import get_default_session, setup_default_session\n",
    "\n",
    "print(pai.__version__)\n",
    "\n",
    "sess = get_default_session()\n",
    "\n",
    "# Alternatively, configure the AccessKey/SecretKey/Region/WorkspaceId in code:\n",
    "# if not sess:\n",
    "#     sess = setup_default_session(\n",
    "#         access_key_id=\"<your-access-key-id>\",\n",
    "#         access_key_secret=\"<your-access-key-secret>\",\n",
    "#         region_id=\"<region-id>\",\n",
    "#         workspace_id=\"<workspace-id>\",\n",
    "#         oss_bucket_name=\"<oss-bucket-name>\",\n",
    "#     )\n",
    "#     sess.save_config()\n",
    "\n",
    "\n",
    "# After a successful configuration, we can retrieve the workspace information\n",
    "assert sess is not None\n",
    "assert sess.workspace_name is not None\n",
    "assert sess.oss_bucket is not None"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "## Fine-Tuning the Model\n",
    "\n",
    "Swift provides out-of-the-box parameter configurations and scripts for common large language models, supporting fine-tuning via LoRA, full-parameter training, and other methods.\n",
    "For example, the following fine-tuning script provided by Swift performs LoRA fine-tuning of the `Qwen1.5-7B-Chat` model on the `ms-bench-mini` dataset.\n",
    "\n",
    "```shell\n",
    "# source: https://github.com/modelscope/swift/blob/main/examples/pytorch/llm/scripts/qwen1half_7b_chat/lora/sft.sh\n",
    "\n",
    "# Experimental environment: A100\n",
    "# 30GB GPU memory\n",
    "PYTHONPATH=../../.. \\\n",
    "CUDA_VISIBLE_DEVICES=0 \\\n",
    "python llm_sft.py \\\n",
    "    --model_id_or_path qwen/Qwen1.5-7B-Chat \\\n",
    "    --model_revision master \\\n",
    "    --sft_type lora \\\n",
    "    --tuner_backend swift \\\n",
    "    --dtype AUTO \\\n",
    "    --output_dir output \\\n",
    "    --dataset ms-bench-mini \\\n",
    "    --train_dataset_sample 5000 \\\n",
    "    --num_train_epochs 2 \\\n",
    "    --max_length 1024 \\\n",
    "    --check_dataset_strategy warning \\\n",
    "    --lora_rank 8 \\\n",
    "    --lora_alpha 32 \\\n",
    "    --lora_dropout_p 0.05 \\\n",
    "    --lora_target_modules ALL \\\n",
    "    --gradient_checkpointing true \\\n",
    "    --batch_size 1 \\\n",
    "    --weight_decay 0.01 \\\n",
    "    --learning_rate 1e-4 \\\n",
    "    --gradient_accumulation_steps 16 \\\n",
    "    --max_grad_norm 0.5 \\\n",
    "    --warmup_ratio 0.03 \\\n",
    "    --eval_steps 100 \\\n",
    "    --save_steps 100 \\\n",
    "    --save_total_limit 2 \\\n",
    "    --logging_steps 10 \\\n",
    "    --use_flash_attn false \\\n",
    "    --self_cognition_sample 1000 \\\n",
    "    --model_name 卡卡罗特 \\\n",
    "    --model_author 陶白白 \\\n",
    "    --push_to_hub false \\\n",
    "    --hub_model_id qwen1half-7b-chat-lora \\\n",
    "    --hub_private_repo true \\\n",
    "    --hub_token 'your-sdk-token' \\\n",
    "\n",
    "```\n",
    "\n",
    "For details on the parameters in the command above, refer to the Swift docs: [supported models and datasets](https://github.com/modelscope/swift/blob/main/docs/source/LLM/%E6%94%AF%E6%8C%81%E7%9A%84%E6%A8%A1%E5%9E%8B%E5%92%8C%E6%95%B0%E6%8D%AE%E9%9B%86.md) and [command-line arguments](https://github.com/modelscope/swift/blob/main/docs/source/LLM/%E5%91%BD%E4%BB%A4%E8%A1%8C%E5%8F%82%E6%95%B0.md).\n",
    "\n",
    "### Submitting the Fine-Tuning Job with the PAI Python SDK\n",
    "\n",
    "The `ModelScopeEstimator` object provided by the PAI Python SDK makes it easy to submit training jobs using PAI's prebuilt images. The prebuilt image ships with the basic dependencies, including `ModelScope`, `Swift`, and `PyTorch`, so with `ModelScopeEstimator` you can fine-tune models on PAI with the Swift framework with minimal setup.\n",
    "\n",
    "\n",
    "In the following code, we submit a training job through the PAI Python SDK, using the training parameters from the [LoRA training script](https://github.com/modelscope/swift/blob/main/examples/pytorch/llm/scripts/qwen_7b_chat/lora/sft.sh) that Swift provides for the `Qwen1.5-7B-Chat` model.\n",
    "\n",
    "In the code, the training parameters are passed to the training job through the `hyperparameters` argument; the launch command then references them via environment variables."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Specify the git repository and branch containing the training script via git_config\n",
    "git_config = {\"repo\": \"https://github.com/modelscope/swift.git\", \"branch\": \"v1.5.3\"}\n",
    "\n",
    "# For the full set of Swift LLM SFT parameters, see:\n",
    "# https://github.com/modelscope/swift/blob/main/docs/source/LLM/%E5%91%BD%E4%BB%A4%E8%A1%8C%E5%8F%82%E6%95%B0.md#sft-%E5%8F%82%E6%95%B0\n",
    "hyperparameters = {\n",
    "    \"model_id_or_path\": \"qwen/Qwen1.5-7B-Chat\",\n",
    "    \"model_revision\": \"master\",\n",
    "    \"sft_type\": \"lora\",\n",
    "    \"tuner_backend\": \"swift\",\n",
    "    \"dtype\": \"AUTO\",\n",
    "    \"dataset\": \"blossom-math-zh\",\n",
    "    # \"dataset\": \"ms-bench-mini\",\n",
    "    \"train_dataset_sample\": -1,\n",
    "    \"num_train_epochs\": 1,\n",
    "    \"max_length\": 1024,\n",
    "    \"check_dataset_strategy\": \"warning\",\n",
    "    \"lora_rank\": 8,\n",
    "    \"lora_alpha\": 32,\n",
    "    \"lora_dropout_p\": \"0.05\",\n",
    "    \"lora_target_modules\": \"DEFAULT\",\n",
    "    \"gradient_checkpointing\": \"true\",\n",
    "    \"batch_size\": \"4\",\n",
    "    \"weight_decay\": \"0.01\",\n",
    "    \"learning_rate\": \"1e-4\",\n",
    "    \"gradient_accumulation_steps\": \"16\",\n",
    "    \"max_grad_norm\": \"0.5\",\n",
    "    \"warmup_ratio\": \"0.03\",\n",
    "    \"eval_steps\": \"100\",\n",
    "    \"save_steps\": \"100\",\n",
    "    \"save_total_limit\": \"2\",\n",
    "    \"logging_steps\": \"10\",\n",
    "    \"use_flash_attn\": \"false\",\n",
    "}\n",
    "\n",
    "# The training job needs to be configured with an output path for the trained model\n",
    "hyperparameters.update(\n",
    "    {\n",
    "# Model output path; do not modify\n",
    "        \"output_dir\": \"/ml/output/model/\",\n",
    "    }\n",
    ")"
   ]
  },
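  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The hyperparameters are delivered to the training job as environment variables: each entry becomes a `PAI_HPS_<NAME>` variable, and the full set is also flattened into `PAI_USER_ARGS` as `--key value` command-line arguments (both are visible in the job logs). The following is a minimal sketch of that flattening, for illustration only; it is not the SDK's actual implementation:\n",
    "\n",
    "```python\n",
    "# Sketch: flatten a hyperparameter dict into \"--key value\" CLI arguments,\n",
    "# mimicking how PAI_USER_ARGS is assembled for the training job.\n",
    "def to_user_args(hyperparameters: dict) -> str:\n",
    "    return \" \".join(f\"--{k} {v}\" for k, v in hyperparameters.items())\n",
    "\n",
    "print(to_user_args({\"sft_type\": \"lora\", \"lora_rank\": 8}))\n",
    "# --sft_type lora --lora_rank 8\n",
    "```\n"
   ]
  },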
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from pai.modelscope import ModelScopeEstimator\n",
    "\n",
    "\n",
    "# Create a ModelScopeEstimator object\n",
    "est = ModelScopeEstimator(\n",
    "    # Launch command for the training job\n",
    "    # 1. Qwen1.5-7B-Chat requires ms-swift>=1.6.1 and transformers>=4.37\n",
    "    # 2. All hyperparameters are passed in via the $PAI_USER_ARGS environment variable\n",
    "    command=\"python -m pip install --upgrade ms-swift 'transformers>=4.37'  && swift sft $PAI_USER_ARGS\",\n",
    "    # instance_type=\"ecs.gn6e-c12g1.3xlarge\",  # 1x V100 GPU (32 GB memory)\n",
    "    instance_type=\"ecs.gn7e-c16g1.4xlarge\",\n",
    "    # Selects the training image version\n",
    "    modelscope_version=\"1.12.0\",\n",
    "    hyperparameters=hyperparameters,\n",
    "    base_job_name=\"modelscope-swift-train\",\n",
    ")\n",
    "\n",
    "# Submit the training job\n",
    "est.fit(wait=False)\n",
    "\n",
    "# Start a TensorBoard instance for the job\n",
    "tb = est.tensorboard()\n",
    "\n",
    "# Print the TensorBoard application URL\n",
    "print(tb.app_uri)"
   ]
  },
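  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The launch command `swift sft $PAI_USER_ARGS` relies on the shell word-splitting the `PAI_USER_ARGS` environment variable into individual arguments before `swift sft` starts. A minimal sketch of what that expansion amounts to (illustrative only, not what PAI executes internally):\n",
    "\n",
    "```python\n",
    "import os\n",
    "import shlex\n",
    "\n",
    "# Emulate the shell expanding $PAI_USER_ARGS into argv entries.\n",
    "os.environ[\"PAI_USER_ARGS\"] = \"--sft_type lora --lora_rank 8\"\n",
    "argv = [\"swift\", \"sft\"] + shlex.split(os.environ[\"PAI_USER_ARGS\"])\n",
    "print(argv)\n",
    "# ['swift', 'sft', '--sft_type', 'lora', '--lora_rank', '8']\n",
    "```\n"
   ]
  },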
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "TrainingJob launch starting\n",
      "LIBRARY_PATH=/usr/local/cuda/lib64/stubs\n",
      "DSW_95221_PORT_22_TCP_PROTO=tcp\n",
      "DSW_95221_SERVICE_PORT_SSH_DSW_95221=22\n",
      "DSW_95221_PORT_80_TCP_ADDR=10.192.9.246\n",
      "NV_CUDA_COMPAT_PACKAGE=cuda-compat-12-1\n",
      "NV_LIBCUBLAS_VERSION=12.1.0.26-1\n",
      "NV_NVPROF_DEV_PACKAGE=cuda-nvprof-12-1=12.1.55-1\n",
      "KUBERNETES_PORT=tcp://10.192.0.1:443\n",
      "KUBERNETES_SERVICE_PORT=6443\n",
      "NV_CUDA_NSIGHT_COMPUTE_VERSION=12.1.0-1\n",
      "LANGUAGE=zh_CN.UTF-8\n",
      "PIP_TRUSTED_HOST=mirrors.cloud.aliyuncs.com\n",
      "SCRAPE_PROMETHEUS_METRICS=yes\n",
      "MASTER_ADDR=trainpiyqdicammv-master-0\n",
      "DSW_95221_PORT_80_TCP_PORT=80\n",
      "PAI_HPS_SFT_TYPE=lora\n",
      "HOSTNAME=trainpiyqdicammv-master-0\n",
      "DSW_98084_SERVICE_PORT=80\n",
      "DSW_95221_SERVICE_PORT_HTTP_DSW_95221=80\n",
      "DSW_95221_PORT_80_TCP_PROTO=tcp\n",
      "DSW_98084_PORT=tcp://10.192.30.20:80\n",
      "LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64\n",
      "NV_LIBNCCL_PACKAGE_VERSION=2.17.1-1\n",
      "NVIDIA_CUDA_END_OF_LIFE=1\n",
      "DSW_107274_SERVICE_PORT=80\n",
      "DSW_107274_PORT=tcp://10.192.12.78:80\n",
      "MASTER_PORT=23456\n",
      "DSW_96358_PORT=tcp://10.192.28.168:80\n",
      "DSW_98084_PORT_22_TCP_ADDR=10.192.30.20\n",
      "DSW_96358_SERVICE_PORT=80\n",
      "HOME=/root\n",
      "NV_LIBCUBLAS_DEV_VERSION=12.1.0.26-1\n",
      "NV_CUDNN_PACKAGE_NAME=libcudnn8\n",
      "DSW_107274_PORT_22_TCP_ADDR=10.192.12.78\n",
      "PAI_HPS_MODEL_REVISION=master\n",
      "PAI_HPS_MAX_LENGTH=1024\n",
      "PAI_HPS_LORA_RANK=8\n",
      "PAI_USER_ARGS=--tuner_backend swift --dataset blossom-math-zh --learning_rate 1e-4 --weight_decay 0.01 --gradient_accumulation_steps 16 --dtype AUTO --lora_rank 8 --lora_alpha 32 --lora_target_modules DEFAULT --eval_steps 100 --save_steps 100 --train_dataset_sample -1 --batch_size 1 --check_dataset_strategy warning --use_flash_attn false --output_dir /ml/output/model/ --model_revision master --max_length 1024 --logging_steps 10 --max_grad_norm 0.5 --gradient_checkpointing true --save_total_limit 2 --model_id_or_path qwen/Qwen1.5-7B-Chat --warmup_ratio 0.03 --sft_type lora --num_train_epochs 1 --lora_dropout_p 0.05\n",
      "PYTHONUNBUFFERED=0\n",
      "DSW_96358_PORT_22_TCP_ADDR=10.192.28.168\n",
      "DSW_95221_PORT_22_TCP=tcp://10.192.9.246:22\n",
      "PAI_HPS_OUTPUT_DIR=/ml/output/model/\n",
      "NPROC_PER_NODE=1\n",
      "DSW_98084_PORT_22_TCP_PORT=22\n",
      "PAI_OUTPUT_CHECKPOINTS=/ml/output/checkpoints/\n",
      "PAI_CONFIG_DIR=/ml/input/config/\n",
      "WORLD_SIZE=1\n",
      "DSW_98084_PORT_22_TCP_PROTO=tcp\n",
      "DSW_98084_PORT_80_TCP_ADDR=10.192.30.20\n",
      "DSW_107274_PORT_22_TCP_PORT=22\n",
      "PAI_HPS_LORA_DROPOUT_P=0.05\n",
      "NV_LIBNCCL_DEV_PACKAGE_VERSION=2.17.1-1\n",
      "REGION_ID=cn-hangzhou\n",
      "DSW_107274_PORT_22_TCP_PROTO=tcp\n",
      "DSW_96358_PORT_22_TCP_PORT=22\n",
      "DSW_107274_PORT_80_TCP_ADDR=10.192.12.78\n",
      "PAI_HPS_LOGGING_STEPS=10\n",
      "NV_LIBNPP_PACKAGE=libnpp-12-1=12.0.2.50-1\n",
      "DSW_96358_PORT_80_TCP_ADDR=10.192.28.168\n",
      "DSW_95221_PORT_80_TCP=tcp://10.192.9.246:80\n",
      "DSW_96358_PORT_22_TCP_PROTO=tcp\n",
      "PAI_HPS_DTYPE=AUTO\n",
      "CUDA_VERSION=12.1.0\n",
      "NV_CUDNN_PACKAGE=libcudnn8=8.9.0.131-1+cuda12.1\n",
      "RANK=0\n",
      "DSW_98084_PORT_80_TCP_PORT=80\n",
      "NV_NVPROF_VERSION=12.1.55-1\n",
      "DSW_107274_PORT_80_TCP_PORT=80\n",
      "DSW_98084_PORT_80_TCP_PROTO=tcp\n",
      "PAI_HPS_LORA_TARGET_MODULES=DEFAULT\n",
      "NV_LIBCUBLAS_PACKAGE_NAME=libcublas-12-1\n",
      "DSW_96358_PORT_80_TCP_PORT=80\n",
      "DSW_107274_PORT_80_TCP_PROTO=tcp\n",
      "NVIDIA_REQUIRE_CUDA=cuda>=12.1 brand=tesla,driver>=450,driver<451 brand=tesla,driver>=470,driver<471 brand=unknown,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=geforce,driver>=470,driver<471 brand=geforcertx,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=titan,driver>=470,driver<471 brand=titanrtx,driver>=470,driver<471 brand=tesla,driver>=510,driver<511 brand=unknown,driver>=510,driver<511 brand=nvidia,driver>=510,driver<511 brand=nvidiartx,driver>=510,driver<511 brand=geforce,driver>=510,driver<511 brand=geforcertx,driver>=510,driver<511 brand=quadro,driver>=510,driver<511 brand=quadrortx,driver>=510,driver<511 brand=titan,driver>=510,driver<511 brand=titanrtx,driver>=510,driver<511 brand=tesla,driver>=515,driver<516 brand=unknown,driver>=515,driver<516 brand=nvidia,driver>=515,driver<516 brand=nvidiartx,driver>=515,driver<516 brand=geforce,driver>=515,driver<516 brand=geforcertx,driver>=515,driver<516 brand=quadro,driver>=515,driver<516 brand=quadrortx,driver>=515,driver<516 brand=titan,driver>=515,driver<516 brand=titanrtx,driver>=515,driver<516 brand=tesla,driver>=525,driver<526 brand=unknown,driver>=525,driver<526 brand=nvidia,driver>=525,driver<526 brand=nvidiartx,driver>=525,driver<526 brand=geforce,driver>=525,driver<526 brand=geforcertx,driver>=525,driver<526 brand=quadro,driver>=525,driver<526 brand=quadrortx,driver>=525,driver<526 brand=titan,driver>=525,driver<526 brand=titanrtx,driver>=525,driver<526\n",
      "arch=x86_64\n",
      "TENANT_API_SERVER_URL=https://10.224.148.12:6443\n",
      "DSW_96358_PORT_80_TCP_PROTO=tcp\n",
      "NVIDIA_DRIVER_CAPABILITIES=compute,utility\n",
      "NV_CUDA_LIB_VERSION=12.1.0-1\n",
      "NV_LIBCUSPARSE_VERSION=12.0.2.55-1\n",
      "NV_LIBNCCL_PACKAGE_NAME=libnccl2\n",
      "PAI_TRAINING_JOB_ID=trainpiyqdicammv\n",
      "PAI_OUTPUT_TENSORBOARD=/ml/output/tensorboard/\n",
      "NV_NVML_DEV_VERSION=12.1.55-1\n",
      "NV_LIBNPP_DEV_PACKAGE=libnpp-dev-12-1=12.0.2.50-1\n",
      "VLLM_USE_MODELSCOPE=True\n",
      "DSW_98084_PORT_22_TCP=tcp://10.192.30.20:22\n",
      "PAI_HPS_TUNER_BACKEND=swift\n",
      "NV_CUDA_CUDART_VERSION=12.1.55-1\n",
      "NV_CUDNN_PACKAGE_DEV=libcudnn8-dev=8.9.0.131-1+cuda12.1\n",
      "KUBERNETES_PORT_443_TCP_ADDR=10.192.0.1\n",
      "DSW_107274_PORT_22_TCP=tcp://10.192.12.78:22\n",
      "PAI_OUTPUT_MODEL=/ml/output/model/\n",
      "DSW_96358_PORT_22_TCP=tcp://10.192.28.168:22\n",
      "DSW_98084_SERVICE_PORT_SSH_DSW_98084=22\n",
      "PAI_HPS_CHECK_DATASET_STRATEGY=warning\n",
      "PAI_HPS_MAX_GRAD_NORM=0.5\n",
      "PATH=/opt/conda/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\n",
      "NVARCH=x86_64\n",
      "NV_LIBCUBLAS_PACKAGE=libcublas-12-1=12.1.0.26-1\n",
      "NV_LIBCUBLAS_DEV_PACKAGE_NAME=libcublas-dev-12-1\n",
      "PIP_INDEX_URL=https://mirrors.cloud.aliyuncs.com/pypi/simple\n",
      "PAI_HPS_MODEL_ID_OR_PATH=qwen/Qwen1.5-7B-Chat\n",
      "KUBERNETES_PORT_443_TCP_PORT=443\n",
      "DSW_107274_SERVICE_PORT_SSH_DSW_107274=22\n",
      "DSW_98084_PORT_80_TCP=tcp://10.192.30.20:80\n",
      "PAI_HPS_EVAL_STEPS=100\n",
      "NV_LIBNCCL_PACKAGE=libnccl2=2.17.1-1+cuda12.1\n",
      "NV_LIBCUSPARSE_DEV_VERSION=12.0.2.55-1\n",
      "NV_LIBNCCL_DEV_PACKAGE_NAME=libnccl-dev\n",
      "KUBERNETES_PORT_443_TCP_PROTO=tcp\n",
      "DSW_107274_PORT_80_TCP=tcp://10.192.12.78:80\n",
      "PAI_HPS_DATASET=blossom-math-zh\n",
      "NVIDIA_PRODUCT_NAME=CUDA\n",
      "LANG=zh_CN.UTF-8\n",
      "DSW_96358_PORT_80_TCP=tcp://10.192.28.168:80\n",
      "DSW_98084_SERVICE_PORT_HTTP_DSW_98084=80\n",
      "DSW_96358_SERVICE_PORT_SSH_DSW_96358=22\n",
      "PAI_HPS_LEARNING_RATE=1e-4\n",
      "PAI_HPS_WEIGHT_DECAY=0.01\n",
      "PAI_HPS_USE_FLASH_ATTN=false\n",
      "NV_CUDA_CUDART_DEV_VERSION=12.1.55-1\n",
      "PAI_TRAINING_USE_ECI=true\n",
      "PAI_HPS_GRADIENT_CHECKPOINTING=true\n",
      "PAI_HPS_GRADIENT_ACCUMULATION_STEPS=16\n",
      "DSW_107274_SERVICE_PORT_HTTP_DSW_107274=80\n",
      "DSW_95221_SERVICE_HOST=10.192.9.246\n",
      "PAI_HPS_NUM_TRAIN_EPOCHS=1\n",
      "PAI_HPS_LORA_ALPHA=32\n",
      "NV_LIBCUBLAS_DEV_PACKAGE=libcublas-dev-12-1=12.1.0.26-1\n",
      "SHELL=/bin/bash\n",
      "NV_CUDA_NSIGHT_COMPUTE_DEV_PACKAGE=cuda-nsight-compute-12-1=12.1.0-1\n",
      "KUBERNETES_CONTAINER_RESOURCE_GPU=1\n",
      "DSW_96358_SERVICE_PORT_HTTP_DSW_96358=80\n",
      "PAI_HPS_SAVE_TOTAL_LIMIT=2\n",
      "NV_LIBNCCL_DEV_PACKAGE=libnccl-dev=2.17.1-1+cuda12.1\n",
      "SETUPTOOLS_USE_DISTUTILS=stdlib\n",
      "PAI_HPS_SAVE_STEPS=100\n",
      "PAI_HPS_WARMUP_RATIO=0.03\n",
      "NV_NVTX_VERSION=12.1.66-1\n",
      "NV_LIBNPP_VERSION=12.0.2.50-1\n",
      "CONDA_DIR=/opt/conda\n",
      "KUBERNETES_SERVICE_PORT_HTTPS=443\n",
      "KUBERNETES_PORT_443_TCP=tcp://10.192.0.1:443\n",
      "PAI_HPS_TRAIN_DATASET_SAMPLE=-1\n",
      "NV_CUDNN_VERSION=8.9.0.131\n",
      "LC_ALL=zh_CN.UTF-8\n",
      "KUBERNETES_SERVICE_HOST=10.224.148.12\n",
      "DSW_95221_PORT=tcp://10.192.9.246:80\n",
      "DSW_95221_SERVICE_PORT=80\n",
      "PWD=/\n",
      "PAI_HPS={\"batch_size\":\"1\",\"check_dataset_strategy\":\"warning\",\"dataset\":\"blossom-math-zh\",\"dtype\":\"AUTO\",\"eval_steps\":\"100\",\"gradient_accumulation_steps\":\"16\",\"gradient_checkpointing\":\"true\",\"learning_rate\":\"1e-4\",\"logging_steps\":\"10\",\"lora_alpha\":\"32\",\"lora_dropout_p\":\"0.05\",\"lora_rank\":\"8\",\"lora_target_modules\":\"DEFAULT\",\"max_grad_norm\":\"0.5\",\"max_length\":\"1024\",\"model_id_or_path\":\"qwen/Qwen1.5-7B-Chat\",\"model_revision\":\"master\",\"num_train_epochs\":\"1\",\"output_dir\":\"/ml/output/model/\",\"save_steps\":\"100\",\"save_total_limit\":\"2\",\"sft_type\":\"lora\",\"train_dataset_sample\":\"-1\",\"tuner_backend\":\"swift\",\"use_flash_attn\":\"false\",\"warmup_ratio\":\"0.03\",\"weight_decay\":\"0.01\"}\n",
      "MODELSCOPE_CACHE=/mnt/workspace/.cache/modelscope\n",
      "DSW_95221_PORT_22_TCP_ADDR=10.192.9.246\n",
      "PAI_HPS_BATCH_SIZE=1\n",
      "NVIDIA_VISIBLE_DEVICES=0\n",
      "NCCL_VERSION=2.17.1-1\n",
      "TZ=Asia/Shanghai\n",
      "DSW_98084_SERVICE_HOST=10.192.30.20\n",
      "NV_LIBNPP_DEV_VERSION=12.0.2.50-1\n",
      "DSW_107274_SERVICE_HOST=10.192.12.78\n",
      "DSW_96358_SERVICE_HOST=10.192.28.168\n",
      "DSW_95221_PORT_22_TCP_PORT=22\n",
      "PAI_ODPS_CREDENTIAL=/ml/input/credential/odps.json\n",
      "User program launching\n",
      "-----------------------------------------------------------------\n",
      "Looking in indexes: https://mirrors.cloud.aliyuncs.com/pypi/simple\n",
      "Requirement already satisfied: ms-swift in /opt/conda/lib/python3.10/site-packages (1.5.4)\n",
      "Collecting ms-swift\n",
      "  Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/f5/83/1c3f20ac674c8db3aed609eb08687e72b8a46b4decbb93654f6b28363176/ms_swift-1.6.1-py3-none-any.whl (424 kB)\n",
      "     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 424.8/424.8 kB 23.4 MB/s eta 0:00:00\n",
      "Requirement already satisfied: transformers>=4.37 in /opt/conda/lib/python3.10/site-packages (4.37.2)\n",
      "Collecting transformers>=4.37\n",
      "  Downloading https://mirrors.cloud.aliyuncs.com/pypi/packages/3e/6b/1b589f7b69aaea8193cf5bc91cf97410284aecd97b6312cdb08baedbdffe/transformers-4.38.1-py3-none-any.whl (8.5 MB)\n",
      "     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 8.5/8.5 MB 92.7 MB/s eta 0:00:00\n",
      "Requirement already satisfied: accelerate in /opt/conda/lib/python3.10/site-packages (from ms-swift) (0.26.1)\n",
      "Requirement already satisfied: dacite in /opt/conda/lib/python3.10/site-packages (from ms-swift) (1.8.1)\n",
      "Requirement already satisfied: datasets in /opt/conda/lib/python3.10/site-packages (from ms-swift) (2.16.1)\n",
      "Requirement already satisfied: jieba in /opt/conda/lib/python3.10/site-packages (from ms-swift) (0.42.1)\n",
      "Requirement already satisfied: matplotlib in /opt/conda/lib/python3.10/site-packages (from ms-swift) (3.5.3)\n",
      "Requirement already satisfied: modelscope>=1.9.3 in /opt/conda/lib/python3.10/site-packages (from ms-swift) (1.12.0)\n",
      "Requirement already satisfied: nltk in /opt/conda/lib/python3.10/site-packages (from ms-swift) (3.8.1)\n",
      "Requirement already satisfied: numpy in /opt/conda/lib/python3.10/site-packages (from ms-swift) (1.26.3)\n",
      "Requirement already satisfied: optimum in /opt/conda/lib/python3.10/site-packages (from ms-swift) (1.16.2)\n",
      "Requirement already satisfied: pandas in /opt/conda/lib/python3.10/site-packages (from ms-swift) (2.2.0)\n",
      "Requirement already satisfied: peft<0.8.0,>=0.7.1 in /opt/conda/lib/python3.10/site-packages (from ms-swift) (0.7.1)\n",
      "Requirement already satisfied: requests in /opt/conda/lib/python3.10/site-packages (from ms-swift) (2.31.0)\n",
      "Requirement already satisfied: rouge in /opt/conda/lib/python3.10/site-packages (from ms-swift) (1.0.1)\n",
      "Requirement already satisfied: safetensors in /opt/conda/lib/python3.10/site-packages (from ms-swift) (0.4.1)\n",
      "Requirement already satisfied: tensorboard in /opt/conda/lib/python3.10/site-packages (from ms-swift) (2.15.1)\n",
      "Requirement already satisfied: tqdm in /opt/conda/lib/python3.10/site-packages (from ms-swift) (4.65.0)\n",
      "Requirement already satisfied: trl>=0.7.7 in /opt/conda/lib/python3.10/site-packages (from ms-swift) (0.7.10)\n",
      "Requirement already satisfied: filelock in /opt/conda/lib/python3.10/site-packages (from transformers>=4.37) (3.13.1)\n",
      "Requirement already satisfied: huggingface-hub<1.0,>=0.19.3 in /opt/conda/lib/python3.10/site-packages (from transformers>=4.37) (0.20.3)\n",
      "Requirement already satisfied: packaging>=20.0 in /opt/conda/lib/python3.10/site-packages (from transformers>=4.37) (23.1)\n",
      "Requirement already satisfied: pyyaml>=5.1 in /opt/conda/lib/python3.10/site-packages (from transformers>=4.37) (6.0.1)\n",
      "Requirement already satisfied: regex!=2019.12.17 in /opt/conda/lib/python3.10/site-packages (from transformers>=4.37) (2023.12.25)\n",
      "Requirement already satisfied: tokenizers<0.19,>=0.14 in /opt/conda/lib/python3.10/site-packages (from transformers>=4.37) (0.15.1)\n",
      "Requirement already satisfied: fsspec>=2023.5.0 in /opt/conda/lib/python3.10/site-packages (from huggingface-hub<1.0,>=0.19.3->transformers>=4.37) (2023.10.0)\n",
      "Requirement already satisfied: typing-extensions>=3.7.4.3 in /opt/conda/lib/python3.10/site-packages (from huggingface-hub<1.0,>=0.19.3->transformers>=4.37) (4.9.0)\n",
      "Requirement already satisfied: addict in /opt/conda/lib/python3.10/site-packages (from modelscope>=1.9.3->ms-swift) (2.4.0)\n",
      "Requirement already satisfied: attrs in /opt/conda/lib/python3.10/site-packages (from modelscope>=1.9.3->ms-swift) (23.2.0)\n",
      "Requirement already satisfied: einops in /opt/conda/lib/python3.10/site-packages (from modelscope>=1.9.3->ms-swift) (0.7.0)\n",
      "Requirement already satisfied: gast>=0.2.2 in /opt/conda/lib/python3.10/site-packages (from modelscope>=1.9.3->ms-swift) (0.5.4)\n",
      "Requirement already satisfied: oss2 in /opt/conda/lib/python3.10/site-packages (from modelscope>=1.9.3->ms-swift) (2.18.4)\n",
      "Requirement already satisfied: Pillow>=6.2.0 in /opt/conda/lib/python3.10/site-packages (from modelscope>=1.9.3->ms-swift) (10.2.0)\n",
      "Requirement already satisfied: pyarrow!=9.0.0,>=6.0.0 in /opt/conda/lib/python3.10/site-packages (from modelscope>=1.9.3->ms-swift) (15.0.0)\n",
      "Requirement already satisfied: python-dateutil>=2.1 in /opt/conda/lib/python3.10/site-packages (from modelscope>=1.9.3->ms-swift) (2.8.2)\n",
      "Requirement already satisfied: scipy in /opt/conda/lib/python3.10/site-packages (from modelscope>=1.9.3->ms-swift) (1.11.4)\n",
      "Requirement already satisfied: setuptools in /opt/conda/lib/python3.10/site-packages (from modelscope>=1.9.3->ms-swift) (68.0.0)\n",
      "Requirement already satisfied: simplejson>=3.3.0 in /opt/conda/lib/python3.10/site-packages (from modelscope>=1.9.3->ms-swift) (3.19.2)\n",
      "Requirement already satisfied: sortedcontainers>=1.5.9 in /opt/conda/lib/python3.10/site-packages (from modelscope>=1.9.3->ms-swift) (2.4.0)\n",
      "Requirement already satisfied: urllib3>=1.26 in /opt/conda/lib/python3.10/site-packages (from modelscope>=1.9.3->ms-swift) (1.26.16)\n",
      "Requirement already satisfied: yapf in /opt/conda/lib/python3.10/site-packages (from modelscope>=1.9.3->ms-swift) (0.30.0)\n",
      "Requirement already satisfied: pyarrow-hotfix in /opt/conda/lib/python3.10/site-packages (from datasets->ms-swift) (0.6)\n",
      "Requirement already satisfied: dill<0.3.8,>=0.3.0 in /opt/conda/lib/python3.10/site-packages (from datasets->ms-swift) (0.3.7)\n",
      "Requirement already satisfied: xxhash in /opt/conda/lib/python3.10/site-packages (from datasets->ms-swift) (3.4.1)\n",
      "Requirement already satisfied: multiprocess in /opt/conda/lib/python3.10/site-packages (from datasets->ms-swift) (0.70.15)\n",
      "Requirement already satisfied: aiohttp in /opt/conda/lib/python3.10/site-packages (from datasets->ms-swift) (3.9.3)\n",
      "Requirement already satisfied: psutil in /opt/conda/lib/python3.10/site-packages (from peft<0.8.0,>=0.7.1->ms-swift) (5.9.7)\n",
      "Requirement already satisfied: torch>=1.13.0 in /opt/conda/lib/python3.10/site-packages (from peft<0.8.0,>=0.7.1->ms-swift) (2.1.2+cu121)\n",
      "Requirement already satisfied: charset-normalizer<4,>=2 in /opt/conda/lib/python3.10/site-packages (from requests->ms-swift) (2.0.4)\n",
      "Requirement already satisfied: idna<4,>=2.5 in /opt/conda/lib/python3.10/site-packages (from requests->ms-swift) (3.4)\n",
      "Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/lib/python3.10/site-packages (from requests->ms-swift) (2023.11.17)\n",
      "Requirement already satisfied: tyro>=0.5.11 in /opt/conda/lib/python3.10/site-packages (from trl>=0.7.7->ms-swift) (0.7.1)\n",
      "Requirement already satisfied: cycler>=0.10 in /opt/conda/lib/python3.10/site-packages (from matplotlib->ms-swift) (0.12.1)\n",
      "Requirement already satisfied: fonttools>=4.22.0 in /opt/conda/lib/python3.10/site-packages (from matplotlib->ms-swift) (4.47.0)\n",
      "Requirement already satisfied: kiwisolver>=1.0.1 in /opt/conda/lib/python3.10/site-packages (from matplotlib->ms-swift) (1.4.5)\n",
      "Requirement already satisfied: pyparsing>=2.2.1 in /opt/conda/lib/python3.10/site-packages (from matplotlib->ms-swift) (3.1.1)\n",
      "Requirement already satisfied: click in /opt/conda/lib/python3.10/site-packages (from nltk->ms-swift) (8.1.7)\n",
      "Requirement already satisfied: joblib in /opt/conda/lib/python3.10/site-packages (from nltk->ms-swift) (1.3.2)\n",
      "Requirement already satisfied: coloredlogs in /opt/conda/lib/python3.10/site-packages (from optimum->ms-swift) (14.0)\n",
      "Requirement already satisfied: sympy in /opt/conda/lib/python3.10/site-packages (from optimum->ms-swift) (1.12)\n",
      "Requirement already satisfied: pytz>=2020.1 in /opt/conda/lib/python3.10/site-packages (from pandas->ms-swift) (2023.4)\n",
      "Requirement already satisfied: tzdata>=2022.7 in /opt/conda/lib/python3.10/site-packages (from pandas->ms-swift) (2023.4)\n",
      "Requirement already satisfied: six in /opt/conda/lib/python3.10/site-packages (from rouge->ms-swift) (1.16.0)\n",
      "Requirement already satisfied: absl-py>=0.4 in /opt/conda/lib/python3.10/site-packages (from tensorboard->ms-swift) (2.0.0)\n",
      "Requirement already satisfied: grpcio>=1.48.2 in /opt/conda/lib/python3.10/site-packages (from tensorboard->ms-swift) (1.60.0)\n",
      "Requirement already satisfied: google-auth<3,>=1.6.3 in /opt/conda/lib/python3.10/site-packages (from tensorboard->ms-swift) (2.26.1)\n",
      "Requirement already satisfied: google-auth-oauthlib<2,>=0.5 in /opt/conda/lib/python3.10/site-packages (from tensorboard->ms-swift) (1.0.0)\n",
      "Requirement already satisfied: markdown>=2.6.8 in /opt/conda/lib/python3.10/site-packages (from tensorboard->ms-swift) (3.5.1)\n",
      "Requirement already satisfied: protobuf<4.24,>=3.19.6 in /opt/conda/lib/python3.10/site-packages (from tensorboard->ms-swift) (3.20.3)\n",
      "Requirement already satisfied: tensorboard-data-server<0.8.0,>=0.7.0 in /opt/conda/lib/python3.10/site-packages (from tensorboard->ms-swift) (0.7.2)\n",
      "Requirement already satisfied: werkzeug>=1.0.1 in /opt/conda/lib/python3.10/site-packages (from tensorboard->ms-swift) (2.2.3)\n",
      "Requirement already satisfied: aiosignal>=1.1.2 in /opt/conda/lib/python3.10/site-packages (from aiohttp->datasets->ms-swift) (1.3.1)\n",
      "Requirement already satisfied: frozenlist>=1.1.1 in /opt/conda/lib/python3.10/site-packages (from aiohttp->datasets->ms-swift) (1.4.1)\n",
      "Requirement already satisfied: multidict<7.0,>=4.5 in /opt/conda/lib/python3.10/site-packages (from aiohttp->datasets->ms-swift) (6.0.4)\n",
      "Requirement already satisfied: yarl<2.0,>=1.0 in /opt/conda/lib/python3.10/site-packages (from aiohttp->datasets->ms-swift) (1.9.4)\n",
      "Requirement already satisfied: async-timeout<5.0,>=4.0 in /opt/conda/lib/python3.10/site-packages (from aiohttp->datasets->ms-swift) (4.0.3)\n",
      "Requirement already satisfied: cachetools<6.0,>=2.0.0 in /opt/conda/lib/python3.10/site-packages (from google-auth<3,>=1.6.3->tensorboard->ms-swift) (5.3.2)\n",
      "Requirement already satisfied: pyasn1-modules>=0.2.1 in /opt/conda/lib/python3.10/site-packages (from google-auth<3,>=1.6.3->tensorboard->ms-swift) (0.3.0)\n",
      "Requirement already satisfied: rsa<5,>=3.1.4 in /opt/conda/lib/python3.10/site-packages (from google-auth<3,>=1.6.3->tensorboard->ms-swift) (4.9)\n",
      "Requirement already satisfied: requests-oauthlib>=0.7.0 in /opt/conda/lib/python3.10/site-packages (from google-auth-oauthlib<2,>=0.5->tensorboard->ms-swift) (1.3.1)\n",
      "Requirement already satisfied: networkx in /opt/conda/lib/python3.10/site-packages (from torch>=1.13.0->peft<0.8.0,>=0.7.1->ms-swift) (2.8.4)\n",
      "Requirement already satisfied: jinja2 in /opt/conda/lib/python3.10/site-packages (from torch>=1.13.0->peft<0.8.0,>=0.7.1->ms-swift) (3.1.2)\n",
      "Requirement already satisfied: triton==2.1.0 in /opt/conda/lib/python3.10/site-packages (from torch>=1.13.0->peft<0.8.0,>=0.7.1->ms-swift) (2.1.0)\n",
      "Requirement already satisfied: sentencepiece!=0.1.92,>=0.1.91 in /opt/conda/lib/python3.10/site-packages (from transformers[sentencepiece]>=4.26.0->optimum->ms-swift) (0.1.99)\n",
      "Requirement already satisfied: docstring-parser>=0.14.1 in /opt/conda/lib/python3.10/site-packages (from tyro>=0.5.11->trl>=0.7.7->ms-swift) (0.15)\n",
      "Requirement already satisfied: rich>=11.1.0 in /opt/conda/lib/python3.10/site-packages (from tyro>=0.5.11->trl>=0.7.7->ms-swift) (13.7.0)\n",
      "Requirement already satisfied: shtab>=1.5.6 in /opt/conda/lib/python3.10/site-packages (from tyro>=0.5.11->trl>=0.7.7->ms-swift) (1.6.5)\n",
      "Requirement already satisfied: MarkupSafe>=2.1.1 in /opt/conda/lib/python3.10/site-packages (from werkzeug>=1.0.1->tensorboard->ms-swift) (2.1.3)\n",
      "Requirement already satisfied: humanfriendly>=7.1 in /opt/conda/lib/python3.10/site-packages (from coloredlogs->optimum->ms-swift) (10.0)\n",
      "Requirement already satisfied: crcmod>=1.7 in /opt/conda/lib/python3.10/site-packages (from oss2->modelscope>=1.9.3->ms-swift) (1.7)\n",
      "Requirement already satisfied: pycryptodome>=3.4.7 in /opt/conda/lib/python3.10/site-packages (from oss2->modelscope>=1.9.3->ms-swift) (3.20.0)\n",
      "Requirement already satisfied: aliyun-python-sdk-kms>=2.4.1 in /opt/conda/lib/python3.10/site-packages (from oss2->modelscope>=1.9.3->ms-swift) (2.16.2)\n",
      "Requirement already satisfied: aliyun-python-sdk-core>=2.13.12 in /opt/conda/lib/python3.10/site-packages (from oss2->modelscope>=1.9.3->ms-swift) (2.14.0)\n",
      "Requirement already satisfied: mpmath>=0.19 in /opt/conda/lib/python3.10/site-packages (from sympy->optimum->ms-swift) (1.3.0)\n",
      "Requirement already satisfied: jmespath<1.0.0,>=0.9.3 in /opt/conda/lib/python3.10/site-packages (from aliyun-python-sdk-core>=2.13.12->oss2->modelscope>=1.9.3->ms-swift) (0.10.0)\n",
      "Requirement already satisfied: cryptography>=2.6.0 in /opt/conda/lib/python3.10/site-packages (from aliyun-python-sdk-core>=2.13.12->oss2->modelscope>=1.9.3->ms-swift) (41.0.3)\n",
      "Requirement already satisfied: pyasn1<0.6.0,>=0.4.6 in /opt/conda/lib/python3.10/site-packages (from pyasn1-modules>=0.2.1->google-auth<3,>=1.6.3->tensorboard->ms-swift) (0.5.1)\n",
      "Requirement already satisfied: oauthlib>=3.0.0 in /opt/conda/lib/python3.10/site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<2,>=0.5->tensorboard->ms-swift) (3.2.2)\n",
      "Requirement already satisfied: markdown-it-py>=2.2.0 in /opt/conda/lib/python3.10/site-packages (from rich>=11.1.0->tyro>=0.5.11->trl>=0.7.7->ms-swift) (3.0.0)\n",
      "Requirement already satisfied: pygments<3.0.0,>=2.13.0 in /opt/conda/lib/python3.10/site-packages (from rich>=11.1.0->tyro>=0.5.11->trl>=0.7.7->ms-swift) (2.17.2)\n",
      "Requirement already satisfied: cffi>=1.12 in /opt/conda/lib/python3.10/site-packages (from cryptography>=2.6.0->aliyun-python-sdk-core>=2.13.12->oss2->modelscope>=1.9.3->ms-swift) (1.15.1)\n",
      "Requirement already satisfied: mdurl~=0.1 in /opt/conda/lib/python3.10/site-packages (from markdown-it-py>=2.2.0->rich>=11.1.0->tyro>=0.5.11->trl>=0.7.7->ms-swift) (0.1.2)\n",
      "Requirement already satisfied: pycparser in /opt/conda/lib/python3.10/site-packages (from cffi>=1.12->cryptography>=2.6.0->aliyun-python-sdk-core>=2.13.12->oss2->modelscope>=1.9.3->ms-swift) (2.21)\n",
      "DEPRECATION: pytorch-lightning 1.7.7 has a non-standard dependency specifier torch>=1.9.*. pip 24.0 will enforce this behaviour change. A possible replacement is to upgrade to a newer version of pytorch-lightning or contact the author to suggest that they release a version with a conforming dependency specifiers. Discussion can be found at https://github.com/pypa/pip/issues/12063\n",
      "Installing collected packages: ms-swift\n",
      "  Attempting uninstall: ms-swift\n",
      "    Found existing installation: ms-swift 1.5.4\n",
      "    Uninstalling ms-swift-1.5.4:\n",
      "      Successfully uninstalled ms-swift-1.5.4\n",
      "Successfully installed ms-swift-1.6.1\n",
      "WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv\n",
      "\n",
      "[notice] A new release of pip is available: 23.3.2 -> 24.0\n",
      "[notice] To update, run: pip install --upgrade pip\n",
      "run sh: `torchrun --nproc_per_node 1 --master_port 23456 --master_addr trainpiyqdicammv-master-0 /opt/conda/lib/python3.10/site-packages/swift/cli/sft.py --tuner_backend swift --dataset blossom-math-zh --learning_rate 1e-4 --weight_decay 0.01 --gradient_accumulation_steps 16 --dtype AUTO --lora_rank 8 --lora_alpha 32 --lora_target_modules DEFAULT --eval_steps 100 --save_steps 100 --train_dataset_sample -1 --batch_size 1 --check_dataset_strategy warning --use_flash_attn false --output_dir /ml/output/model/ --model_revision master --max_length 1024 --logging_steps 10 --max_grad_norm 0.5 --gradient_checkpointing true --save_total_limit 2 --model_id_or_path qwen/Qwen1.5-7B-Chat --warmup_ratio 0.03 --sft_type lora --num_train_epochs 1 --lora_dropout_p 0.05`\n",
      "2024-02-23 13:23:12,006 - modelscope - INFO - PyTorch version 2.1.2+cu121 Found.\n",
      "2024-02-23 13:23:12,007 - modelscope - INFO - TensorFlow version 2.14.0 Found.\n",
      "2024-02-23 13:23:12,007 - modelscope - INFO - Loading ast index from /mnt/workspace/.cache/modelscope/ast_indexer\n",
      "2024-02-23 13:23:12,271 - modelscope - INFO - Updating the files for the changes of local files, first time updating will take longer time! Please wait till updating done!\n",
      "2024-02-23 13:23:12,280 - modelscope - INFO - AST-Scanning the path \"/opt/conda/lib/python3.10/site-packages/modelscope\" with the following sub folders ['models', 'metrics', 'pipelines', 'preprocessors', 'trainers', 'msdatasets', 'exporters']\n",
      "2024-02-23 13:23:24,611 - modelscope - INFO - Scanning done! A number of 964 components indexed or updated! Time consumed 12.330960035324097s\n",
      "2024-02-23 13:23:24,637 - modelscope - INFO - Loading done! Current index file version is 1.12.0, with md5 509123dba36c5e70a95f6780df348471 and a total number of 964 components indexed\n",
      "2024-02-23 13:23:33.414871: I tensorflow/core/util/port.cc:111] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n",
      "2024-02-23 13:23:37.512202: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.\n",
      "2024-02-23 13:23:44.525241: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
      "2024-02-23 13:23:44.525270: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
      "2024-02-23 13:23:44.593086: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
      "2024-02-23 13:23:47.231684: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.\n",
      "2024-02-23 13:23:47.232910: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
      "To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n",
      "2024-02-23 13:23:52.646237: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\n",
      "[INFO:swift] Start time of running main: 2024-02-23 13:25:04.707658\n",
      "[INFO:swift] Handle pai compat...\n",
      "[INFO:swift] Setting args.logging_dir: /ml/output/tensorboard/\n",
      "[INFO:swift] Setting args.add_output_dir_suffix: False\n",
      "[INFO:swift] Setting model_info['revision']: master\n",
      "[INFO:swift] Setting template_type: qwen\n",
      "[INFO:swift] Setting hub_model_id: qwen1half-7b-chat-lora\n",
      "[INFO:swift] Setting args.lazy_tokenize: False\n",
      "device_count: 1\n",
      "[INFO:swift] args: SftArguments(model_type='qwen1half-7b-chat', model_id_or_path='qwen/Qwen1.5-7B-Chat', model_revision='master', model_cache_dir=None, sft_type='lora', freeze_parameters=0.0, additional_trainable_parameters=[], tuner_backend='swift', template_type='qwen', output_dir='/ml/output/model', add_output_dir_suffix=False, ddp_backend='nccl', seed=42, resume_from_checkpoint=None, dtype='bf16', dataset=['blossom-math-zh'], dataset_seed=42, dataset_test_ratio=0.01, train_dataset_sample=-1, train_dataset_mix_ratio=None, train_dataset_mix_ds=['ms-bench'], val_dataset_sample=None, use_loss_scale=False, system=None, max_length=1024, truncation_strategy='delete', check_dataset_strategy='warning', custom_train_dataset_path=[], custom_val_dataset_path=[], self_cognition_sample=0, model_name=[None, None], model_author=[None, None], quantization_bit=0, bnb_4bit_comp_dtype='bf16', bnb_4bit_quant_type='nf4', bnb_4bit_use_double_quant=True, lora_target_modules=['q_proj', 'k_proj', 'v_proj'], lora_rank=8, lora_alpha=32, lora_dropout_p=0.05, lora_bias_trainable='none', lora_modules_to_save=[], lora_dtype='fp32', use_rslora=False, lora_layers_to_transform=None, lora_layers_pattern=None, lora_rank_pattern={}, lora_alpha_pattern={}, lora_loftq_config={}, adalora_target_r=8, adalora_init_r=12, adalora_tinit=0, adalora_tfinal=0, adalora_deltaT=1, adalora_beta1=0.85, adalora_beta2=0.85, adalora_orth_reg_weight=0.5, ia3_target_modules=['DEFAULT'], ia3_feedforward_modules=[], ia3_modules_to_save=[], neftune_noise_alpha=None, gradient_checkpointing=True, deepspeed=None, batch_size=1, eval_batch_size=1, num_train_epochs=1, max_steps=-1, optim='adamw_torch', adam_beta1=0.9, adam_beta2=0.999, learning_rate=0.0001, weight_decay=0.01, gradient_accumulation_steps=16, max_grad_norm=0.5, predict_with_generate=False, lr_scheduler_type='linear', warmup_ratio=0.03, eval_steps=100, save_steps=100, save_only_model=False, save_total_limit=2, logging_steps=10, dataloader_num_workers=1, dataloader_pin_memory=True, push_to_hub=False, hub_model_id='qwen1half-7b-chat-lora', hub_private_repo=True, push_hub_strategy='push_best', hub_token=None, test_oom_error=False, disable_tqdm=False, lazy_tokenize=False, preprocess_num_proc=1, use_flash_attn=False, ignore_args_error=False, check_model_is_latest=True, logging_dir='/ml/output/tensorboard', report_to=['tensorboard'], acc_strategy='token', save_on_each_node=True, evaluation_strategy='steps', save_strategy='steps', save_safetensors=True, gpu_memory_fraction=None, max_new_tokens=2048, do_sample=True, temperature=0.3, top_k=20, top_p=0.7, repetition_penalty=1.0, num_beams=1, per_device_train_batch_size=None, per_device_eval_batch_size=None, only_save_model=None, neftune_alpha=None, deepspeed_config_path=None)\n",
      "rank: 0, local_rank: 0, world_size: 1, local_world_size: 1\n",
      "[INFO:swift] Global seed set to 42\n",
      "Downloading: 100%|██████████| 663/663 [00:00<00:00, 9.21MB/s]\n",
      "Downloading: 100%|██████████| 51.0/51.0 [00:00<00:00, 849kB/s]\n",
      "Downloading: 100%|██████████| 216/216 [00:00<00:00, 3.55MB/s]\n",
      "Downloading: 100%|██████████| 1.59M/1.59M [00:00<00:00, 139MB/s]\n",
      "Downloading: 100%|█████████▉| 3.71G/3.71G [00:14<00:00, 275MB/s]\n",
      "Downloading: 100%|█████████▉| 3.69G/3.69G [00:11<00:00, 357MB/s]\n",
      "Downloading: 100%|█████████▉| 3.69G/3.69G [00:13<00:00, 301MB/s]\n",
      "Downloading: 100%|█████████▉| 3.30G/3.30G [00:10<00:00, 326MB/s]\n",
      "Downloading: 100%|██████████| 31.0k/31.0k [00:00<00:00, 138MB/s]\n",
      "Downloading: 100%|██████████| 4.15k/4.15k [00:00<00:00, 3.70MB/s]\n",
      "Downloading: 100%|██████████| 6.70M/6.70M [00:00<00:00, 55.3MB/s]\n",
      "Downloading: 100%|██████████| 1.13k/1.13k [00:00<00:00, 13.8MB/s]\n",
      "Downloading: 100%|██████████| 2.65M/2.65M [00:00<00:00, 25.6MB/s]\n",
      "Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.\n",
      "Loading checkpoint shards: 100%|██████████| 4/4 [00:24<00:00,  6.21s/it]\n",
      "[INFO:swift] model_config: Qwen2Config {\n",
      "  \"_name_or_path\": \"/mnt/workspace/.cache/modelscope/qwen/Qwen1___5-7B-Chat\",\n",
      "  \"architectures\": [\n",
      "    \"Qwen2ForCausalLM\"\n",
      "  ],\n",
      "  \"attention_dropout\": 0.0,\n",
      "  \"bos_token_id\": 151643,\n",
      "  \"eos_token_id\": 151643,\n",
      "  \"hidden_act\": \"silu\",\n",
      "  \"hidden_size\": 4096,\n",
      "  \"initializer_range\": 0.02,\n",
      "  \"intermediate_size\": 11008,\n",
      "  \"max_position_embeddings\": 32768,\n",
      "  \"max_window_layers\": 28,\n",
      "  \"model_type\": \"qwen2\",\n",
      "  \"num_attention_heads\": 32,\n",
      "  \"num_hidden_layers\": 32,\n",
      "  \"num_key_value_heads\": 32,\n",
      "  \"rms_norm_eps\": 1e-06,\n",
      "  \"rope_theta\": 1000000.0,\n",
      "  \"sliding_window\": 32768,\n",
      "  \"tie_word_embeddings\": false,\n",
      "  \"torch_dtype\": \"bfloat16\",\n",
      "  \"transformers_version\": \"4.37.2\",\n",
      "  \"use_cache\": true,\n",
      "  \"use_sliding_window\": false,\n",
      "  \"vocab_size\": 151936\n",
      "}\n",
      "\n",
      "[INFO:swift] generation_config: GenerationConfig {\n",
      "  \"do_sample\": true,\n",
      "  \"eos_token_id\": 151645,\n",
      "  \"max_new_tokens\": 2048,\n",
      "  \"pad_token_id\": 151643,\n",
      "  \"temperature\": 0.3,\n",
      "  \"top_k\": 20,\n",
      "  \"top_p\": 0.7\n",
      "}\n",
      "\n",
      "[INFO:swift] lora_target_modules: ['q_proj', 'k_proj', 'v_proj']\n",
      "[INFO:swift] lora_config: LoRAConfig(swift_type='LORA', peft_type=None, auto_mapping=None, base_model_name_or_path=None, revision=None, task_type=None, inference_mode=False, r=8, target_modules=['q_proj', 'k_proj', 'v_proj'], lora_alpha=32, lora_dropout=0.05, fan_in_fan_out=False, bias='none', modules_to_save=[], init_lora_weights=True, layers_to_transform=None, layers_pattern=None, rank_pattern={}, alpha_pattern={}, megatron_config=None, megatron_core='megatron.core', loftq_config={}, use_qa_lora=False, use_merged_linear=False, enable_lora=None, lora_dtype='fp32')\n",
      "[INFO:swift] [model.model.embed_tokens.weight]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0\n",
      "[INFO:swift] [model.model.layers.0.self_attn.q_proj.base_layer.weight]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0\n",
      "[INFO:swift] [model.model.layers.0.self_attn.q_proj.base_layer.bias]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0\n",
      "[INFO:swift] [model.model.layers.0.self_attn.q_proj.lora_A.default.weight]: requires_grad=True, dtype=torch.float32, device=cuda:0\n",
      "[INFO:swift] [model.model.layers.0.self_attn.q_proj.lora_B.default.weight]: requires_grad=True, dtype=torch.float32, device=cuda:0\n",
      "[INFO:swift] [model.model.layers.0.self_attn.k_proj.base_layer.weight]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0\n",
      "[INFO:swift] [model.model.layers.0.self_attn.k_proj.base_layer.bias]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0\n",
      "[INFO:swift] [model.model.layers.0.self_attn.k_proj.lora_A.default.weight]: requires_grad=True, dtype=torch.float32, device=cuda:0\n",
      "[INFO:swift] [model.model.layers.0.self_attn.k_proj.lora_B.default.weight]: requires_grad=True, dtype=torch.float32, device=cuda:0\n",
      "[INFO:swift] [model.model.layers.0.self_attn.v_proj.base_layer.weight]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0\n",
      "[INFO:swift] [model.model.layers.0.self_attn.v_proj.base_layer.bias]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0\n",
      "[INFO:swift] [model.model.layers.0.self_attn.v_proj.lora_A.default.weight]: requires_grad=True, dtype=torch.float32, device=cuda:0\n",
      "[INFO:swift] [model.model.layers.0.self_attn.v_proj.lora_B.default.weight]: requires_grad=True, dtype=torch.float32, device=cuda:0\n",
      "[INFO:swift] [model.model.layers.0.self_attn.o_proj.weight]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0\n",
      "[INFO:swift] [model.model.layers.0.mlp.gate_proj.weight]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0\n",
      "[INFO:swift] [model.model.layers.0.mlp.up_proj.weight]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0\n",
      "[INFO:swift] [model.model.layers.0.mlp.down_proj.weight]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0\n",
      "[INFO:swift] [model.model.layers.0.input_layernorm.weight]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0\n",
      "[INFO:swift] [model.model.layers.0.post_attention_layernorm.weight]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0\n",
      "[INFO:swift] [model.model.layers.1.self_attn.q_proj.base_layer.weight]: requires_grad=False, dtype=torch.bfloat16, device=cuda:0\n",
      "[INFO:swift] ...\n",
      "[INFO:swift] SwiftModel: 7727.6160M Params (6.2915M Trainable [0.0814%]), 268.4375M Buffers.\n",
      "[INFO:swift] SwiftModel(\n",
      "  (model): Qwen2ForCausalLM(\n",
      "    (model): Qwen2Model(\n",
      "      (embed_tokens): Embedding(151936, 4096)\n",
      "      (layers): ModuleList(\n",
      "        (0-31): 32 x Qwen2DecoderLayer(\n",
      "          (self_attn): Qwen2SdpaAttention(\n",
      "            (q_proj): lora.Linear(\n",
      "              (base_layer): Linear(in_features=4096, out_features=4096, bias=True)\n",
      "              (lora_dropout): ModuleDict(\n",
      "                (default): Dropout(p=0.05, inplace=False)\n",
      "              )\n",
      "              (lora_A): ModuleDict(\n",
      "                (default): Linear(in_features=4096, out_features=8, bias=False)\n",
      "              )\n",
      "              (lora_B): ModuleDict(\n",
      "                (default): Linear(in_features=8, out_features=4096, bias=False)\n",
      "              )\n",
      "              (lora_embedding_A): ParameterDict()\n",
      "              (lora_embedding_B): ParameterDict()\n",
      "            )\n",
      "            (k_proj): lora.Linear(\n",
      "              (base_layer): Linear(in_features=4096, out_features=4096, bias=True)\n",
      "              (lora_dropout): ModuleDict(\n",
      "                (default): Dropout(p=0.05, inplace=False)\n",
      "              )\n",
      "              (lora_A): ModuleDict(\n",
      "                (default): Linear(in_features=4096, out_features=8, bias=False)\n",
      "              )\n",
      "              (lora_B): ModuleDict(\n",
      "                (default): Linear(in_features=8, out_features=4096, bias=False)\n",
      "              )\n",
      "              (lora_embedding_A): ParameterDict()\n",
      "              (lora_embedding_B): ParameterDict()\n",
      "            )\n",
      "            (v_proj): lora.Linear(\n",
      "              (base_layer): Linear(in_features=4096, out_features=4096, bias=True)\n",
      "              (lora_dropout): ModuleDict(\n",
      "                (default): Dropout(p=0.05, inplace=False)\n",
      "              )\n",
      "              (lora_A): ModuleDict(\n",
      "                (default): Linear(in_features=4096, out_features=8, bias=False)\n",
      "              )\n",
      "              (lora_B): ModuleDict(\n",
      "                (default): Linear(in_features=8, out_features=4096, bias=False)\n",
      "              )\n",
      "              (lora_embedding_A): ParameterDict()\n",
      "              (lora_embedding_B): ParameterDict()\n",
      "            )\n",
      "            (o_proj): Linear(in_features=4096, out_features=4096, bias=False)\n",
      "            (rotary_emb): Qwen2RotaryEmbedding()\n",
      "          )\n",
      "          (mlp): Qwen2MLP(\n",
      "            (gate_proj): Linear(in_features=4096, out_features=11008, bias=False)\n",
      "            (up_proj): Linear(in_features=4096, out_features=11008, bias=False)\n",
      "            (down_proj): Linear(in_features=11008, out_features=4096, bias=False)\n",
      "            (act_fn): SiLU()\n",
      "          )\n",
      "          (input_layernorm): Qwen2RMSNorm()\n",
      "          (post_attention_layernorm): Qwen2RMSNorm()\n",
      "        )\n",
      "      )\n",
      "      (norm): Qwen2RMSNorm()\n",
      "    )\n",
      "    (lm_head): Linear(in_features=4096, out_features=151936, bias=False)\n",
      "  )\n",
      ")\n",
      "[WARNING:modelscope] Reusing dataset dataset_builder (/root/.cache/modelscope/hub/datasets/AI-ModelScope/blossom-math-v2/master/data_files)\n",
      "[INFO:modelscope] Generating dataset dataset_builder (/root/.cache/modelscope/hub/datasets/AI-ModelScope/blossom-math-v2/master/data_files)\n",
      "[INFO:modelscope] Loading meta-data file ...\n",
      "10000it [00:00, 25449.82it/s]\n",
      "100%|██████████| 10000/10000 [00:00<00:00, 25644.66it/s]\n",
      "[INFO:swift] check dataset...\n",
      "[INFO:swift] check_dataset_strategy: 'warning'\n",
      "100%|██████████| 9900/9900 [00:00<00:00, 22509.46it/s]\n",
      "100%|██████████| 100/100 [00:00<00:00, 21844.20it/s]\n",
      "[INFO:swift] train_dataset: Dataset({\n",
      "    features: ['query', 'response'],\n",
      "    num_rows: 9900\n",
      "})\n",
      "[INFO:swift] val_dataset: Dataset({\n",
      "    features: ['query', 'response'],\n",
      "    num_rows: 100\n",
      "})\n",
      "[INFO:swift] system: You are a helpful assistant.\n",
      "[INFO:swift] args.lazy_tokenize: False\n",
      "[INFO:swift] Using num_proc: 1\n",
      "100%|██████████| 9900/9900 [00:04<00:00, 2032.30it/s]\n",
      "100%|██████████| 100/100 [00:00<00:00, 1985.37it/s]\n",
      "[INFO:swift] [INPUT_IDS] [151644, 8948, 198, 2610, 525, 264, 10950, 17847, 13, 151645, 198, 151644, 872, 198, 101133, 3837, 41, 6670, 108677, 21, 15, 82847, 102131, 101962, 17447, 104178, 34187, 105863, 1773, 102119, 104705, 3837, 104205, 105625, 17447, 104178, 9370, 105863, 73157, 82847, 102131, 104825, 17714, 19, 15, 15, 110168, 28330, 100229, 3837, 68536, 110638, 109167, 72990, 9370, 105625, 101913, 104825, 101043, 100844, 105625, 99774, 99369, 1773, 41, 6670, 9370, 21, 15, 82847, 102131, 101962, 101047, 113690, 20412, 110638, 109167, 72990, 9370, 105625, 3837, 106177, 109633, 20412, 104205, 105625, 1773, 101133, 41, 6670, 9370, 72990, 101145, 110599, 110168, 28330, 100229, 9370, 105863, 101036, 11319, 151645, 198, 151644, 77091, 198, 41, 6670, 9370, 21, 15, 82847, 102131, 101962, 15946, 3837, 113690, 20412, 110638, 109167, 72990, 9370, 105625, 3837, 91676, 17, 15, 82847, 102131, 1773, 108719, 19, 15, 82847, 102131, 20412, 104205, 105625, 8997, 110638, 109167, 72990, 9370, 105625, 17447, 104178, 9370, 105863, 73157, 82847, 102131, 104825, 20412, 100844, 105625, 99774, 99369, 3837, 91676, 19, 15, 15, 14, 17, 284, 220, 17, 15, 15, 110168, 28330, 100229, 14, 82847, 102131, 8997, 18493, 110638, 109167, 72990, 9370, 17, 15, 82847, 102131, 17447, 104178, 9370, 105863, 104825, 17714, 17, 15, 82847, 102131, 856, 220, 17, 15, 15, 110168, 28330, 100229, 14, 82847, 102131, 284, 220, 19, 15, 15, 15, 110168, 28330, 100229, 8997, 104205, 105625, 17447, 104178, 9370, 105863, 73157, 82847, 102131, 104825, 17714, 19, 15, 15, 110168, 28330, 100229, 8997, 18493, 100844, 105625, 9370, 19, 15, 82847, 102131, 17447, 104178, 9370, 105863, 104825, 17714, 19, 15, 82847, 102131, 856, 220, 19, 15, 15, 110168, 28330, 100229, 14, 82847, 102131, 284, 220, 16, 21, 15, 15, 15, 110168, 28330, 100229, 8997, 99999, 3837, 101133, 41, 6670, 109633, 17447, 51232, 34187, 19, 15, 15, 15, 110168, 28330, 100229, 488, 220, 16, 21, 15, 15, 15, 110168, 28330, 100229, 284, 220, 17, 15, 15, 15, 15, 110168, 28330, 100229, 9370, 105863, 3407, 16141, 25, 220, 17, 15, 15, 15, 15, 151645]\n",
      "[INFO:swift] [INPUT] <|im_start|>system\n",
      "You are a helpful assistant.<|im_end|>\n",
      "<|im_start|>user\n",
      "去年，Jorge在他的60英亩土地上种植了玉米。通常情况下，良好的土壤上种植的玉米每英亩产量为400蒲式耳，而富含黏土的土壤上的产量只有良好土壤的一半。Jorge的60英亩土地中的三分之一是富含黏土的土壤，其余的土地是良好的土壤。去年Jorge的土地产了多少蒲式耳的玉米呢？<|im_end|>\n",
      "<|im_start|>assistant\n",
      "Jorge的60英亩土地中，三分之一是富含黏土的土壤，即20英亩。剩下的40英亩是良好的土壤。\n",
      "富含黏土的土壤上种植的玉米每英亩产量是良好土壤的一半，即400/2 = 200蒲式耳/英亩。\n",
      "在富含黏土的20英亩上种植的玉米产量为20英亩 x 200蒲式耳/英亩 = 4000蒲式耳。\n",
      "良好的土壤上种植的玉米每英亩产量为400蒲式耳。\n",
      "在良好土壤的40英亩上种植的玉米产量为40英亩 x 400蒲式耳/英亩 = 16000蒲式耳。\n",
      "所以，去年Jorge的土地上产了4000蒲式耳 + 16000蒲式耳 = 20000蒲式耳的玉米。\n",
      "\n",
      "Answer: 20000<|im_end|>\n",
      "[INFO:swift] [LABELS_IDS] [-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 41, 6670, 9370, 21, 15, 82847, 102131, 101962, 15946, 3837, 113690, 20412, 110638, 109167, 72990, 9370, 105625, 3837, 91676, 17, 15, 82847, 102131, 1773, 108719, 19, 15, 82847, 102131, 20412, 104205, 105625, 8997, 110638, 109167, 72990, 9370, 105625, 17447, 104178, 9370, 105863, 73157, 82847, 102131, 104825, 20412, 100844, 105625, 99774, 99369, 3837, 91676, 19, 15, 15, 14, 17, 284, 220, 17, 15, 15, 110168, 28330, 100229, 14, 82847, 102131, 8997, 18493, 110638, 109167, 72990, 9370, 17, 15, 82847, 102131, 17447, 104178, 9370, 105863, 104825, 17714, 17, 15, 82847, 102131, 856, 220, 17, 15, 15, 110168, 28330, 100229, 14, 82847, 102131, 284, 220, 19, 15, 15, 15, 110168, 28330, 100229, 8997, 104205, 105625, 17447, 104178, 9370, 105863, 73157, 82847, 102131, 104825, 17714, 19, 15, 15, 110168, 28330, 100229, 8997, 18493, 100844, 105625, 9370, 19, 15, 82847, 102131, 17447, 104178, 9370, 105863, 104825, 17714, 19, 15, 82847, 102131, 856, 220, 19, 15, 15, 110168, 28330, 100229, 14, 82847, 102131, 284, 220, 16, 21, 15, 15, 15, 110168, 28330, 100229, 8997, 99999, 3837, 101133, 41, 6670, 109633, 17447, 51232, 34187, 19, 15, 15, 15, 110168, 28330, 100229, 488, 220, 16, 21, 15, 15, 15, 110168, 28330, 100229, 284, 220, 17, 15, 15, 15, 15, 110168, 28330, 100229, 9370, 105863, 3407, 16141, 25, 220, 17, 15, 15, 15, 15, 151645]\n",
      "[INFO:swift] [LABELS] [-100 * 106]Jorge的60英亩土地中，三分之一是富含黏土的土壤，即20英亩。剩下的40英亩是良好的土壤。\n",
      "富含黏土的土壤上种植的玉米每英亩产量是良好土壤的一半，即400/2 = 200蒲式耳/英亩。\n",
      "在富含黏土的20英亩上种植的玉米产量为20英亩 x 200蒲式耳/英亩 = 4000蒲式耳。\n",
      "良好的土壤上种植的玉米每英亩产量为400蒲式耳。\n",
      "在良好土壤的40英亩上种植的玉米产量为40英亩 x 400蒲式耳/英亩 = 16000蒲式耳。\n",
      "所以，去年Jorge的土地上产了4000蒲式耳 + 16000蒲式耳 = 20000蒲式耳的玉米。\n",
      "\n",
      "Answer: 20000<|im_end|>\n",
      "[INFO:swift] Dataset Token Length: 169.296465±58.663952, min=35.000000, max=563.000000, size=9900\n",
      "[INFO:swift] Dataset Token Length: 168.860000±58.182647, min=63.000000, max=361.000000, size=100\n",
      "[INFO:swift] Setting model.config.use_cache: False\n",
      "[INFO:swift] training_args: Seq2SeqTrainingArguments(\n",
      "_n_gpu=1,\n",
      "acc_strategy=token,\n",
      "adafactor=False,\n",
      "adam_beta1=0.9,\n",
      "adam_beta2=0.999,\n",
      "adam_epsilon=1e-08,\n",
      "additional_saved_files=[],\n",
      "auto_find_batch_size=False,\n",
      "bf16=True,\n",
      "bf16_full_eval=False,\n",
      "data_seed=None,\n",
      "dataloader_drop_last=False,\n",
      "dataloader_num_workers=1,\n",
      "dataloader_persistent_workers=False,\n",
      "dataloader_pin_memory=True,\n",
      "ddp_backend=nccl,\n",
      "ddp_broadcast_buffers=False,\n",
      "ddp_bucket_cap_mb=None,\n",
      "ddp_find_unused_parameters=False,\n",
      "ddp_timeout=1800,\n",
      "debug=[],\n",
      "deepspeed=None,\n",
      "disable_tqdm=False,\n",
      "dispatch_batches=None,\n",
      "do_eval=True,\n",
      "do_predict=False,\n",
      "do_train=False,\n",
      "eval_accumulation_steps=None,\n",
      "eval_delay=0,\n",
      "eval_steps=100,\n",
      "evaluation_strategy=steps,\n",
      "fp16=False,\n",
      "fp16_backend=auto,\n",
      "fp16_full_eval=False,\n",
      "fp16_opt_level=O1,\n",
      "fsdp=[],\n",
      "fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False},\n",
      "fsdp_min_num_params=0,\n",
      "fsdp_transformer_layer_cls_to_wrap=None,\n",
      "full_determinism=False,\n",
      "generation_config=GenerationConfig {\n",
      "  \"do_sample\": true,\n",
      "  \"eos_token_id\": 151645,\n",
      "  \"max_new_tokens\": 2048,\n",
      "  \"pad_token_id\": 151643,\n",
      "  \"temperature\": 0.3,\n",
      "  \"top_k\": 20,\n",
      "  \"top_p\": 0.7\n",
      "}\n",
      ",\n",
      "generation_max_length=None,\n",
      "generation_num_beams=None,\n",
      "gradient_accumulation_steps=16,\n",
      "gradient_checkpointing=True,\n",
      "gradient_checkpointing_kwargs=None,\n",
      "greater_is_better=False,\n",
      "group_by_length=False,\n",
      "half_precision_backend=auto,\n",
      "hub_always_push=False,\n",
      "hub_model_id=qwen1half-7b-chat-lora,\n",
      "hub_private_repo=True,\n",
      "hub_strategy=every_save,\n",
      "hub_token=<HUB_TOKEN>,\n",
      "ignore_data_skip=False,\n",
      "include_inputs_for_metrics=False,\n",
      "include_num_input_tokens_seen=False,\n",
      "include_tokens_per_second=False,\n",
      "jit_mode_eval=False,\n",
      "label_names=None,\n",
      "label_smoothing_factor=0.0,\n",
      "learning_rate=0.0001,\n",
      "length_column_name=length,\n",
      "load_best_model_at_end=False,\n",
      "local_rank=0,\n",
      "log_level=passive,\n",
      "log_level_replica=warning,\n",
      "log_on_each_node=True,\n",
      "logging_dir=/ml/output/tensorboard,\n",
      "logging_first_step=True,\n",
      "logging_nan_inf_filter=True,\n",
      "logging_steps=10,\n",
      "logging_strategy=steps,\n",
      "lr_scheduler_kwargs={},\n",
      "lr_scheduler_type=linear,\n",
      "max_grad_norm=0.5,\n",
      "max_steps=-1,\n",
      "metric_for_best_model=loss,\n",
      "mp_parameters=,\n",
      "neftune_noise_alpha=None,\n",
      "no_cuda=False,\n",
      "num_train_epochs=1,\n",
      "optim=adamw_torch,\n",
      "optim_args=None,\n",
      "output_dir=/ml/output/model,\n",
      "overwrite_output_dir=False,\n",
      "past_index=-1,\n",
      "per_device_eval_batch_size=1,\n",
      "per_device_train_batch_size=1,\n",
      "predict_with_generate=False,\n",
      "prediction_loss_only=False,\n",
      "push_hub_strategy=push_best,\n",
      "push_to_hub=False,\n",
      "push_to_hub_model_id=None,\n",
      "push_to_hub_organization=None,\n",
      "push_to_hub_token=<PUSH_TO_HUB_TOKEN>,\n",
      "ray_scope=last,\n",
      "remove_unused_columns=False,\n",
      "report_to=['tensorboard'],\n",
      "resume_from_checkpoint=None,\n",
      "run_name=/ml/output/model,\n",
      "save_on_each_node=True,\n",
      "save_only_model=False,\n",
      "save_safetensors=True,\n",
      "save_steps=100,\n",
      "save_strategy=steps,\n",
      "save_total_limit=2,\n",
      "seed=42,\n",
      "skip_memory_metrics=True,\n",
      "sortish_sampler=True,\n",
      "split_batches=False,\n",
      "tf32=None,\n",
      "torch_compile=False,\n",
      "torch_compile_backend=None,\n",
      "torch_compile_mode=None,\n",
      "torchdynamo=None,\n",
      "tpu_metrics_debug=False,\n",
      "tpu_num_cores=None,\n",
      "train_sampler_random=True,\n",
      "use_cpu=False,\n",
      "use_ipex=False,\n",
      "use_legacy_prediction_loop=False,\n",
      "use_mps_device=False,\n",
      "warmup_ratio=0.03,\n",
      "warmup_steps=0,\n",
      "weight_decay=0.01,\n",
      ")\n",
      "[ERROR:swift] Authentication token does not exist, failed to access model qwen/Qwen1___5-7B-Chat which may not exist or may be private. Please login first.\n",
      "Detected kernel version 4.19.24, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.\n",
      "[INFO:swift] The SftArguments will be saved in: /ml/output/model/sft_args.json\n",
      "[INFO:swift] The Seq2SeqTrainingArguments will be saved in: /ml/output/model/training_args.json\n",
      "[INFO:swift] The logging file will be saved in: /ml/output/model/logging.jsonl\n",
      "{'loss': 1.54105508, 'acc': 0.74976408, 'learning_rate': 5.26e-06, 'epoch': 0.0, 'global_step': 1}\n",
      "{'loss': 1.62215816, 'acc': 0.74622122, 'learning_rate': 5.263e-05, 'epoch': 0.02, 'global_step': 10}\n",
      "{'loss': 0.99780579, 'acc': 0.79739442, 'learning_rate': 9.983e-05, 'epoch': 0.03, 'global_step': 20}\n",
      "{'loss': 0.55699477, 'acc': 0.83835688, 'learning_rate': 9.816e-05, 'epoch': 0.05, 'global_step': 30}\n",
      "{'loss': 0.45281067, 'acc': 0.86557188, 'learning_rate': 9.649e-05, 'epoch': 0.06, 'global_step': 40}\n",
      "{'loss': 0.43421435, 'acc': 0.8655056, 'learning_rate': 9.482e-05, 'epoch': 0.08, 'global_step': 50}\n",
      "{'loss': 0.40880346, 'acc': 0.86961002, 'learning_rate': 9.316e-05, 'epoch': 0.1, 'global_step': 60}\n",
      "{'loss': 0.40410023, 'acc': 0.8763093, 'learning_rate': 9.149e-05, 'epoch': 0.11, 'global_step': 70}\n",
      "{'loss': 0.43470798, 'acc': 0.86318979, 'learning_rate': 8.982e-05, 'epoch': 0.13, 'global_step': 80}\n",
      "{'loss': 0.42842798, 'acc': 0.86792212, 'learning_rate': 8.815e-05, 'epoch': 0.15, 'global_step': 90}\n",
      "{'loss': 0.42398038, 'acc': 0.86915607, 'learning_rate': 8.648e-05, 'epoch': 0.16, 'global_step': 100}\n",
      "Train:  16%|█▌        | 100/618 [05:30<27:32,  3.19s/it]\n",
      "{'eval_loss': 0.4130066, 'eval_acc': 0.86948321, 'eval_runtime': 3.6709, 'eval_samples_per_second': 27.241, 'eval_steps_per_second': 27.241, 'epoch': 0.16, 'global_step': 100}\n",
      "Val: 100%|██████████| 100/100 [00:03<00:00, 27.98it/s]t]\n",
      "[INFO:swift] Saving model checkpoint to /ml/output/model/checkpoint-100\n",
      "{'loss': 0.40386291, 'acc': 0.87378187, 'learning_rate': 8.481e-05, 'epoch': 0.18, 'global_step': 110}\n",
      "{'loss': 0.42900181, 'acc': 0.86530523, 'learning_rate': 8.314e-05, 'epoch': 0.19, 'global_step': 120}\n",
      "{'loss': 0.41559343, 'acc': 0.87044754, 'learning_rate': 8.147e-05, 'epoch': 0.21, 'global_step': 130}\n",
      "{'loss': 0.42365618, 'acc': 0.86905994, 'learning_rate': 7.98e-05, 'epoch': 0.23, 'global_step': 140}\n",
      "{'loss': 0.4003653, 'acc': 0.87290335, 'learning_rate': 7.813e-05, 'epoch': 0.24, 'global_step': 150}\n",
      "{'loss': 0.39991553, 'acc': 0.87602005, 'learning_rate': 7.646e-05, 'epoch': 0.26, 'global_step': 160}\n",
      "{'loss': 0.39574809, 'acc': 0.88023663, 'learning_rate': 7.479e-05, 'epoch': 0.27, 'global_step': 170}\n",
      "{'loss': 0.40609832, 'acc': 0.87066364, 'learning_rate': 7.312e-05, 'epoch': 0.29, 'global_step': 180}\n",
      "{'loss': 0.3937706, 'acc': 0.87937489, 'learning_rate': 7.145e-05, 'epoch': 0.31, 'global_step': 190}\n",
      "{'loss': 0.43629007, 'acc': 0.86459599, 'learning_rate': 6.978e-05, 'epoch': 0.32, 'global_step': 200}\n",
      "Train:  32%|███▏      | 200/618 [10:52<22:05,  3.17s/it]\n",
      "{'eval_loss': 0.4045023, 'eval_acc': 0.87410411, 'eval_runtime': 3.6405, 'eval_samples_per_second': 27.468, 'eval_steps_per_second': 27.468, 'epoch': 0.32, 'global_step': 200}\n",
      "Val: 100%|██████████| 100/100 [00:03<00:00, 28.10it/s]t]\n",
      "[INFO:swift] Saving model checkpoint to /ml/output/model/checkpoint-200\n",
      "{'loss': 0.41512051, 'acc': 0.87068996, 'learning_rate': 6.811e-05, 'epoch': 0.34, 'global_step': 210}\n",
      "{'loss': 0.41681356, 'acc': 0.8721693, 'learning_rate': 6.644e-05, 'epoch': 0.36, 'global_step': 220}\n",
      "{'loss': 0.38484142, 'acc': 0.88141241, 'learning_rate': 6.477e-05, 'epoch': 0.37, 'global_step': 230}\n",
      "{'loss': 0.41272192, 'acc': 0.87364502, 'learning_rate': 6.311e-05, 'epoch': 0.39, 'global_step': 240}\n",
      "{'loss': 0.40615811, 'acc': 0.87337313, 'learning_rate': 6.144e-05, 'epoch': 0.4, 'global_step': 250}\n",
      "{'loss': 0.38701057, 'acc': 0.87577715, 'learning_rate': 5.977e-05, 'epoch': 0.42, 'global_step': 260}\n",
      "{'loss': 0.39820256, 'acc': 0.87572756, 'learning_rate': 5.81e-05, 'epoch': 0.44, 'global_step': 270}\n",
      "{'loss': 0.40803494, 'acc': 0.87359161, 'learning_rate': 5.643e-05, 'epoch': 0.45, 'global_step': 280}\n",
      "{'loss': 0.39741104, 'acc': 0.8752018, 'learning_rate': 5.476e-05, 'epoch': 0.47, 'global_step': 290}\n",
      "{'loss': 0.39963336, 'acc': 0.87681875, 'learning_rate': 5.309e-05, 'epoch': 0.48, 'global_step': 300}\n",
      "Train:  49%|████▊     | 300/618 [16:15<16:49,  3.17s/it]\n",
      "{'eval_loss': 0.39902455, 'eval_acc': 0.87259525, 'eval_runtime': 3.6278, 'eval_samples_per_second': 27.565, 'eval_steps_per_second': 27.565, 'epoch': 0.48, 'global_step': 300}\n",
      "Val: 100%|██████████| 100/100 [00:03<00:00, 28.21it/s]t]\n",
      "[INFO:swift] Saving model checkpoint to /ml/output/model/checkpoint-300\n",
      "{'loss': 0.40259352, 'acc': 0.87652578, 'learning_rate': 5.142e-05, 'epoch': 0.5, 'global_step': 310}\n",
      "{'loss': 0.4186729, 'acc': 0.8698246, 'learning_rate': 4.975e-05, 'epoch': 0.52, 'global_step': 320}\n",
      "{'loss': 0.39255619, 'acc': 0.87981691, 'learning_rate': 4.808e-05, 'epoch': 0.53, 'global_step': 330}\n",
      "{'loss': 0.40641832, 'acc': 0.87367649, 'learning_rate': 4.641e-05, 'epoch': 0.55, 'global_step': 340}\n",
      "{'loss': 0.40061021, 'acc': 0.87454872, 'learning_rate': 4.474e-05, 'epoch': 0.57, 'global_step': 350}\n",
      "{'loss': 0.40386815, 'acc': 0.87534952, 'learning_rate': 4.307e-05, 'epoch': 0.58, 'global_step': 360}\n",
      "{'loss': 0.39730177, 'acc': 0.87716103, 'learning_rate': 4.14e-05, 'epoch': 0.6, 'global_step': 370}\n",
      "{'loss': 0.40094142, 'acc': 0.87525463, 'learning_rate': 3.973e-05, 'epoch': 0.61, 'global_step': 380}\n",
      "{'loss': 0.38105037, 'acc': 0.88001919, 'learning_rate': 3.806e-05, 'epoch': 0.63, 'global_step': 390}\n",
      "{'loss': 0.40307741, 'acc': 0.87481976, 'learning_rate': 3.639e-05, 'epoch': 0.65, 'global_step': 400}\n",
      "Train:  65%|██████▍   | 400/618 [21:41<11:38,  3.21s/it]\n",
      "{'eval_loss': 0.39547482, 'eval_acc': 0.87240664, 'eval_runtime': 3.6283, 'eval_samples_per_second': 27.561, 'eval_steps_per_second': 27.561, 'epoch': 0.65, 'global_step': 400}\n",
      "Val: 100%|██████████| 100/100 [00:03<00:00, 28.20it/s]t]\n",
      "[INFO:swift] Saving model checkpoint to /ml/output/model/checkpoint-400\n",
      "{'loss': 0.4126945, 'acc': 0.87186394, 'learning_rate': 3.472e-05, 'epoch': 0.66, 'global_step': 410}\n",
      "{'loss': 0.38192344, 'acc': 0.88026819, 'learning_rate': 3.306e-05, 'epoch': 0.68, 'global_step': 420}\n",
      "{'loss': 0.40945592, 'acc': 0.87196274, 'learning_rate': 3.139e-05, 'epoch': 0.69, 'global_step': 430}\n",
      "{'loss': 0.4063736, 'acc': 0.87294664, 'learning_rate': 2.972e-05, 'epoch': 0.71, 'global_step': 440}\n",
      "{'loss': 0.38606803, 'acc': 0.87636414, 'learning_rate': 2.805e-05, 'epoch': 0.73, 'global_step': 450}\n",
      "{'loss': 0.42150102, 'acc': 0.87209835, 'learning_rate': 2.638e-05, 'epoch': 0.74, 'global_step': 460}\n",
      "{'loss': 0.40928059, 'acc': 0.87474785, 'learning_rate': 2.471e-05, 'epoch': 0.76, 'global_step': 470}\n",
      "{'loss': 0.38865559, 'acc': 0.87709408, 'learning_rate': 2.304e-05, 'epoch': 0.78, 'global_step': 480}\n",
      "{'loss': 0.39822729, 'acc': 0.87394524, 'learning_rate': 2.137e-05, 'epoch': 0.79, 'global_step': 490}\n",
      "{'loss': 0.38081129, 'acc': 0.880583, 'learning_rate': 1.97e-05, 'epoch': 0.81, 'global_step': 500}\n",
      "Train:  81%|████████  | 500/618 [27:07<06:20,  3.22s/it]\n",
      "{'eval_loss': 0.39326638, 'eval_acc': 0.87533006, 'eval_runtime': 3.6956, 'eval_samples_per_second': 27.059, 'eval_steps_per_second': 27.059, 'epoch': 0.81, 'global_step': 500}\n",
      "Val: 100%|██████████| 100/100 [00:03<00:00, 27.72it/s]t]\n",
      "[INFO:swift] Saving model checkpoint to /ml/output/model/checkpoint-500\n",
      "{'loss': 0.40122414, 'acc': 0.87647648, 'learning_rate': 1.803e-05, 'epoch': 0.82, 'global_step': 510}\n",
      "{'loss': 0.40300426, 'acc': 0.87283649, 'learning_rate': 1.636e-05, 'epoch': 0.84, 'global_step': 520}\n",
      "{'loss': 0.4052043, 'acc': 0.87474051, 'learning_rate': 1.469e-05, 'epoch': 0.86, 'global_step': 530}\n",
      "{'loss': 0.37932935, 'acc': 0.88045588, 'learning_rate': 1.302e-05, 'epoch': 0.87, 'global_step': 540}\n",
      "{'loss': 0.37019732, 'acc': 0.88617592, 'learning_rate': 1.135e-05, 'epoch': 0.89, 'global_step': 550}\n",
      "{'loss': 0.3966295, 'acc': 0.87634029, 'learning_rate': 9.68e-06, 'epoch': 0.91, 'global_step': 560}\n",
      "{'loss': 0.3865283, 'acc': 0.87757654, 'learning_rate': 8.01e-06, 'epoch': 0.92, 'global_step': 570}\n",
      "{'loss': 0.37923799, 'acc': 0.88160257, 'learning_rate': 6.34e-06, 'epoch': 0.94, 'global_step': 580}\n",
      "{'loss': 0.39962213, 'acc': 0.87641945, 'learning_rate': 4.67e-06, 'epoch': 0.95, 'global_step': 590}\n",
      "{'loss': 0.37904813, 'acc': 0.88130722, 'learning_rate': 3.01e-06, 'epoch': 0.97, 'global_step': 600}\n",
      "Train:  97%|█████████▋| 600/618 [32:32<00:57,  3.17s/it]\n",
      "{'eval_loss': 0.39279515, 'eval_acc': 0.87485854, 'eval_runtime': 3.6494, 'eval_samples_per_second': 27.401, 'eval_steps_per_second': 27.401, 'epoch': 0.97, 'global_step': 600}\n",
      "Val: 100%|██████████| 100/100 [00:03<00:00, 28.04it/s]t]\n",
      "[INFO:swift] Saving model checkpoint to /ml/output/model/checkpoint-600\n",
      "{'loss': 0.40877252, 'acc': 0.87145739, 'learning_rate': 1.34e-06, 'epoch': 0.99, 'global_step': 610}\n",
      "Train: 100%|██████████| 618/618 [33:37<00:00,  3.19s/it]\n",
      "{'eval_loss': 0.39259773, 'eval_acc': 0.87551867, 'eval_runtime': 3.6762, 'eval_samples_per_second': 27.202, 'eval_steps_per_second': 27.202, 'epoch': 1.0, 'global_step': 618}\n",
      "Val: 100%|██████████| 100/100 [00:03<00:00, 27.83it/s]t]\n",
      "[INFO:swift] Saving model checkpoint to /ml/output/model/checkpoint-618\n",
      "{'train_runtime': 2024.9629, 'train_samples_per_second': 4.889, 'train_steps_per_second': 0.305, 'train_loss': 0.43524854, 'epoch': 1.0, 'global_step': 618}\n",
      "Train: 100%|██████████| 618/618 [33:44<00:00,  3.28s/it]\n",
      "[INFO:swift] last_model_checkpoint: /ml/output/model/checkpoint-618\n",
      "[INFO:swift] best_model_checkpoint: /ml/output/model/checkpoint-618\n",
      "[INFO:swift] images_dir: /ml/output/model/images\n",
      "[INFO:swift] End time of running main: 2024-02-23 14:00:42.899375\n",
      "\n",
      "Training job (trainpiyqdicammv) succeeded, you can check the logs/metrics/output in  the console:\n",
      "https://pai.console.aliyun.com/?regionId=cn-hangzhou&workspaceId=58670#/training/jobs/trainpiyqdicammv\n",
      "oss://lq-pai-test-2-hz/pai/training_job/modelscope_swift_train_20240223_131924_u0orbq/model/\n"
     ]
    }
   ],
   "source": [
    "# Wait for the training job to complete\n",
    "est.wait()\n",
    "\n",
    "# Print the OSS URI of the model produced by the training job; you can download it locally with ossutil or other tools.\n",
    "print(est.model_data())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In the training job above, we used the built-in dataset `blossom-math-zh` via the `--dataset` argument; the Swift framework automatically downloads and prepares the dataset.\n",
    "\n",
    "Swift also supports fine-tuning with custom datasets. Through the `--custom_train_dataset_path` and `--custom_val_dataset_path` arguments, you can point to your own dataset files; for details, see the Swift documentation: [Custom dataset fine-tuning](https://github.com/modelscope/swift/blob/main/docs/source/LLM/%E8%87%AA%E5%AE%9A%E4%B9%89%E4%B8%8E%E6%8B%93%E5%B1%95.md#%E8%87%AA%E5%AE%9A%E4%B9%89%E6%95%B0%E6%8D%AE%E9%9B%86)\n",
    "\n",
    "When submitting a training job on PAI with `ModelScopeEstimator`, you can use data sources such as OSS, NAS, and MaxCompute tables. Through the `inputs` parameter of the `fit` method, you specify the paths of the training datasets; the corresponding data is made available in the training job environment so the job can read it directly. For details, see the documentation: [Use training data](https://alipai.readthedocs.io/zh/latest/user-guide/training/use-data.html)\n",
    "\n",
    "\n",
    "```python\n",
    "\n",
    "from pai.modelscope import ModelScopeEstimator\n",
    "\n",
    "\n",
    "hps = {\n",
    "\t\"custom_train_dataset_path\": \"/ml/input/data/train/<TRAIN_FILE_NAME>\",\n",
    "\t\"custom_val_dataset_path\": \"/ml/input/data/validation/<VALIDATION_FILE_NAME>\",\n",
    "\t# more parameters\n",
    "\t# ...\n",
    "}\n",
    "est = ModelScopeEstimator(\n",
    "\t# The hyperparameters are passed to the training script through $PAI_USER_ARGS,\n",
    "\t# including --custom_train_dataset_path and --custom_val_dataset_path\n",
    "\tcommand=\"python llm_sft.py $PAI_USER_ARGS\",\n",
    "\thyperparameters=hps,\n",
    "\t# more parameters...\n",
    ")\n",
    "\n",
    "\n",
    "# Fine-tune with a custom dataset\n",
    "est.fit(\n",
    "\tinputs={\n",
    "\t\t# A local path or an OSS URI can be used; the dataset is made available under /ml/input/data/{channel_name}/.\n",
    "\t\t\"train\": \"oss://<YourOssBucketName>/<PathToTrainData>\",\n",
    "\t\t# Local files are uploaded to an OSS bucket and then mounted into the training job.\n",
    "\t\t\"validation\": \"/path/to/validation/data\",\n",
    "\t}\n",
    ")\n",
    "```\n",
    "\n"
   ]
  },
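  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sketch of what a custom dataset file can look like (the `query`/`response` JSONL layout below is an assumption; check the Swift custom-dataset documentation linked above for the exact formats your Swift version accepts), the following code writes a minimal JSONL training file:\n",
    "\n",
    "```python\n",
    "import json\n",
    "\n",
    "# Hypothetical example records; replace them with your own data.\n",
    "records = [\n",
    "    {\"query\": \"1加1等于几？\", \"response\": \"1加1等于2。\"},\n",
    "    {\"query\": \"3乘以4等于几？\", \"response\": \"3乘以4等于12。\"},\n",
    "]\n",
    "\n",
    "# Write one JSON object per line (the JSONL format).\n",
    "with open(\"train.jsonl\", \"w\", encoding=\"utf-8\") as f:\n",
    "    for r in records:\n",
    "        f.write(json.dumps(r, ensure_ascii=False) + \"\\n\")\n",
    "```\n"
   ]
  },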
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# After the training job completes, delete the TensorBoard instance to release resources (each account can run at most 5 free TensorBoard instances at a time)\n",
    "tb.delete()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Model Deployment\n",
    "\n",
    "Swift provides a command-line tool that lets developers deploy models as online inference services; for details, see the documentation: [Swift: vLLM inference acceleration and deployment](https://github.com/modelscope/swift/blob/main/docs/source/LLM/VLLM%E6%8E%A8%E7%90%86%E5%8A%A0%E9%80%9F%E4%B8%8E%E9%83%A8%E7%BD%B2.md#%E9%83%A8%E7%BD%B2). In this section, we use the PAI Python SDK to deploy the fine-tuned model to PAI-EAS as an online inference service.\n",
    "\n",
    "\n",
    "### Download and Merge the Model\n",
    "\n",
    "Before deployment, we download the LoRA weights produced by fine-tuning to the local machine and merge them with the original model to obtain a complete model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/opt/conda/lib/python3.10/site-packages/pai/common/oss_utils.py:30: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
      "  from tqdm.autonotebook import tqdm\n",
      "Downloading file: ./qwen-lora-model/checkpoint-600/README.md: 100%|██████████| 125/125 [00:00<00:00, 2.39kB/s]<?, ?it/s]\n",
      "Downloading file: ./qwen-lora-model/checkpoint-600/configuration.json: 100%|██████████| 358/358 [00:00<00:00, 9.42kB/s]\n",
      "Downloading file: ./qwen-lora-model/checkpoint-600/default/adapter_config.json: 100%|██████████| 655/655 [00:00<00:00, 9.47kB/s]\n",
      "Downloading file: ./qwen-lora-model/checkpoint-600/default/adapter_model.safetensors: 100%|██████████| 16.8M/16.8M [00:00<00:00, 31.1MB/s]\n",
      "Downloading file: ./qwen-lora-model/checkpoint-600/generation_config.json: 100%|██████████| 275/275 [00:00<00:00, 7.31kB/s]it/s]\n",
      "Downloading file: ./qwen-lora-model/checkpoint-600/optimizer.pt: 100%|██████████| 33.6M/33.6M [00:00<00:00, 50.2MB/s]\n",
      "Downloading file: ./qwen-lora-model/checkpoint-600/qwen.tiktoken: 100%|██████████| 2.56M/2.56M [00:00<00:00, 14.3MB/s] 3.59it/s]\n",
      "Downloading file: ./qwen-lora-model/checkpoint-600/rng_state.pth: 100%|██████████| 14.2k/14.2k [00:00<00:00, 153kB/s]  3.91it/s]\n",
      "Downloading file: ./qwen-lora-model/checkpoint-600/scheduler.pt: 100%|██████████| 1.06k/1.06k [00:00<00:00, 20.6kB/s]  4.64it/s]\n",
      "Downloading file: ./qwen-lora-model/checkpoint-600/sft_args.json: 100%|██████████| 2.72k/2.72k [00:00<00:00, 19.4kB/s]\n",
      "Downloading file: ./qwen-lora-model/checkpoint-600/special_tokens_map.json: 100%|██████████| 61.0/61.0 [00:00<00:00, 1.20kB/s]/s]\n",
      "Downloading file: ./qwen-lora-model/checkpoint-600/tokenization_qwen.py: 100%|██████████| 9.62k/9.62k [00:00<00:00, 89.9kB/s]\n",
      "Downloading file: ./qwen-lora-model/checkpoint-600/tokenizer_config.json: 100%|██████████| 299/299 [00:00<00:00, 2.86kB/s]28it/s]\n",
      "Downloading file: ./qwen-lora-model/checkpoint-600/trainer_state.json: 100%|██████████| 10.9k/10.9k [00:00<00:00, 67.9kB/s]7it/s]\n",
      "Downloading file: ./qwen-lora-model/checkpoint-600/training_args.bin: 100%|██████████| 6.46k/6.46k [00:00<00:00, 116kB/s].15it/s]\n",
      "Downloading file: ./qwen-lora-model/checkpoint-618/README.md: 100%|██████████| 125/125 [00:00<00:00, 2.39kB/s]\n",
      "Downloading file: ./qwen-lora-model/checkpoint-618/configuration.json: 100%|██████████| 358/358 [00:00<00:00, 19.9kB/s] 8.80it/s]\n",
      "Downloading file: ./qwen-lora-model/checkpoint-618/default/adapter_config.json: 100%|██████████| 655/655 [00:00<00:00, 8.28kB/s]\n",
      "Downloading file: ./qwen-lora-model/checkpoint-618/default/adapter_model.safetensors: 100%|██████████| 16.8M/16.8M [00:00<00:00, 53.2MB/s]\n",
      "Downloading file: ./qwen-lora-model/checkpoint-618/generation_config.json: 100%|██████████| 275/275 [00:00<00:00, 2.11kB/s]\n",
      "Downloading file: ./qwen-lora-model/checkpoint-618/optimizer.pt: 100%|██████████| 33.6M/33.6M [00:00<00:00, 52.9MB/s],  7.18it/s]\n",
      "Downloading file: ./qwen-lora-model/checkpoint-618/qwen.tiktoken: 100%|██████████| 2.56M/2.56M [00:00<00:00, 11.6MB/s]  4.20it/s]\n",
      "Downloading file: ./qwen-lora-model/checkpoint-618/rng_state.pth: 100%|██████████| 14.2k/14.2k [00:00<00:00, 167kB/s],  4.21it/s]\n",
      "Downloading file: ./qwen-lora-model/checkpoint-618/scheduler.pt: 100%|██████████| 1.06k/1.06k [00:00<00:00, 8.77kB/s]\n",
      "Downloading file: ./qwen-lora-model/checkpoint-618/sft_args.json: 100%|██████████| 2.72k/2.72k [00:00<00:00, 31.6kB/s]  5.29it/s]\n",
      "Downloading file: ./qwen-lora-model/checkpoint-618/special_tokens_map.json: 100%|██████████| 61.0/61.0 [00:00<00:00, 896B/s]\n",
      "Downloading file: ./qwen-lora-model/checkpoint-618/tokenization_qwen.py: 100%|██████████| 9.62k/9.62k [00:00<00:00, 113kB/s]it/s]\n",
      "Downloading file: ./qwen-lora-model/checkpoint-618/tokenizer_config.json: 100%|██████████| 299/299 [00:00<00:00, 5.53kB/s]\n",
      "Downloading file: ./qwen-lora-model/checkpoint-618/trainer_state.json: 100%|██████████| 11.3k/11.3k [00:00<00:00, 90.7kB/s]0it/s]\n",
      "Downloading file: ./qwen-lora-model/checkpoint-618/training_args.bin: 100%|██████████| 6.46k/6.46k [00:00<00:00, 84.5kB/s]83it/s]\n",
      "Downloading file: ./qwen-lora-model/images/eval_acc.png: 100%|██████████| 26.2k/26.2k [00:00<00:00, 333kB/s]\n",
      "Downloading file: ./qwen-lora-model/images/eval_loss.png: 100%|██████████| 18.4k/18.4k [00:00<00:00, 181kB/s]04<00:01,  8.76it/s]\n",
      "Downloading file: ./qwen-lora-model/images/eval_runtime.png: 100%|██████████| 19.7k/19.7k [00:00<00:00, 211kB/s]00:01,  8.88it/s]\n",
      "Downloading file: ./qwen-lora-model/images/eval_samples_per_second.png: 100%|██████████| 26.4k/26.4k [00:00<00:00, 193kB/s]7it/s]\n",
      "Downloading file: ./qwen-lora-model/images/eval_steps_per_second.png: 100%|██████████| 25.8k/25.8k [00:00<00:00, 192kB/s].43it/s]\n",
      "Downloading file: ./qwen-lora-model/images/train_acc.png: 100%|██████████| 28.7k/28.7k [00:00<00:00, 399kB/s]05<00:01,  8.08it/s]\n",
      "Downloading file: ./qwen-lora-model/images/train_epoch.png: 100%|██████████| 18.7k/18.7k [00:00<00:00, 188kB/s]\n",
      "Downloading file: ./qwen-lora-model/images/train_learning_rate.png: 100%|██████████| 25.4k/25.4k [00:00<00:00, 225kB/s] 9.09it/s]\n",
      "Downloading file: ./qwen-lora-model/images/train_loss.png: 100%|██████████| 24.1k/24.1k [00:00<00:00, 177kB/s]5<00:01,  8.89it/s]\n",
      "Downloading file: ./qwen-lora-model/images/train_total_flos.png: 100%|██████████| 12.6k/12.6k [00:00<00:00, 127kB/s]1,  8.18it/s]\n",
      "Downloading file: ./qwen-lora-model/images/train_train_loss.png: 100%|██████████| 11.7k/11.7k [00:00<00:00, 133kB/s]0,  8.44it/s]\n",
      "Downloading file: ./qwen-lora-model/images/train_train_runtime.png: 100%|██████████| 13.8k/13.8k [00:00<00:00, 187kB/s]\n",
      "Downloading file: ./qwen-lora-model/images/train_train_samples_per_second.png: 100%|██████████| 17.5k/17.5k [00:00<00:00, 191kB/s]\n",
      "Downloading file: ./qwen-lora-model/images/train_train_steps_per_second.png: 100%|██████████| 14.8k/14.8k [00:00<00:00, 271kB/s]]\n",
      "Downloading file: ./qwen-lora-model/logging.jsonl: 100%|██████████| 7.68k/7.68k [00:00<00:00, 96.9kB/s]\n",
      "Downloading file: ./qwen-lora-model/runs/events.out.tfevents.1707147764.trainb1y4g5b473j-master-0.11.0: 100%|██████████| 21.6k/21.6k [00:00<00:00, 212kB/s]\n",
      "Downloading file: ./qwen-lora-model/sft_args.json: 100%|██████████| 2.72k/2.72k [00:00<00:00, 24.6kB/s]\n",
      "Downloading file: ./qwen-lora-model/training_args.json: 100%|██████████| 4.23k/4.23k [00:00<00:00, 82.4kB/s]:06<00:00,  9.78it/s]\n",
      "Downloading: pai/training_job/modelscope_sdk_train_20240205_233710_7p4x8h/model/: 100%|██████████| 48/48 [00:06<00:00,  7.21it/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "./qwen-lora-model/checkpoint-618\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n"
     ]
    }
   ],
   "source": [
    "from pai.common.oss_utils import download\n",
    "import glob\n",
    "import os\n",
    "\n",
    "\n",
    "# Download the model to the local machine\n",
    "local_model_dir = download(est.model_data(), \"./qwen-lora-model\")\n",
    "\n",
    "\n",
    "# Use the latest checkpoint of the model for deployment\n",
    "checkpoint_dirs = glob.glob(os.path.join(local_model_dir, \"checkpoint-*\"))\n",
    "latest_version = max(int(os.path.basename(d).split(\"-\")[1]) for d in checkpoint_dirs)\n",
    "latest_checkpoint_dir = os.path.join(local_model_dir, f\"checkpoint-{latest_version}\")\n",
    "\n",
    "print(latest_checkpoint_dir)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "In the following code, we use the command-line tool provided by Swift to merge the LoRA weights obtained from training with the original model, producing a complete model.\n"
   ]
  },
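  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Conceptually, merging a LoRA adapter folds the low-rank update back into the base weights, so the adapter is no longer needed at inference time. The following NumPy sketch illustrates the idea behind the merge (the shapes and the `alpha / r` scaling follow the standard LoRA formulation; this is an illustration, not what `swift merge-lora` literally executes):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# LoRA re-parameterizes a weight update as a low-rank product:\n",
    "#   W' = W + (alpha / r) * B @ A, with A: (r, d_in) and B: (d_out, r).\n",
    "rng = np.random.default_rng(0)\n",
    "d_out, d_in, r, alpha = 8, 8, 2, 16\n",
    "\n",
    "W = rng.standard_normal((d_out, d_in))\n",
    "A = rng.standard_normal((r, d_in))\n",
    "B = rng.standard_normal((d_out, r))\n",
    "\n",
    "# Merging bakes the adapter into a single dense weight matrix.\n",
    "W_merged = W + (alpha / r) * B @ A\n",
    "\n",
    "# After merging, one matmul equals the base forward pass plus the adapter path.\n",
    "x = rng.standard_normal(d_in)\n",
    "print(np.allclose(W_merged @ x, W @ x + (alpha / r) * (B @ (A @ x))))  # True\n",
    "```\n"
   ]
  },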
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Install ModelScope Swift\n",
    "!python -m pip install -q -U ms-swift\n",
    "\n",
    "# Merge the model with the Swift merge-lora tool\n",
    "!swift merge-lora --ckpt_dir {latest_checkpoint_dir}\n",
    "\n",
    "\n",
    "merged_model_dir = os.path.join(local_model_dir, f\"checkpoint-{latest_version}-merged\")\n",
    "print(merged_model_dir)\n",
    "\n",
    "# Inspect the merged model\n",
    "!ls {merged_model_dir}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Upload the model to an OSS bucket so that the inference service can load it later."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Uploading file: qwen-lora-model/checkpoint-618-merged/configuration_qwen.py: 100%|██████████| 2.35k/2.35k [00:00<00:00, 30.6kB/s]\n",
      "Uploading file: qwen-lora-model/checkpoint-618-merged/model.safetensors.index.json: 100%|██████████| 19.5k/19.5k [00:00<00:00, 864kB/s]\n",
      "Uploading file: qwen-lora-model/checkpoint-618-merged/generation_config.json: 100%|██████████| 275/275 [00:00<00:00, 14.7kB/s]\n",
      "Uploading file: qwen-lora-model/checkpoint-618-merged/model-00001-of-00004.safetensors: 100%|██████████| 4.99G/4.99G [00:30<00:00, 161MB/s] \n",
      "Uploading file: qwen-lora-model/checkpoint-618-merged/modeling_qwen.py: 100%|██████████| 55.6k/55.6k [00:00<00:00, 2.28MB/s]\n",
      "Uploading file: qwen-lora-model/checkpoint-618-merged/configuration.json: 100%|██████████| 76.0/76.0 [00:00<00:00, 3.51kB/s]\n",
      "Uploading file: qwen-lora-model/checkpoint-618-merged/qwen.tiktoken: 100%|██████████| 2.56M/2.56M [00:00<00:00, 30.8MB/s]\n",
      "Uploading file: qwen-lora-model/checkpoint-618-merged/sft_args.json: 100%|██████████| 2.72k/2.72k [00:00<00:00, 41.9kB/s]\n",
      "Uploading file: qwen-lora-model/checkpoint-618-merged/tokenizer_config.json: 100%|██████████| 299/299 [00:00<00:00, 20.3kB/s]\n",
      "Uploading file: qwen-lora-model/checkpoint-618-merged/model-00002-of-00004.safetensors: 100%|██████████| 4.98G/4.98G [00:31<00:00, 160MB/s] \n",
      "Uploading file: qwen-lora-model/checkpoint-618-merged/config.json: 100%|██████████| 1.10k/1.10k [00:00<00:00, 54.9kB/s]\n",
      "Uploading file: qwen-lora-model/checkpoint-618-merged/model-00003-of-00004.safetensors: 100%|██████████| 4.23G/4.23G [00:26<00:00, 162MB/s] \n",
      "Uploading file: qwen-lora-model/checkpoint-618-merged/qwen_generation_utils.py: 100%|██████████| 14.6k/14.6k [00:00<00:00, 704kB/s]\n",
      "Uploading file: qwen-lora-model/checkpoint-618-merged/cpp_kernels.py: 100%|██████████| 1.92k/1.92k [00:00<00:00, 42.6kB/s]\n",
      "Uploading file: qwen-lora-model/checkpoint-618-merged/model-00004-of-00004.safetensors: 100%|██████████| 1.24G/1.24G [00:07<00:00, 159MB/s] \n",
      "Uploading file: qwen-lora-model/checkpoint-618-merged/tokenization_qwen.py: 100%|██████████| 9.62k/9.62k [00:00<00:00, 249kB/s]\n",
      "Uploading file: qwen-lora-model/checkpoint-618-merged/special_tokens_map.json: 100%|██████████| 61.0/61.0 [00:00<00:00, 3.77kB/s]\n"
     ]
    }
   ],
   "source": [
    "from pai.common.oss_utils import upload\n",
    "\n",
    "\n",
    "# Upload the model to the OSS bucket of the current session.\n",
    "model_data_uri = upload(\n",
    "    merged_model_dir,\n",
    "    \"modelscope-swift-example/qwen-lora-merged-model/\",\n",
    ")\n",
    "print(model_data_uri)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Deploy the Model\n",
    "\n",
    "Using the `ModelScopeModel` object provided by the SDK, you can configure the model to deploy, the container image, the machine instance type, and other parameters, and deploy the model as an online inference service.\n",
    "\n",
    "In the following code, we use the built-in `vLLM` inference image provided by PAI and deploy the merged model with Swift as an online inference service.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "View the service detail by accessing the console URI: \n",
      "https://pai.console.aliyun.com/?regionId=cn-hangzhou#/eas/serviceDetail/qwen_7b_chat_v5/detail\n"
     ]
    }
   ],
   "source": [
    "from pai.modelscope import ModelScopeModel\n",
    "from pai.common.utils import random_str\n",
    "\n",
    "m = ModelScopeModel(\n",
    "# OSS URI of the model; by default it is mounted at /eas/workspace/model\n",
    "    model_data=model_data_uri,\n",
    "# Use the vLLM inference image provided by PAI\n",
    "    image_uri=\"eas-registry-vpc.{}.cr.aliyuncs.com/pai-eas/chat-llm-webui:3.0-vllm\".format(\n",
    "        sess.region_id\n",
    "    ),\n",
    "# Dependencies installed before the inference service starts\n",
    "    requirements=[\"ms-swift>=1.6.0\"],\n",
    "# Startup command of the inference service\n",
    "    command=\"swift deploy --ckpt_dir /eas/workspace/model/ --port 8000 --host 0.0.0.0\",\n",
    "    port=8000,\n",
    ")\n",
    "\n",
    "predictor = m.deploy(\n",
    "# Name of the inference service\n",
    "    service_name=\"qwen1_5_7b_chat_{}\".format(random_str(8)),\n",
    "# Machine instance type used by the inference service\n",
    "    instance_type=\"ecs.gn6e-c12g1.3xlarge\",  # 1 * NVIDIA V100 (32 GB GPU Memory)\n",
    "    # instance_type=\"ml.gu7i.c16m60.1-gu30\",      # 1 * GU30 GPU (24 GB GPU Memory)\n",
    ")\n",
    "\n",
    "print(predictor.service_name)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Call the Inference Service\n",
    "\n",
    "Large language models deployed with `swift deploy` can be called through an OpenAI-style HTTP API. You can call the inference service with the openai SDK (recommended) or with the `raw_predict` method provided by `Predictor`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Install the openai SDK.\n",
    "!pip install -q openai"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "我是一个人工智能助手，能够回答问题、提供建议、生成代码、聊天等任务，帮助用户解决问题和提高效率。\n",
      "\n",
      "Answer: I am an AI assistant that can answer questions, provide suggestions, generate code, chat, and help users solve problems and improve efficiency."
     ]
    }
   ],
   "source": [
    "import openai\n",
    "\n",
    "\n",
    "# Get the endpoint and access token of the inference service\n",
    "endpoint = predictor.internet_endpoint.rstrip(\"/\") + \"/v1\"\n",
    "access_token = predictor.access_token\n",
    "\n",
    "client = openai.OpenAI(\n",
    "    base_url=endpoint,\n",
    "    api_key=access_token,\n",
    ")\n",
    "\n",
    "# Call the chat completions API with streaming enabled\n",
    "completion = client.chat.completions.create(\n",
    "    model=\"qwen1half-7b-chat\",\n",
    "    messages=[\n",
    "        {\"role\": \"user\", \"content\": \"一句话介绍一下你自己\"},\n",
    "    ],\n",
    "    max_tokens=128,\n",
    "    stream=True,\n",
    ")\n",
    "\n",
    "for chunk in completion:\n",
    "    if not chunk.choices:\n",
    "        continue\n",
    "    content = chunk.choices[0].delta.content\n",
    "    if content:\n",
    "        print(content, end=\"\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "You can also call the inference service directly through the `predictor` object returned by the `deploy` method."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\n",
      "    \"model\": \"qwen-7b-chat\",\n",
      "    \"choices\": [\n",
      "        {\n",
      "            \"index\": 0,\n",
      "            \"message\": {\n",
      "                \"role\": \"assistant\",\n",
      "                \"content\": \"我是一个人工智能助手，可以回答问题、提供信息、进行对话等。我能够帮助用户解决问题，提供有用的信息和建议，以及进行各种任务。\"\n",
      "            },\n",
      "            \"finish_reason\": \"stop\"\n",
      "        }\n",
      "    ],\n",
      "    \"usage\": {\n",
      "        \"prompt_tokens\": 22,\n",
      "        \"completion_tokens\": 34,\n",
      "        \"total_tokens\": 56\n",
      "    },\n",
      "    \"id\": \"chatcmpl-516695f2072d4d3c9aa4bb7b5a5962f2\",\n",
      "    \"object\": \"chat.completion\",\n",
      "    \"created\": 1707296083\n",
      "}\n"
     ]
    }
   ],
   "source": [
    "import json\n",
    "\n",
    "resp = predictor.raw_predict(\n",
    "    path=\"/v1/chat/completions\",\n",
    "    method=\"POST\",\n",
    "    data={\n",
    "        \"model\": \"qwen-7b-chat\",\n",
    "        \"messages\": [{\"role\": \"user\", \"content\": \"一句话介绍一下你自己\"}],\n",
    "    },\n",
    ")\n",
    "print(json.dumps(resp.json(), indent=4, ensure_ascii=False))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Summary\n",
    "\n",
    "In this document, we showed how to fine-tune and deploy a model on PAI with the ModelScope Swift framework through the PAI Python SDK. With the PAI Python SDK, developers can easily use a variety of open-source frameworks, including Swift, TensorFlow, PyTorch, and HuggingFace transformers, to develop and deploy models. See the [documentation](https://alipai.readthedocs.io/) and the [examples repository](https://github.com/aliyun/pai-examples) to learn more about the PAI Python SDK."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "\n",
    "## References\n",
    "\n",
    "- Alibaba Cloud Platform for AI (PAI): https://www.aliyun.com/product/bigdata/learn\n",
    "\n",
    "- ModelScope Swift documentation: https://github.com/modelscope/swift/blob/main/docs/source/GetStarted/%E5%BF%AB%E9%80%9F%E4%BD%BF%E7%94%A8.md\n",
    "\n",
    "- PAI Python SDK documentation: https://alipai.readthedocs.io/\n",
    "\n",
    "- PAI examples repository: https://github.com/aliyun/pai-examples\n",
    "\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "base",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "undefined.undefined.undefined"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
