"""使用 Hugging Face Transformers 和 DeepSpeed 在单机多卡上微调 BERT 模型（适用于分类任务）
在华为云上使用Conda环境运行DeepSpeed时遇到MPI库缺失的问题，可以通过以下步骤解决：
    # 在conda环境安装OpenMPI（conda-forge渠道最可靠）
        conda install -c conda-forge openmpi mpi4py
    验证MPI安装：
        # 检查mpi库路径
        conda list openmpi
        find ${CONDA_PREFIX} -name "libmpi*"
    设置环境变量（关键步骤）：
      在运行脚本前添加：
        export LD_LIBRARY_PATH=${CONDA_PREFIX}/lib:$LD_LIBRARY_PATH
        export OPAL_PREFIX=${CONDA_PREFIX}
    单卡运行（避免MPI问题）：deepspeed --no_local_rank --num_gpus=1 deepspeed_train.py
    或多卡运行（如果实际有多个GPU）：deepspeed --include="localhost:0,1" --master_port=29500 deepspeed_train.py
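Before launching, you can sanity-check from Python that a libmpi shared library is discoverable at all. A minimal sketch using only the standard library (note that `ctypes.util.find_library` is a heuristic: on Linux it consults ldconfig and the compiler, so a Conda-only OpenMPI may still need the environment variables above):

```python
# Minimal sketch: check whether a libmpi shared library is discoverable.
import ctypes.util

def mpi_visible() -> bool:
    """Return True if a libmpi shared library can be located on this system."""
    return ctypes.util.find_library("mpi") is not None

print("libmpi found:", mpi_visible())
```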
The contents of ds_config.json:
{
  "train_micro_batch_size_per_gpu": 8,
  "gradient_accumulation_steps": 2,
  "optimizer": {
    "type": "AdamW",
    "params": {
      "lr": 5e-5,
      "weight_decay": 0.01
    }
  },
  "fp16": {
    "enabled": true
  },
  "zero_optimization": {
    "stage": 2,
    "offload_optimizer": {
      "device": "cpu"
    }
  },
  "steps_per_print": 50
}
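With these settings, the effective batch size per optimizer step is the product of the per-GPU micro-batch size, the gradient-accumulation steps, and the number of GPUs. A quick check, assuming the two-GPU launch shown above:

```python
# Effective batch size implied by ds_config.json (2-GPU launch assumed)
micro_batch = 8    # train_micro_batch_size_per_gpu
grad_accum = 2     # gradient_accumulation_steps
num_gpus = 2       # e.g. --include="localhost:0,1"
effective_batch = micro_batch * grad_accum * num_gpus
print(effective_batch)  # 32 samples per optimizer step
```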
"""
# This will not run locally; it requires a machine with an NVIDIA GPU.
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    TrainingArguments,
    Trainer
)
from datasets import load_dataset

# 1. Load the model and tokenizer
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# 2. Prepare the dataset (a small subset for quick testing)
dataset = load_dataset("imdb", split="train[:10%]+test[:10%]")
def tokenize_fn(examples):
    # Note: return_tensors="pt" is omitted on purpose. With batched=True,
    # datasets.map expects list outputs, and the Trainer handles tensor conversion.
    return tokenizer(examples["text"], truncation=True, max_length=128, padding="max_length")
tokenized_dataset = dataset.map(tokenize_fn, batched=True).train_test_split(test_size=0.2)

# 3. Define training arguments (these must match the DeepSpeed config exactly)
training_args = TrainingArguments(
    output_dir="./results",
    per_device_train_batch_size=8,
    num_train_epochs=1,
    logging_dir="./logs",
    logging_steps=10,
    save_steps=100,
    gradient_accumulation_steps=2,
    learning_rate=5e-5,
    weight_decay=0.01,
    fp16=True,
    deepspeed="./ds_config.json"
)
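# The arguments above must stay consistent with ds_config.json. A small
# hypothetical helper (not part of Transformers or DeepSpeed) that flags
# mismatches between the two before training starts:
def ds_mismatches(args: dict, ds_config: dict) -> list:
    """Return Trainer-argument names whose values differ from the DeepSpeed config."""
    pairs = {
        "per_device_train_batch_size": "train_micro_batch_size_per_gpu",
        "gradient_accumulation_steps": "gradient_accumulation_steps",
    }
    return [name for name, key in pairs.items()
            if args.get(name) != ds_config.get(key)]

# Sanity check with the values used in this script:
assert ds_mismatches(
    {"per_device_train_batch_size": 8, "gradient_accumulation_steps": 2},
    {"train_micro_batch_size_per_gpu": 8, "gradient_accumulation_steps": 2},
) == []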

# 4. Initialize the Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset["train"],
    eval_dataset=tokenized_dataset["test"],
)

# 5. Start training
trainer.train()
trainer.save_model()  # explicitly write the final model to output_dir
print("Training finished! Model saved to ./results")