Model Card for wei123602/llama2-13b-fintune2

Fine-tuned from llama-2-13b on the huangyt/FINETUNE2 dataset, using hyperparameters close to those of Platypus.

Fine-Tuning Information

  • GPU: RTX 4090 (single GPU, 24564 MiB VRAM)
  • base model: meta-llama/Llama-2-13b-hf
  • dataset: huangyt/FINETUNE2 (about 30,000 training examples)
  • peft_type: LoRA (a configuration sketch follows this list)
  • lora_rank: 16
  • lora_alpha: 8
  • lora_dropout: 0.05
  • lora_target: gate_proj, up_proj, down_proj
  • per_device_train_batch_size: 8
  • gradient_accumulation_steps: 8
  • learning_rate: 4e-4
  • epoch: 1
  • batch size: 8
  • microbatch size: 4
  • warmup steps: 100
  • lr_scheduler: cosine
  • cutoff length: 2048
  • precision: bf16
  • group_by_length: True
  • load_in_8bit: True
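
For reference, here is a minimal sketch of how the hyperparameters above might be wired together with the Hugging Face transformers and peft libraries. The training script itself is not part of this card, so the overall structure (and anything not listed above, such as the output directory name) is an assumption rather than the author's actual code.

```python
# Sketch of the LoRA setup described above; only the hyperparameter values
# come from this card, the surrounding script structure is assumed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "meta-llama/Llama-2-13b-hf"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base,
    load_in_8bit=True,           # load_in_8bit: True
    torch_dtype=torch.bfloat16,  # precision: bf16
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,                                                   # lora_rank
    lora_alpha=8,
    lora_dropout=0.05,
    target_modules=["gate_proj", "up_proj", "down_proj"],   # lora_target
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

training_args = TrainingArguments(
    output_dir="llama2-13b-finetune2-lora",  # hypothetical output path
    per_device_train_batch_size=8,
    gradient_accumulation_steps=8,
    learning_rate=4e-4,
    num_train_epochs=1,
    warmup_steps=100,
    lr_scheduler_type="cosine",
    bf16=True,
    group_by_length=True,
)
# A Trainer (or trl's SFTTrainer) would then tokenize huangyt/FINETUNE2 to a
# cutoff length of 2048 and run the single training epoch.
```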

Fine-Tuning Detail

  • train_loss: 0.0823
  • train_runtime: 02:40:01

Evaluation

  • Evaluation results are taken from HuggingFaceH4/open_llm_leaderboard
  • Compared against Llama-2-13b on four benchmarks: ARC, HellaSwag, MMLU, and TruthfulQA

| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA |
|---|---|---|---|---|---|
| meta-llama/Llama-2-13b-hf | 56.9 | 58.11 | 80.97 | 54.34 | 34.17 |
| meta-llama/Llama-2-13b-chat-hf | 59.93 | 59.04 | 81.94 | 54.64 | 44.12 |
| CHIH-HUNG/llama-2-13b-FINETUNE2_3w | 58.24 | 58.62 | 82.32 | 54.25 | 38.17 |
| CHIH-HUNG/llama-2-13b-huangyt_Fintune_1_17w-q_k_v_o_proj | 58.49 | 59.73 | 81.06 | 54.53 | 38.64 |
| CHIH-HUNG/llama-2-13b-FINETUNE2_3w-q_k_v_o_proj | 58.21 | 58.53 | 82.47 | 53.9 | 37.92 |
| CHIH-HUNG/llama-2-13b-FINETUNE2_3w-gate_up_down_proj | 58.81 | 57.42 | 82.42 | 55.57 | 39.19 |
| wei123602/llama2-13b-fintune2 | (pending) | (pending) | (pending) | (pending) | (pending) |
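
For completeness, a minimal sketch of loading the model for inference with transformers. The repository name comes from this card; the prompt template (Alpaca-style, as used by Platypus) and the generation settings are assumptions, and the snippet assumes the published checkpoint is a full merged model rather than a bare LoRA adapter.

```python
# Inference sketch; prompt format and generation parameters are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wei123602/llama2-13b-fintune2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = (
    "### Instruction:\n"
    "Summarize the benefits of LoRA fine-tuning in two sentences.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```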