---
license: llama3
datasets:
- LooksJuicy/ruozhiba
language:
- zh
---
## Fine-tuning Llama-3-8B-Instruct on ruozhiba
### Model
- https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct
### Dataset
- https://huggingface.co/datasets/LooksJuicy/ruozhiba
### Training tool
- https://github.com/hiyouga/LLaMA-Factory
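
For readers who want to reproduce the setup outside LLaMA-Factory, here is a minimal sketch of attaching LoRA adapters to the base model with `peft` and `transformers`. The rank, alpha, dropout, and target modules below are assumptions for illustration, not the values used for this checkpoint:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Load the base instruct model and attach LoRA adapters; only the adapters are trained during SFT.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
lora_cfg = LoraConfig(
    task_type="CAUSAL_LM",
    r=8,                                  # assumed rank (not stated in this card)
    lora_alpha=16,                        # assumed scaling factor
    lora_dropout=0.05,                    # assumed dropout
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()        # confirms only the adapter weights are trainable
```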
### Evaluation
The fine-tuned model and the original models were evaluated with OpenCompass (https://github.com/open-compass/OpenCompass/) on CEval and MMLU; a rough sketch of this kind of multiple-choice scoring follows the model list below.
The evaluated models are:
- Llama-3-8B
- Llama-3-8B-Instruct
- LLama3-Instruct-sft-ruozhiba: Llama-3-8B-Instruct fine-tuned on the ruozhiba data with LoRA-based SFT
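
CEval and MMLU are multiple-choice benchmarks; OpenCompass handles prompting and scoring internally. As a rough illustration only (not the OpenCompass pipeline; the prompt format and model path are assumptions), one common way to score such questions is to compare the model's next-token logits for the option letters:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # or the fine-tuned checkpoint
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

def pick_option(question: str, choices: dict) -> str:
    """Return the option letter whose next-token logit is highest."""
    prompt = question + "\n" + "\n".join(f"{k}. {v}" for k, v in choices.items()) + "\nAnswer:"
    inputs = tok(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        next_token_logits = model(**inputs).logits[0, -1]
    # Each option letter (" A", " B", ...) is assumed to encode to a single token.
    option_ids = {k: tok.encode(" " + k, add_special_tokens=False)[0] for k in choices}
    return max(option_ids, key=lambda k: next_token_logits[option_ids[k]].item())

print(pick_option("1 + 1 = ?", {"A": "1", "B": "2", "C": "3", "D": "4"}))
```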
### Evaluation hardware
8 × A800 GPUs
### Results
| Model | CEval | MMLU |
|--------------------------|-------|------|
| LLama3 | 49.91 | 66.62|
| LLama3-Instruct | 50.55 | 67.15|
| LLama3-Instruct-sft-ruozhiba-3epoch | 50.87 | 67.51|
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- num_epochs: 3.0
- mixed_precision_training: Native AMP
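
For reference, the listed values map roughly onto Hugging Face `TrainingArguments` as sketched below. LLaMA-Factory assembles an equivalent configuration internally; the `output_dir` and `optim` names here are illustrative, not taken from this run:

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="llama3-instruct-sft-ruozhiba",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,   # 1 per device x 8 GPUs x 2 steps = total train batch size 16
    num_train_epochs=3.0,
    lr_scheduler_type="cosine",
    warmup_steps=20,
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
    optim="adamw_torch",             # Adam with betas=(0.9, 0.999), epsilon=1e-8 (library defaults)
)
```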