Junrulu committed on
Commit 05b23e3
1 Parent(s): 6860f57

Update README.md

Files changed (1)
  1. README.md +62 -3
README.md CHANGED
@@ -1,3 +1,62 @@
- ---
- license: llama3
- ---
+ ---
+ model-index:
+ - name: Junrulu/Llama-3-8B-Instruct-Iterative-SamPO
+   results: []
+ datasets:
+ - HuggingFaceH4/ultrafeedback_binarized
+ language:
+ - en
+ base_model: meta-llama/Meta-Llama-3-8B-Instruct
+ license: llama3
+ ---
+
+ # Model Card for Llama-3-8B-Instruct-Iterative-SamPO
+
+ This repository provides a fine-tuned version of Llama-3-8B-Instruct, trained with our proposed [SamPO](https://github.com/LuJunru/SamPO) algorithm. We comply with all licenses that apply to the Llama 3 release.
+
+ ## Performance
+
+ | Model | GSM8K | IFEval | PiQA | MMLU | TruthfulQA | AlpacaEval2 | LC AlpacaEval2 | Length in Tokens |
+ | ----- | ----- | ------ | ---- | ---- | ---------- | ----------- | -------------- | ---------------- |
+ | **Llama3-8B-Instruct** | 75.06 | 49.40 | 80.69 | 63.85 | 36.47 | 22.57 | 22.92 | 421 |
+ | **Llama3-8B-Instruct-DPO** | 75.59 | 51.80 | **81.94** | 64.06 | 40.39 | 23.34 | 23.20 | 422 |
+ | **Llama3-8B-Instruct-Iterative-DPO** | 74.91 | 52.52 | 81.66 | 64.02 | 39.90 | 23.92 | 25.50 | 403 |
+ | **Llama3-8B-Instruct-Iterative-SamPO** | **77.81** | **60.55** | 81.18 | **64.12** | **44.07** | **30.68** | **35.14** | 377 |
+
+ ## Evaluation Details
+ Five conditional benchmarks, evaluated with [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) (a reproduction sketch follows this list):
+ - GSM8K: 8-shot, reporting strict-match accuracy
+ - IFEval: 3-shot, reporting instruction-level strict accuracy
+ - PiQA: 3-shot, reporting accuracy
+ - MMLU: 0-shot, reporting normalized accuracy
+ - TruthfulQA: 3-shot, reporting accuracy in the single-true mc1 setting
+
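+ A reproduction sketch using the harness's Python entry point. The dtype, batch size, and harness version here are illustrative assumptions, and the exact flags behind the reported numbers are not recorded in this card, so expect small deviations:
+ ```python
+ import lm_eval  # assumes lm-evaluation-harness >= 0.4
+
+ results = lm_eval.simple_evaluate(
+     model="hf",
+     model_args="pretrained=Junrulu/Llama-3-8B-Instruct-Iterative-SamPO,dtype=bfloat16",
+     tasks=["gsm8k"],  # swap in ifeval, piqa, mmlu, truthfulqa_mc1 for the other rows
+     num_fewshot=8,    # 8-shot for GSM8K; see the few-shot settings listed above
+     batch_size=16,
+ )
+ print(results["results"]["gsm8k"])
+ ```
+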
+ One open-ended benchmark, using the official [alpaca_eval](https://github.com/tatsu-lab/alpaca_eval/):
+ - AlpacaEval2: win rate (%) of the model's outputs against GPT-4-turbo's reference responses, judged by GPT-4-turbo
+ - LC AlpacaEval2: length-debiased win rate (%) of AlpacaEval2
+ - Length in Tokens: average output length on AlpacaEval2, counted in tokens with the Llama 3 tokenizer (a measurement sketch follows this list)
+
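+ The token-length statistic can be recomputed from the model's AlpacaEval2 generations. A minimal sketch, assuming the generations are already collected in a list of strings (`generations` is a placeholder name):
+ ```python
+ from transformers import AutoTokenizer
+
+ # Count lengths with the Llama 3 tokenizer, as described above.
+ tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
+
+ def average_length_in_tokens(generations: list[str]) -> float:
+     lengths = [len(tokenizer(text)["input_ids"]) for text in generations]
+     return sum(lengths) / len(lengths)
+ ```
+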
+ ## Input Format
+
+ The model is trained to use the following format:
+ ```
+ <|start_header_id|>user<|end_header_id|>
+
+ {PROMPT}<|eot_id|>
+ <|start_header_id|>assistant<|end_header_id|>
+
+ {Response}
+ ```
+
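+ If this repository keeps the base Llama-3-Instruct tokenizer configuration, `apply_chat_template` should produce the format shown above. A minimal sketch; the question and generation settings are illustrative only:
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "Junrulu/Llama-3-8B-Instruct-Iterative-SamPO"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
+
+ # Build the prompt with the tokenizer's chat template and generate a response.
+ messages = [{"role": "user", "content": "What is the capital of France?"}]
+ inputs = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+ outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
+ print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
+ ```
+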
+ ## Training hyperparameters
+
+ The following hyperparameters were used during DPO/SamPO training (a scaling sketch follows this list):
+ - DPO beta: 0.1
+ - learning_rate: 4e-7 * sqrt(Num of Nodes)
+ - total_train_batch_size: 128 * Num of Nodes
+ - optimizer: AdamW with beta1 0.9, beta2 0.999 and epsilon 1e-8
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.1
+ - weight_decay: 0.0
+ - num_epochs: 3.0
+ - The input format shown above is applied to every training sample
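+
+ The learning rate and batch size scale with the number of training nodes. A small illustrative helper (not taken from the training code) that makes the arithmetic explicit:
+ ```python
+ import math
+
+ def scaled_hparams(num_nodes: int) -> dict:
+     """Node-dependent hyperparameters as listed above."""
+     return {
+         "learning_rate": 4e-7 * math.sqrt(num_nodes),
+         "total_train_batch_size": 128 * num_nodes,
+     }
+
+ # Example: a 4-node run uses lr = 8e-7 and a total batch size of 512.
+ print(scaled_hparams(4))
+ ```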