flydust committed on
Commit 4c6d616
1 Parent(s): 80cdcf7

Update README.md

Files changed (1)
  1. README.md +83 -68
README.md CHANGED
@@ -5,12 +5,90 @@ tags:
  - axolotl
  - generated_from_trainer
  model-index:
- - name: Llama-3-8B-SynDa-70BQA-300K-Filtered-MR-L
  results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

  [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
  <details><summary>See axolotl config</summary>
@@ -27,25 +105,18 @@ load_in_4bit: false
  strict: false

  datasets:
- - path: SynDa/Llama-3-70B-SynDa-MultiRound-300K-Filtered-L
  type: sharegpt
  conversation: llama3
  dataset_prepared_path: last_run_prepared
  val_set_size: 0.001
- output_dir: ./out_Llama-3-70B-SynDa-300K-Multi-Round2-L

  sequence_len: 8192
  sample_packing: true
  eval_sample_packing: false
  pad_to_sequence_len: true

- wandb_project: SynDa
- wandb_entity:
- wandb_watch:
- wandb_name: Llama-3-70B-SynDa-300K-MR-L-2EP-FFT
- wandb_log_model:
- hub_model_id: SynDa/Llama-3-8B-SynDa-70BQA-300K-Filtered-MR-L
-
  gradient_accumulation_steps: 8
  micro_batch_size: 1
  num_epochs: 2
@@ -83,59 +154,3 @@ special_tokens:
  ```

  </details><br>
-
- # Llama-3-8B-SynDa-70BQA-300K-Filtered-MR-L
-
- This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the None dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.4555
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 2e-05
- - train_batch_size: 1
- - eval_batch_size: 1
- - seed: 42
- - distributed_type: multi-GPU
- - num_devices: 4
- - gradient_accumulation_steps: 8
- - total_train_batch_size: 32
- - total_eval_batch_size: 4
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_steps: 100
- - num_epochs: 2
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:------:|:----:|:---------------:|
- | 0.8807 | 0.0007 | 1 | 0.9001 |
- | 0.5113 | 0.3337 | 464 | 0.5178 |
- | 0.4668 | 0.6673 | 928 | 0.4792 |
- | 0.4492 | 1.0010 | 1392 | 0.4582 |
- | 0.3498 | 1.3205 | 1856 | 0.4575 |
- | 0.3525 | 1.6542 | 2320 | 0.4555 |
-
-
- ### Framework versions
-
- - Transformers 4.40.2
- - Pytorch 2.3.0+cu121
- - Datasets 2.19.1
- - Tokenizers 0.19.1
 
  - axolotl
  - generated_from_trainer
  model-index:
+ - name: Llama-3-8B-Magpie-Pro-MT-SFT-v0.1
  results: []
  ---

+ # 🐦 Llama-3-8B-Magpie-Pro-MT-SFT-v0.1
+
+ Project Web: [https://magpie-align.github.io/](https://magpie-align.github.io/)
+
+ arXiv Technical Report: [https://arxiv.org/abs/2406.08464](https://arxiv.org/abs/2406.08464)
+
+ Code: [https://github.com/magpie-align/magpie](https://github.com/magpie-align/magpie)
+
+ ## Abstract
+ <details><summary>Click Here</summary>
+ High-quality instruction data is critical for aligning large language models (LLMs). Although some models, such as Llama-3-Instruct, have open weights, their alignment data remain private, which hinders the democratization of AI. High human labor costs and a limited, predefined scope for prompting prevent existing open-source data creation methods from scaling effectively, potentially limiting the diversity and quality of public alignment datasets. Is it possible to synthesize high-quality instruction data at scale by extracting it directly from an aligned LLM? We present a self-synthesis method for generating large-scale alignment data named Magpie. Our key observation is that aligned LLMs like Llama-3-Instruct can generate a user query when we input only the left-side templates up to the position reserved for user messages, thanks to their auto-regressive nature. We use this method to prompt Llama-3-Instruct and generate 4 million instructions along with their corresponding responses. We perform a comprehensive analysis of the extracted data and select 300K high-quality instances. To compare Magpie data with other public instruction datasets, we fine-tune Llama-3-8B-Base with each dataset and evaluate the performance of the fine-tuned models. Our results indicate that in some tasks, models fine-tuned with Magpie perform comparably to the official Llama-3-8B-Instruct, despite the latter being enhanced with 10 million data points through supervised fine-tuning (SFT) and subsequent feedback learning. We also show that using Magpie solely for SFT can surpass the performance of previous public datasets utilized for both SFT and preference optimization, such as direct preference optimization with UltraFeedback. This advantage is evident on alignment benchmarks such as AlpacaEval, ArenaHard, and WildBench.
+ </details><br>
+
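Below is a minimal sketch of the self-synthesis idea described in the abstract: feed an aligned model only the chat-template prefix that precedes the user turn, and sample the continuation as a synthetic user instruction. The generator id and decoding settings here are illustrative assumptions, not the exact pipeline used to build this dataset.

```python
# Illustrative sketch of Magpie-style prompting: give the model only the
# template prefix reserved for a user message and let it write the query itself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed generator model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Llama 3 chat-template prefix up to the slot reserved for the user message.
prefix = "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
inputs = tokenizer(prefix, return_tensors="pt", add_special_tokens=False).to(model.device)

# Sampling the continuation yields a synthetic user instruction.
out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=1.0, top_p=1.0)
instruction = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(instruction)
```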
+ ## About This Model
+
+ This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the [Magpie-Align/Magpie-Pro-MT-300K-v0.1](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-MT-300K-v0.1) dataset.
+
+ With SFT only, it achieves performance comparable to the official Llama-3-8B-Instruct model!
+
+ - **Alpaca Eval 2 (GPT-4-Turbo-1106): 24.21 (LC), 25.19 (WR)**
+ - **Alpaca Eval 2 (Llama-3-8B-Instruct): 52.92 (LC), 54.80 (WR)**
+ - **Arena Hard: 20.4**
+
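A minimal sketch of inspecting the SFT data named above with 🤗 Datasets; the `train` split name is an assumption.

```python
# Quick look at the SFT data; the axolotl config below treats it as
# sharegpt-style multi-turn conversations.
from datasets import load_dataset

ds = load_dataset("Magpie-Align/Magpie-Pro-MT-300K-v0.1", split="train")
print(ds)     # number of rows and column names
print(ds[0])  # first multi-turn example
```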
+ ## Other Information
+
+ **License**: Please follow the [Meta Llama 3 Community License](https://llama.meta.com/llama3/license).
+
+ **Conversation Template**: Please use the Llama 3 **official chat template** for the best performance.
+
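A minimal usage sketch with the official Llama 3 chat template via `apply_chat_template`; the repo id below is an assumption inferred from the model name on this card.

```python
# Minimal chat example using the official Llama 3 chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Magpie-Align/Llama-3-8B-Magpie-Pro-MT-SFT-v0.1"  # assumed Hub id; adjust to the actual path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Give me three tips for writing clear documentation."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```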
+ ## Citation
+
+ If you find the model, data, or code useful, please cite our paper:
+ ```
+ @misc{xu2024magpie,
+ title={Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing},
+ author={Zhangchen Xu and Fengqing Jiang and Luyao Niu and Yuntian Deng and Radha Poovendran and Yejin Choi and Bill Yuchen Lin},
+ year={2024},
+ eprint={2406.08464},
+ archivePrefix={arXiv},
+ primaryClass={cs.CL}
+ }
+ ```
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 1
+ - eval_batch_size: 1
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 4
+ - gradient_accumulation_steps: 8
+ - total_train_batch_size: 32
+ - total_eval_batch_size: 4
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 100
+ - num_epochs: 2
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:------:|:----:|:---------------:|
+ | 0.8807 | 0.0007 | 1 | 0.9001 |
+ | 0.5113 | 0.3337 | 464 | 0.5178 |
+ | 0.4668 | 0.6673 | 928 | 0.4792 |
+ | 0.4492 | 1.0010 | 1392 | 0.4582 |
+ | 0.3498 | 1.3205 | 1856 | 0.4575 |
+ | 0.3525 | 1.6542 | 2320 | 0.4555 |
+
+ ### Framework versions
+
+ - Transformers 4.40.2
+ - Pytorch 2.3.0+cu121
+ - Datasets 2.19.1
+ - Tokenizers 0.19.1

  [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
  <details><summary>See axolotl config</summary>
 
  strict: false

  datasets:
+ - path: Magpie-Align/Magpie-Pro-MT-300K-v0.1
  type: sharegpt
  conversation: llama3
  dataset_prepared_path: last_run_prepared
  val_set_size: 0.001
+ output_dir: ./out_Llama-3-8B-Magpie-Pro-300K-MT

  sequence_len: 8192
  sample_packing: true
  eval_sample_packing: false
  pad_to_sequence_len: true

  gradient_accumulation_steps: 8
  micro_batch_size: 1
  num_epochs: 2

  ```

  </details><br>