---
license: llama3
base_model: Magpie-Align/Llama-3-8B-Magpie-Pro-MT-SFT-v0.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: Llama-3-8B-Magpie-Pro-MT-SFT-v0.1
  results: []
library_name: transformers
pipeline_tag: text-generation
---

# 🐦 Llama-3-8B-Magpie-Pro-MT-SFT-v0.1-GGUF

This is a quantized version of [Magpie-Align/Llama-3-8B-Magpie-Pro-MT-SFT-v0.1](https://huggingface.co/Magpie-Align/Llama-3-8B-Magpie-Pro-MT-SFT-v0.1), created using llama.cpp.
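
The sketch below shows one way to run a GGUF file from this repo locally with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). The filename is a placeholder; substitute whichever quantization you download from this repo.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The GGUF filename is a placeholder; replace it with the file you downloaded
# from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3-8B-Magpie-Pro-MT-SFT-v0.1.Q4_K_M.gguf",  # placeholder filename
    n_ctx=8192,             # matches the 8192-token sequence length used in training
    n_gpu_layers=-1,        # offload all layers to GPU when one is available
    chat_format="llama-3",  # the model follows the official Llama 3 chat template
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what instruction tuning does in two sentences."}],
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```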

# Model Description

Project Web: [https://magpie-align.github.io/](https://magpie-align.github.io/)

arXiv Technical Report: [https://arxiv.org/abs/2406.08464](https://arxiv.org/abs/2406.08464)

Codes: [https://github.com/magpie-align/magpie](https://github.com/magpie-align/magpie)

## Abstract
<details><summary>Click Here</summary>
High-quality instruction data is critical for aligning large language models (LLMs). Although some models, such as Llama-3-Instruct, have open weights, their alignment data remain private, which hinders the democratization of AI. High human labor costs and a limited, predefined scope for prompting prevent existing open-source data creation methods from scaling effectively, potentially limiting the diversity and quality of public alignment datasets. Is it possible to synthesize high-quality instruction data at scale by extracting it directly from an aligned LLM? We present a self-synthesis method for generating large-scale alignment data named Magpie. Our key observation is that aligned LLMs like Llama-3-Instruct can generate a user query when we input only the left-side templates up to the position reserved for user messages, thanks to their auto-regressive nature. We use this method to prompt Llama-3-Instruct and generate 4 million instructions along with their corresponding responses. We perform a comprehensive analysis of the extracted data and select 300K high-quality instances. To compare Magpie data with other public instruction datasets, we fine-tune Llama-3-8B-Base with each dataset and evaluate the performance of the fine-tuned models. Our results indicate that in some tasks, models fine-tuned with Magpie perform comparably to the official Llama-3-8B-Instruct, despite the latter being enhanced with 10 million data points through supervised fine-tuning (SFT) and subsequent feedback learning. We also show that using Magpie solely for SFT can surpass the performance of previous public datasets utilized for both SFT and preference optimization, such as direct preference optimization with UltraFeedback. This advantage is evident on alignment benchmarks such as AlpacaEval, ArenaHard, and WildBench.
</details><br>
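
As a rough sketch of the self-synthesis idea summarized above (illustrative only, not the authors' exact pipeline), the aligned model is given just the pre-query part of its chat template and left to complete the user turn on its own:

```python
# Illustrative sketch of Magpie-style "left-side template" prompting; not the
# authors' exact pipeline. Assumes access to meta-llama/Meta-Llama-3-8B-Instruct.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Only the template tokens up to the position reserved for the user message.
pre_query = "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
inputs = tokenizer(pre_query, add_special_tokens=False, return_tensors="pt").to(model.device)

# The aligned model auto-regressively fills in a plausible user query.
output = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=1.0,
    eos_token_id=tokenizer.convert_tokens_to_ids("<|eot_id|>"),
)
query = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(query)  # a synthesized instruction; a response can then be generated for it in turn
```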

## About This Model

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the [Magpie-Align/Magpie-Pro-MT-300K-v0.1](https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-MT-300K-v0.1) dataset.

With SFT only, it achieves performance comparable to the official Llama-3-8B-Instruct model:

- **Alpaca Eval 2 (GPT-4-Turbo-1106): 24.21 (LC), 25.19 (WR)**
- **Alpaca Eval 2 (Llama-3-8B-Instruct): 52.92 (LC), 54.80 (WR)**
- **Arena Hard: 20.4**

## Other Information

**License**: Please follow the [Meta Llama 3 Community License](https://llama.meta.com/llama3/license).

**Conversation Template**: Please use the Llama 3 **official chat template** for the best performance.
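
For the full-precision model, this template can be applied with the tokenizer's built-in `apply_chat_template`; a minimal sketch (the example prompt is illustrative) follows.

```python
# Minimal sketch of chatting with the full-precision model using the official
# Llama 3 chat template via transformers; the example prompt is illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Magpie-Align/Llama-3-8B-Magpie-Pro-MT-SFT-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Give me three tips for writing clear documentation."}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant header so the model starts its reply
    return_tensors="pt",
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```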

## Citation

If you find the model, data, or code useful, please cite our paper:
```
@misc{xu2024magpie,
  title={Magpie: Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing},
  author={Zhangchen Xu and Fengqing Jiang and Luyao Niu and Yuntian Deng and Radha Poovendran and Yejin Choi and Bill Yuchen Lin},
  year={2024},
  eprint={2406.08464},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 2

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8807        | 0.0007 | 1    | 0.9001          |
| 0.5113        | 0.3337 | 464  | 0.5178          |
| 0.4668        | 0.6673 | 928  | 0.4792          |
| 0.4492        | 1.0010 | 1392 | 0.4582          |
| 0.3498        | 1.3205 | 1856 | 0.4575          |
| 0.3525        | 1.6542 | 2320 | 0.4555          |

### Framework versions

- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`
```yaml

base_model: meta-llama/Meta-Llama-3-8B
model_type: LlamaForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: Magpie-Align/Magpie-Pro-MT-300K-v0.1
    type: sharegpt
    conversation: llama3
dataset_prepared_path: last_run_prepared
val_set_size: 0.001
output_dir: ./out_Llama-3-8B-Magpie-Pro-300K-MT

sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 2
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 2e-5

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 100
evals_per_epoch: 3
eval_table_size:
saves_per_epoch: 3
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  pad_token: <|end_of_text|>

```

</details><br>