qwp4w3hyb committed on
Commit
cdc4720
1 Parent(s): c508dcf

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,23 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ imat-bf16-gmerged.dat filter=lfs diff=lfs merge=lfs -text
+ sfr-iterative-dpo-llama-3-8b-r-bf16.gguf filter=lfs diff=lfs merge=lfs -text
+ sfr-iterative-dpo-llama-3-8b-r-imat-IQ1_S.gguf filter=lfs diff=lfs merge=lfs -text
+ sfr-iterative-dpo-llama-3-8b-r-imat-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
+ sfr-iterative-dpo-llama-3-8b-r-imat-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
+ sfr-iterative-dpo-llama-3-8b-r-imat-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
+ sfr-iterative-dpo-llama-3-8b-r-imat-IQ2_XXS.gguf filter=lfs diff=lfs merge=lfs -text
+ sfr-iterative-dpo-llama-3-8b-r-imat-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
+ sfr-iterative-dpo-llama-3-8b-r-imat-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
+ sfr-iterative-dpo-llama-3-8b-r-imat-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
+ sfr-iterative-dpo-llama-3-8b-r-imat-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
+ sfr-iterative-dpo-llama-3-8b-r-imat-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
+ sfr-iterative-dpo-llama-3-8b-r-imat-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
+ sfr-iterative-dpo-llama-3-8b-r-imat-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ sfr-iterative-dpo-llama-3-8b-r-imat-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ sfr-iterative-dpo-llama-3-8b-r-imat-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ sfr-iterative-dpo-llama-3-8b-r-imat-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ sfr-iterative-dpo-llama-3-8b-r-imat-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ sfr-iterative-dpo-llama-3-8b-r-imat-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ sfr-iterative-dpo-llama-3-8b-r-imat-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,143 @@
+ ---
+ license: cc-by-nc-nd-3.0
+ pipeline_tag: text-generation
+ base_model: Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R
+ tags:
+ - salesforce
+ - llama
+ - llama-3
+ - instruct
+ - finetune
+ - gguf
+ - imatrix
+ - importance matrix
+ model-index:
+ - name: SFR-Iterative-DPO-LLaMA-3-8B-R-iMat-GGUF
+   results: []
+ ---
+ 
+ # Quant Infos
+ 
+ - Quants computed with an importance matrix for reduced quantization loss
+ - GGUF & imatrix generated from the bf16 weights for "optimal" accuracy loss
+ - Wide coverage of gguf quant types, from Q8_0 down to IQ1_S
+ - Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [dc685be46622a8fabfd57cfa804237c8f15679b8](https://github.com/ggerganov/llama.cpp/commit/dc685be46622a8fabfd57cfa804237c8f15679b8) (master as of 2024-05-12)
+ - Imatrix generated with [this](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) multi-purpose dataset:
+ ```
+ ./imatrix -c 512 -m $model_name-f16.gguf -f $llama_cpp_path/groups_merged.txt -o $out_path/imat-f16-gmerged.dat
+ ```
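+ 
+ The quants themselves are then produced by llama.cpp's `quantize` tool with the imatrix supplied. A minimal batch sketch of that step, assuming the file names used in this repo (the exact invocation and paths are illustrative, not part of the original upload):
+ 
+ ```python
+ # Hedged sketch: batch-quantize the bf16 GGUF with the generated imatrix.
+ # Binary location and file naming scheme are assumptions mirroring this repo.
+ import subprocess
+ 
+ quant_types = [
+     "IQ1_S", "IQ2_XXS", "IQ2_XS", "IQ2_S", "IQ2_M",
+     "IQ3_XXS", "IQ3_XS", "IQ3_S", "IQ3_M", "IQ4_XS", "IQ4_NL",
+     "Q4_0", "Q4_K_S", "Q4_K_M", "Q5_K_S", "Q5_K_M", "Q6_K", "Q8_0",
+ ]
+ base = "sfr-iterative-dpo-llama-3-8b-r"
+ for qtype in quant_types:
+     subprocess.run(
+         ["./quantize", "--imatrix", "imat-bf16-gmerged.dat",
+          f"{base}-bf16.gguf", f"{base}-imat-{qtype}.gguf", qtype],
+         check=True,  # abort on the first failed quantization
+     )
+ ```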
+ 
+ # Original Model Card:
+ 
+ 
+ # SFR-Iterative-DPO-Llama-3-8B-R
+ 
+ ## Introduction
+ We release **SFR-Iterative-DPO-LLaMA-3-8B-R**, a state-of-the-art instruct model for its size class.
+ On all three widely used instruct-model benchmarks (**Alpaca-Eval-V2**, **MT-Bench**, and **Chat-Arena-Hard**), our model outperforms all models of similar size (e.g., LLaMA-3-8B-it), most larger open-source models (e.g., Mixtral-8x7B-it),
+ and strong proprietary models (e.g., GPT-3.5-turbo-0613). The model is trained on open-source datasets without any additional human or GPT-4 labeling.
+ 
+ ## Model Releases
+ - [SFT model](https://huggingface.co/Salesforce/SFR-SFT-LLaMA-3-8B-R)
+ - [Reward model](https://huggingface.co/Salesforce/SFR-RM-LLaMA-3-8B-R)
+ - [RLHF model](https://huggingface.co/Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R)
+ 
+ 
+ ## Training methods
+ We have developed a simple and efficient online RLHF recipe for LLM instruct training. Our recipe is DPO-based, and thus much cheaper and simpler to train and tune than PPO-based approaches.
+ Unlike widely used offline DPO, the online component of our approach effectively mitigates distribution shifts during policy optimization.
+ For a detailed exposition, please refer to our accompanying technical report.
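+ 
+ For reference, here is a minimal PyTorch sketch of the pairwise DPO objective that such recipes optimize (tensor names and the beta value are illustrative, not the actual training code):
+ 
+ ```python
+ # Pairwise DPO loss: prefer the chosen response over the rejected one,
+ # measured as policy-vs-reference log-probability ratios.
+ import torch
+ import torch.nn.functional as F
+ 
+ def dpo_loss(policy_chosen_logps: torch.Tensor,
+              policy_rejected_logps: torch.Tensor,
+              ref_chosen_logps: torch.Tensor,
+              ref_rejected_logps: torch.Tensor,
+              beta: float = 0.1) -> torch.Tensor:
+     chosen_ratio = policy_chosen_logps - ref_chosen_logps
+     rejected_ratio = policy_rejected_logps - ref_rejected_logps
+     # -log(sigmoid(x)) written stably as softplus(-x)
+     return F.softplus(-beta * (chosen_ratio - rejected_ratio)).mean()
+ ```
+ 
+ In the online (iterative) variant, fresh response pairs are sampled from the current policy and ranked by the reward model each round, so the preference data tracks the policy's shifting output distribution instead of a fixed offline dataset.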
+ 
+ 
+ ## Chat Benchmarks
+ 
+ | **Model** | **Size** | **Method** | **LC Alpaca-Eval-V2** | **MT-Bench** | **Chat-Arena-Hard** |
+ |-------------------------|----------|-------------------|-----------------------|--------------|---------------------|
+ | **Small Open-Sourced Models** | | | | | |
+ | Gemma-7B-it | 7B | SFT | 10.4 | 6.38 | 7.5 |
+ | Zephyr-7B-beta | 7B | Vanilla DPO | 13.1 | 7.34 | - |
+ | Mistral-7B-v0.2-it | 7B | SFT | 17.1 | 7.51 | 12.6 |
+ | Open-Chat-0106 | 7B | SFT | 15.6 | 7.8 | - |
+ | Starling-7B-beta | 7B | PPO | 25.8 | 8.12 | 23.0 |
+ | LLaMA-3-8B-it | 8B | RS+DPO+PPO | 22.9 | 8.16 | 20.6 |
+ | **Ours** | | | | | |
+ | Ours (SFT baseline) | 8B | SFT | 10.2 | 7.69 | 5.6 |
+ | Ours (DPO baseline) | 8B | Vanilla DPO | 22.5 | 8.17 | 22.4 |
+ | Ours (Online RLHF) | 8B | Iterative DPO | **37.2** | **8.46** | **29.1** |
+ | **Large Open-Sourced Models** | | | | | |
+ | Vicuna-33b-v1.3 | 33B | SFT | 17.6 | 7.12 | 8.6 |
+ | Yi-34B-Chat | 34B | SFT | 27.2 | - | 23.1 |
+ | Mixtral-8x7B-it | 45B* | SFT | 23.7 | 8.30 | 23.4 |
+ | Tulu-2-DPO-70B | 70B | Vanilla DPO | 21.2 | 7.89 | 15.0 |
+ | LLaMA-3-70B-it | 70B | RS+DPO+PPO | 34.4 | 8.95 | 41.1 |
+ | Mixtral-8x22B-it | 141B* | SFT | 30.9 | 8.66 | 36.4 |
+ | **Proprietary Models** | | | | | |
+ | GPT-3.5-turbo-1106 | - | - | 19.3 | 8.35 | 18.9 |
+ | GPT-3.5-turbo-0613 | - | - | 22.7 | 8.39 | 24.8 |
+ | GPT-4-0613 | - | - | 30.2 | 9.18 | 37.9 |
+ | Claude-3-Opus | - | - | 40.5 | 9.00 | 60.4 |
+ | GPT-4 Turbo (04/09) | - | - | 55.0 | - | 82.6 |
+ 
+ 
+ ## Academic Benchmarks
+ 
+ | **Model** | **Size** | **Method** | **GSM-8K** | **MMLU** | **HumanEval** | **TruthfulQA** | **ARC** | **MBPP** |
+ |----------------------------|----------|-----------------|------------|----------|---------------|----------------|---------|----------|
+ | LLaMA-3-8B-it | 8B | RS+DPO+PPO | 79.6 | 66.0 | 61.6 | 43.9 | 59.5 | 61.1 |
+ | Ours (SFT baseline) | 8B | SFT | 74.2 | 64.7 | 65.2 | 53.4 | 61.4 | 62.3 |
+ | Ours (DPO baseline) | 8B | Vanilla DPO | 79.8 | 64.5 | 63.4 | 61.8 | 65.2 | 60.3 |
+ | Ours (Iterative RLHF) | 8B | Iterative DPO | 80.7 | 65.3 | 64.6 | 60.4 | 64.3 | 60.8 |
+ 
+ 
+ 
+ ## Usage
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ 
+ device = "cuda"
+ 
+ model = AutoModelForCausalLM.from_pretrained("Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R")
+ tokenizer = AutoTokenizer.from_pretrained("Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R")
+ 
+ messages = [
+     {"role": "user", "content": "I'm trying to teach myself to have nicer handwriting. Can you help?"},
+ ]
+ 
+ # add_generation_prompt=True appends the assistant header so the model
+ # answers the user turn instead of continuing it
+ model_inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
+ 
+ model_inputs = model_inputs.to(device)
+ model.to(device)
+ 
+ output_tokens = model.generate(model_inputs, max_new_tokens=1024, do_sample=True)
+ model_outputs = tokenizer.batch_decode(output_tokens)
+ print(model_outputs[0])
+ ```
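+ 
+ Since this repository ships GGUF quants rather than the original safetensors, here is a minimal sketch for running one of them locally through the `llama-cpp-python` bindings (the repo id, file choice, and settings are assumptions for illustration, not part of the original card):
+ 
+ ```python
+ from huggingface_hub import hf_hub_download  # pip install huggingface_hub
+ from llama_cpp import Llama                  # pip install llama-cpp-python
+ 
+ # Download one quant from this repo (repo id assumed from the model-index name).
+ gguf_path = hf_hub_download(
+     repo_id="qwp4w3hyb/SFR-Iterative-DPO-LLaMA-3-8B-R-iMat-GGUF",
+     filename="sfr-iterative-dpo-llama-3-8b-r-imat-Q4_K_M.gguf",
+ )
+ 
+ llm = Llama(
+     model_path=gguf_path,
+     n_ctx=8192,       # Llama-3 context length
+     n_gpu_layers=-1,  # offload all layers when built with GPU support
+ )
+ 
+ out = llm.create_chat_completion(
+     messages=[{"role": "user", "content": "I'm trying to teach myself to have nicer handwriting. Can you help?"}],
+     max_tokens=512,
+ )
+ print(out["choices"][0]["message"]["content"])
+ ```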
+ 
+ 
+ ## Limitations
+ SFR-Iterative-DPO-LLaMA-3-8B-R is a research model developed as part of our RLHF initiative at Salesforce.
+ While safety and ethical considerations are integral to our alignment process,
+ there remains the possibility that the model could generate offensive or unethical content, particularly under adversarial conditions.
+ We are committed to continuously improving our models to minimize such risks, and we encourage responsible usage.
+ 
+ ## Citation
+ Please cite our papers if you find our models useful.
+ 
+ ```bibtex
+ @misc{dong2024rlhf,
+       title={RLHF Workflow: From Reward Modeling to Online RLHF},
+       author={Hanze Dong and Wei Xiong and Bo Pang and Haoxiang Wang and Han Zhao and Yingbo Zhou and Nan Jiang and Doyen Sahoo and Caiming Xiong and Tong Zhang},
+       year={2024},
+       eprint={2405.07863},
+       archivePrefix={arXiv},
+       primaryClass={cs.LG}
+ }
+ 
+ @misc{xiong2024iterative,
+       title={Iterative Preference Learning from Human Feedback: Bridging Theory and Practice for RLHF under KL-Constraint},
+       author={Wei Xiong and Hanze Dong and Chenlu Ye and Ziqi Wang and Han Zhong and Heng Ji and Nan Jiang and Tong Zhang},
+       year={2024},
+       eprint={2312.11456},
+       archivePrefix={arXiv},
+       primaryClass={cs.LG}
+ }
+ ```
imat-bf16-gmerged.dat ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ee53f56f2874d69c49737c949b2f85d0ce5eba44576dae4037eb5ceecfc6f9cd
+ size 4988185
sfr-iterative-dpo-llama-3-8b-r-bf16.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9662fac39ebb5cd1efc15758d2a1158beead31afc4de58fbbb8f9a2e6aa90339
+ size 16068890912
sfr-iterative-dpo-llama-3-8b-r-imat-IQ1_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:78e90d03a961e7be9161af5a9e4b2252d53c3ee3d3f0f2c3fff600a358768142
+ size 2019627616
sfr-iterative-dpo-llama-3-8b-r-imat-IQ2_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:23fdb43c8f18011fa2156a3711241e7a74d70c4d1913c7995a578f52b7242e76
+ size 2948280928
sfr-iterative-dpo-llama-3-8b-r-imat-IQ2_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0703b7a6fcd83badd3180052226b7ca8139c13c0563f6e2769918fd19763a492
+ size 2758488672
sfr-iterative-dpo-llama-3-8b-r-imat-IQ2_XS.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:700dcbc8b551af6d4e2638ec3eece08d10ad515d641e245a0606a3038ff7382a
+ size 2605781600
sfr-iterative-dpo-llama-3-8b-r-imat-IQ2_XXS.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9bdaac3b6f1b8401382dd667acb36c057360c0c18e1f13f4579aaf7aabdca7ad
+ size 2399212128
sfr-iterative-dpo-llama-3-8b-r-imat-IQ3_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d98e52a724f9ef2acd8688cc7825a5b776f789468be177bf808c9c12b5f04687
+ size 3784823392
sfr-iterative-dpo-llama-3-8b-r-imat-IQ3_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1cd74ab4fa8673330c6abaa8386d530c1f25f5d695252d73d06de38ec4b57835
+ size 3682325088
sfr-iterative-dpo-llama-3-8b-r-imat-IQ3_XS.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d08930c8b3cdc29c30f636f089b67739dda5c4bdb149333de4aa19fcc1afb5aa
+ size 3518747232
sfr-iterative-dpo-llama-3-8b-r-imat-IQ3_XXS.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:763c30bfe2c434917057c66f56ac51b9dbd5515e5b09ef445eea6f475af6702e
+ size 3274912352
sfr-iterative-dpo-llama-3-8b-r-imat-IQ4_NL.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f95ab77f9ab096ced6ff03525f9fc70087c90b5fb66d11bddc471b7cb2fc50df
+ size 4677988960
sfr-iterative-dpo-llama-3-8b-r-imat-IQ4_XS.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:05d1a19f0ef23540e53356d4c1476194b433c53f798155bb2c27becb87139d6d
+ size 4447662688
sfr-iterative-dpo-llama-3-8b-r-imat-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7cc7b1459aa4a706752d3146e5d900e68d7af692960b7741abdb624985a9643d
+ size 4675891808
sfr-iterative-dpo-llama-3-8b-r-imat-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8d3d68484fcfd893799e5f6776f4c7c6ef902fb3728f106dba468c9e5c61277b
+ size 4920734304
sfr-iterative-dpo-llama-3-8b-r-imat-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0704dae075b19ef21d5d9ca870ab1e3a12e973aa71bbc05433aa464f7537320b
+ size 4692669024
sfr-iterative-dpo-llama-3-8b-r-imat-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2c2ae93d7b565aa96008986a419bf7c6456c1e31d92736f2961d7630420d90a9
+ size 5732987488
sfr-iterative-dpo-llama-3-8b-r-imat-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:30e443649a761f1bcbb0eb50a8dfb93a09d32bcb099a102920f4f5b34aab7915
+ size 5599294048
sfr-iterative-dpo-llama-3-8b-r-imat-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:393af1163456e7176163de61ed25bcba2a101be796bb0f31c9bdbf4c29bec421
+ size 6596006496
sfr-iterative-dpo-llama-3-8b-r-imat-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:61e322d34bf50ded57638458bd051424b5591fd51d5d7942e2b7bc90867646d6
+ size 8540770912