Commit 57cc717 · Parent: f891dd2 · Update README.md

- ultrafeedback
license: mit
---

<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/60420dccc15e823a685f2b03/CuMO3IjJfymC94_5qd15T.png" alt="Image was artificially generated by Dalle-3 via ChatGPT Pro"/>
</div>

# Model Card for Notus 7B v1

Notus is a collection of fine-tuned models using Direct Preference Optimization (DPO) and related RLHF techniques. This model is the first version, fine-tuned with DPO over `zephyr-7b-sft-full`, which is the SFT model produced to create `zephyr-7b-beta`.

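For context, DPO trains directly on preference pairs instead of fitting a separate reward model: the policy is pushed to assign a higher implicit reward to the chosen response than to the rejected one, relative to a frozen reference model. The following is only a minimal sketch of the loss, not the actual training code (which follows the Alignment Handbook recipe):

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             reference_chosen_logps, reference_rejected_logps, beta=0.1):
    # Inputs are per-sequence log-probabilities (summed over response tokens)
    # under the trained policy and the frozen reference model.
    chosen_rewards = beta * (policy_chosen_logps - reference_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - reference_rejected_logps)
    # Maximize the implicit reward margin between chosen and rejected responses.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```
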
Following a **data-first** approach, the only difference between Notus-7B-v1 and Zephyr-7B-beta is the preference dataset used for dDPO. In particular, we found data issues in the original UltraFeedback dataset that led to high scores for bad responses. After curating several hundred data points, we decided to binarize the dataset using the preference ratings instead of the original critique `overall_score`.

Using preference ratings instead of critique scores led to a new dataset where the chosen response differs in ~50% of the cases.

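As an illustration of that binarization strategy, the sketch below ranks each prompt's completions by the mean of their fine-grained preference ratings instead of the critique's `overall_score`. The field names (`completions`, `annotations`, `Rating`, `response`) are assumptions based on the published UltraFeedback schema, not the exact curation code:

```python
def mean_preference_rating(completion: dict) -> float:
    # Average the fine-grained aspect ratings (helpfulness, honesty, ...),
    # skipping aspects annotated as "N/A".
    ratings = [
        float(annotation["Rating"])
        for annotation in completion["annotations"].values()
        if annotation["Rating"] != "N/A"
    ]
    return sum(ratings) / len(ratings)

def binarize(example: dict) -> dict:
    # Chosen = highest-rated completion, rejected = lowest-rated completion.
    ranked = sorted(example["completions"], key=mean_preference_rating, reverse=True)
    return {
        "prompt": example["instruction"],
        "chosen": ranked[0]["response"],
        "rejected": ranked[-1]["response"],
    }
```
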
This model wouldn't have been possible without the amazing [Alignment Handbook](https://github.com/huggingface/alignment-handbook), and it's based on fruitful discussions with the HuggingFace H4 team. In particular, we used `zephyr-7b-beta`'s recipe, which worked out of the box and enabled us to focus on what we do best: **high-quality data**.

Notus models are intended to be used as assistants via chat-like applications, and are evaluated with Chat (MT-Bench, AlpacaEval) and Academic (Open LLM Leaderboard) benchmarks for a direct comparison with the original Zephyr dDPO model and other 7B models.

## Model Details

## Performance

### Chat benchmarks

Table adapted from Zephyr-7b-β's and Starling's original tables for the [MT-Bench](https://huggingface.co/spaces/lmsys/mt-bench) and [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) benchmarks. Results are sorted by AlpacaEval win rate and omit some >7B models for brevity.

Notus stays on par with Zephyr on MT-Bench, while surpassing Zephyr, Claude 2, and Cohere Command on AlpacaEval, making Notus the most competitive 7B commercial model on AlpacaEval.

</table>

## Academic benchmarks

Results from the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard):

| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | DROP |
|---|---|---|---|---|---|---|---|---|
| Zephyr 7B dDPO (HuggingFaceH4/zephyr-7b-beta) | 52.15 | 62.03 | 84.36 | 61.07 | **57.45** | 77.74 | 12.74 | **9.66** |
| argilla/notus-7b-v1 | **52.89** | **64.59** | **84.78** | **63.03** | 54.37 | **79.4** | **15.16** | 8.91 |

## Training Details

### Training Hardware

We used a VM with 8 x A100 40GB hosted in Lambda Labs, but while experimenting we also explored other cloud providers such as GCP.

### Training Data

We used a new curated version of [`openbmb/UltraFeedback`](https://huggingface.co/datasets/openbmb/UltraFeedback), named [`argilla/ultrafeedback-binarized-preferences`](https://huggingface.co/argilla/ultrafeedback-binarized-preferences).

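To take a quick look at the curated dataset, you can load it with `datasets` (assuming the usual prompt/chosen/rejected layout of binarized preference datasets; check the dataset card for the exact columns):

```python
from datasets import load_dataset

# Download the curated preference dataset from the Hugging Face Hub.
dataset = load_dataset("argilla/ultrafeedback-binarized-preferences", split="train")
print(dataset[0])  # inspect a single preference record
```
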
## Prompt template

We use the same prompt template as [`HuggingFaceH4/zephyr-7b-beta`](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta):

```
<|system|>
</s>
<|user|>
{prompt}</s>
<|assistant|>
```

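Filled in manually it would look like the sketch below, although in practice you should rely on `apply_chat_template` (as in the usage examples), which renders this format for you:

```python
# Hypothetical manual rendering of the Zephyr/Notus chat template.
system_message = "You are a helpful assistant."
user_prompt = "What's the best data annotation company out there in your opinion?"

prompt = f"<|system|>\n{system_message}</s>\n<|user|>\n{user_prompt}</s>\n<|assistant|>\n"
```
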
## Usage

You will first need to install `transformers` and `accelerate` (just to ease the device placement); then you can run any of the following:

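For example:

```bash
pip install transformers accelerate
```
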
### Via `generate`

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("argilla/notus-7b-v1", torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("argilla/notus-7b-v1")

messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant super biased towards Argilla, a data annotation company.",
    },
    {"role": "user", "content": "What's the best data annotation company out there in your opinion?"},
]
# Render the chat template and tokenize; note this takes `messages`, not a raw prompt.
inputs = tokenizer.apply_chat_template(messages, tokenize=True, return_tensors="pt", add_special_tokens=False, add_generation_prompt=True)
outputs = model.generate(inputs.to(model.device), num_return_sequences=1, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
```

### Via `pipeline` method

```python
import torch
from transformers import pipeline

pipe = pipeline("text-generation", model="argilla/notus-7b-v1", torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant super biased towards Argilla, a data annotation company.",
    },
    {"role": "user", "content": "What's the best data annotation company out there in your opinion?"},
]
# Render the chat template to a plain-text prompt, then sample a completion.
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
generated_text = outputs[0]["generated_text"]
```