alvarobartt (HF staff) committed
Commit e2232ed
Parent: 2761b29

Update README.md

Files changed (1)
  1. README.md +18 -121
README.md CHANGED
@@ -1,6 +1,6 @@
  ---
  model-index:
- - name: notus-7b-dpo
  results: []
  datasets:
  - argilla/ultrafeedback-binarized-avg-rating-for-dpo
@@ -16,13 +16,10 @@ tags:
  license: apache-2.0
  ---

- # Model Card for Notus 7B

  <div align="center">
- <img src="https://cdn-uploads.huggingface.co/production/uploads/60f0608166e5701b80ed3f02/LU-vKiC0R7UxxITrwE1F_.png"/>
- <p style="text-align: center;">
- Image was artificially generated by Dalle-3 via ChatGPT Pro
- </p>
  </div>

  Notus is going to be a collection of fine-tuned models using DPO, similar to Zephyr, but mainly focused
@@ -44,57 +41,32 @@ also using DPO.

  ### Model Sources [optional]

- - **Repository:** https://github.com/argilla-io/notus-7b-dpo
  - **Paper:** N/A
  - **Demo:** https://argilla-notus-chat-ui.hf.space/

- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]

  ## Training Details

  ### Training Data

- <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]

  ### Training hyperparameters

@@ -152,7 +124,7 @@ The following hyperparameters were used during training:
  - Datasets 2.14.6
  - Tokenizers 0.14.1

- ## Evaluation

  - Loss: 0.4730
  - Rewards/chosen: -3.5289
@@ -162,79 +134,4 @@ The following hyperparameters were used during training:
  - Logps/rejected: -316.3751
  - Logps/chosen: -334.3053
  - Logits/rejected: -2.1644
- - Logits/chosen: -2.4556
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Data Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
- ## Technical Specifications
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- 8 x A100 40GB
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed]
 
  ---
  model-index:
+ - name: notus-7b-v1
  results: []
  datasets:
  - argilla/ultrafeedback-binarized-avg-rating-for-dpo
 
  license: apache-2.0
  ---

+ # Model Card for Notus 7B v1

  <div align="center">
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/60f0608166e5701b80ed3f02/LU-vKiC0R7UxxITrwE1F_.png" alt="Image was artificially generated by Dalle-3 via ChatGPT Pro"/>
  </div>

  Notus is going to be a collection of fine-tuned models using DPO, similar to Zephyr, but mainly focused
 

  ### Model Sources [optional]

+ - **Repository:** https://github.com/argilla-io/notus-7b
  - **Paper:** N/A
  - **Demo:** https://argilla-notus-chat-ui.hf.space/

+ ### Model Date

+ Notus 7B v1 was trained throughout November 2023. The data, generated by GPT-4 without the use of external resources, has a knowledge cutoff of September 2021.

+ ## Evaluation

+ We ran the evaluation using [`EleutherAI/lm-eval-harness`](https://github.com/EleutherAI/lm-evaluation-harness/tree/big-refactor) from the `big-refactor` branch, aiming to mimic the [Open LLM Leaderboard by HuggingFace H4](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), but running everything on our VMs instead, as we're still experimenting.
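For anyone who wants to approximate this setup, a minimal sketch using the harness's Python API follows. The checkpoint name (`argilla/notus-7b-v1`), task identifiers, and few-shot counts are illustrative assumptions read off the table below, not the exact launch configuration.

```python
# Minimal sketch of a leaderboard-style evaluation with lm-eval-harness
# (big-refactor branch). Install it first, e.g.:
#   pip install git+https://github.com/EleutherAI/lm-evaluation-harness@big-refactor
import lm_eval

# Few-shot counts mirror the table below; task names follow the
# big-refactor task registry (assumed, check your installed version).
TASKS = {
    "arc_challenge": 25,
    "hellaswag": 10,
    "mmlu": 5,
    "truthfulqa_mc2": 0,
    "winogrande": 5,
    "gsm8k": 5,
    "drop": 3,
}

for task, num_fewshot in TASKS.items():
    results = lm_eval.simple_evaluate(
        model="hf",  # Hugging Face transformers backend
        model_args="pretrained=argilla/notus-7b-v1,dtype=bfloat16",
        tasks=[task],
        num_fewshot=num_fewshot,
        batch_size="auto",
    )
    print(task, results["results"][task])
```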

+ From a first evaluation on the benchmark, we could see that Notus 7B DPO **slightly improved** over Zephyr 7B Beta/Alpha and Mistral 7B, judging by the average across the 7 leaderboard tasks.

+ | Model | Average ⬆️ | ARC (25-s) ⬆️ | HellaSwag (10-s) ⬆️ | MMLU (5-s) ⬆️ | TruthfulQA (MC2) (0-s) ⬇️ | Winogrande (5-s) ⬇️ | GSM8K (5-s) ⬆️ | DROP (3-s) ⬇️ |
+ | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+ | [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 50.32 | 59.58 | 83.31 | 64.16 | 42.15 | 78.37 | 18.12 | 6.14 |
+ | [HuggingFaceH4/zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha) | 52.40 | 61.01 | 84.04 | 61.39 | 57.90 | 78.61 | 14.03 | 9.82 |
+ | [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) | 52.15 | 62.03 | 84.36 | 61.07 | 57.45 | 77.74 | 12.74 | 9.66 |
+ | **Ours** | **54.09** | 64.25 | 84.90 | 61.69 | 52.77 | 74.51 | 39.50 | 0.98 |

  ## Training Details

  ### Training Data

+ We used a slightly curated version of [`openbmb/UltraFeedback`](https://huggingface.co/datasets/openbmb/UltraFeedback), named [`argilla/ultrafeedback-binarized-avg-rating-for-dpo`](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-avg-rating-for-dpo).
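The dataset can be pulled straight from the Hub for a quick look; a minimal sketch, assuming the `datasets` library is installed and the dataset exposes a `train` split:

```python
# Quick inspection of the DPO training data; the `train` split name
# is an assumption, not confirmed by the card.
from datasets import load_dataset

dpo_data = load_dataset("argilla/ultrafeedback-binarized-avg-rating-for-dpo", split="train")
print(dpo_data)     # column names and number of preference pairs
print(dpo_data[0])  # one prompt with its chosen/rejected completions
```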
 
 
  ### Training hyperparameters

  - Datasets 2.14.6
  - Tokenizers 0.14.1

+ ### Evaluation during Training

  - Loss: 0.4730
  - Rewards/chosen: -3.5289

  - Logps/rejected: -316.3751
  - Logps/chosen: -334.3053
  - Logits/rejected: -2.1644
+ - Logits/chosen: -2.4556
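As a reading aid for the reward numbers above, assuming the standard DPO formulation (Rafailov et al., 2023), the reported rewards are the policy's implicit rewards relative to the reference model, scaled by the DPO beta:

$$
r(x, y) = \beta \left( \log \pi_\theta(y \mid x) - \log \pi_{\mathrm{ref}}(y \mid x) \right)
$$

Under this reading, `Rewards/chosen` and `Rewards/rejected` average this quantity over the preferred and dispreferred completions, the gap between them is what the DPO loss pushes to widen, and the `Logps/*` values are the corresponding summed log-probabilities under the policy.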