Commit 2ebbfb4 · Parent(s): 49945e6 · Update README.md

README.md (CHANGED)
@@ -13,14 +13,15 @@ tags:
 - dpo
 - preference
 - ultrafeedback
-license:
+license: mit
 ---
+<div align="center">
+<img src="https://cdn-uploads.huggingface.co/production/uploads/60420dccc15e823a685f2b03/CuMO3IjJfymC94_5qd15T.png" alt="Image was artificially generated by Dalle-3 via ChatGPT Pro" width="50%"/>
+</div>
 
 # Model Card for Notus 7B v1
 
-<div align="center">
-<img src="https://cdn-uploads.huggingface.co/production/uploads/60f0608166e5701b80ed3f02/LU-vKiC0R7UxxITrwE1F_.png" alt="Image was artificially generated by Dalle-3 via ChatGPT Pro"/>
-</div>
+
 
 Notus is a collection of fine-tuned models using Direct Preference Optimization (DPO) and related RLHF techniques. This model is version 1, fine-tuned with DPO starting with zephyr-7b-beta's SFT model.
 
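The summary line above describes the training recipe: DPO applied on top of zephyr-7b-beta's underlying SFT checkpoint, using preference data (per the `dpo`, `preference`, and `ultrafeedback` tags). As a rough, hypothetical sketch of that recipe, not the authors' actual training code, a minimal run with TRL's `DPOTrainer` could look like the following; the dataset id, hyperparameters, and output path are assumptions:

```python
# Hypothetical sketch of the DPO recipe the card describes; not Notus's actual training script.
# Assumes recent trl/transformers/datasets; dataset id and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

sft_model_id = "alignment-handbook/zephyr-7b-sft-full"  # SFT starting point named in the card
model = AutoModelForCausalLM.from_pretrained(sft_model_id)
tokenizer = AutoTokenizer.from_pretrained(sft_model_id)

# UltraFeedback-style preference pairs with "chosen"/"rejected" conversations
# (dataset id assumed; Notus used its own curated preference data)
train_dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

args = DPOConfig(
    output_dir="notus-7b-v1-dpo",   # placeholder path
    beta=0.1,                       # DPO temperature; illustrative value
    per_device_train_batch_size=2,
    num_train_epochs=1,
)
trainer = DPOTrainer(
    model=model,                    # reference model is created internally when not given
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,     # `tokenizer=` on older trl versions
)
trainer.train()
```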
@@ -41,7 +42,7 @@ with the original Zephyr dDPO model and other 7B models.
 - **Shared by:** Argilla
 - **Model type:** GPT-like 7B model DPO fine-tuned
 - **Language(s) (NLP):** Mainly English
-- **License:**
+- **License:** MIT (same as Zephyr 7B-beta)
 - **Finetuned from model:** [`alignment-handbook/zephyr-7b-sft-full`](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full)
 
 ### Model Sources
@@ -53,18 +54,18 @@ with the original Zephyr dDPO model and other 7B models.
 ## Performance
 
 ### Chat benchmarks
-
+Table adapted from Zephyr-7b-β's original table for the [MT-Bench](https://huggingface.co/spaces/lmsys/mt-bench) and [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) benchmarks. Notus stays on par with Zephyr on MT-Bench, while surpassing Zephyr, Claude 2, and Cohere Command on AlpacaEval, making Notus the most competitive 7B commercial model on AlpacaEval.
 
 | Model | Size | Alignment | MT-Bench (score) | AlpacaEval (win rate %) |
 |-------------|-----|----|---------------|--------------|
 | MPT-Chat | 7B |dSFT |5.42| -|
 | Xwin-LMv0.1 | 7B| dPPO| 6.19| 87.83|
 | Mistral-Instructv0.1 | 7B| - | 6.84 |-|
-| Zephyr-7b-β 🪁 | 7B |
-| **Notus-7b-v1** | 7B |
+| Zephyr-7b-β 🪁 | 7B | dDPO | **7.34** | 90.60 |
+| **Notus-7b-v1** | 7B | dDPO | 7.30 | **91.42** |
 | GPT-3.5-turbo | - |RLHF |7.94 |89.37|
 | Claude 2 | - |RLHF |8.06| 91.36|
-| Cohere Command
+| Cohere Command | - |RLHF |-| 90.62|
 | GPT-4 | -| RLHF |8.99| 95.28|
 | Falcon-Instruct | 40B |dSFT |5.17 |45.71|
 | Guanaco | 65B | SFT |6.41| 71.80|
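In the chat table above, AlpacaEval's win rate is a pairwise metric: the share of evaluation prompts on which a judge prefers the candidate model's response over a reference model's. A toy illustration, with invented outcomes:

```python
# Toy AlpacaEval-style win rate: fraction of pairwise judgments won by the candidate model.
# The judgments below are invented purely for illustration.
judgments = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]  # 1 = candidate preferred, 0 = reference preferred
win_rate = 100 * sum(judgments) / len(judgments)
print(f"win rate: {win_rate:.2f}%")  # win rate: 70.00%
```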
@@ -75,6 +76,8 @@ This shows the updated table, based on Zephyr-7b-β original table for [MT-Bench
 
 ## Academic benchmarks
 
+Results from [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard):
+
 | Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | DROP |
 |-----------------------------------------------|---------|-------|-----------|-------|------------|------------|-------|-------|
 | Zephyr 7B dDPO (HuggingFaceH4/zephyr-7b-beta) | 52.15 | 62.03 | 84.36 | 61.07 | **57.45** | 77.74 | 12.74 | **9.66** |
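For reference, serving the model this card describes for chat follows the standard `transformers` pattern; the hub id `argilla/notus-7b-v1` is inferred from the card title, and the generation settings are illustrative:

```python
# Hypothetical usage sketch; hub id inferred from the card, sampling settings illustrative.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="argilla/notus-7b-v1",  # assumed repo id, per "Model Card for Notus 7B v1"
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is Direct Preference Optimization?"},
]
# Format the conversation with the model's chat template before generating
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
out = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.9)
print(out[0]["generated_text"])
```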