Gxl committed 1b85c81 (parent: 857db05): Create README.md

Files changed (1): README.md added (+62 lines)
---
tags:
- generated_from_trainer
model-index:
- name: zephyr-7b-beta
  results: []
license: mit
datasets:
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
base_model: mistralai/Mistral-7B-v0.1
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

<img src="https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha/resolve/main/thumbnail.png" alt="Zephyr Logo" width="800" style="margin-left: auto; margin-right: auto; display: block;"/>


# Model Card for Zephyr 7B β

Zephyr is a series of language models that are trained to act as helpful assistants. Zephyr-7B-β is the second model in the series, and is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) that was trained on a mix of publicly available, synthetic datasets using [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290). We found that removing the in-built alignment of these datasets boosted performance on [MT Bench](https://huggingface.co/spaces/lmsys/mt-bench) and made the model more helpful. However, this means that the model is likely to generate problematic text when prompted to do so, and it should only be used for educational and research purposes. You can find more details in the [technical report](https://arxiv.org/abs/2310.16944).

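To make the DPO objective mentioned above concrete, here is a minimal, illustrative sketch of the per-pair loss (not the actual training code; `dpo_loss` is a hypothetical helper operating on summed log-probabilities of a chosen and a rejected response under the policy and a frozen reference model):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for a single preference pair.

    Each argument is the summed log-probability of the chosen or
    rejected response under the policy or the reference model.
    """
    chosen_margin = policy_chosen_logp - ref_chosen_logp
    rejected_margin = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_margin - rejected_margin)
    # -log(sigmoid(logits)), written in a numerically stable form
    return math.log1p(math.exp(-logits))

# When the policy prefers the chosen response more than the reference
# does, the loss drops below log(2) (the value at initialization).
loss = dpo_loss(-10.0, -30.0, -12.0, -25.0)
```

Minimizing this loss pushes the policy to assign relatively more probability to the chosen response than the reference model does, without an explicit reward model.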
## Model description

- **Model type:** A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily English
- **License:** MIT
- **Finetuned from model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/huggingface/alignment-handbook
- **Demo:** https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat
- **Chatbot Arena:** Evaluate Zephyr 7B against 10+ LLMs in the LMSYS arena: http://arena.lmsys.org

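When prompting the model, the usual route is `tokenizer.apply_chat_template` from 🤗 Transformers; as an illustration of what that template produces, here is a sketch of Zephyr's chat format (`build_zephyr_prompt` is a hypothetical helper, and in practice you should rely on the tokenizer's own template rather than hand-building strings):

```python
def build_zephyr_prompt(messages):
    """Render a list of {"role": ..., "content": ...} dicts into
    Zephyr's chat format, ending with an assistant cue so the model
    knows to generate a reply (mirroring add_generation_prompt=True)."""
    prompt = ""
    for message in messages:
        prompt += f"<|{message['role']}|>\n{message['content']}</s>\n"
    prompt += "<|assistant|>\n"  # cue the model to respond
    return prompt

messages = [
    {"role": "system", "content": "You are a friendly chatbot."},
    {"role": "user", "content": "How do I fly a kite?"},
]
prompt = build_zephyr_prompt(messages)
```

Each turn is wrapped in a `<|role|>` header and terminated with the `</s>` end-of-sequence token, and generation is sampled after the trailing `<|assistant|>` marker.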
## Performance

At the time of release, Zephyr-7B-β is the highest-ranked 7B chat model on the [MT-Bench](https://huggingface.co/spaces/lmsys/mt-bench) and [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) benchmarks:

| Model | Size | Alignment | MT-Bench (score) | AlpacaEval (win rate %) |
|-------|------|-----------|------------------|-------------------------|
| StableLM-Tuned-α | 7B | dSFT | 2.75 | - |
| MPT-Chat | 7B | dSFT | 5.42 | - |
| Xwin-LM v0.1 | 7B | dPPO | 6.19 | 87.83 |
| Mistral-Instruct v0.1 | 7B | - | 6.84 | - |
| Zephyr-7B-α | 7B | dDPO | 6.88 | - |
| **Zephyr-7B-β** 🪁 | **7B** | **dDPO** | **7.34** | **90.60** |
| Falcon-Instruct | 40B | dSFT | 5.17 | 45.71 |
| Guanaco | 65B | SFT | 6.41 | 71.80 |
| Llama2-Chat | 70B | RLHF | 6.86 | 92.66 |
| Vicuna v1.3 | 33B | dSFT | 7.12 | 88.99 |
| WizardLM v1.0 | 70B | dSFT | 7.71 | - |
| Xwin-LM v0.1 | 70B | dPPO | - | 95.57 |
| GPT-3.5-turbo | - | RLHF | 7.94 | 89.37 |
| Claude 2 | - | RLHF | 8.06 | 91.36 |
| GPT-4 | - | RLHF | 8.99 | 95.28 |