Model Card for Zephyr 7B β (context size extended to 16k)
(Quantized GGUF Models)
Zephyr is a series of language models trained to act as helpful assistants. Zephyr-7B-β is the second model in the series: a fine-tuned version of mistralai/Mistral-7B-v0.1 that was trained on a mix of publicly available, synthetic datasets using Direct Preference Optimization (DPO). We found that removing the in-built alignment of these datasets boosted performance on MT-Bench and made the model more helpful. However, this also means the model is likely to generate problematic text when prompted to do so, and it should only be used for educational and research purposes. You can find more details in the technical report.
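Zephyr models are prompted with a simple chat template that wraps each turn in `<|system|>`, `<|user|>`, or `<|assistant|>` role markers terminated by the `</s>` end-of-sequence token. As a minimal sketch of how a raw prompt for the GGUF model can be assembled (the helper name and example messages below are illustrative, not part of the released model):

```python
def build_zephyr_prompt(messages):
    """Format a list of {"role": ..., "content": ...} dicts into
    Zephyr's chat template.

    Each turn is wrapped in a role marker and terminated with the
    </s> end-of-sequence token; the prompt ends with an open
    <|assistant|> marker so the model continues from there.
    """
    prompt = ""
    for msg in messages:
        prompt += f"<|{msg['role']}|>\n{msg['content']}</s>\n"
    prompt += "<|assistant|>\n"  # leave open for the model's reply
    return prompt

messages = [
    {"role": "system", "content": "You are a friendly chatbot."},
    {"role": "user", "content": "Summarize DPO in one sentence."},
]
print(build_zephyr_prompt(messages))
```

When serving the quantized files through llama.cpp or a similar GGUF runtime, a string in this shape is passed as the raw prompt; with the context window extended to 16k, substantially longer conversation histories can be included than with the base 4k/8k configurations.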
Original Model Card
Model description
- Model type: A 7B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- Language(s) (NLP): Primarily English
- License: MIT
- Finetuned from model: mistralai/Mistral-7B-v0.1
Model Sources
- Repository: https://github.com/huggingface/alignment-handbook
- Demo: https://huggingface.co/spaces/HuggingFaceH4/zephyr-chat
- Chatbot Arena: Evaluate Zephyr 7B against 10+ LLMs in the LMSYS arena: http://arena.lmsys.org
Performance
At the time of release, Zephyr-7B-β was the highest-ranked 7B chat model on the MT-Bench and AlpacaEval benchmarks:
| Model | Size | Alignment | MT-Bench (score) | AlpacaEval (win rate %) |
|---|---|---|---|---|
| StableLM-Tuned-α | 7B | dSFT | 2.75 | - |
| MPT-Chat | 7B | dSFT | 5.42 | - |
| Xwin-LM v0.1 | 7B | dPPO | 6.19 | 87.83 |
| Mistral-Instruct v0.1 | 7B | - | 6.84 | - |
| Zephyr-7b-α | 7B | dDPO | 6.88 | - |
| Zephyr-7b-β 🪁 | 7B | dDPO | 7.34 | 90.60 |
| Falcon-Instruct | 40B | dSFT | 5.17 | 45.71 |
| Guanaco | 65B | SFT | 6.41 | 71.80 |
| Llama2-Chat | 70B | RLHF | 6.86 | 92.66 |
| Vicuna v1.3 | 33B | dSFT | 7.12 | 88.99 |
| WizardLM v1.0 | 70B | dSFT | 7.71 | - |
| Xwin-LM v0.1 | 70B | dPPO | - | 95.57 |
| GPT-3.5-turbo | - | RLHF | 7.94 | 89.37 |
| Claude 2 | - | RLHF | 8.06 | 91.36 |
| GPT-4 | - | RLHF | 8.99 | 95.28 |
In particular, on several categories of MT-Bench, Zephyr-7B-β performs strongly compared to larger open models like Llama2-Chat-70B.
However, on more complex tasks like coding and mathematics, Zephyr-7B-β lags behind proprietary models, and more research is needed to close the gap.