---
license: mit
datasets:
- vicgalle/alpaca-gpt4
- BelleGroup/train_1M_CN
- stingning/ultrachat
- HuggingFaceH4/no_robots
- Open-Orca/OpenOrca
language:
- zh
- en
pipeline_tag: conversational
---
# Zephyr-8x7b: A Zephyr-Style Model Built on Mixtral 8x7B

We present Zephyr-8x7b, a Mixtral 8x7B MoE model fine-tuned with SFT only on nearly four million conversations.

It demonstrates strong contextual understanding, reasoning, and alignment with human values without any preference-alignment stage such as DPO. We invite you to join our exploration!
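Chat models trained this way are usually queried through a chat template that wraps each turn in role markers. As a minimal sketch, the snippet below renders a conversation in the Zephyr-style `<|role|>` format; note the exact template for this model is an assumption here, so consult the tokenizer's built-in chat template (e.g. `tokenizer.apply_chat_template`) before relying on it:

```python
# Hypothetical sketch of a Zephyr-style chat format for an SFT
# conversational model. The role markers and </s> separators are an
# assumption, not confirmed for this checkpoint -- prefer the
# tokenizer's own chat template in real use.
def build_prompt(messages):
    """Render a list of {role, content} dicts into one prompt string."""
    parts = []
    for m in messages:
        # Each turn: role header, content, end-of-sequence separator.
        parts.append(f"<|{m['role']}|>\n{m['content']}</s>")
    # Trailing assistant header cues the model to generate a reply.
    parts.append("<|assistant|>\n")
    return "\n".join(parts)

prompt = build_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

The rendered string can then be tokenized and passed to `model.generate` as with any causal LM.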