Models, datasets, and demos associated with Zephyr 7B. For code to train the models, see: https://github.com/huggingface/alignment-handbook
Note Chat with our Zephyr 7B models!
Note A state-of-the-art chat model at the 7B parameter scale. Trained on synthetic data with a mix of SFT and DPO.
Note The precursor to Zephyr-7B-β. Trained on synthetic data with a mix of SFT and DPO.
Note The SFT model used for the DPO training of Zephyr-7B-β.
Note The SFT model used for the DPO training of Zephyr-7B-α.
Note The SFT dataset used to train Zephyr-7B-β.
Note The dataset of AI preferences used to train Zephyr-7B-β with DPO.
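The notes above describe training with a mix of SFT and DPO. As a rough illustration of the DPO objective only (a minimal sketch, not the alignment-handbook's actual implementation; the function name and toy log-probabilities are invented for this example), the per-pair loss can be written as `-log σ(β · margin)`, where the margin compares how much more the policy prefers the chosen response than the frozen SFT reference model does:

```python
import math

def dpo_pair_loss(policy_chosen_logp, policy_rejected_logp,
                  ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for a single preference pair (illustrative sketch).

    Each argument is the summed log-probability of the chosen or
    rejected response under the policy or the frozen reference
    (SFT) model. `beta` scales the implicit reward.
    """
    # Implicit reward margin: how much more the policy favors the
    # chosen response over the rejected one, relative to the reference.
    margin = ((policy_chosen_logp - ref_chosen_logp)
              - (policy_rejected_logp - ref_rejected_logp))
    # -log(sigmoid(beta * margin)), written in a numerically direct form
    return math.log(1.0 + math.exp(-beta * margin))

# Toy numbers: the policy already prefers the chosen response,
# so the margin is positive and the loss is below log(2).
loss = dpo_pair_loss(-10.0, -14.0, -12.0, -12.0)
print(loss)
```

As the margin grows the loss approaches zero, and at zero margin it equals log 2, which is what drives the policy to widen its preference gap over the SFT reference during DPO training.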