zephyr-7b-sft-full-SPIN
Part of a collection of models fine-tuned with SPIN across iterations 0, 1, 2, and 3.
Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models (https://arxiv.org/abs/2401.01335)
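In SPIN, each iteration uses responses sampled from the previous checkpoint as the "rejected" side of a DPO-style pairwise loss, with the original human completions from the SFT data as the "chosen" side. The snippet below is a minimal PyTorch sketch of that objective; the function name, tensor arguments, and the `beta` value are illustrative assumptions, not the released training code.

```python
import torch.nn.functional as F

def spin_loss(policy_real_logps, policy_synth_logps,
              ref_real_logps, ref_synth_logps, beta=0.1):
    """Sketch of the SPIN pairwise objective (DPO-style logistic loss).

    *_real_logps:  log-prob of the human response, summed over response tokens
    *_synth_logps: log-prob of the response generated by the previous iteration
    policy_*: model being trained; ref_*: frozen previous-iteration checkpoint
    beta: regularization strength (illustrative value, not from this card)
    """
    real_margin = policy_real_logps - ref_real_logps
    synth_margin = policy_synth_logps - ref_synth_logps
    # Push the current model to prefer human responses over its own prior generations.
    return -F.logsigmoid(beta * (real_margin - synth_margin)).mean()
```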
This model is a self-play fine-tuned model at iteration 1, starting from alignment-handbook/zephyr-7b-sft-full and trained on synthetic data based on the HuggingFaceH4/ultrachat_200k dataset.
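A minimal usage sketch with the `transformers` library is shown below; the repository id is an assumption inferred from the collection name, so substitute the actual iteration-1 checkpoint path.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id for the iteration-1 checkpoint; adjust to the actual path.
model_id = "UCLA-AGI/zephyr-7b-sft-full-SPIN-iter1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Explain self-play fine-tuning in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```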
The following hyperparameters were used during training:
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 62.86 |
| ARC (25-shot) | 65.87 |
| HellaSwag (10-shot) | 85.44 |
| MMLU (5-shot) | 60.95 |
| TruthfulQA (0-shot) | 57.39 |
| Winogrande (5-shot) | 76.64 |
| GSM8K (5-shot) | 30.86 |
@misc{chen2024selfplay,
      title={Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models},
      author={Zixiang Chen and Yihe Deng and Huizhuo Yuan and Kaixuan Ji and Quanquan Gu},
      year={2024},
      eprint={2401.01335},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
Base model: mistralai/Mistral-7B-v0.1