
Trained on over 20k instruction examples, all generated by GPT-4 or written by humans.

Dataset features:

- 1,000 long evolved conversations based on LIMA
- A subsection of correct PRM800K data
- A subsection of CamelAI's Physics and Chemistry data

The model was trained with QLoRA using Axolotl.
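Axolotl QLoRA training runs are driven by a YAML config. A minimal sketch is shown below; every value here is an illustrative assumption (the actual hyperparameters, base model path, and dataset file were not published in this card):

```yaml
# Illustrative Axolotl QLoRA config sketch.
# All values are assumptions, not the author's actual training setup.
base_model: 01-ai/Yi-34B        # assumed base model
load_in_4bit: true              # QLoRA: quantize base weights to 4-bit
adapter: qlora
lora_r: 64                      # illustrative LoRA rank
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true        # attach adapters to all linear layers
datasets:
  - path: ./gifted_convo.jsonl  # hypothetical dataset file name
    type: sharegpt               # assumed conversation format
sequence_len: 4096
micro_batch_size: 1
gradient_accumulation_steps: 4
num_epochs: 3
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 0.0002
```

A config like this would be launched with Axolotl's training entry point (e.g. `accelerate launch -m axolotl.cli.train config.yml`).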


Dataset used to train waldie/Yi-34B-GiftedConvo-merged-4bpw-h6-exl2