Fine Tune Training

#2 opened by DazzlingXeno

How did you fine-tune this? Did you convert the Gutenberg dataset to Mistral Instruct format, or did you just use a JSON/parquet file? Thanks in advance.


I used a modified version of Maxime Labonne's ORPO notebook. The data was formatted using ChatML. The changes are shown in this thread: https://huggingface.co/nbeerbower/mistral-nemo-gutenberg-12B-v2/discussions/1
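For anyone following along, here is a minimal sketch of what "formatted using ChatML" can look like for an ORPO-style preference dataset before it goes into TRL's ORPOTrainer. This is not the author's exact notebook; the dataset name and the `prompt`/`chosen`/`rejected` column names below are assumptions based on how Gutenberg DPO sets on the Hub are typically laid out.

```python
# A minimal sketch, not the author's exact pipeline: wrap a preference dataset's
# plain-text prompt/chosen/rejected columns in ChatML tags before ORPO training.
from datasets import load_dataset

def to_chatml(example):
    # Wrap the prompt in ChatML tags and open the assistant turn;
    # close the assistant turn at the end of each completion.
    prompt = (
        f"<|im_start|>user\n{example['prompt']}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )
    return {
        "prompt": prompt,
        "chosen": example["chosen"] + "<|im_end|>\n",
        "rejected": example["rejected"] + "<|im_end|>\n",
    }

# Assumed dataset for illustration; swap in whatever preference set you are using.
dataset = load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train")
dataset = dataset.map(to_chatml)
print(dataset[0]["prompt"][:200])
```

The mapped dataset can then be passed straight to ORPOTrainer, which expects exactly those three columns.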

Thank you 😊

I'm thinking of using the new Command-R 32B or Magnum 34B. I'm leaning towards Magnum since it already uses ChatML, so I'm going to have to test your model with both formats to see if it matters.
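If it helps, a quick way to A/B the two prompt formats against the same checkpoint is just to generate from a hand-built ChatML prompt and a Mistral `[INST]` prompt side by side. A rough sketch, with the model ID and generation settings as placeholders rather than recommendations:

```python
# A rough sketch for comparing ChatML vs. Mistral Instruct prompting on one model.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "nbeerbower/mistral-nemo-gutenberg-12B-v2"  # placeholder model to test
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

question = "Write the opening paragraph of a gothic short story."

prompts = {
    "chatml": f"<|im_start|>user\n{question}<|im_end|>\n<|im_start|>assistant\n",
    "mistral_instruct": f"[INST] {question} [/INST]",
}

for name, prompt in prompts.items():
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=200, do_sample=True)
    # Decode only the newly generated tokens for each format.
    text = tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    print(f"--- {name} ---\n{text}\n")
```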
