Please share the source code for fine-tuning this model. Thank you.


Not a problem.

I used my own PEFT and TRL pipeline with some adjustments for the Mistral architecture. I haven't had time to clean it up for public release yet, but this blog post uses a similar pipeline to mine, with full explanations, and might be useful:

https://blog.neuralwork.ai/an-llm-fine-tuning-cookbook-with-mistral-7b/
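
In case it helps while the original code isn't released, here is a rough sketch of what a PEFT + TRL supervised fine-tuning setup for a Mistral-7B-style model typically looks like. This is not the exact pipeline mentioned above; the base model ID, dataset, LoRA hyperparameters, and the `SFTTrainer` arguments (which vary across TRL versions) are all assumptions/placeholders:

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TrainingArguments
from trl import SFTTrainer

# Assumption: base model and dataset are placeholders, swap in your own.
model_name = "mistralai/Mistral-7B-v0.1"
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

# Load the base model in 4-bit (QLoRA-style) to keep memory usage low.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# LoRA config targeting the Mistral attention projections (example values).
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

training_args = TrainingArguments(
    output_dir="mistral-7b-finetuned",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    num_train_epochs=1,
    logging_steps=10,
    bf16=True,
)

# SFTTrainer signature shown here matches older TRL releases (~0.7.x);
# newer versions move some of these arguments into SFTConfig.
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=1024,
    args=training_args,
)
trainer.train()
trainer.save_model("mistral-7b-finetuned")
```

The idea is the same as in the blog post: quantize the base model, train only LoRA adapters on top of it, and let TRL's `SFTTrainer` handle the packing and training loop. Adjust the dataset, prompt formatting, and hyperparameters to your own task.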

Thank you.
