Please share the source code for fine-tuning this model. Thank you.
#1 by Teera - opened
I used my own PEFT and TRL pipeline with a few adjustments for the Mistral architecture. I haven't had time to clean it up for public release yet, but this blog post uses a similar pipeline to mine and includes full explanations, which might be useful:
https://blog.neuralwork.ai/an-llm-fine-tuning-cookbook-with-mistral-7b/
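For reference, here is a minimal sketch of what a PEFT + TRL LoRA fine-tuning run for Mistral-7B typically looks like. This is not my exact code: the dataset, hyperparameters, and output paths are placeholders, and the `SFTTrainer` keyword arguments follow an older trl release (around 0.7.x), so check the blog above and the current trl docs before running it.

```python
# Minimal PEFT + TRL supervised fine-tuning sketch for Mistral-7B.
# Dataset, hyperparameters, and paths are placeholders; SFTTrainer kwargs
# follow the older trl (~0.7.x) API and may need adjusting for newer releases.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

model_name = "mistralai/Mistral-7B-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Mistral has no pad token by default

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# LoRA adapters on the attention projections (typical targets for Mistral).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Placeholder dataset with a "text" column of formatted prompt/response pairs.
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

training_args = TrainingArguments(
    output_dir="./mistral-7b-sft",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=2e-4,
    num_train_epochs=1,
    logging_steps=10,
    bf16=True,
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=training_args,
    train_dataset=dataset,
    peft_config=lora_config,
    dataset_text_field="text",
    max_seq_length=1024,
)

trainer.train()
trainer.save_model("./mistral-7b-sft")
```

With `peft_config` passed in, `SFTTrainer` wraps the base model with LoRA adapters, so only the adapter weights are trained and saved, which keeps memory use and checkpoint size small compared to full fine-tuning.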