
How to achieve better results with fine-tuning

#5
by jdjayakaran - opened

I have implemented this model and tried to fine-tune it on domain-specific data (in German). But the model's results are inaccurate and vague. Which parameters could be tweaked to get better results from this LEO model?

LAION LeoLM org

In general, it is not recommended to fine-tune from an already instruction-tuned model. Try using our base model LeoLM/leo-hessianai-13b instead. Also, the quality of your domain-specific instruction data is super important. Make sure to cover a wide range of tasks and difficulties. Perhaps consider including some of the datasets we used during training for better alignment. Hope this helps :)
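As a rough illustration (not from the thread itself), instruction data for fine-tuning is usually rendered into one consistent chat template before training. The LeoLM chat models use the ChatML format, so a minimal formatting sketch for hypothetical German instruction pairs might look like this — the function name and the sample texts are illustrative, not part of the model's tooling:

```python
# Minimal sketch: render one (system, user, assistant) instruction triple
# into a ChatML-style training string. The helper name and the German
# sample below are hypothetical illustrations.
def format_chatml(system: str, user: str, assistant: str) -> str:
    """Return a single training example in ChatML format."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n{assistant}<|im_end|>\n"
    )

# Hypothetical domain-specific example in German
sample = format_chatml(
    system="Du bist ein hilfreicher Assistent.",
    user="Was ist eine Bilanz?",
    assistant="Eine Bilanz ist eine Gegenüberstellung von Vermögen und Kapital.",
)
print(sample)
```

Keeping every example in the same template (and mixing in some of the original training datasets, as suggested above) helps the model stay aligned with the format it sees at inference time.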

jdjayakaran changed discussion status to closed
