
PEFT-based fine-tuned model hallucinates values from the fine-tuning training data during inference

#39 opened by Pradeep1995

I have fine-tuned the OpenChat model on a set of training data using the PEFT method (instruction fine-tuning).
After fine-tuning, the model hallucinates values from the fine-tuning dataset during inference. How can I solve this issue?
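
For context, here is a minimal inference sketch of the setup described above, assuming a LoRA adapter trained with PEFT on top of the openchat/openchat_3.5 base checkpoint; the base model ID, adapter path, and prompt are assumptions, not details taken from the post. A prompt format that differs from the one used in the fine-tuning data, or aggressive sampling settings, are common reasons a model regurgitates training values, so the sketch uses greedy decoding and a mild repetition penalty:

```python
# Minimal sketch of an assumed setup: a LoRA adapter trained with PEFT on top of
# the openchat/openchat_3.5 base checkpoint. The adapter path, base model ID,
# and prompt are placeholders, not taken from the original post.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "openchat/openchat_3.5"            # assumed base model
adapter_dir = "path/to/your-peft-adapter"    # hypothetical adapter directory

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_dir)
model.eval()

# The prompt should follow the same template that was used in the fine-tuning
# data (OpenChat-style prompt assumed here).
prompt = "GPT4 Correct User: What is the capital of France?<|end_of_turn|>GPT4 Correct Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=False,          # greedy decoding removes sampling noise
        repetition_penalty=1.1,   # mild penalty against copying training strings
    )

print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```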

Hello, could you please suggest a prompt template for OpenChat finetuning? Sometimes it seems that my templates are inaccurate...
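
One way to avoid hand-writing the template is to let the tokenizer render it for you and reuse that exact string for both the fine-tuning examples and the inference prompts. A minimal sketch, assuming the openchat/openchat_3.5 tokenizer ships a chat template (the model ID and message content are placeholders):

```python
# Minimal sketch, assuming the openchat/openchat_3.5 tokenizer bundles a chat
# template; the model ID and message content are placeholders.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openchat/openchat_3.5")

messages = [
    {"role": "user", "content": "Summarize this support ticket in one sentence."},
]

# Render the conversation with the template shipped in the tokenizer config and
# append the assistant prefix so the model knows it should respond next.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```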
