
should we follow the same openchat prompt structure while finetuning time?

#38
by Pradeep1995 - opened

Should we follow the same OpenChat prompt structure at fine-tuning time, or should we use our own prompt structure for fine-tuning?

Pradeep1995 changed discussion title from should we follow the same mistral prompt structure while finetuning time? to should we follow the same openchat prompt structure while finetuning time?

Hello, could you please suggest a prompt template for OpenChat fine-tuning? Sometimes it seems that my templates are inaccurate...

@Merceley
can you try this?
https://github.com/imoneoi/openchat#-inference-with-transformers
This is for prompting. I just want to know whether I should follow this structure for fine-tuning as well.

Thank you! I personally use this exact prompt structure for fine-tuning.
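For reference, the structure from the linked README can be assembled like this. This is a minimal sketch assuming the OpenChat 3.5 "GPT4 Correct" conversation format described there; `build_openchat_prompt` is a hypothetical helper name, not part of any library:

```python
def build_openchat_prompt(turns):
    """Assemble a conversation into the OpenChat 3.5 prompt format.

    `turns` is a list of (role, message) pairs with role in
    {"user", "assistant"}. Following the linked README, each turn is
    prefixed with "GPT4 Correct User:" or "GPT4 Correct Assistant:" and
    terminated with the <|end_of_turn|> special token; the prompt ends
    with the assistant prefix so the model's reply (or the fine-tuning
    target) continues from there.
    """
    role_prefix = {
        "user": "GPT4 Correct User:",
        "assistant": "GPT4 Correct Assistant:",
    }
    parts = []
    for role, message in turns:
        parts.append(f"{role_prefix[role]} {message}<|end_of_turn|>")
    # Trailing assistant prefix marks where generation / the label starts.
    parts.append("GPT4 Correct Assistant:")
    return "".join(parts)


print(build_openchat_prompt([("user", "Hello")]))
```

When fine-tuning with the same structure, the label text would be the assistant's reply followed by `<|end_of_turn|>`, appended after the trailing assistant prefix.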
