Differences in prompt templates between base models

#8
by Cartinoe5930 - opened

Hello, I found phixtral very interesting. While trying to run an experiment along these lines, I became curious about what effect it has when the prompt templates of the models being combined differ. How did you take this into consideration when creating phixtral?

Additionally, I found NeuralMarcoro, which uses model merging, very interesting, and I am also curious about the impact of differences in prompt templates between base models in the context of model merging rather than MoE.

Try Alpaca first.
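For reference, a minimal sketch of the Alpaca template (the instruction-only variant; the example instruction is just a placeholder):

```python
# A minimal sketch of the Alpaca prompt template (instruction-only variant).
def alpaca_prompt(instruction: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

print(alpaca_prompt("Explain what a Mixture of Experts is."))
```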

Thanks! This is an interesting point. From experience, models trained on different templates tend to be good at following any of them.

For phixtral, I chose ChatML because dolphin-2_6-phi-2 was fine-tuned with this template (and it's my favorite one). I tested it and found that it works well.
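For anyone who wants to try it, here is a minimal sketch of the ChatML format those models expect; the system and user messages below are only placeholders:

```python
# A minimal sketch of the ChatML format (as used by dolphin-2_6-phi-2).
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(chatml_prompt("You are a helpful assistant.", "What is phixtral?"))
```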

For merges, it's quite similar: I'd choose the template used by the majority (in terms of weights) of the models. ChatML is quite popular now, so it's often a good option. Like @cloudyu said, Alpaca or Llama Chat work well too.
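As a rough illustration of "majority in terms of weights", a hypothetical helper could sum the merge weight assigned to each template and pick the heaviest; the model names, weights, and template labels below are made up, not phixtral's actual recipe:

```python
from collections import defaultdict

# Hypothetical helper: sum merge weights per template, return the heaviest.
# The components dict below is illustrative only.
def majority_template(components: dict[str, tuple[float, str]]) -> str:
    totals: dict[str, float] = defaultdict(float)
    for weight, template in components.values():
        totals[template] += weight
    return max(totals, key=totals.get)

components = {
    "model-a": (0.5, "chatml"),
    "model-b": (0.3, "alpaca"),
    "model-c": (0.2, "chatml"),
}
print(majority_template(components))  # -> "chatml"
```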

Thanks for your response @mlabonne !! I'll try it!!

Cartinoe5930 changed discussion status to closed
