Maybe you could try Yi-34B

#12 opened by Yhyu13

Hi,

Your base model was OpenChat 3.5, whose base was Mistral-7B; the sad thing is that they have not yet released larger models.

And on AlpacaEval (https://tatsu-lab.github.io/alpaca_eval/), we generally observe the trend that larger models trained with the same fine-tuning method perform better, e.g. the Xwin series (which is also an RLHF model).

Maybe you could try out Yi-34B; it seems to be the best mid-size model so far.

Thanks!

Berkeley-Nest org

Thank you for the suggestion! That's also on our TO-DO list.

Currently we still observe some unstable and odd behavior from the model, so we are working on a beta version first before testing a larger reward & policy model.

During our evaluation, we also found that the 7B model tends to hallucinate a lot, which puts it well behind 30B+ models and greatly affects the human evaluation score. So having a larger model seems to be a must in this case. We believe our dataset may have greater potential when scaling up the reward model and language model, although the biggest obstacle is still the limited compute for training a large reward & language model.
