
Some feedback

#33
by cmp-nct - opened

I've been testing this more closely now, comparing it to CausalLM 14B and a few others, and I'm quite positively intrigued.
The one negative thing I noticed is a slight lack of context understanding: when multiple instructions are combined, it doesn't follow them to the word. It needs more training on larger inputs!

However, I'm quite happy by now with how witty its writing can be; given its tiny size, that's remarkable.
I also showed examples of its output to people and had them choose, and they favored it over its much larger CausalLM 14B cousin, which is also very good but twice the size.

OpenChat org

Thanks for your feedback! Can you post some examples of combining multiple instructions into one? We will try our best to improve πŸ€—
