anyone tuned this?

#1
by ehartford - opened

My tune has some issues: it won't generate EOS, and it also mumbles. I think I have to turn the temperature way down.

I tuned the version labelled as base, and yeah, I've had issues with the repetition penalty. I found setting it very low (1.02) did the trick. No EOS issues, but I'm using the default EOS token for this model since I'm using the OrcaMini prompt format. I think this model has a lot of potential: good long context, very good at instruction following, generally quite smart for its size, and no issues switching between English and Chinese. I'd like to try a full fine-tune (FFT) on it, as I've only tried a LoRA so far.
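For reference, the sampling settings discussed above could be collected like this. This is a sketch, not anything from the thread: the keyword names follow the standard Hugging Face `generate()` API, and the temperature value is purely illustrative ("way down" was never quantified).

```python
# Sampling settings matching the discussion: a very low repetition
# penalty (1.02, as suggested above) and a lowered temperature.
# These keys are the standard Hugging Face generate() arguments.
generation_kwargs = {
    "repetition_penalty": 1.02,  # very low, per the comment above
    "temperature": 0.3,          # illustrative value for "way down"
    "do_sample": True,
    "max_new_tokens": 512,
    # eos_token_id is left at the model default, per the comment above
}

# Hypothetical usage with an already-loaded model and tokenized inputs:
# outputs = model.generate(**inputs, **generation_kwargs)
```

Keeping the repetition penalty close to 1.0 is the key point above: higher values can suppress the EOS token itself, which is one way a tune ends up never stopping.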

I tried and failed.

Failed as well. Can anyone (or the author) provide an example LoRA fine-tuning script?
