The perplexity is bad.

#3
by jackboot

I finally got to test the model for perplexity loss when extending context. It's as if the SuperHOT LoRA didn't take.

Normal PTB perplexity should be lower than that. I also did RP with it for a while; it has trouble playing characters rather than just itself. It's not bad to talk to and it writes at length, but I think it's inherently flawed. At longer context it eventually got very repetitive, and with compressed pos emb of 4 it actually started producing gibberish. The tests show why.

[attached: epsilon.png]
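
For reference, this is roughly the kind of PTB check I mean; a minimal sketch, assuming a LLaMA-style checkpoint on the Hub (the model id is a placeholder) and using transformers' linear RoPE scaling as a stand-in for compressed pos emb of 4:

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/your-13b-superhot-merge"  # placeholder, not the actual repo
ctx_len = 8192                                 # extended context to evaluate at

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    rope_scaling={"type": "linear", "factor": 4.0},  # ~ compress_pos_emb = 4
)

# Concatenate the PTB test split into one long token stream.
ptb = load_dataset("ptb_text_only", "penn_treebank", split="test")
ids = tokenizer("\n\n".join(ptb["sentence"]), return_tensors="pt").input_ids

# Non-overlapping windows of ctx_len tokens; accumulate negative log-likelihood.
nlls, n_tokens = [], 0
with torch.no_grad():
    for start in range(0, ids.size(1) - ctx_len, ctx_len):
        window = ids[:, start : start + ctx_len].to(model.device)
        out = model(window, labels=window)          # loss = mean NLL per token
        nlls.append(out.loss * window.size(1))      # approximate total NLL
        n_tokens += window.size(1)

ppl = torch.exp(torch.stack(nlls).sum() / n_tokens)
print(f"PTB perplexity at {ctx_len} ctx: {ppl.item():.2f}")
```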

Caldera AI org

Thank you for the feedback; it's a bit of a misnomer, and my fault as well since I took so long to clarify with a model card. The LoRA used was SuperHOT-prototype13b-8192 [which has been removed from HF], not the actual working 8K-ctx LoRA. As far as perplexity goes, I am looking into assembling a toolset to [potentially] mix-tune models based on user-decided datasets: a bit of a meta way of merging that auto-explores ideal ratios on a per-layer basis 🤞. [PTB is my go-to until I modularize the script.]
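
To make that concrete, here is a minimal sketch of what a per-layer ratio merge could look like; the model ids and the linear ratio schedule are illustrative placeholders, not the actual toolset:

```python
import re
import torch
from transformers import AutoModelForCausalLM

# Two same-architecture checkpoints to blend (placeholder ids).
model_a = AutoModelForCausalLM.from_pretrained("base-model-id", torch_dtype=torch.float16)
model_b = AutoModelForCausalLM.from_pretrained("finetune-model-id", torch_dtype=torch.float16)

num_layers = model_a.config.num_hidden_layers
# Example schedule: lean on model A in early layers, model B in later ones.
layer_ratio = [i / (num_layers - 1) for i in range(num_layers)]

merged = model_a.state_dict()
sd_b = model_b.state_dict()
for name, tensor_a in merged.items():
    m = re.search(r"layers\.(\d+)\.", name)
    ratio = layer_ratio[int(m.group(1))] if m else 0.5  # embeddings/head: 50/50
    merged[name] = (1.0 - ratio) * tensor_a + ratio * sd_b[name]

model_a.load_state_dict(merged)
model_a.save_pretrained("merged-model")
```

The auto-exploring part would wrap this in a search loop: propose a ratio schedule, merge, score the result on PTB (or a user-decided dataset), and keep whichever schedule scores best.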

It may be interesting to see how this holds up as a springboard for merges with other finetunes. Some do a lot better with extended ctx, so I can see it piggybacking onto that.
