Interesting

#1
by BlueNipples - opened

So, not sure if you wanted feedback, but I found this surprisingly coherent at story context following for a 7b model. Insistently poetic/romantic prose in a sort of GPTish fashion (I imagine there's some amount of synthetic datasets in here for sure), reverting to love and soppy metaphors - which I don't think is ideal, but it seemed to follow story context better than Mythologic Mini or even Stable Beluga 7b.

I only tested a dozen gens or so, but even that left an impression. 13B models are often pretty noisy, let alone 7b models. This was pretty bang on - like it really seemed to 'understand' what was going on. Unfortunately its prose tends to be ultra-romantic/soppy, even more so than GPT itself. It's like the ultimate romance model lol.
