I'm loving this model, thank you.

#5
by Trappu - opened

Hiii, this model is awesome! I read your rant so I thought I should say this; you are, as far as I know, the only person who's made really good Solar finetunes since Solar Instruct Uncensored. I've been recommending Fimbulvetr to a lot of people since it seems like many of the other people who've made Solar finetunes only care about scoring high on benchmarks, and end up putting out models that are basically just filled to the brim with GPT poison and are worthless when it comes to chatting/RP.

This model, like the previous version, in my opinion, outperforms 13b models and even a lot of the 8x7b models out there for RP, so I'll be looking forward to your future models :).

Found this model thanks entirely to you, Trappu. Your recommendation on that rentry page (ALLMRR) got me here and I want you to know just how incredibly grateful I am for that.

I left this whole scene around when Pygmalion was still "the latest and greatest," and somehow, finding accurate, digestible information has been even more ridiculously difficult than it was back then. As soon as I landed on yours and Ali's pages, that issue disappeared almost entirely.

As far as this model goes, it is very easily the best model I've used to date. I've tested dozens of popular models and couldn't for the life of me get any of them to function properly. Frequently, once the conversation nears the max context size, everything just starts falling apart. This is especially pronounced on Mistral models. I'm an amateur dev, so it's not as if I'm totally clueless here, but regardless of my knowledge, I struggled for way, waaaay too many hours trying to find a model that didn't inevitably start looping or using awkward prose.

With all of that said, this model is nothing short of superb. It's simply excellent. None of the aforementioned issues, which I experienced on practically every other model I tested, exist within this model. It just WORKS. It doesn't get stuck looping, and if it ever does loop, it's simple to step back a message or two and boom, problem solved. With Mistral models I'd need to step back dozens of messages to unstick a loop, and it'd just recur after another five to ten messages. This one handles practically everything I've thrown at it with ease, and the prose is exactly as expected - I mean, fuck: it doesn't sound like I'm using the first few iterations of ChatGPT, but with added profanity! Instead it sounds exactly the way I ask it to. It takes cues from the context and grasps my intention perfectly the vast majority of the time.

TL;DR: Thank you, Trappu, Alicat, and Sao10K. Y'all are wonderful, and I appreciate you folks. I'm certain there are many others who feel the same way, too.