Feedback

#1
by EloyOn - opened

I've been testing SAO10K's Stheno 3.1, which is pretty good and likes to write VERY long messages.

Then I tried your Mahou 1.2, after enjoying Mahou 1.1. It was only a short chat, but I'm pleased and surprised that her messages are only one paragraph long! It's probably because I tend to write short messages as well, and the model matched that, I guess. Lately, as LLMs get more intelligent, they tend to write quite a lot, so it's refreshing to see an LLM that doesn't produce long walls of text.

Mahou 1.1 had a distinct pleasant feeling to it too. I'll be adding both to my rotation of fav models to keep testing, alongside Aura Uncensored OAS and Lumimaid OAS.

flammen.ai org

Thanks again for trying the model and providing feedback! I'm glad to hear the model produces nice short results for you; the goal is for it to write messages that are roughly SMS length.

If you'd like any llama3 models merged into this one for 1.3, please let us know!

xDDD As I said, it was only a short convo, not enough to give proper feedback on the overall model performance.

I was hesitant to try it because it is an uber merge with five other models on top of Mahou 1.1, and I prefer to test smaller merges.

If I had to pick one, I'd say that Mahou 1.1's behavior was better, character-wise, despite its formatting problems and needing a little more uncensoring (which I fixed easily with a jailbreak). Perhaps merging Mahou 1.1 and Stheno 3.1 could be interesting, since that one is very uncensored and is good with formatting, but I can't predict how it would turn out. You are the brewmaster here.
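For anyone curious what such a merge could look like in practice, here is a minimal sketch of a mergekit SLERP config blending the two models. This is only an illustration, not the authors' actual recipe: the repo IDs, layer count, and interpolation weight are all assumptions.

```yaml
# Hypothetical mergekit SLERP config: Mahou 1.1 x Stheno 3.1
# Repo IDs and layer_range assume llama3-8B (32 layers); adjust to the real repos.
slices:
  - sources:
      - model: flammenai/Mahou-1.1-llama3-8B   # assumed repo ID
        layer_range: [0, 32]
      - model: Sao10K/L3-8B-Stheno-v3.1        # assumed repo ID
        layer_range: [0, 32]
merge_method: slerp
base_model: flammenai/Mahou-1.1-llama3-8B
parameters:
  t:
    - value: 0.5   # 0.0 = pure Mahou, 1.0 = pure Stheno; 0.5 is an even blend
dtype: bfloat16
```

With a config like this saved as `merge.yaml`, mergekit would produce the merged weights via `mergekit-yaml merge.yaml ./output-model`; tuning `t` shifts the balance between the two parents.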

flammen.ai org

Haha the brewmaster XP

I'll definitely try a merge with Stheno and bring some 1.1 back into the mix. Thanks again for the feedback!
