Good for storywriting

#1
by TariTariTu - opened

Just want to say that I've tried a lot of 7B models (mostly q4_k_m, because it's fast on my PC), and so far this one is the best among them for writing. Doesn't produce too many 'gpt-isms', is smart enough, characters behave as they should. Most importantly, doesn't try to quickly wrap up the story on that annoying positive note ("together they will overcome all the challenges bla bla bla"), progresses the plot quite nicely. I mean, it's still a 7B model with its limitations, but it's a good 7B model. Ah yes, neat for RP with asterisks, too. Thanks for creating it!

Owner

Thanks for your review, I'm glad it's useful for you. Yeah, it's good for story writing, I'm impressed myself. I will try to fix some mistakes in the next one; I need to figure out a balance between RP, being smart, and having 'human-like' behavior instead of boring gpt-isms.

I have also been playing with this model for a few days now, and wow it is amazing! I've tried a bunch of 7b models for RP, but they were all lacking in coherence during roleplay. Some models recently impressed me, like NeuralBeagle14, insane overall model, but it missed that RP and Non-GPT spark, and this model fixes that, allows for more "personal" dialogues with the AI too. It's a 7b, so sometimes it still hallucinates or misreads the summary or character cards, but overall it's a seriously impressive model. I've had a 400-message chat with it without any repetition, and with amazing quality in the replies! Congrats and thank you, Endevor!

Owner

Thank you, really! I'm glad it's working for you guys, have fun! Also thanks for the feedback!

I don't know what magic mix you used to make your model, but if you can make the same kind of mix without gpt-ism and with SOLAR as base model, I would be really in love 😏 10-11b models seem to fix a lot of logical issues that 7b models have...

Owner

Well, that's quite difficult because my PC is meh, you know. Just merging four 7B models + LoRAs sometimes crashes, or PyTorch gives an error about a CUDA timeout; sometimes I'm lucky and it decides to go through, but most of the time it's just a bunch of errors... So merging 10~11B seems improbable at the moment. But it would be very cool!
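(For anyone hitting similar CUDA errors when merging: tools like mergekit can usually do the merge on CPU/system RAM instead of the GPU, which sidesteps VRAM limits at the cost of speed. A minimal sketch of a linear-merge config — the model names here are placeholders, not the actual recipe used for this model:

```yaml
# Hypothetical mergekit config: equal-weight linear merge of two 7B models.
# Run with e.g. `mergekit-yaml config.yml ./merged-model` (CPU by default).
models:
  - model: org/model-a-7b        # placeholder repo id
    parameters:
      weight: 0.5
  - model: org/model-b-7b        # placeholder repo id
    parameters:
      weight: 0.5
merge_method: linear
dtype: float16
```

Exact keys and CLI flags depend on your mergekit version, so treat this as a starting point rather than a known-good recipe.)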

Ooh alright :( Hope it will happen someday hehe!
