Just wanted to say thank you.

#8 opened by MB7977

With creative writing, it's really hard to find a model that follows instructions without being so slavish that it only ever follows them, if you know what I mean. One of the reasons I enjoy using the proprietary LLMs is that they're pretty good at stimulating new ideas and angles on a story in that way. This is one of the few local models I've used that's pretty capable of that. It's also amazing for actually writing longform replies.

I've tried tuning Yi myself and the results have been just okay. This model isn't perfect, of course, but it's the best Yi-derived model I've used. I appreciate you're getting quite a bit of feedback already, but it mostly seems to be from the RP crowd, so I thought it might be useful to add some comments from an instruct/writing perspective.

Awesome! Yeah I almost exclusively use it for novel-style notebook writing myself.

I'd be interested to see the prompt format you use, if it's something particular that steers the model well.

I'm using the Orca-Vicuna prompt format with varied instruct setups. Sometimes just an outline. Sometimes a template that includes character profiles, background info, setting, etc., then a detailed scene breakdown for the chapter, followed by a request for a particular scene to be written. The latter requires good instruction-following capability.
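To give a rough idea, here's a minimal Python sketch of the kind of prompt I mean, laid out in the Orca-Vicuna SYSTEM:/USER:/ASSISTANT: style. The story details, names, and exact wording are made-up placeholders for illustration, not my actual template.

```python
# A made-up story bible for illustration; the real ones are much longer.
story_bible = """\
CHARACTERS:
- Mara Voss: ship's engineer, pragmatic, dry sense of humour.
- Ilya Renn: first-contact linguist, idealistic, hides a chronic illness.

SETTING:
Deep-space salvage vessel "Cormorant", three weeks out from the nearest relay.

CHAPTER 4 SCENE BREAKDOWN:
1. Mara discovers the salvaged beacon is still transmitting.
2. Ilya argues for answering it; Mara refuses.
3. The beacon answers first.
"""

instruction = (
    "Write scene 2 in full: third person past tense, Mara's POV, "
    "roughly 800 words."
)

# Orca-Vicuna layout: SYSTEM / USER / ASSISTANT prefixes, with generation
# continuing after the trailing "ASSISTANT:".
prompt = (
    "SYSTEM: You are a novelist. Write vivid longform prose and follow the "
    "scene breakdown exactly.\n"
    f"USER: {story_bible}\n{instruction}\n"
    "ASSISTANT:"
)

print(prompt)
```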

I've yet to test this model with RAG, but I had solid results combining the above setup with RAG and one of your previous Yi merges. We're slowly inching closer to the likes of what Claude and GPT-4 offered last year. I do wish there were more 20-34B models with longer context out there. 🙂
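For what it's worth, this is roughly how the retrieved material slotted into the same template. The retrieve_passages() function here is just a stand-in for whatever retrieval backend you use (vector store, BM25, etc.), not a real API, and the note contents are placeholders.

```python
# Hypothetical sketch of slotting retrieved notes into the same
# Orca-Vicuna template; retrieve_passages() is a stub, not a library call.
def retrieve_passages(query: str, top_k: int = 3) -> list[str]:
    # Stub: pretend these came back from a store of world-building notes.
    return [
        "The beacon predates the colony charter by two centuries.",
        "Mara's previous ship was lost answering an unverified distress call.",
    ][:top_k]

def build_prompt(system: str, notes: list[str], instruction: str) -> str:
    context = "\n".join(f"- {n}" for n in notes)
    return (
        f"SYSTEM: {system}\n"
        f"USER: BACKGROUND NOTES:\n{context}\n\n{instruction}\n"
        "ASSISTANT:"
    )

print(build_prompt(
    "You are a novelist. Use the background notes only where relevant.",
    retrieve_passages("the beacon's origin"),
    "Write the scene where the beacon's origin comes to light.",
))
```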
