General questions/feedback

#3
by jrjustiss - opened

First off, I want to say that this has been the best-functioning Llama 3 roleplay model for me thus far. Thank you for the constant updates and testing you're putting in. The only issue I've encountered so far is that the model has a tendency to come up with variations of names more often than I would like, e.g., Lucia being converted to Luce or Licia without any prompting or context. Sometimes it will straight up misspell a name, e.g., Jhonny instead of Johnny. I am using your recommended presets for Context and Instruction as well. It may be a quirk of one of the datasets wanting to provide nicknames for characters, but I just wanted to bring attention to it. Thanks again, and I look forward to your future releases.

Agreed. I've also noticed that the model misspells more than just names.
And since this is a topic with such a general name, I'll leave my observations as well:
The model is either laconic, generating short (3-6 sentence) responses regardless of the response-length setting, or it generates a message of the maximum possible size that doesn't fit even in 2048 tokens. Also, as message length grows, the model starts behaving like a 12-year-old girl from Twitter: it spams emojis and hashtags, each sentence gets more unhinged than the last, and it all lands in one paragraph of text, even when that text is 1000 tokens long.
But I think this is a Llama 3 issue?

The Chaotic Neutrals org
•
edited Apr 26

Agreed. I've also noticed that the model misspells more than just names.
And since this is a topic with such a general name, I'll leave my observations as well:
The model is either laconic, generating short (3-6 sentence) responses regardless of the response-length setting, or it generates a message of the maximum possible size that doesn't fit even in 2048 tokens. Also, as message length grows, the model starts behaving like a 12-year-old girl from Twitter: it spams emojis and hashtags, each sentence gets more unhinged than the last, and it all lands in one paragraph of text, even when that text is 1000 tokens long.
But I think this is a Llama 3 issue?

Never once seen this with any variation of Poppy; how peculiar. Try trimming incomplete sentences, and make sure you are using the Poppy instruct/context presets provided in the repo with ST.
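For anyone doing this outside of ST's built-in "Trim Incomplete Sentences" option, the idea can be sketched in a few lines; the regex and function name below are illustrative, not taken from any repo:

```python
import re

def trim_incomplete(text: str) -> str:
    """Cut a generated reply back to its last sentence-ending punctuation mark."""
    # Greedily match up to the final ., !, or ?, optionally followed by a
    # closing quote, and drop any dangling clause after it.
    match = re.search(r'^(.*[.!?]["\u201d\u2019]?)', text, flags=re.DOTALL)
    return match.group(1) if match else text

print(trim_incomplete('She smiled. "Hello," she said, and then'))
# -> She smiled.
```

If no sentence-ending punctuation is found at all, the text is returned unchanged rather than emptied.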

The Chaotic Neutrals org

First off, I want to say that this has been the best-functioning Llama 3 roleplay model for me thus far. Thank you for the constant updates and testing you're putting in. The only issue I've encountered so far is that the model has a tendency to come up with variations of names more often than I would like, e.g., Lucia being converted to Luce or Licia without any prompting or context. Sometimes it will straight up misspell a name, e.g., Jhonny instead of Johnny. I am using your recommended presets for Context and Instruction as well. It may be a quirk of one of the datasets wanting to provide nicknames for characters, but I just wanted to bring attention to it. Thanks again, and I look forward to your future releases.

I'd try dropping the temp in the preset to 3 and seeing if it still occurs.

Nitral-AI changed discussion title from Review and Question to General questions/feedback
The Chaotic Neutrals org

If you could report back on your experience after verifying/modifying setup it would be helpful. Thank you very much for your time.

Nitral-AI changed discussion status to closed
