Surprising results

#1 by Utochi

This model was a pleasant surprise: it's comparable to the 70B models I've tested when doing math and comprehension tests using complex character cards of 2700+ tokens.
The downside I noticed is that a little way into the story it becomes very repetitive, around 4,000 tokens in.
I used the Q6 quant in my tests.
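
If anyone else hits the same repetition, nudging up the sampler penalties is one workaround to try before a retrained version lands. Here's a minimal sketch using llama-cpp-python; the model path and the exact penalty values are placeholders I picked for illustration, not settings recommended by the model author:

```python
# Sketch: counteracting repetition with sampler penalties via llama-cpp-python.
# The GGUF path and penalty values below are assumptions, not official settings.
from llama_cpp import Llama

llm = Llama(
    model_path="model-Q6_K.gguf",  # hypothetical path to the Q6 quant
    n_ctx=8192,                    # context window for the session
)

out = llm(
    "Once upon a time,",
    max_tokens=256,
    temperature=0.8,
    repeat_penalty=1.15,     # penalize recently generated tokens
    frequency_penalty=0.2,   # scale penalty with how often a token appeared
    presence_penalty=0.2,    # flat penalty for any token already present
)
print(out["choices"][0]["text"])
```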

This model does not work in Faraday but runs fine with ooba/SillyTavern. It would occasionally go out of character at the end of its regular messages.

So great!!☺️

Settings for SillyTavern please, I can't get it to output anything but letters.
Edit: never mind, it's working and I'm loving it already.

@raincandy-u is there any chance you could give this model a larger context size? In all of my testing the model starts out fabulous but degrades after 3 to 4k tokens.

Yes, I will! There are already many RoPE-extended Llama-3 finetunes now; I'll make another version!
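
For anyone who wants to experiment before that release, RoPE scaling can also be applied at load time in Hugging Face transformers. A rough sketch; the repo ID is a placeholder, and the scaling type and factor are assumptions, not what the finetuned version will actually ship with:

```python
# Sketch: extending the effective context at load time with RoPE scaling.
# The model ID and scaling settings are hypothetical examples.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "raincandy-u/model-name"  # hypothetical repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    rope_scaling={"type": "dynamic", "factor": 2.0},  # roughly 2x base context
)
```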
