General discussion.

#1
by Lewdiculous - opened

SLERP merge request from user feedback.

By @traveltube :

Would it be possible to merge Infinitely-Laydiculus with Fett-uccine-Long-Noodle-7B-120k-Context? I like the outputs and it would be exciting if it could put out an even longer context!

Those merge models are showing potential.

Not sure how busy your queues are, @Nitral-AI , @jeiku and the rest of the beautiful Chaotic people... Possible to fit that in within the next few days? In the name of science? 7b/7b=7b.
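For context on what "7b/7b=7b" means here: a SLERP merge interpolates each pair of corresponding weight tensors along the unit sphere instead of linearly, so the result stays a single 7B model. A minimal pure-Python sketch of the interpolation itself (illustrative only, not the actual mergekit recipe, which works per-layer on real checkpoints):

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flat weight vectors.

    t=0 returns v0, t=1 returns v1; in between, the result follows the
    arc between the two (normalized) directions rather than a straight line.
    """
    n0 = math.sqrt(sum(x * x for x in v0))
    n1 = math.sqrt(sum(x * x for x in v1))
    # Angle between the two weight directions.
    dot = sum((a / n0) * (b / n1) for a, b in zip(v0, v1))
    dot = max(-1.0, min(1.0, dot))
    theta = math.acos(dot)
    if theta < eps:
        # Nearly parallel tensors: plain linear interpolation is fine.
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s = math.sin(theta)
    w0 = math.sin((1 - t) * theta) / s
    w1 = math.sin(t * theta) / s
    return [w0 * a + w1 * b for a, b in zip(v0, v1)]
```

In practice mergekit applies this per tensor with a configurable `t` schedule across layers; the sketch above just shows why two 7Bs still sum to one 7B.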

Also, not sure if others have experienced this, but with these models the ChatML format somehow works even better, in my experience!

ChatML is all I use with any of the models.
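For anyone unfamiliar with the format being discussed: ChatML wraps each turn in `<|im_start|>role` / `<|im_end|>` tags. A small illustrative helper (the function name is hypothetical; in SillyTavern the instruct preset builds this for you):

```python
def chatml_prompt(system, user):
    """Build a single-turn ChatML prompt, leaving the assistant turn open
    for the model to complete."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )
```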

@Lewdiculous @traveltube
I can do this one up tomorrow; I'm taking a break for tonight.

@Nitral-AI I am an Alpaca heretic.

Thanks mate and good rest.

It's in the oven now, but it already failed once. Hopefully it doesn't this time - it will be up here (Nitral-AI/Infinitely-Laydiculous-7b-longtext) when the upload is complete.

@Nitral-AI ty - fingers crossed

thanks for the merge and thanks for the waifu

Will be quantized soon(tm) - later in the evening or something.

No problem. On a real note though, I will be taking a bit of a break here. Will still accept merge requests as I have time, but I probably won't be working on any new experiments until I catch up on testing.


No problem, if you're unable to do a request feel free to postpone it. I know the struggle.

Trying this model, I'm having issues with it compared to other 7Bs on my configuration. It doesn't seem very coherent and struggles to follow a basic NSFW story... I don't know if something is wrong with my ST setup; it works well with other models :/

@Varkoyote Curious if these presets work any better:

Context size: 8192

TextGen preset:
https://files.catbox.moe/llkdu8.json

Instruct preset:
https://files.catbox.moe/j8av02.json

Thanks for giving your feedback on the performance so far.

I use a crazy temp of 5, min-p of 0.3, and a smoothing factor of 2.1 for most models lol.
That ends up working for most; Yi-based ones seem to like smoothing at 1.5.
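Roughly speaking, min-p is what makes such a high temperature survivable: after temperature scaling, only tokens whose probability is at least `min_p` times the top token's probability stay in the pool. A rough sketch of that filter (not SillyTavern's actual sampler; the smoothing factor is omitted here):

```python
import math

def min_p_filter(logits, temperature=5.0, min_p=0.3):
    """Apply temperature, then drop tokens below min_p * p(top token)
    and renormalize the survivors (min-p sampling, simplified)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Dynamic cutoff relative to the most likely token.
    cutoff = min_p * max(probs)
    kept = {i: p for i, p in enumerate(probs) if p >= cutoff}
    z = sum(kept.values())
    return {i: p / z for i, p in kept.items()}
```

Even at temperature 5, a min-p of 0.3 prunes the long tail aggressively, which is why the combination stays coherent on most models.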


Thank you! It was my prompting that wasn't well adapted, as well as an English mistake on my part... the model seems pretty good!
