Experimental merges

#1 by MrDevolver

Hi, thank you for your contributions!

I like the idea of experimental model merges, but it would be nice if you could add at least a few notes about each merge, like the percentage of each model in the mix and the reasoning behind those choices, if any.

Also, I've noticed that for these 7B models you chose Q4_K_M. They are fairly small, so it would be nice if you could do Q6_K as well; I've had good experience with those, and they are still pretty fast. Just a suggestion.

Thanks for the feedback. I'll quantize a Q6_K version later; you can find more information in that repository.

But basically, I merged Nous Hermes Llama 2 7B with the Kimiko LoRA at its normal weight (1.0).
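For anyone who wants to reproduce something similar, here is a minimal sketch of that kind of merge using PEFT. The repo ids and output paths are assumptions for illustration, not necessarily the exact ones used for this model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumed repo ids, for illustration only.
base_id = "NousResearch/Nous-Hermes-llama-2-7b"
lora_id = "nRuaif/Kimiko_7B"

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, lora_id)  # applies the LoRA at its trained scale (weight 1.0)
merged = model.merge_and_unload()                 # bakes the adapter into the base weights
merged.save_pretrained("nous-hermes-kimiko-7b")
AutoTokenizer.from_pretrained(base_id).save_pretrained("nous-hermes-kimiko-7b")
```

And a rough sketch of the follow-up GGUF quantization step, assuming a local llama.cpp checkout; script and binary names vary between llama.cpp versions (newer builds call the tool `llama-quantize`):

```python
import subprocess

# Convert the merged HF model to an f16 GGUF, then quantize it to Q6_K.
subprocess.run(
    ["python", "convert.py", "nous-hermes-kimiko-7b",
     "--outtype", "f16", "--outfile", "nous-hermes-kimiko-7b-f16.gguf"],
    check=True,
)
subprocess.run(
    ["./quantize", "nous-hermes-kimiko-7b-f16.gguf",
     "nous-hermes-kimiko-7b.Q6_K.gguf", "Q6_K"],
    check=True,
)
```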
