My rating of the models
Oh, you are the first researcher I've come across who diligently combines models.
Thank you, we'll try. I haven't tried your models even once yet. If you are interested in my opinion, at the moment I could find only two creative models that can write beautifully. The rest either have a dead boring style or are highly specialized ones where a step to the left or a step to the right means execution. And then there are the censored zombies with no living life left in the brain.
Here is my rating of the models:
1st place:
22b-mergellama2-22b-chat-wizard-uncensored-Dendrite-22Bchk2-F16
https://huggingface.co/grimpep/22b-mergellama2-22b-chat-wizard-uncensored-Dendrite-22Bchk2-F16
and 4-bit:
https://huggingface.co/grimpep/22b-mergellama2-22b-chat-wizard-uncensored-Dendrite-22Bchk2-F16-GPTQ
It is on the verge of insanity but very creative even after quantization down to 4 bits; it is simply out of competition.
2nd place:
Wizard-Vicuna-13B-Uncensored-SuperHOT-8K-fp16
https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-SuperHOT-8K-fp16
Unfortunately, in this neural network the creative part dies if quantization drops below 8 bits.
Both models get very dumb if you use --xformers.
If all the rubbish (programming skills, rules for drafting documents, piles of useless reference information) were cut out of the brain of 22b-mergellama2-22b-chat-wizard-uncensored-Dendrite-22Bchk2, it would be incredible. As it is, alas, that brain only sparkles notably now and then. Even so, for some reason it understands the tasks it is given better than any other model. The rest of the models have a bad habit of not following what you ask them to do, and they are not as creative or as beautiful.
Eh, I can't wait to properly dig into the insides of neural models myself. For now my knowledge is clearly not enough for this. I don't even understand how you connect them, or how you see what is left alive inside after the gluing.
Your phrasing is very interesting.
I apologize for the quality of the translation in the last post. I was too lazy to write in English and used a translator without proofreading. But the main thing is that you all understand me.
My list of models that can work reasonably well is significantly larger. For example, yesterday I discovered LosslessMegaCoder-Llama2-13B-Mini-8bit-128g-GPTQ. It pleasantly surprised me, although not by every criterion. I can't try all the models, but I've already tried quite a few.
I deleted a dozen models to get away from that horror, so I don't even remember their names. Here is my small list of models that I have already tried and kept to take a closer look at:
22b-mergellama2-22b-chat-wizard-uncensored-Dendrite-22Bchk2-F16
22b-mergellama2-22b-chat-wizard-uncensored-Dendrite-22Bchk2-4bit-32g-GPTQ
airoboros-33b-gpt4-1.4.1-NTK-16384-GPTQ
airoboros-l2-13B-gpt4-1.4.1-GPTQ_4bit
airoboros-l2-13B-gpt4-1.4.1-GPTQ_8bit
22b-mergellama2-22b-chat-wizard-uncensored-GPTQ-4bit-32g
WizardLM-Uncensored-SuperCOT-StoryTelling-30b-SuperHOT-8k-4bit-32g
wizard-vicuna-30b-uncensored-awq-4bit-g128
L2-MythoMax22b-instruct-Falseblock-fp16
llama2_13b_chat_uncensored-fp16
Luna-AI-Llama2-7b-Uncensored-fp16
Luna-AI-Llama2-7b-Uncensored-GPTQ_4bit_128g
Luna-AI-Llama2-7b-Uncensored-GPTQ_8bit_128g
OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ_4bit
OpenAssistant-Llama2-13B-Orca-8K-3319-GPTQ_8bit
Starcoderplus-Guanaco-GPT4-15B-V1.0-GPTQ_4bit
Starcoderplus-Guanaco-GPT4-15B-V1.0-GPTQ_8bit
Starcoderplus-Guanaco-GPT4-15B-V1.0-fp16
Wizard-Vicuna-7B-Uncensored-SuperHOT-8K-fp16
Wizard-Vicuna-13B-Uncensored-GPTQ-4bit-128g
Wizard-Vicuna-13B-Uncensored-SuperHOT-8K-fp16
Wizard-Vicuna-13B-Uncensored-SuperHOT-8K-GPTQ
Wizard-Vicuna-30B-Uncensored-GPTQ-4bit-128g
WizardCoder-15B-1.0-GPTQ-4bit-128g
WizardCoder-Guanaco-15B-V1.1-GPTQ-8bit-128g
WizardCoder-Guanaco-15B-V1.1-GPTQ_4bit_128g
WizardLM-1.0-Uncensored-Llama2-13B-GPTQ
WizardLM-Uncensored-Falcon-40B-GPTQ
WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ-4bit-128g
llama2-22B-daydreamer-v2-4bit-128g-GPTQ
Llama2-22B-Daydreamer-v3-4bit-128g-GPTQ
llama2-22b-daydreamer-v3-fp16
llama2-22B-GPLATTY-fp16
LosslessMegaCoder-Llama2-13B-Mini-4bit-128g-GPTQ
LosslessMegaCoder-Llama2-13B-Mini-8bit-128g-GPTQ
airoboros-33B-gpt4-1.4-GPTQ
Llama-2-7b-chat-fp16
Llama-2-13B-chat-GPTQ_8bit
wizard-vicuna-13B-GPTQ-8bit-128g
Wizard-Vicuna-30B-Superhot-8K-GPTQ_4b_1g
Primary criteria for me:
- How well the model sticks to the task in each request iteration.
- Remembering and using the required character names (when writing literature) or variable names (when programming).
- Artistic style. The model must produce text that is easy to read and at the same time beautiful and pleasant-sounding: not too dry, and not overloaded with bows and patterns.
- The creative part of the model. How originally the text can be rewritten using the task as a skeleton. Many models quote the task text one-to-one, or throw parts of the task into the text without even linking them to the main text.
- The residual creative part of the model. What remains after quantization to 4 bits? Really cool models easily withstand quantization down to 4 bits without significantly losing their positive qualities.
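As a rough illustration of why that last criterion matters in practice, here is a back-of-the-envelope sketch of how much weight storage shrinks when going from fp16 to 4-bit. This is not tied to any specific library, and it ignores activation memory, KV cache, and group-size overhead; the 22B parameter count is just an assumption matching the models above.

```python
def approx_weight_bytes(n_params: float, bits: int) -> float:
    """Approximate bytes needed to store only the weights of a model
    with n_params parameters at the given bit width (overhead ignored)."""
    return n_params * bits / 8

# A hypothetical 22B-parameter model:
fp16_size = approx_weight_bytes(22e9, 16)  # ~44 GB
int4_size = approx_weight_bytes(22e9, 4)   # ~11 GB
print(f"fp16: {fp16_size / 1e9:.0f} GB, 4-bit: {int4_size / 1e9:.0f} GB")
```

So a 4-bit quantization cuts weight storage to a quarter of fp16, which is exactly why a model that survives it without losing its creative qualities is so valuable.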
To be honest, I still don't want to bother with a proper rating of the models and test them all honestly, in exactly the same way, with the same number of iterations, and then build a rating table out of it. So my conclusions are pretty subjective. But maybe they will come in handy for someone.
Here is my rating of the models:
1st place:
22b-mergellama2-22b-chat-wizard-uncensored-Dendrite-22Bchk2-F16
https://huggingface.co/grimpep/22b-mergellama2-22b-chat-wizard-uncensored-Dendrite-22Bchk2-F16
and 4-bit:
https://huggingface.co/grimpep/22b-mergellama2-22b-chat-wizard-uncensored-Dendrite-22Bchk2-F16-GPTQ
2nd place:
Wizard-Vicuna-13B-Uncensored-SuperHOT-8K-fp16
https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-SuperHOT-8K-fp16
Unfortunately, in this neural network, the creative part dies when quantization is below 8 bits.
3rd place:
Wizard-Vicuna-30B-Uncensored-4bit-128g-GPTQ
4th place:
WizardLM-Uncensored-SuperCOT-StoryTelling-30B-GPTQ-4bit-128g
The rest of the models have not yet earned a place in the rating of useful models. Perhaps on closer examination it will turn out that they can do more than they showed on the first attempts.