Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

Mixtral_11Bx2_MoE_19B - GGUF
- Model creator: https://huggingface.co/cloudyu/
- Original model: https://huggingface.co/cloudyu/Mixtral_11Bx2_MoE_19B/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Mixtral_11Bx2_MoE_19B.Q2_K.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q2_K.gguf) | Q2_K | 6.58GB |
| [Mixtral_11Bx2_MoE_19B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.IQ3_XS.gguf) | IQ3_XS | 7.34GB |
| [Mixtral_11Bx2_MoE_19B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.IQ3_S.gguf) | IQ3_S | 7.75GB |
| [Mixtral_11Bx2_MoE_19B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q3_K_S.gguf) | Q3_K_S | 7.73GB |
| [Mixtral_11Bx2_MoE_19B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.IQ3_M.gguf) | IQ3_M | 7.94GB |
| [Mixtral_11Bx2_MoE_19B.Q3_K.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q3_K.gguf) | Q3_K | 8.59GB |
| [Mixtral_11Bx2_MoE_19B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q3_K_M.gguf) | Q3_K_M | 8.59GB |
| [Mixtral_11Bx2_MoE_19B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q3_K_L.gguf) | Q3_K_L | 9.32GB |
| [Mixtral_11Bx2_MoE_19B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.IQ4_XS.gguf) | IQ4_XS | 9.66GB |
| [Mixtral_11Bx2_MoE_19B.Q4_0.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q4_0.gguf) | Q4_0 | 10.09GB |
| [Mixtral_11Bx2_MoE_19B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.IQ4_NL.gguf) | IQ4_NL | 10.19GB |
| [Mixtral_11Bx2_MoE_19B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q4_K_S.gguf) | Q4_K_S | 10.17GB |
| [Mixtral_11Bx2_MoE_19B.Q4_K.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q4_K.gguf) | Q4_K | 10.79GB |
| [Mixtral_11Bx2_MoE_19B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q4_K_M.gguf) | Q4_K_M | 10.79GB |
| [Mixtral_11Bx2_MoE_19B.Q4_1.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q4_1.gguf) | Q4_1 | 11.19GB |
| [Mixtral_11Bx2_MoE_19B.Q5_0.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q5_0.gguf) | Q5_0 | 12.3GB |
| [Mixtral_11Bx2_MoE_19B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q5_K_S.gguf) | Q5_K_S | 12.3GB |
| [Mixtral_11Bx2_MoE_19B.Q5_K.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q5_K.gguf) | Q5_K | 12.67GB |
| [Mixtral_11Bx2_MoE_19B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q5_K_M.gguf) | Q5_K_M | 12.67GB |
| [Mixtral_11Bx2_MoE_19B.Q5_1.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q5_1.gguf) | Q5_1 | 13.41GB |
| [Mixtral_11Bx2_MoE_19B.Q6_K.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q6_K.gguf) | Q6_K | 14.66GB |
| [Mixtral_11Bx2_MoE_19B.Q8_0.gguf](https://huggingface.co/RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf/blob/main/Mixtral_11Bx2_MoE_19B.Q8_0.gguf) | Q8_0 | 18.99GB |

Original model description:
---
license: cc-by-nc-4.0
---

# Mixtral MOE 2x10.7B

[One of the best MoE models, as reviewed by the Reddit community](https://www.reddit.com/r/LocalLLaMA/comments/1916896/llm_comparisontest_confirm_leaderboard_big_news/)

MoE of the following models:

* [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct)
* [jeonsworld/CarbonVillain-en-10.7B-v1](https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v1)

Local test:

* hf (pretrained=cloudyu/Mixtral_11Bx2_MoE_19B), gen_kwargs: (None), limit: None, num_fewshot: 10, batch_size: auto (32)

| Tasks     | Version | Filter | n-shot | Metric   |  Value |    | Stderr |
|-----------|---------|--------|-------:|----------|-------:|----|-------:|
| hellaswag | Yaml    | none   |     10 | acc      | 0.7142 | ±  | 0.0045 |
|           |         | none   |     10 | acc_norm | 0.8819 | ±  | 0.0032 |
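The numbers above come from EleutherAI's lm-evaluation-harness. A minimal sketch for reproducing them, assuming the v0.4+ Python API (`pip install lm-eval`); exact argument names may differ between harness releases:

```python
# Hedged sketch: re-run the 10-shot hellaswag evaluation from the table above.
# Assumes lm-evaluation-harness v0.4+, which exposes simple_evaluate().
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # Hugging Face transformers backend
    model_args="pretrained=cloudyu/Mixtral_11Bx2_MoE_19B",
    tasks=["hellaswag"],
    num_fewshot=10,
    batch_size="auto",
)
print(results["results"]["hellaswag"])  # acc / acc_norm with stderr
```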
GPU code example

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_path = "cloudyu/Mixtral_11Bx2_MoE_19B"

tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
# load_in_4bit=True quantizes the weights on the fly via bitsandbytes,
# so the ~19B-parameter model fits on a single GPU
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float32,
    device_map="auto",
    local_files_only=False,
    load_in_4bit=True,
)
print(model)

# simple REPL: an empty prompt exits the loop
prompt = input("please input prompt:")
while len(prompt) > 0:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
    generation_output = model.generate(
        input_ids=input_ids,
        max_new_tokens=500,
        repetition_penalty=1.2,
    )
    print(tokenizer.decode(generation_output[0]))
    prompt = input("please input prompt:")
```

CPU code example

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_path = "cloudyu/Mixtral_11Bx2_MoE_19B"

tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
# full-precision weights on the CPU; no GPU required, but generation is slow
# and the model needs enough RAM to hold ~19B fp32 parameters
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float32,
    device_map="cpu",
    local_files_only=False,
)
print(model)

# simple REPL: an empty prompt exits the loop
prompt = input("please input prompt:")
while len(prompt) > 0:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    generation_output = model.generate(
        input_ids=input_ids,
        max_new_tokens=500,
        repetition_penalty=1.2,
    )
    print(tokenizer.decode(generation_output[0]))
    prompt = input("please input prompt:")
```
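The GGUF files in the table at the top are meant for llama.cpp-based runtimes rather than transformers. A minimal sketch using the llama-cpp-python bindings (`pip install llama-cpp-python`); the repo id and filename come from the table above, but the API surface is assumed from recent llama-cpp-python releases and may change, and the Q4_K_M file is an arbitrary pick:

```python
# Hedged sketch: download one of the quantized GGUF files listed above from
# the Hub and run it locally with llama-cpp-python.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="RichardErkhov/cloudyu_-_Mixtral_11Bx2_MoE_19B-gguf",
    filename="Mixtral_11Bx2_MoE_19B.Q4_K_M.gguf",  # any quant from the table works
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if built with CUDA; 0 = CPU only
)

out = llm(
    "What is a mixture-of-experts model?",
    max_tokens=500,
    repeat_penalty=1.2,  # mirrors repetition_penalty=1.2 in the examples above
)
print(out["choices"][0]["text"])
```

Smaller quants from the table trade quality for memory: Q2_K fits in roughly 7GB, while Q8_0 needs about 19GB plus context overhead.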