Quantization made by Richard Erkhov.
Mixtral_7Bx4_MOE_24B - GGUF
- Model creator: https://huggingface.co/cloudyu/
- Original model: https://huggingface.co/cloudyu/Mixtral_7Bx4_MOE_24B/
| Name | Quant method | Size |
|---|---|---|
| Mixtral_7Bx4_MOE_24B.Q2_K.gguf | Q2_K | 8.23GB |
| Mixtral_7Bx4_MOE_24B.IQ3_XS.gguf | IQ3_XS | 9.21GB |
| Mixtral_7Bx4_MOE_24B.IQ3_S.gguf | IQ3_S | 9.73GB |
| Mixtral_7Bx4_MOE_24B.Q3_K_S.gguf | Q3_K_S | 9.72GB |
| Mixtral_7Bx4_MOE_24B.IQ3_M.gguf | IQ3_M | 9.92GB |
| Mixtral_7Bx4_MOE_24B.Q3_K.gguf | Q3_K | 10.78GB |
| Mixtral_7Bx4_MOE_24B.Q3_K_M.gguf | Q3_K_M | 10.78GB |
| Mixtral_7Bx4_MOE_24B.Q3_K_L.gguf | Q3_K_L | 11.68GB |
| Mixtral_7Bx4_MOE_24B.IQ4_XS.gguf | IQ4_XS | 12.14GB |
| Mixtral_7Bx4_MOE_24B.Q4_0.gguf | Q4_0 | 12.69GB |
| Mixtral_7Bx4_MOE_24B.IQ4_NL.gguf | IQ4_NL | 12.81GB |
| Mixtral_7Bx4_MOE_24B.Q4_K_S.gguf | Q4_K_S | 12.8GB |
| Mixtral_7Bx4_MOE_24B.Q4_K.gguf | Q4_K | 13.61GB |
| Mixtral_7Bx4_MOE_24B.Q4_K_M.gguf | Q4_K_M | 13.61GB |
| Mixtral_7Bx4_MOE_24B.Q4_1.gguf | Q4_1 | 14.09GB |
| Mixtral_7Bx4_MOE_24B.Q5_0.gguf | Q5_0 | 15.48GB |
| Mixtral_7Bx4_MOE_24B.Q5_K_S.gguf | Q5_K_S | 15.48GB |
| Mixtral_7Bx4_MOE_24B.Q5_K.gguf | Q5_K | 15.96GB |
| Mixtral_7Bx4_MOE_24B.Q5_K_M.gguf | Q5_K_M | 15.96GB |
| Mixtral_7Bx4_MOE_24B.Q5_1.gguf | Q5_1 | 16.88GB |
| Mixtral_7Bx4_MOE_24B.Q6_K.gguf | Q6_K | 18.45GB |
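These GGUF files can be run with any llama.cpp-compatible runtime. Below is a minimal sketch using the llama-cpp-python bindings; the quant file name and local path are illustrative assumptions — download whichever quant from the table fits your hardware first.

```python
# Minimal sketch (not from the original card): loading one of the GGUF
# quants above with llama-cpp-python. The local path is an assumption;
# download the file from this repo before running.
from llama_cpp import Llama

llm = Llama(
    model_path="./Mixtral_7Bx4_MOE_24B.Q4_K_M.gguf",  # any quant from the table
    n_ctx=2048,       # context window size
    n_gpu_layers=-1,  # offload all layers to GPU if built with GPU support; 0 for pure CPU
)

out = llm("Q: What is a mixture-of-experts model? A:", max_tokens=200)
print(out["choices"][0]["text"])
```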
Original model description:
license: cc-by-nc-4.0
This model has since been improved with DPO as cloudyu/Pluto_24B_DPO_200.
Mixtral MOE 4x7B
A Mixture-of-Experts (MoE) model built from the following models with mergekit:
- Q-bert/MetaMath-Cybertron-Starling
- mistralai/Mistral-7B-Instruct-v0.2
- teknium/Mistral-Trismegistus-7B
- meta-math/MetaMath-Mistral-7B
- openchat/openchat-3.5-1210
Metrics
- Average: 68.85
- ARC: 65.36
- HellaSwag: 85.23
- more details: https://huggingface.co/datasets/open-llm-leaderboard/results/blob/main/cloudyu/Mixtral_7Bx4_MOE_24B/results_2023-12-23T18-05-51.243288.json
GPU code example
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_path = "cloudyu/Mixtral_7Bx4_MOE_24B"

tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
# 4-bit loading requires the bitsandbytes package and a CUDA GPU;
# device_map='auto' spreads the layers across available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float32, device_map='auto', local_files_only=False, load_in_4bit=True
)
print(model)

prompt = input("please input prompt:")
while len(prompt) > 0:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
    generation_output = model.generate(
        input_ids=input_ids, max_new_tokens=500, repetition_penalty=1.2
    )
    print(tokenizer.decode(generation_output[0]))
    prompt = input("please input prompt:")
```
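Note that `load_in_4bit=True` relies on the bitsandbytes library; on recent transformers releases this flag is deprecated in favor of passing `quantization_config=BitsAndBytesConfig(load_in_4bit=True)`.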
CPU code example
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_path = "cloudyu/Mixtral_7Bx4_MOE_24B"

tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
# Full-precision float32 weights on CPU; no quantization is applied here.
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float32, device_map='cpu', local_files_only=False
)
print(model)

prompt = input("please input prompt:")
while len(prompt) > 0:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    generation_output = model.generate(
        input_ids=input_ids, max_new_tokens=500, repetition_penalty=1.2
    )
    print(tokenizer.decode(generation_output[0]))
    prompt = input("please input prompt:")
```
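Be aware that the CPU path loads the full float32 weights: at 4 bytes per parameter, a ~24B-parameter model needs on the order of 90-100 GB of RAM. The GGUF quantizations listed above are a much lighter option for CPU-only machines.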