Quantization made by Richard Erkhov.

# Monarch-7B - GGUF

- Model creator: https://huggingface.co/mlabonne/
- Original model: https://huggingface.co/mlabonne/Monarch-7B/
| Name | Quant method | Size |
| --- | --- | --- |
| Monarch-7B.Q2_K.gguf | Q2_K | 2.53GB |
| Monarch-7B.IQ3_XS.gguf | IQ3_XS | 2.81GB |
| Monarch-7B.IQ3_S.gguf | IQ3_S | 2.96GB |
| Monarch-7B.Q3_K_S.gguf | Q3_K_S | 2.95GB |
| Monarch-7B.IQ3_M.gguf | IQ3_M | 3.06GB |
| Monarch-7B.Q3_K.gguf | Q3_K | 3.28GB |
| Monarch-7B.Q3_K_M.gguf | Q3_K_M | 3.28GB |
| Monarch-7B.Q3_K_L.gguf | Q3_K_L | 3.56GB |
| Monarch-7B.IQ4_XS.gguf | IQ4_XS | 3.67GB |
| Monarch-7B.Q4_0.gguf | Q4_0 | 3.83GB |
| Monarch-7B.IQ4_NL.gguf | IQ4_NL | 3.87GB |
| Monarch-7B.Q4_K_S.gguf | Q4_K_S | 3.86GB |
| Monarch-7B.Q4_K.gguf | Q4_K | 4.07GB |
| Monarch-7B.Q4_K_M.gguf | Q4_K_M | 4.07GB |
| Monarch-7B.Q4_1.gguf | Q4_1 | 4.24GB |
| Monarch-7B.Q5_0.gguf | Q5_0 | 4.65GB |
| Monarch-7B.Q5_K_S.gguf | Q5_K_S | 4.65GB |
| Monarch-7B.Q5_K.gguf | Q5_K | 4.78GB |
| Monarch-7B.Q5_K_M.gguf | Q5_K_M | 4.78GB |
| Monarch-7B.Q5_1.gguf | Q5_1 | 5.07GB |
| Monarch-7B.Q6_K.gguf | Q6_K | 5.53GB |
| Monarch-7B.Q8_0.gguf | Q8_0 | 7.17GB |
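To try one of these files locally, the sketch below loads a quant with the llama-cpp-python bindings. The local file path and the choice of Q4_K_M are assumptions; point it at whichever quant from the table you downloaded.

```python
# Minimal sketch: run a GGUF quant with llama-cpp-python
# (pip install llama-cpp-python). The model path is an assumption --
# replace it with the file you downloaded from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="Monarch-7B.Q4_K_M.gguf",  # hypothetical local path
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is a large language model?"}],
    max_tokens=256,
    temperature=0.7,
)
print(out["choices"][0]["message"]["content"])
```

Smaller quants (Q2_K, Q3_K_S) trade answer quality for memory; Q4_K_M and above are the usual balance points.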
Original model description:

```yaml
license: cc-by-nc-4.0
tags:
  - merge
  - mergekit
  - lazymergekit
base_model:
  - mlabonne/OmniTruthyBeagle-7B-v0
  - mlabonne/NeuBeagle-7B
  - mlabonne/NeuralOmniBeagle-7B
model-index:
  - name: Monarch-7B
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 73.04
            name: normalized accuracy
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Monarch-7B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 89.03
            name: normalized accuracy
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Monarch-7B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 64.41
            name: accuracy
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Monarch-7B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 77.35
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Monarch-7B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 84.61
            name: accuracy
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Monarch-7B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 69.07
            name: accuracy
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Monarch-7B
          name: Open LLM Leaderboard
```
# Monarch-7B

Update 13/02/24: Monarch-7B is the best-performing model on the YALL leaderboard.

Monarch-7B is a merge of the following models using LazyMergekit:

- mlabonne/OmniTruthyBeagle-7B-v0
- mlabonne/NeuBeagle-7B
- mlabonne/NeuralOmniBeagle-7B
## 🏆 Evaluation
The evaluation was performed using LLM AutoEval on the Nous suite. See the entire leaderboard here.
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
| --- | --- | --- | --- | --- | --- |
| Monarch-7B | 62.68 | 45.48 | 77.07 | 78.04 | 50.14 |
| teknium/OpenHermes-2.5-Mistral-7B | 52.42 | 42.75 | 72.99 | 52.99 | 40.94 |
| mlabonne/NeuralHermes-2.5-Mistral-7B | 53.51 | 43.67 | 73.24 | 55.37 | 41.76 |
| mlabonne/NeuralBeagle14-7B | 60.25 | 46.06 | 76.77 | 70.32 | 47.86 |
| eren23/dpo-binarized-NeuralTrix-7B | 62.5 | 44.57 | 76.34 | 79.81 | 49.27 |
| CultriX/NeuralTrix-7B-dpo | 62.5 | 44.61 | 76.33 | 79.8 | 49.24 |
## 🧩 Configuration
```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
    # no parameters necessary for base model
  - model: mlabonne/OmniTruthyBeagle-7B-v0
    parameters:
      density: 0.65
      weight: 0.36
  - model: mlabonne/NeuBeagle-7B
    parameters:
      density: 0.6
      weight: 0.34
  - model: mlabonne/NeuralOmniBeagle-7B
    parameters:
      density: 0.6
      weight: 0.3
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: bfloat16
random_seed: 0
```
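For intuition about what `density` and `weight` do here, the toy sketch below mimics the drop-and-rescale idea behind `dare_ties` on a single tensor: each fine-tune's delta from the base is randomly dropped at rate 1 - density, rescaled by 1/density, weighted, and then sign-elected TIES-style before summing. This is an illustration only, not mergekit's actual implementation.

```python
# Toy sketch of a dare_ties-style merge on one tensor (not mergekit's code).
import torch

def dare_ties_merge(base, finetunes, densities, weights, seed=0):
    torch.manual_seed(seed)
    deltas = []
    for ft, density, weight in zip(finetunes, densities, weights):
        delta = ft - base  # task vector relative to the base model
        # DARE: randomly keep a `density` fraction of entries, rescale the rest.
        mask = torch.bernoulli(torch.full_like(delta, density))
        deltas.append(weight * delta * mask / density)
    stacked = torch.stack(deltas)
    # TIES-style sign election: keep only contributions that agree with
    # the majority sign per parameter, to reduce interference.
    majority_sign = torch.sign(stacked.sum(dim=0))
    agree = torch.sign(stacked) == majority_sign
    return base + (stacked * agree).sum(dim=0)

base = torch.randn(4, 4)
fts = [base + 0.1 * torch.randn(4, 4) for _ in range(3)]
merged = dare_ties_merge(base, fts, [0.65, 0.6, 0.6], [0.36, 0.34, 0.3])
print(merged.shape)
```

The real merge is performed by mergekit across every tensor of the checkpoints, driven by the YAML config above.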
## 💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/Monarch-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build the prompt with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.
| Metric | Value |
| --- | --- |
| Avg. | 76.25 |
| AI2 Reasoning Challenge (25-Shot) | 73.04 |
| HellaSwag (10-Shot) | 89.03 |
| MMLU (5-Shot) | 64.41 |
| TruthfulQA (0-shot) | 77.35 |
| Winogrande (5-shot) | 84.61 |
| GSM8k (5-shot) | 69.07 |
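As a quick sanity check, the reported average is the unweighted mean of the six benchmark scores:

```python
# "Avg." on the leaderboard is the plain mean of the six benchmarks.
scores = [73.04, 89.03, 64.41, 77.35, 84.61, 69.07]
print(round(sum(scores) / len(scores), 2))  # 76.25
```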