This is Nous Hermes 2 Mistral 7B, quantized with an importance matrix (imatrix) to reduce the quality loss that quantization normally causes, and to make lower-memory quantization levels practical for smaller devices. Kalomaze's "groups_merged.txt" was used as the calibration data for the importance matrix, with the context set to 8,192.
Here's a chart that approximates the HellaSwag score (out of 1,000 tasks) and the RAM usage (measured with --no-mmap) in llama.cpp. The chart is incomplete, and because the tasks are randomized, the scores may be slightly imprecise:
| Quantization | HellaSwag | RAM (256 ctx) | RAM (512 ctx) | RAM (1024 ctx) | RAM (2048 ctx) | RAM (4096 ctx) | RAM (8192 ctx) |
|---|---|---|---|---|---|---|---|
| IQ1_S | 51.7% | 1.6 GiB | 1.6 GiB | 1.7 GiB | 1.8 GiB | 2.0 GiB | 2.5 GiB |
| IQ1_M | 60.5% | | | | | | |
| IQ2_XXS | 72.5% | 1.9 GiB | 1.9 GiB | 2.0 GiB | 2.1 GiB | 2.4 GiB | 2.9 GiB |
| IQ2_XS | 74.2% | 2.1 GiB | 2.1 GiB | 2.2 GiB | 2.3 GiB | 2.6 GiB | 3.1 GiB |
| IQ2_S | 76.8% | 2.2 GiB | 2.2 GiB | 2.3 GiB | 2.4 GiB | 2.7 GiB | 3.2 GiB |
| Q2_K (original) | 77.4% | 2.6 GiB | 2.6 GiB | 2.7 GiB | 2.8 GiB | 3.1 GiB | 3.6 GiB |
| Q2_K | 78.7% | | | | | | |
| IQ3_XXS | 79.7% | | | | | | |
| IQ3_XS | 80.6% | | | | | | |
| IQ3_S | 81.2% | | | | | | |
| IQ3_M | 81.1% | | | | | | |
| IQ4_XS | 82.0% | | | | | | |
| IQ4_NL | 82.0% | | | | | | |
| Q3_K_M (original) | 80.0% | 3.3 GiB | 3.4 GiB | 3.4 GiB | 3.6 GiB | 3.8 GiB | 4.3 GiB |
| Q3_K_M | 80.9% | | | | | | |
| Q4_K_M (original) | 81.8% | 4.1 GiB | 4.2 GiB | 4.2 GiB | 4.3 GiB | 4.6 GiB | 5.1 GiB |
| Q4_K_M | 81.9% | | | | | | |
| Q5_K_M (original) | 82.1% | 4.8 GiB | 4.9 GiB | 4.9 GiB | 5.1 GiB | 5.3 GiB | 5.8 GiB |
| Q5_K_M | 81.5% | | | | | | |
| Q6_K | 81.7% | 5.6 GiB | 5.6 GiB | 5.7 GiB | 5.8 GiB | 6.1 GiB | 6.6 GiB |
I don't recommend using IQ1_S. If you're that limited on memory, you may be better off with TinyDolphin-1.1B (HellaSwag: 59.0%) or Dolphin 2.6 Phi-2 (HellaSwag: 71.6%).
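If you just want to try one of these quants, here's a minimal sketch using the llama-cpp-python bindings. The filename and context size below are placeholders, not part of this repo; pick whichever quantization level fits your RAM budget from the table above:

```python
# A minimal sketch, assuming the llama-cpp-python bindings are installed
# (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="nous-hermes-2-mistral-7b-dpo.IQ2_XS.gguf",  # hypothetical filename
    n_ctx=2048,      # context length; RAM grows with this, as the table shows
    use_mmap=False,  # mirrors the --no-mmap condition the measurements used
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Hermes 2."},
        {"role": "user", "content": "Hello, who are you?"},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```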
The original GGUFs can be found at NousResearch/Nous-Hermes-2-Mistral-7B-DPO-GGUF. Original model card below.
Nous Hermes 2 - Mistral 7B - DPO
Model Description
Nous Hermes 2 on Mistral 7B DPO is the new flagship 7B Hermes! This model was DPO'd from Teknium/OpenHermes-2.5-Mistral-7B and has improved across the board on all benchmarks tested - AGIEval, BigBench Reasoning, GPT4All, and TruthfulQA.
The model prior to DPO was trained on 1,000,000 instructions/chats of GPT-4 quality or better, primarily synthetic data along with other high-quality datasets, available from the repository teknium/OpenHermes-2.5.
Thank you to FluidStack for sponsoring compute for this model!
Example Outputs
Describing Weather Patterns in Paris:
Making JSON Nested Lists
Roleplaying as a Taoist Master
Benchmark Results
Nous-Hermes 2 DPO on Mistral 7B is an improvement across the board on the benchmarks below compared to the original OpenHermes 2.5 model, as shown here:
GPT4All:
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5776|± |0.0144|
| | |acc_norm|0.6220|± |0.0142|
|arc_easy | 0|acc |0.8380|± |0.0076|
| | |acc_norm|0.8245|± |0.0078|
|boolq | 1|acc |0.8624|± |0.0060|
|hellaswag | 0|acc |0.6418|± |0.0048|
| | |acc_norm|0.8249|± |0.0038|
|openbookqa | 0|acc |0.3420|± |0.0212|
| | |acc_norm|0.4540|± |0.0223|
|piqa | 0|acc |0.8177|± |0.0090|
| | |acc_norm|0.8264|± |0.0088|
|winogrande | 0|acc |0.7466|± |0.0122|
Average: 73.72
AGIEval:
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2047|± |0.0254|
| | |acc_norm|0.2283|± |0.0264|
|agieval_logiqa_en | 0|acc |0.3779|± |0.0190|
| | |acc_norm|0.3932|± |0.0192|
|agieval_lsat_ar | 0|acc |0.2652|± |0.0292|
| | |acc_norm|0.2522|± |0.0287|
|agieval_lsat_lr | 0|acc |0.5216|± |0.0221|
| | |acc_norm|0.5137|± |0.0222|
|agieval_lsat_rc | 0|acc |0.5911|± |0.0300|
| | |acc_norm|0.5836|± |0.0301|
|agieval_sat_en | 0|acc |0.7427|± |0.0305|
| | |acc_norm|0.7184|± |0.0314|
|agieval_sat_en_without_passage| 0|acc |0.4612|± |0.0348|
| | |acc_norm|0.4466|± |0.0347|
|agieval_sat_math | 0|acc |0.3818|± |0.0328|
| | |acc_norm|0.3545|± |0.0323|
Average: 43.63
BigBench:
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.5579|± |0.0361|
|bigbench_date_understanding | 0|multiple_choice_grade|0.6694|± |0.0245|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3333|± |0.0294|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.2061|± |0.0214|
| | |exact_str_match |0.2256|± |0.0221|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3120|± |0.0207|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2114|± |0.0154|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.4900|± |0.0289|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.3600|± |0.0215|
|bigbench_navigate | 0|multiple_choice_grade|0.5000|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.6660|± |0.0105|
|bigbench_ruin_names | 0|multiple_choice_grade|0.4420|± |0.0235|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2766|± |0.0142|
|bigbench_snarks | 0|multiple_choice_grade|0.6630|± |0.0352|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.6653|± |0.0150|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.3190|± |0.0147|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2128|± |0.0116|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1737|± |0.0091|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.4900|± |0.0289|
Average: 41.94
TruthfulQA:
| Task |Version|Metric|Value | |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.3892|± |0.0171|
| | |mc2 |0.5642|± |0.0153|
Prompt Format
Nous Hermes 2 uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This is a more complex format than alpaca or sharegpt: special tokens denote the beginning and end of each turn, along with the role for that turn.
This format enables OpenAI endpoint compatibility; people familiar with the ChatGPT API will find the format familiar, since it is the same one used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a chat template, which means you can format messages using the `tokenizer.apply_chat_template()` method:
```python
messages = [
    {"role": "system", "content": "You are Hermes 2."},
    {"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(**gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This appends `<|im_start|>assistant\n` to your prompt, ensuring that the model continues with an assistant response.
To use the prompt format without a system prompt, simply omit the system message.
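To make the layout concrete, here is a minimal hand-rolled equivalent. The `to_chatml` helper is hypothetical and for illustration only; prefer `apply_chat_template()` in practice:

```python
def to_chatml(messages, add_generation_prompt=True):
    # Wrap each turn in ChatML's <|im_start|>/<|im_end|> special tokens,
    # with the role on the same line as the opening token.
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    # Open an assistant turn so the model continues as the assistant,
    # matching what add_generation_prompt=True does.
    if add_generation_prompt:
        parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

# Omitting the system message simply drops its block, as described above.
print(to_chatml([{"role": "user", "content": "Hello, who are you?"}]))
```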
When quantized versions of the model are released, I recommend using LM Studio for chatting with Nous Hermes 2. It is a GUI application that runs GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and it supports ChatML right out of the box. In LM Studio, simply select the ChatML Prefix in the settings side pane.
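Because the format matches the OpenAI chat API, a local OpenAI-compatible server (such as the one LM Studio can run, or llama.cpp's server) can also be queried with the standard `openai` Python client. A sketch, where the base URL, port, and model name are placeholders that depend on your server's settings:

```python
# Assumes an OpenAI-compatible local server is already running;
# base_url and model are placeholders, not values from this model card.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")
resp = client.chat.completions.create(
    model="local-model",  # placeholder; many local servers ignore this field
    messages=[
        {"role": "system", "content": "You are Hermes 2."},
        {"role": "user", "content": "Hello, who are you?"},
    ],
)
print(resp.choices[0].message.content)
```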
Inference Code
Here is example code using HuggingFace Transformers to run inference with the model (note: in 4-bit, it will require around 5 GB of VRAM):
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages

import torch
from transformers import LlamaTokenizer, MistralForCausalLM
import bitsandbytes, flash_attn  # imported to verify the packages are available

tokenizer = LlamaTokenizer.from_pretrained('NousResearch/Nous-Hermes-2-Mistral-7B-DPO', trust_remote_code=True)
model = MistralForCausalLM.from_pretrained(
    "NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,
    load_in_4bit=True,
    use_flash_attention_2=True
)

prompts = [
    """<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]

for chat in prompts:
    print(chat)
    input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
    generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
    # Decode only the newly generated tokens, skipping the prompt.
    response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(f"Response: {response}")
```
How to cite:
```bibtex
@misc{Nous-Hermes-2-Mistral-7B-DPO,
  url={https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO},
  title={Nous Hermes 2 Mistral 7B DPO},
  author={Teknium and theemozilla and karan4d and huemin_art}
}
```