license: apache-2.0

NeuralHermes-2.5-Mistral-7B

Description

This repo contains GGUF format model files for NeuralHermes-2.5-Mistral-7B.

Files Provided

| Name | Quant | Bits | File Size | Remark |
| ---- | ----- | ---- | --------- | ------ |
| neuralhermes-2.5-mistral-7b.IQ3_S.gguf | IQ3_S | 3 | 3.18 GB | 3.44 bpw quantization |
| neuralhermes-2.5-mistral-7b.IQ3_M.gguf | IQ3_M | 3 | 3.28 GB | 3.66 bpw quantization mix |
| neuralhermes-2.5-mistral-7b.Q4_0.gguf | Q4_0 | 4 | 4.11 GB | 3.56G, +0.2166 ppl |
| neuralhermes-2.5-mistral-7b.IQ4_NL.gguf | IQ4_NL | 4 | 4.16 GB | 4.25 bpw non-linear quantization |
| neuralhermes-2.5-mistral-7b.Q4_K_M.gguf | Q4_K_M | 4 | 4.37 GB | 3.80G, +0.0532 ppl |
| neuralhermes-2.5-mistral-7b.Q5_K_M.gguf | Q5_K_M | 5 | 5.13 GB | 4.45G, +0.0122 ppl |
| neuralhermes-2.5-mistral-7b.Q6_K.gguf | Q6_K | 6 | 5.94 GB | 5.15G, +0.0008 ppl |
| neuralhermes-2.5-mistral-7b.Q8_0.gguf | Q8_0 | 8 | 7.70 GB | 6.70G, +0.0004 ppl |
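
As an illustration of how to load one of these files, here is a minimal sketch using llama-cpp-python. This assumes the library is installed and the Q4_K_M file has been downloaded locally; any GGUF from the table works the same way, and the file path is just an example.

from llama_cpp import Llama

# Load a locally downloaded GGUF file (adjust the path to your download location)
llm = Llama(
    model_path="./neuralhermes-2.5-mistral-7b.Q4_K_M.gguf",
    n_ctx=4096,           # context window to allocate
    n_gpu_layers=-1,      # offload all layers to GPU if a GPU-enabled build is installed
    chat_format="chatml"  # the model was fine-tuned with the ChatML template
)

# Chat-style generation through the ChatML template
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant chatbot."},
        {"role": "user", "content": "What is a Large Language Model?"}
    ],
    max_tokens=256,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])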

Parameters

| Path | Type | Architecture | rope_theta | sliding_window | max_position_embeddings |
| ---- | ---- | ------------ | ---------- | -------------- | ----------------------- |
| teknium/OpenHermes-2.5-Mistral-7B | mistral | MistralForCausalLM | 10000 | 4096 | 32768 |
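
These values can be read straight from the base model's configuration with transformers; a minimal sketch, assuming transformers is installed and the Hugging Face Hub is reachable:

from transformers import AutoConfig

# Fetch the config of the base model listed above
config = AutoConfig.from_pretrained("teknium/OpenHermes-2.5-Mistral-7B")

print(config.model_type)               # "mistral"
print(config.architectures)            # ["MistralForCausalLM"]
print(config.rope_theta)               # 10000
print(config.sliding_window)           # 4096
print(config.max_position_embeddings)  # 32768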

Benchmarks

Original Model Card


The original card's metadata:

  • language: en
  • license: apache-2.0
  • tags: mistral, instruct, finetune, chatml, gpt4, synthetic data, distillation, dpo, rlhf
  • datasets: mlabonne/chatml_dpo_pairs
  • base_model: teknium/OpenHermes-2.5-Mistral-7B

Open LLM Leaderboard results reported in the model-index (source: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralHermes-2.5-Mistral-7B):

| Task (Text Generation) | Dataset config / split | Few-shot | Metric | Value |
| ---------------------- | ---------------------- | -------- | ------ | ----- |
| AI2 Reasoning Challenge (ai2_arc) | ARC-Challenge / test | 25 | acc_norm | 66.55 |
| HellaSwag (hellaswag) | validation | 10 | acc_norm | 84.9 |
| MMLU (cais/mmlu) | all / test | 5 | acc | 63.32 |
| TruthfulQA (truthful_qa) | multiple_choice / validation | 0 | mc2 | 54.93 |
| Winogrande (winogrande) | winogrande_xl / validation | 5 | acc | 78.3 |
| GSM8k (gsm8k) | main / test | 5 | acc | 61.33 |

NeuralHermes 2.5 - Mistral 7B

NeuralHermes is teknium/OpenHermes-2.5-Mistral-7B further fine-tuned with Direct Preference Optimization (DPO) on the mlabonne/chatml_dpo_pairs dataset. It surpasses the original model on most benchmarks (see results below).
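
For context, DPO optimizes the policy directly on preference pairs against a frozen reference model. The standard objective from the DPO paper (stated here for reference, not taken from this repo) is:

$$\mathcal{L}_{\mathrm{DPO}} = -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]$$

where $y_w$ and $y_l$ are the chosen and rejected responses and $\beta$ is the coefficient set to 0.1 in the hyperparameters below.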

It is directly inspired by the RLHF process that the authors of Intel/neural-chat-7b-v3-1 used to improve performance. I used the same preference dataset, reformatted to the ChatML template (a sketch of the format follows).
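
For illustration, ChatML wraps every turn in <|im_start|> / <|im_end|> markers. Below is a minimal sketch of formatting one preference record this way; the record's field names and values are placeholders, not the actual dataset schema.

# Hypothetical record with a system prompt, a question, and two candidate answers
record = {
    "system": "You are a helpful assistant chatbot.",
    "question": "What is a Large Language Model?",
    "chosen": "A Large Language Model is ...",
    "rejected": "I don't know.",
}

def to_chatml(system: str, user: str) -> str:
    """Build a ChatML prompt ending with an open assistant turn."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = to_chatml(record["system"], record["question"])
chosen = record["chosen"] + "<|im_end|>\n"
rejected = record["rejected"] + "<|im_end|>\n"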

The code to train this model is available on Google Colab and GitHub. It required an A100 GPU for about an hour.

Quantized models

Results

Update: NeuralHermes-2.5 became the best Hermes-based model on the Open LLM leaderboard and one of the very best 7b models. 🎉


Teknium (author of OpenHermes-2.5-Mistral-7B) benchmarked the model (see his tweet).

Results are improved on every benchmark: AGIEval (from 43.07% to 43.62%), GPT4All (from 73.12% to 73.25%), and TruthfulQA.

(Benchmark breakdown images from the original card: AGIEval, GPT4All, TruthfulQA.)

You can check the Weights & Biases project here.

Usage

You can run this model using LM Studio or any other frontend.

You can also run this model using the following code:

import transformers
from transformers import AutoTokenizer

# Model repo to load (the full-precision weights this card is based on)
model_name = "mlabonne/NeuralHermes-2.5-Mistral-7B"

# Format the prompt with the model's ChatML chat template
message = [
    {"role": "system", "content": "You are a helpful assistant chatbot."},
    {"role": "user", "content": "What is a Large Language Model?"}
]
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False)

# Create pipeline
pipeline = transformers.pipeline(
    "text-generation",
    model=model_name,
    tokenizer=tokenizer
)

# Generate text
sequences = pipeline(
    prompt,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    num_return_sequences=1,
    max_length=200,
)
print(sequences[0]['generated_text'])

Training hyperparameters

LoRA:

  • r=16
  • lora_alpha=16
  • lora_dropout=0.05
  • bias="none"
  • task_type="CAUSAL_LM"
  • target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']

Training arguments:

  • per_device_train_batch_size=4
  • gradient_accumulation_steps=4
  • gradient_checkpointing=True
  • learning_rate=5e-5
  • lr_scheduler_type="cosine"
  • max_steps=200
  • optim="paged_adamw_32bit"
  • warmup_steps=100

DPOTrainer:

  • beta=0.1
  • max_prompt_length=1024
  • max_length=1536
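
Taken together, these settings map onto peft, transformers, and trl roughly as follows. This is a minimal sketch, assuming an older trl release in which DPOTrainer still accepts beta, max_prompt_length, and max_length directly (newer releases move them into DPOConfig); dataset column preparation is omitted, and output_dir is a placeholder.

import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_name = "teknium/OpenHermes-2.5-Mistral-7B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Preference pairs; mapping them to the prompt/chosen/rejected columns
# expected by DPOTrainer is omitted here.
dataset = load_dataset("mlabonne/chatml_dpo_pairs")["train"]

# LoRA settings from the list above
peft_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj'],
)

# Training arguments from the list above
training_args = TrainingArguments(
    output_dir="./neuralhermes-dpo",   # placeholder
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    gradient_checkpointing=True,
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    max_steps=200,
    optim="paged_adamw_32bit",
    warmup_steps=100,
)

# DPOTrainer settings from the list above
trainer = DPOTrainer(
    model,
    ref_model=None,        # with a PEFT config, trl derives the frozen reference internally
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    beta=0.1,
    max_prompt_length=1024,
    max_length=1536,
)
trainer.train()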