---
base_model: mistralai/Mistral-7B-v0.1
tags:
  - Mistral
  - instruct
  - finetune
  - chatml
  - DPO
  - RLHF
  - gpt4
  - synthetic data
  - distillation
model-index:
  - name: Nous-Hermes-2-Mistral-7B-DPO
    results: []
license: apache-2.0
language:
  - en
datasets:
  - teknium/OpenHermes-2.5
---

# Nous-Hermes-2-Mistral-7B-DPO

I converted NousResearch/Nous-Hermes-2-Mistral-7B-DPO to GGUF and quantized it to my favorite quantization types. See their original model card for all the details.

I quickly quantized this model using a modified version of AutoGGUF by Maxime Labonne.
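
For reference, here is a minimal sketch of the llama.cpp conversion and quantization steps that AutoGGUF automates. The paths and the Q5_K_M choice are illustrative, not the exact notebook commands, and the script names vary between llama.cpp versions:

```bash
# Convert the Hugging Face checkpoint to an f16 GGUF file
python convert.py Nous-Hermes-2-Mistral-7B-DPO \
  --outfile nous-hermes-2-mistral-7b-dpo.f16.gguf \
  --outtype f16

# Quantize the f16 GGUF to Q5_K_M (one of several K-quant types)
./quantize nous-hermes-2-mistral-7b-dpo.f16.gguf \
  nous-hermes-2-mistral-7b-dpo.Q5_K_M.gguf Q5_K_M
```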

Here is my Ollama modelfile. According to llama.cpp, the model was trained with a 32k token context, but I set num_ctx to 16k in the modelfile so that 16 GB Macs can still run it.

```
FROM ./nous-hermes-2-mistral-7b-dpo.Q5_K_M.gguf
PARAMETER num_ctx 16384
TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
PARAMETER stop "<|im_start|>"
PARAMETER stop "<|im_end|>"
```
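
To use it, save the contents above to a file (e.g. `Modelfile`) next to the downloaded GGUF and register it with Ollama. The model name below is just an example:

```bash
# Build the Ollama model from the modelfile in the current directory
ollama create nous-hermes-2-mistral-7b-dpo -f Modelfile

# Start chatting with it
ollama run nous-hermes-2-mistral-7b-dpo
```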