
Quantization made by Richard Erkhov.

Github | Discord | Request more models

Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b - GGUF

| Name | Quant method | Size |
| ---- | ------------ | ---- |
| Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q2_K.gguf | Q2_K | 3.95GB |
| Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.IQ3_XS.gguf | IQ3_XS | 4.39GB |
| Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.IQ3_S.gguf | IQ3_S | 4.63GB |
| Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q3_K_S.gguf | Q3_K_S | 4.61GB |
| Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.IQ3_M.gguf | IQ3_M | 4.78GB |
| Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q3_K.gguf | Q3_K | 5.13GB |
| Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q3_K_M.gguf | Q3_K_M | 5.13GB |
| Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q3_K_L.gguf | Q3_K_L | 5.58GB |
| Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.IQ4_XS.gguf | IQ4_XS | 5.75GB |
| Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_0.gguf | Q4_0 | 6.0GB |
| Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.IQ4_NL.gguf | IQ4_NL | 6.06GB |
| Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_K_S.gguf | Q4_K_S | 6.04GB |
| Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_K.gguf | Q4_K | 6.38GB |
| Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_K_M.gguf | Q4_K_M | 6.38GB |
| Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q4_1.gguf | Q4_1 | 6.65GB |
| Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q5_0.gguf | Q5_0 | 7.31GB |
| Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q5_K_S.gguf | Q5_K_S | 7.31GB |
| Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q5_K.gguf | Q5_K | 7.5GB |
| Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q5_K_M.gguf | Q5_K_M | 7.5GB |
| Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q5_1.gguf | Q5_1 | 7.96GB |
| Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q6_K.gguf | Q6_K | 8.7GB |
| Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.Q8_0.gguf | Q8_0 | 11.27GB |
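As a rough guide to choosing among the files above, the sketch below picks the largest quant whose file fits in a given memory budget. The sizes are taken from the table; the helper name, the headroom default, and the idea of selecting by RAM budget are illustrative assumptions, not guidance from the card:

```python
# File sizes (GB) of the GGUF quants listed in the table above.
QUANTS = {
    "Q2_K": 3.95, "IQ3_XS": 4.39, "IQ3_S": 4.63, "Q3_K_S": 4.61,
    "IQ3_M": 4.78, "Q3_K_M": 5.13, "Q3_K_L": 5.58, "IQ4_XS": 5.75,
    "Q4_0": 6.0, "IQ4_NL": 6.06, "Q4_K_S": 6.04, "Q4_K_M": 6.38,
    "Q4_1": 6.65, "Q5_0": 7.31, "Q5_K_S": 7.31, "Q5_K_M": 7.5,
    "Q5_1": 7.96, "Q6_K": 8.7, "Q8_0": 11.27,
}

def pick_quant(ram_gb: float, headroom_gb: float = 1.5) -> str:
    """Return the largest quant that fits in ram_gb, leaving some headroom
    for the KV cache and runtime overhead (headroom value is a guess)."""
    budget = ram_gb - headroom_gb
    fitting = {name: size for name, size in QUANTS.items() if size <= budget}
    if not fitting:
        raise ValueError(f"No quant fits in {ram_gb} GB")
    return max(fitting, key=fitting.get)

quant = pick_quant(8.0)  # e.g. a machine with 8 GB available
filename = f"Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b.{quant}.gguf"
# The file could then be fetched with huggingface_hub (requires network;
# substitute the actual repo id of this card):
# from huggingface_hub import hf_hub_download
# path = hf_hub_download(repo_id="<repo_id>", filename=filename)
```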

Original model description:

License: apache-2.0

An 11B frankenmerge of teknium/OpenHermes-2.5-Mistral-7B and Intel/neural-chat-7b-v3-1.

GGUF: https://huggingface.co/TheBloke/Open-Hermes-2.5-neural-chat-3.1-frankenmerge-11b-GGUF

The merge was performed with the following configuration:

```yaml
- model: teknium/OpenHermes-2.5-Mistral-7B
  layer_range: [0, 8]
- model: Intel/neural-chat-7b-v3-1
  layer_range: [4, 12]
- model: teknium/OpenHermes-2.5-Mistral-7B
  layer_range: [9, 16]
- model: Intel/neural-chat-7b-v3-1
  layer_range: [13, 20]
- model: teknium/OpenHermes-2.5-Mistral-7B
  layer_range: [17, 24]
- model: Intel/neural-chat-7b-v3-1
  layer_range: [21, 28]
- model: teknium/OpenHermes-2.5-Mistral-7B
  layer_range: [25, 32]

merge_method: passthrough
```
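A quick sanity check on the layer arithmetic above, assuming the half-open `[start, end)` convention commonly used for mergekit `layer_range` slices (the convention is an assumption; the slice boundaries are from the config):

```python
# Slices as listed in the passthrough merge above: (model, start, end).
slices = [
    ("OpenHermes", 0, 8),
    ("neural-chat", 4, 12),
    ("OpenHermes", 9, 16),
    ("neural-chat", 13, 20),
    ("OpenHermes", 17, 24),
    ("neural-chat", 21, 28),
    ("OpenHermes", 25, 32),
]

# Assuming half-open ranges, each slice contributes end - start layers.
total_layers = sum(end - start for _, start, end in slices)
print(total_layers)  # 51 layers stacked from two 32-layer 7B donors
```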

Benchmarks are coming soon...

Format: GGUF · Model size: 11.4B params · Architecture: llama
