teknium-open-hermes-2.5-mistral-gguf

teknium-open-hermes-2.5-mistral-gguf is a GGUF Q4_K_M (int4) quantized version of teknium's popular OpenHermes fine-tune of Mistral, providing a very fast, small-footprint inference implementation.

teknium-open-hermes-2.5-mistral is a leading chat fine-tuned version of Mistral 7B.
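
As a sketch of how the quantized file might be run locally, the snippet below loads the GGUF with llama-cpp-python and issues a single chat request. The file name, context size, and thread count are assumptions for illustration, not values taken from this card.

```python
# Minimal sketch of local inference with llama-cpp-python, assuming the
# Q4_K_M GGUF file has already been downloaded from the Hugging Face repo.
# The file name below is an assumption, not confirmed by the model card.
from llama_cpp import Llama

llm = Llama(
    model_path="openhermes-2.5-mistral-7b.Q4_K_M.gguf",  # assumed local path
    n_ctx=4096,     # context window; adjust as needed
    n_threads=8,    # set to the number of physical CPU cores
)

# General-purpose chat via the OpenAI-style chat completion helper.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what GGUF quantization does."},
    ],
    max_tokens=256,
    temperature=0.7,
)

print(response["choices"][0]["message"]["content"])
```

Because the weights are int4 quantized, this runs comfortably on CPU-only machines with roughly 5–6 GB of free RAM, at the cost of a small loss in output quality relative to the full-precision parent model.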

Model Description

  • Developed by: teknium
  • Quantized by: llmware
  • Model type: mistral-7b
  • Parameters: 7 billion
  • Model Parent: teknium/OpenHermes-2.5-Mistral-7B
  • Language(s) (NLP): English
  • License: Apache 2.0
  • Uses: General purpose chat
  • RAG Benchmark Accuracy Score: NA
  • Quantization: int4

Model Card Contact

  • llmware on GitHub
  • llmware on Hugging Face
  • llmware website
