
These are quants for an experimental model.

Available quants: Q4_K_M, Q4_K_S, IQ4_XS, Q5_K_M, Q5_K_S, Q6_K, Q8_0, IQ3_M, IQ3_S, IQ3_XXS

Original model weights:
https://huggingface.co/Nitral-AI/Eris_PrimeV4-Vision-7B


Vision/multimodal capabilities:

Example of how this works in practice in a roleplay chat:

(screenshot: roleplay chat using image captioning)


Recommended SillyTavern Image Captions extension settings:

(screenshot: SillyTavern Image Captions settings)


If you want to use vision functionality:

  • Make sure you are using the latest version of KoboldCpp.

To use the multimodal capabilities of this model, such as vision, you also need to load the specified mmproj file. It is hosted in this repository inside the mmproj folder.

  • You can load the mmproj by using the corresponding section in the interface:

image/png

  • For CLI users, you can load the mmproj file by adding the respective flag to your usual command:
    `--mmproj your-mmproj-file.gguf`
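Putting the steps above together, a KoboldCpp launch command might look like the sketch below. The model and mmproj filenames are placeholders, and the extra flags are optional examples; substitute your actual files and settings.

```shell
# Illustrative KoboldCpp invocation with vision support.
# File names below are placeholders -- use your downloaded quant
# and the mmproj file from this repository's mmproj folder.
python koboldcpp.py \
  --model Eris_PrimeV4-Vision-7B-Q4_K_M.gguf \
  --mmproj your-mmproj-file.gguf \
  --contextsize 8192
```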

Quantization information:

Steps performed:

Base ⇢ GGUF (F16) ⇢ Imatrix-Data (F16) ⇢ GGUF (Imatrix-Quants)
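The pipeline above can be sketched with llama.cpp's own tools. This is an illustrative outline, not the exact commands used here: the model path, output names, and calibration text file are placeholders, and tool names reflect current llama.cpp builds.

```shell
# 1. Base -> GGUF (F16): convert the HF weights to an F16 GGUF.
python convert_hf_to_gguf.py ./Eris_PrimeV4-Vision-7B \
  --outtype f16 --outfile model-f16.gguf

# 2. Imatrix data (F16): compute an importance matrix from
#    calibration text (placeholder file).
./llama-imatrix -m model-f16.gguf -f calibration.txt -o imatrix.dat

# 3. GGUF (Imatrix-Quants): quantize using the importance matrix,
#    e.g. to Q4_K_M; repeat for each target quant type.
./llama-quantize --imatrix imatrix.dat \
  model-f16.gguf model-Q4_K_M.gguf Q4_K_M
```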

Quantized using the latest llama.cpp available at the time.

GGUF metadata:

Model size: 7.24B params
Architecture: llama
Quant bit-widths available: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
