
This repository hosts GGUF-IQ-Imatrix quants for cgato/TheSpice-7b-v0.1.1.

The return of a cult classic.

Quants:

    quantization_options = [
        "Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", "Q5_K_S",
        "Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"
    ]
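As a sketch of how a list like this is typically consumed (the file names, base-model name, and `./quantize` invocation below are assumptions for illustration, not taken from this repository), one command per target type can be generated in a loop:

```python
# Hypothetical batch-quantization driver. The binary path and GGUF file
# names are placeholders; only the quant type list comes from this card.
quantization_options = [
    "Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M", "Q5_K_S",
    "Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS",
]

def build_commands(base="TheSpice-7b-v0.1.1", imatrix="imatrix.dat"):
    """Build one llama.cpp quantize command per target quant type."""
    return [
        ["./quantize", "--imatrix", imatrix,
         f"{base}-F16.gguf", f"{base}-{q}.gguf", q]
        for q in quantization_options
    ]

for cmd in build_commands():
    print(" ".join(cmd))
```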

What does "Imatrix" mean?

It stands for Importance Matrix, a technique used to improve the quality of quantized models. The Imatrix is calculated based on calibration data, and it helps determine the importance of different model activations during the quantization process. The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance, especially when the calibration data is diverse. [1] [2]

For imatrix data generation, kalomaze's groups_merged.txt with added roleplay chats was used. This was just to add a bit more diversity to the data.

Steps:

Base⇢ GGUF(F16)⇢ Imatrix-Data(F16)⇢ GGUF(Imatrix-Quants)

Using the latest llama.cpp at the time.
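The steps above can be sketched with llama.cpp's tools. Treat this as a rough outline under assumed file names; the exact script names and flags have changed across llama.cpp versions:

```shell
# Sketch of the pipeline: Base -> GGUF(F16) -> Imatrix-Data -> Imatrix-Quants.
# File names are placeholders, not taken from this repository.

# 1. Convert the base HF model to an F16 GGUF.
python convert.py ./TheSpice-7b-v0.1.1 --outtype f16 --outfile model-F16.gguf

# 2. Compute importance-matrix data from the calibration text.
./imatrix -m model-F16.gguf -f groups_merged.txt -o imatrix.dat

# 3. Produce an imatrix-aware quant (repeat per quant type).
./quantize --imatrix imatrix.dat model-F16.gguf model-Q4_K_M.gguf Q4_K_M
```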

Submitted model image: [image]

Original model information:

Officially rebranding Thespis to TheSpice. Why? Because it's a cooler, simpler name. I've focused on making the model more flexible and providing a more unique experience. I'm still working on cleaning up my dataset, but I've shrunk it down a lot to focus on a "less is more" approach. This is ultimately a return to form of the way I used to train Thespis, with more of a focus on a small, hand-edited dataset.

Datasets Used

  • Dolphin
  • Ultrachat
  • Capybara
  • Augmental
  • ToxicQA
  • Yahoo Answers
  • Airoboros 3.1

Features

Narration

If you request information on objects or characters in the scene, the model will narrate it to you, usually without moving the story forward.

You can look at almost anything, as long as you end your message with "What do I see?"


You can also request to know what a character is thinking or planning.


You can ask for a quick summary of the character as well, before continuing the conversation as normal.

Prompt Format: Chat (the default Ooba template and SillyTavern template)


If you're using Ooba in verbose mode as a server, you can check whether your console is logging something that looks like this:

{System Prompt}

Username: {Input}
BotName: {Response}
Username: {Input}
BotName: {Response}
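A minimal sketch of assembling that chat format programmatically (the function and default names here are placeholders, not part of any template engine):

```python
def build_prompt(system, turns, user="Username", bot="BotName"):
    """Assemble the plain chat format shown above: the system prompt,
    a blank line, then alternating user/bot lines."""
    lines = [system, ""]
    for user_msg, bot_reply in turns:
        lines.append(f"{user}: {user_msg}")
        lines.append(f"{bot}: {bot_reply}")
    return "\n".join(lines)

prompt = build_prompt(
    "You are a helpful roleplay partner.",
    [("Hello!", "Hi there."), ("What do I see?", "")],
)
print(prompt)
```

Leaving the final bot reply empty ends the prompt on `BotName: `, which is where the model would continue generating.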

Presets

All screenshots above were taken with the below SillyTavern Preset.

Recommended Silly Tavern Preset -> (Temp: 1.25, MinP: 0.1, RepPen: 1.05)

This is a roughly equivalent Kobold Horde Preset.

Recommended Kobold Horde Preset -> MinP
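For context on the MinP value in these presets: min-p sampling keeps only tokens whose probability is at least `min_p` times the top token's probability, then renormalizes. A rough illustration of the idea (not the actual SillyTavern or Kobold implementation):

```python
def min_p_filter(probs, min_p=0.1):
    """Keep tokens with probability >= min_p * max(probs),
    then renormalize the survivors to sum to 1."""
    threshold = min_p * max(probs.values())
    kept = {tok: p for tok, p in probs.items() if p >= threshold}
    total = sum(kept.values())
    return {tok: p / total for tok, p in kept.items()}

# With min_p=0.1, any token below 10% of the top token's
# probability is dropped before sampling.
dist = {"the": 0.5, "a": 0.3, "zebra": 0.01}
print(min_p_filter(dist, min_p=0.1))
```

A higher temperature like the recommended 1.25 flattens the distribution, and min-p then prunes the long tail that temperature alone would make too likely.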

Disclaimer

Please prompt responsibly and take anything outputted by any Language Model with a huge grain of salt. Thanks!
