#llama-3 #roleplay

Version 2 files uploaded!

GGUF-IQ-Imatrix quants for cgato/L3-TheSpice-8b-v0.8.3.

These quants were made after the fixes from llama.cpp/pull/6920.
Use KoboldCpp version 1.64 or higher.
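
If you prefer to load the quants outside KoboldCpp, a minimal sketch with llama-cpp-python might look like this (the filename and settings below are assumptions, not part of this upload; use whichever quant file you downloaded):

```python
# Minimal sketch, assuming a llama-cpp-python build based on a llama.cpp
# recent enough to include the pull/6920 fixes. The filename is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="L3-TheSpice-8b-v0.8.3-Q4_K_M.gguf",  # hypothetical quant filename
    n_ctx=8192,       # Llama 3 8B context window
    n_gpu_layers=-1,  # offload all layers to GPU if available
)

out = llm("Hello there!", max_tokens=64)
print(out["choices"][0]["text"])
```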

Prompt formatting...
The prompt format is relatively simple; the author seems to recommend the Default context preset with Instruct Mode disabled.
I recommend reading the original model card for more information.


Original model information by the author:

Now not overtrained and with the tokenizer fix to base llama3. Trained for 3 epochs.

The latest TheSpice, dipped in Mama Liz's LimaRP Oil. I've focused on making the model more flexible and providing a more unique experience. I'm still working on cleaning up my dataset, but I've shrunk it down a lot to focus on a "less is more" approach. This is ultimately a return to form of the way I used to train Thespis, with more of a focus on a small hand-edited dataset.

Datasets Used

  • Capybara
  • Claude Multiround 30k
  • Augmental
  • ToxicQA
  • Yahoo Answers
  • Airoboros 3.1
  • LimaRP

Features ( Examples from 0.1.1 because I'm too lazy to take new screenshots. It's tested though. )

Narration

If you request information on objects or characters in the scene, the model will narrate it to you, most of the time without moving the story forward.

You can look at almost anything, as long as you end your message with "What do I see?"


You can also request to know what a character is thinking or planning.


You can ask for a quick summary of the character as well, before the model continues the conversation as normal.

Prompt Format: Chat (the default Ooba template and SillyTavern template)


If you're using Ooba in verbose mode as a server, you can check whether your console is logging something that looks like this:

{System Prompt}

Username: {Input}
BotName: {Response}
Username: {Input}
BotName: {Response}
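
If you are scripting against a backend directly rather than using Ooba or SillyTavern, here is a minimal sketch of assembling a prompt in this format (the system prompt, names, and messages are placeholders, not from the model card):

```python
# Minimal sketch: build a prompt string matching the chat format above.
# System prompt, names, and messages are placeholders.
def build_prompt(system_prompt, turns, bot_name="BotName"):
    """turns is a list of (speaker, text) tuples in conversation order."""
    lines = [system_prompt, ""]
    for speaker, text in turns:
        lines.append(f"{speaker}: {text}")
    # End with the bot's name so the model replies as the character.
    lines.append(f"{bot_name}:")
    return "\n".join(lines)

prompt = build_prompt(
    "You are BotName, a character in an ongoing roleplay with Username.",
    [
        ("Username", "Hello!"),
        ("BotName", "Hi there. What brings you here?"),
        ("Username", "I look around the room. What do I see?"),
    ],
)
print(prompt)
```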

Presets

All of the examples above were generated with the SillyTavern preset below.

Recommended Silly Tavern Preset -> (Temp: 1.25, MinP: 0.1, RepPen: 1.05)

This is a roughly equivalent Kobold Horde Preset.

Recommended Kobold Horde Preset -> MinP
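
For direct API use, the same sampler values can be passed along these lines. This is a sketch using llama-cpp-python; it assumes the `llm` and `prompt` objects from the sketches above and a build recent enough to expose `min_p`, and other backends (such as the KoboldCpp API) use different parameter names:

```python
# Minimal sketch: apply the recommended samplers (Temp 1.25, MinP 0.1, RepPen 1.05).
# Assumes `llm` and `prompt` from the earlier sketches.
out = llm(
    prompt,
    max_tokens=300,
    temperature=1.25,
    min_p=0.1,
    repeat_penalty=1.05,
    top_p=1.0,           # neutralize top-p so MinP does the filtering
    top_k=0,             # neutralize top-k
    stop=["Username:"],  # stop before the model writes the user's next turn
)
print(out["choices"][0]["text"])
```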

Disclaimer

Please prompt responsibly and take anything outputted by any Language Model with a huge grain of salt. Thanks!

GGUF quants available: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit.
Model size: 8.03B params
Architecture: llama
