---
base_model: NeverSleep/Lumimaid-v0.2-12B
quantized_by: Lewdiculous
library_name: transformers
license: cc-by-nc-4.0
inference: false
language:
- en
tags:
- roleplay
- llama3
- sillytavern
---
# #roleplay #sillytavern #llama3
My GGUF-IQ-Imatrix quants for [**NeverSleep/Lumimaid-v0.2-12B**](https://huggingface.co/NeverSleep/Lumimaid-v0.2-12B).
I recommend checking their page for feedback and support.
> [!IMPORTANT]
> **Quantization process:** <br>
> Imatrix data was generated from the FP16-GGUF, while the quant conversions were made directly from the BF16-GGUF. <br>
> This is a bit more disk- and compute-intensive, but it hopefully avoids any losses during the conversion. <br>
> To run this model, please use the [**latest version of KoboldCpp**](https://github.com/LostRuins/koboldcpp/releases/latest). <br>
> If you notice any issues, let me know in the discussions.
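
If you'd rather script against the quant instead of using KoboldCpp, here is a minimal sketch with `llama-cpp-python` (a different backend than the one recommended above); the filename and settings are placeholders for illustration, not the exact files in this repo.

```python
# Minimal sketch, assuming llama-cpp-python is installed (pip install llama-cpp-python).
# The filename below is only an example -- point it at the quant you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Lumimaid-v0.2-12B-Q4_K_M-imat.gguf",  # example/placeholder filename
    n_ctx=8192,       # context window; adjust to your RAM/VRAM budget
    n_gpu_layers=-1,  # offload all layers to the GPU if you have one (0 = CPU only)
)
```
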
> [!NOTE]
> **Presets:** <br>
> Some compatible SillyTavern presets can be found [**here (Virt's Roleplay Presets - v1.9)**](https://huggingface.co/Virt-io/SillyTavern-Presets). <br>
> Check [**discussions such as this one**](https://huggingface.co/Virt-io/SillyTavern-Presets/discussions/5#664d6fb87c563d4d95151baa) and [**this one**](https://www.reddit.com/r/SillyTavernAI/comments/1dff2tl/my_personal_llama3_stheno_presets/) for other presets and samplers recommendations. <br>
> Lower temperatures are recommended by the authors, so make sure to experiment. <br>
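
As a purely illustrative starting point for "lower temperatures" (these numbers are my own guess, not values taken from the linked presets), something like the following is a reasonable place to begin experimenting:

```python
# Illustrative sampler values only -- not the linked SillyTavern presets.
# Exact parameter names vary by backend/frontend; tune from here.
sampler_settings = {
    "temperature": 0.7,     # keep this on the lower side, per the authors
    "min_p": 0.05,
    "top_p": 0.95,
    "repeat_penalty": 1.1,
}
```
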
<details>
<summary>⇲ Click here to expand/hide information – General chart with relative quant performances.</summary>

> [!NOTE]
> **Recommended read:** <br>
>
> [**"Which GGUF is right for me? (Opinionated)" by Artefact2**](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
>
> *Click the image to view full size.*
> !["Which GGUF is right for me? (Opinionated)" by Artefact2 - Firs Graph](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/fScWdHIPix5IzNJ8yswCB.webp)
</details>

> [!TIP]
> **Personal-support:** <br>
> I apologize for disrupting your experience. <br>
> Eventually I may be able to use a dedicated server for this, but for now hopefully these quants are helpful. <br>
> If you **want** and you are **able to**... <br>
> You can [**spare some change over here (Ko-fi)**](https://ko-fi.com/Lewdiculous). <br>
>
> **Author-support:** <br>
> You can support the authors [**at their pages**](https://ko-fi.com/undiai)/[**here**](https://ikaridevgit.github.io/).
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/qEH7KuSGfUGXSHeyWSwS-.png)
<details>
<summary>Original model card information.</summary>

## **Original card:**
## Lumimaid 0.2
<img src="https://cdn-uploads.huggingface.co/production/uploads/63ab1241ad514ca8d1430003/TUcHg7LKNjfo0sni88Ps7.png" alt="Image" style="display: block; margin-left: auto; margin-right: auto; width: 65%;">
<div style="text-align: center; font-size: 30px;">
<a href="https://huggingface.co/NeverSleep/Lumimaid-v0.2-8B">[8b]</a> -
<a href="https://huggingface.co/NeverSleep/Lumimaid-v0.2-12B">12b</a> -
<a href="https://huggingface.co/NeverSleep/Lumimaid-v0.2-70B">70b</a> -
<a href="https://huggingface.co/NeverSleep/Lumimaid-v0.2-123B">123b</a>
</div>
### This model is based on: [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)
Wandb: https://wandb.ai/undis95/Lumi-Llama-3-1-8B?nw=nwuserundis95
Lumimaid 0.1 -> 0.2 is a HUGE step up dataset-wise.
As some people have told us our models are sloppy, Ikari decided to say fuck it and nuke out all the chats with the most slop.
Our dataset has stayed the same since day one: we added data over time, cleaned it, and repeated. After not releasing a model for a while because we were never satisfied, we think it's time to come back!
## Prompt template: Llama-3-Instruct
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{output}<|eot_id|>
```
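
For completeness, a minimal sketch of filling this template by hand and generating with `llama-cpp-python` (the model path, system prompt, and user message are placeholders; the low temperature simply follows the recommendation earlier on this page):

```python
# Minimal sketch: format the Llama-3-Instruct template above and generate.
# The path and messages are placeholders, not files shipped with this card.
from llama_cpp import Llama

def build_prompt(system_prompt: str, user_input: str) -> str:
    # Mirrors the template above, leaving the assistant turn open for the model to fill.
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n"
        f"{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n"
    )

llm = Llama(model_path="Lumimaid-v0.2-12B-Q4_K_M-imat.gguf", n_ctx=8192)  # example path

out = llm(
    build_prompt("You are a roleplay assistant.", "Introduce yourself in character."),
    max_tokens=256,
    temperature=0.7,       # lower temperatures are recommended
    stop=["<|eot_id|>"],   # stop at the end-of-turn marker
)
print(out["choices"][0]["text"])
```
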
## Credits:
- Undi
- IkariDev
## Training data we used to make our dataset:
- [Epiculous/Gnosis](https://huggingface.co/Epiculous/Gnosis)
- [ChaoticNeutrals/Luminous_Opus](https://huggingface.co/datasets/ChaoticNeutrals/Luminous_Opus)
- [ChaoticNeutrals/Synthetic-Dark-RP](https://huggingface.co/datasets/ChaoticNeutrals/Synthetic-Dark-RP)
- [ChaoticNeutrals/Synthetic-RP](https://huggingface.co/datasets/ChaoticNeutrals/Synthetic-RP)
- [Gryphe/Sonnet3.5-SlimOrcaDedupCleaned](https://huggingface.co/datasets/Gryphe/Sonnet3.5-SlimOrcaDedupCleaned)
- [Gryphe/Opus-WritingPrompts](https://huggingface.co/datasets/Gryphe/Opus-WritingPrompts)
- [meseca/writing-opus-6k](https://huggingface.co/datasets/meseca/writing-opus-6k)
- [meseca/opus-instruct-9k](https://huggingface.co/datasets/meseca/opus-instruct-9k)
- [PJMixers/grimulkan_theory-of-mind-ShareGPT](https://huggingface.co/datasets/PJMixers/grimulkan_theory-of-mind-ShareGPT)
- [NobodyExistsOnTheInternet/ToxicQAFinal](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicQAFinal)
- [Undi95/toxic-dpo-v0.1-sharegpt](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-sharegpt)
- [cgato/SlimOrcaDedupCleaned](https://huggingface.co/datasets/cgato/SlimOrcaDedupCleaned)
- [kalomaze/Opus_Instruct_25k](https://huggingface.co/datasets/kalomaze/Opus_Instruct_25k)
- [Doctor-Shotgun/no-robots-sharegpt](https://huggingface.co/datasets/Doctor-Shotgun/no-robots-sharegpt)
- [Norquinal/claude_multiround_chat_30k](https://huggingface.co/datasets/Norquinal/claude_multiround_chat_30k)
- [nothingiisreal/Claude-3-Opus-Instruct-15K](https://huggingface.co/datasets/nothingiisreal/Claude-3-Opus-Instruct-15K)
- All the Aesirs dataset, cleaned and unslopped
- All the luminae dataset, cleaned and unslopped
- Small part of Airoboros reduced
We sadly couldn't find the sources of the following; DM us if you recognize your set!
- Opus_Instruct-v2-6.5K-Filtered-v2-sharegpt
- claude_sharegpt_trimmed
- CapybaraPure_Decontaminated-ShareGPT_reduced
## Datasets credits:
- Epiculous
- ChaoticNeutrals
- Gryphe
- meseca
- PJMixers
- NobodyExistsOnTheInternet
- cgato
- kalomaze
- Doctor-Shotgun
- Norquinal
- nothingiisreal
## Others
Undi: If you want to support us, you can [here](https://ko-fi.com/undiai).
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek
</details>