|
Quantization made by Richard Erkhov. |
|
|
|
[Github](https://github.com/RichardErkhov) |
|
|
|
[Discord](https://discord.gg/pvy7H8DZMG) |
|
|
|
[Request more models](https://github.com/RichardErkhov/quant_request) |
|
|
|
|
|
# Narumashi-RT-11B - GGUF
|
- Model creator: https://huggingface.co/Alsebay/ |
|
- Original model: https://huggingface.co/Alsebay/Narumashi-RT-11B/ |
|
|
|
|
|
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Narumashi-RT-11B.Q2_K.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q2_K.gguf) | Q2_K | 3.73GB |
| [Narumashi-RT-11B.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.IQ3_XS.gguf) | IQ3_XS | 4.14GB |
| [Narumashi-RT-11B.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.IQ3_S.gguf) | IQ3_S | 4.37GB |
| [Narumashi-RT-11B.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q3_K_S.gguf) | Q3_K_S | 4.34GB |
| [Narumashi-RT-11B.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.IQ3_M.gguf) | IQ3_M | 4.51GB |
| [Narumashi-RT-11B.Q3_K.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q3_K.gguf) | Q3_K | 4.84GB |
| [Narumashi-RT-11B.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q3_K_M.gguf) | Q3_K_M | 4.84GB |
| [Narumashi-RT-11B.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q3_K_L.gguf) | Q3_K_L | 5.26GB |
| [Narumashi-RT-11B.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.IQ4_XS.gguf) | IQ4_XS | 5.43GB |
| [Narumashi-RT-11B.Q4_0.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q4_0.gguf) | Q4_0 | 5.66GB |
| [Narumashi-RT-11B.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.IQ4_NL.gguf) | IQ4_NL | 5.72GB |
| [Narumashi-RT-11B.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q4_K_S.gguf) | Q4_K_S | 5.7GB |
| [Narumashi-RT-11B.Q4_K.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q4_K.gguf) | Q4_K | 6.02GB |
| [Narumashi-RT-11B.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q4_K_M.gguf) | Q4_K_M | 6.02GB |
| [Narumashi-RT-11B.Q4_1.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q4_1.gguf) | Q4_1 | 6.27GB |
| [Narumashi-RT-11B.Q5_0.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q5_0.gguf) | Q5_0 | 6.89GB |
| [Narumashi-RT-11B.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q5_K_S.gguf) | Q5_K_S | 6.89GB |
| [Narumashi-RT-11B.Q5_K.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q5_K.gguf) | Q5_K | 7.08GB |
| [Narumashi-RT-11B.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q5_K_M.gguf) | Q5_K_M | 7.08GB |
| [Narumashi-RT-11B.Q5_1.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q5_1.gguf) | Q5_1 | 7.51GB |
| [Narumashi-RT-11B.Q6_K.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q6_K.gguf) | Q6_K | 8.2GB |
| [Narumashi-RT-11B.Q8_0.gguf](https://huggingface.co/RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf/blob/main/Narumashi-RT-11B.Q8_0.gguf) | Q8_0 | 10.62GB |
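A quick way to compare the quants above is approximate bits per weight. The sketch below assumes the listed sizes are in GiB and that the base 11B model has roughly 10.7 billion parameters (typical for Solar-style 11B models); both figures are estimates, not stated in this card.

```python
def bits_per_weight(size_gib: float, n_params: float = 10.7e9) -> float:
    """Convert a quantized file size (assumed GiB) to approximate
    bits per weight, assuming ~10.7B parameters for an 11B model."""
    size_bytes = size_gib * 2**30
    return size_bytes * 8 / n_params

# Q4_K_M is listed at 6.02GB, which works out to roughly 4.8 bpw,
# in line with the figure usually quoted for Q4_K_M quantization.
print(round(bits_per_weight(6.02), 2))
```

If the result for a given quant strays far from the nominal bit width in its name, the size column (or the parameter-count assumption) is probably off.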
|
|
|
|
|
|
|
|
|
Original model description: |
|
---
language:
- en
license: cc-by-nc-4.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- Roleplay
- roleplay
base_model: Sao10K/Fimbulvetr-11B-v2
---
|
> [!IMPORTANT]
> Still experimental.
|
# About this model |
|
|
|
This model can now handle (limited) TSF content. If your character card has a complex plot, you may want to try another model (perhaps one with more parameters).
|
|
|
**Update: I think it performs worse than the original model, Sao10K/Fimbulvetr-11B-v2. This model was trained on a roughly translated dataset, so responses are short, reasoning quality drops, and it sometimes produces wrong names or nonsensical sentences...**

Anyway, if you find it works well, please let me know. Another update will follow later.
|
|
|
Do you know TSF, TS, or TG? Most models don't really know about these themes, so I ran an experiment fine-tuning on a TSF dataset.
|
|
|
- **Fine-tuned on a roughly translated (lewd) dataset to improve accuracy on the TSF theme, which is not very popular.**
|
- **Fine-tuned from model:** Sao10K/Fimbulvetr-11B-v2. Many thanks to Sao10K :)
|
|
|
## Still testing, but it seems good enough at handling information. Logic drops a bit because of the roughly translated dataset.
|
## GGUF version? [Here it is](https://huggingface.co/Alsebay/Narumashi-RT-11B-GGUF).
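The GGUF files can be run directly with llama.cpp. A minimal sketch, using the mirror repo id and filename from the quant table at the top of this card; the `llama-cli` flags assume a recent llama.cpp build:

```shell
# Download one quant from the mirror repo (repo id and filename
# taken from the quant table above)
huggingface-cli download RichardErkhov/Alsebay_-_Narumashi-RT-11B-gguf \
    Narumashi-RT-11B.Q4_K_M.gguf --local-dir .

# Run it with llama.cpp: -m model file, -p prompt, -n max new tokens
./llama-cli -m Narumashi-RT-11B.Q4_K_M.gguf -p "Hello," -n 64
```

Pick a smaller quant (e.g. Q3_K_M) if the Q4_K_M file does not fit in your RAM/VRAM.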
|
## Dataset |
|
A roughly translated dataset; you could say it is low quality.
|
```
Dataset (all novels):
30% skinsuit
30% possession
35% transform (shapeshift)
5% other
```
|
|
|
# Thanks to Unsloth for a great fine-tuning tool. This model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
|
|
|
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
|
|
|
|