---
language:
  - en
license: cc-by-nc-4.0
tags:
  - text-generation-inference
  - transformers
  - unsloth
  - llama
  - trl
  - sft
  - Roleplay
  - roleplay
base_model: Sao10K/Fimbulvetr-11B-v2
---

Still experimental.

# About this model

This model can now handle (limited) TSF content. If your character card has a complex plot, you may want to try another model (maybe one with more parameters?).

Update: I think it performs worse than the original model, Sao10K/Fimbulvetr-11B-v2. This model was trained on a roughly translated dataset, so responses are short, reasoning quality drops, and it sometimes produces wrong names or nonsensical sentences. Anyway, if you find it works well for you, please let me know. Another update will follow later.

Do you know TSF, TS, or TG? Most models don't really know these themes, so I ran an experiment fine-tuning on a TSF dataset.

- Fine-tuned on a roughly translated (lewd) dataset to improve accuracy on the TSF theme, which is not very popular.
- Fine-tuned from Sao10K/Fimbulvetr-11B-v2. Thanks a lot, Sao10K :)

Still testing, but it seems good enough at keeping track of the information. The logic does drop a bit, though, because of the roughly translated dataset.
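If you want to try it with transformers, a minimal sketch looks like this. The repo id, prompt template, and sampling settings below are assumptions; adjust them to whatever you normally use with Fimbulvetr-11B-v2.

```python
# Minimal inference sketch, assuming the repo id "Alsebay/Narumashi-RT-11B"
# and an Alpaca-style prompt; sampling settings are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Alsebay/Narumashi-RT-11B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 so an 11B model fits on a 24 GB GPU
    device_map="auto",
)

prompt = "### Instruction:\nContinue the roleplay.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
)
# Print only the newly generated part, not the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```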

Looking for a GGUF version? Here it is.
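To run a GGUF build locally, something like the llama-cpp-python sketch below should work. The file name is hypothetical; substitute the actual quant file from the GGUF repo.

```python
# Sketch for running a quantized GGUF build with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="Narumashi-RT-11B.Q4_K_M.gguf",  # hypothetical file name
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

out = llm(
    "### Instruction:\nContinue the roleplay.\n\n### Response:\n",
    max_tokens=256,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```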

# Dataset

A roughly translated dataset; you could say it is of poor quality.

Dataset composition (all sources are novels):

- 30% skinsuit
- 30% possession
- 35% transformation (shapeshifting)
- 5% other

Thanks to Unsloth for a great fine-tuning tool. This model was trained 2x faster with Unsloth and Hugging Face's TRL library.
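For reference, a typical Unsloth + TRL SFT setup looks roughly like the sketch below. The dataset file, LoRA rank, and training hyperparameters are illustrative assumptions, not the exact settings used for this model.

```python
# Rough sketch of an Unsloth + TRL supervised fine-tuning run on top of
# Sao10K/Fimbulvetr-11B-v2. All hyperparameters and the dataset file are
# placeholders, not the actual training configuration.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Sao10K/Fimbulvetr-11B-v2",
    max_seq_length=4096,
    load_in_4bit=True,  # QLoRA-style 4-bit base weights
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,               # LoRA rank (assumed)
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Hypothetical JSONL file with a "text" column of formatted training samples.
dataset = load_dataset("json", data_files="tsf_novels.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=4096,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        fp16=True,
    ),
)
trainer.train()
```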