Alsebay committed on
Commit
599bb63
1 Parent(s): 0b85640

Update README.md

Files changed (1)
  1. README.md +20 -7
README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 language:
 - en
-license: apache-2.0
+license: cc-by-nc-4.0
 tags:
 - text-generation-inference
 - transformers
@@ -9,15 +9,28 @@ tags:
 - llama
 - trl
 - sft
+- Roleplay
+- roleplay
 base_model: Sao10K/Fimbulvetr-11B-v2
 ---
-
-# Uploaded model
-
-- **Developed by:** Alsebay
-- **License:** apache-2.0
-- **Finetuned from model :** Sao10K/Fimbulvetr-11B-v2
-
-This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
-
+# About this model
+
+This is a beta for V2 of https://huggingface.co/Alsebay/Narumashi-11B-v0.9 (the name has a typo, but I'm too lazy to fix it). It was trained with only rank 32 and LoRA rank 32, so the model did not learn all of the dataset information well; it only knows the basics. Still, it works well if you have a Chinese or Japanese prompt to trigger TSF content. It may not be smart; I haven't tested it yet.
+
+- **Finetuned from model:** Sao10K/Fimbulvetr-11B-v2. Many thanks to Sao10K :)
+
+## I have tested and found that Sao10K/Fimbulvetr-11B-v2 can be unlocked to an 8K context length (logic may degrade a bit), so I left the config alone to reduce RAM and VRAM usage. That means you can use an 8K context length even though this model's config says only 4K.
+## GGUF version? [Here it is](https://huggingface.co/Alsebay/Narisumashi-GGUF).
+## Dataset
+All of the dataset consists of Chinese novels.
+```
+Dataset (all are novels):
+60% skinsuit
+25% possession
+5% transform (shapeshift)
+10% other
+```
+
+# Thanks to Unsloth for a good finetuning tool. This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
+
 [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
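
The card's note about running the 4K-config model at 8K context can be sketched in `transformers`. This is an assumption on my part, not something the card specifies: linear RoPE scaling with a factor of 2.0 is one common way llama-family models are stretched past their trained position count (the `rope_scaling` values here are hypothetical, not from this commit).

```python
from transformers import LlamaConfig

# Hedged sketch: the card claims a 4K-trained base tolerates 8K context.
# Linear RoPE scaling with factor 2.0 stretches 4096 trained positions
# to cover 8192 (assumed settings, not taken from the model card).
config = LlamaConfig(
    max_position_embeddings=4096,
    rope_scaling={"type": "linear", "factor": 2.0},
)

# Effective usable context under this scaling scheme.
effective_ctx = int(config.max_position_embeddings * config.rope_scaling["factor"])
print(effective_ctx)  # 8192
```

The same `rope_scaling` dict can be passed to `AutoModelForCausalLM.from_pretrained(...)` when loading; whether quality holds at 8K is the "logic may degrade a bit" trade-off the card mentions.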