Suparious committed on
Commit a2b16ae
1 parent: 628d5f3

Update README.md

Files changed (1)
  1. README.md +25 -2
README.md CHANGED
@@ -1,4 +1,7 @@
---
+ language:
+ - en
+ license: cc-by-nc-4.0
library_name: transformers
tags:
- 4-bit
@@ -6,10 +9,30 @@ tags:
- text-generation
- autotrain_compatible
- endpoints_compatible
+ - text-generation-inference
+ - transformers
+ - unsloth
+ - mistral
+ - trl
+ - sft
+ - Roleplay
+ - roleplay
+ base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---
- #
+ # Alsebay/NarumashiRTS-7B-V2-1 AWQ

- **UPLOAD IN PROGRESS**
+ - Model creator: [Alsebay](https://huggingface.co/Alsebay)
+ - Original model: [NarumashiRTS-7B-V2-1](https://huggingface.co/Alsebay/NarumashiRTS-7B-V2-1)
+
+ ## Model Summary
+
+ > [!Important]
+ > Still in experiment
+
+ A remake of [version 2](https://huggingface.co/Alsebay/NarumashiRTS-V2) in the safetensors format, saved with a safer and more stable method; not much has changed (based on the model hash). To be honest, in the previous version 2 I used an unsafe method to save the pretrained model, which could apply the LoRA layer to the model twice and give the model terrible performance. (Thanks to the Unsloth community for telling me about this :D)
+
+ - **Finetuned with a roughly translated (lewd) dataset to increase accuracy on the TSF theme, which is not very popular.**
+ - **Finetuned from model:** SanjiWatsuki/Kunoichi-DPO-v2-7B. Many thanks to SanjiWatsuki :)
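
For reference, a minimal loading sketch for an AWQ 4-bit checkpoint like the one this card describes, assuming the `autoawq` and `accelerate` packages are installed; the repo id below is a placeholder, since the final upload path is not stated in the card.

```python
# Hedged sketch (not from the model card): loading an AWQ 4-bit checkpoint
# with Hugging Face transformers. Assumes `autoawq` and `accelerate` are
# installed; replace the placeholder repo id with the actual one.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/NarumashiRTS-7B-V2-1-AWQ"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # place the 4-bit AWQ weights on available GPUs
    torch_dtype="auto",
)

prompt = "Write a short roleplay scene introduction."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```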