Update README.md

README.md
@@ -14,7 +14,7 @@ With the RWKV world tokenizer, multiple languages have a 1:1 tokenization ratio: one word maps to one token.
This model was trained on instruction datasets together with Chinese web novels and traditional wuxia fiction;
more training details will be published later.

+ Tested: summarized 85k tokens into 5 key points. The conversation files can be found in the example folders; more cases are coming.

Fully fine-tuned using this repo to train a 128k-context model: 4×A800 GPUs, 40 hours, 1.3B tokens.
https://github.com/SynthiaDL/TrainChatGalRWKV/blob/main/train_world.sh