Update README.md
README.md
CHANGED
@@ -1,3 +1,26 @@
 ---
 license: agpl-3.0
+language:
+- en
+pipeline_tag: text-generation
 ---
+
+# OpenChatRWKV 430m r2
+
+This is a finetune of RWKV-v4neo 430m on the `openchatgpt safe r2` dataset. `r2` shares no data with `r1`, not even the greetings, which makes this finetune in some ways inferior to the original trained on `r1`.
+
+## Key differences with `openchatrwkv-430m`
+
+Apart from the new dataset, the key difference is the use of a dedicated token for instant-message separation.
+
+## Differences with `openchatgpt-neox-125m`
+
+Increased parameter count and a different dataset.
+
+## Training data
+
+The new dataset was obviously made at a later point in time. Many speculate that ChatGPT has degraded over the months, and I am a strong believer in that as well: the style in which the model speaks started to sound different compared to 2-3 months ago.
+
+This model was trained on a mix of natural language and code.
+
+This model does not know how to greet you.
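Since the card declares `pipeline_tag: text-generation` but shows no usage, here is a minimal inference sketch. It assumes the checkpoint is a BlinkDL-style `.pth` file loadable with the `rwkv` pip package (the ChatRWKV runtime) and that the standard Pile `20B_tokenizer.json` applies, as is usual for RWKV-v4neo models; the file paths, prompt, and sampling parameters are placeholders, not part of this repository.

```python
# Hedged usage sketch for an RWKV-v4neo finetune via the `rwkv` pip package.
# The checkpoint and tokenizer paths below are assumed placeholders.
import os

os.environ["RWKV_JIT_ON"] = "1"  # enable JIT kernels; set before importing rwkv

from rwkv.model import RWKV
from rwkv.utils import PIPELINE, PIPELINE_ARGS

# Load the .pth checkpoint on CPU in fp32 (use e.g. "cuda fp16" for GPU).
model = RWKV(model="openchatrwkv-430m-r2.pth", strategy="cpu fp32")

# RWKV-v4neo Pile models use the GPT-NeoX 20B tokenizer shipped with ChatRWKV.
pipeline = PIPELINE(model, "20B_tokenizer.json")

args = PIPELINE_ARGS(temperature=0.9, top_p=0.85)
prompt = "Write a Python function that reverses a string."
print(pipeline.generate(prompt, token_count=200, args=args))
```

Note that the model's dedicated message-separation token is not documented here, so the sketch sends a bare prompt; a chat-style prompt would need whatever separator the finetune was trained with.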