Update README.md
README.md CHANGED
@@ -6,7 +6,8 @@ license: apache-2.0
 
 This is an experimental version of LimaRP for Llama2, using a somewhat updated dataset (1800 training samples)
 and a 2-pass training procedure. The first pass includes unsupervised tuning on 2800 stories within
-4k tokens length and the second pass is LimaRP with
+4k tokens length and the second pass is LimaRP with changes which introduce direct and effective
+control on bot response length.
 
 For more details about LimaRP, see the model page for the [previously released version](https://huggingface.co/lemonilia/limarp-llama2-v2).
 Most details written there apply for this version as well.