Update README.md
README.md CHANGED

@@ -4,7 +4,7 @@ license: apache-2.0
 
 # LimaRP-Llama2-7B-v3 (Alpaca, experimental, 4-bit LoRA adapter)
 
-This is an experimental version of LimaRP using a somewhat updated dataset (1800 training samples)
+This is an experimental version of LimaRP for Llama2, using a somewhat updated dataset (1800 training samples)
 and a 2-pass training procedure. The first pass includes unsupervised tuning on 2800 stories of up to
 4k tokens in length, and the second pass is LimaRP itself.
 