Update README.md

README.md CHANGED

@@ -1,5 +1,6 @@
 ---
 library_name: peft
+license: other
 ---
 ## Training procedure
 
@@ -19,3 +20,15 @@ The following `bitsandbytes` quantization config was used during training:
 
 
 - PEFT 0.5.0
+
+I'm NOT the author of this work.
+
+I cite anon:
+
+```shell
+Storytelling-V2 Qlora. Trained on base Llama-2-13B, works on every L2 13B.
+150.5MB of books. Over ten thousand 4096 token samples.
+*** for separating chapters, ⁂ for separating books.
+```
+
+Credit to "anon49"
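The quoted note describes a simple separator convention for the training corpus: `***` between chapters and `⁂` between books. As a minimal sketch of how such a dump could be parsed back into structure (the function and variable names here are illustrative, not from the original dataset tooling):

```python
# Split a training dump into books and chapters using the separators
# described in the note: "***" between chapters, "⁂" between books.
# Illustrative sketch only; not the author's actual preprocessing code.

def split_corpus(text: str) -> list[list[str]]:
    """Return a list of books, each a list of chapter strings."""
    books = [b.strip() for b in text.split("⁂") if b.strip()]
    return [
        [ch.strip() for ch in book.split("***") if ch.strip()]
        for book in books
    ]

# Example: two books, the first with two chapters.
corpus = "Chapter one text *** Chapter two text ⁂ Another book, chapter one"
books = split_corpus(corpus)
print(len(books), len(books[0]))  # 2 2
```

The same separators would be emitted in reverse when assembling samples, so a model fine-tuned on this data learns `***` and `⁂` as chapter and book boundaries.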