SGaleshchuk committed
Commit 335e816
Parent(s): f36b052
Update README.md

README.md CHANGED
@@ -12,5 +12,10 @@ pipeline_tag: text-generation
 ---
 
 The following training arguments were used for Llama-2 finetuning with the Ukrainian corpora of XL-SUM:
-learning-rate=2e-4,
-
+- learning-rate=2e-4,
+- maximum number of tokens=512,
+- 5 epochs.
+Lora perf arguments:
+- rank = 32,
+- lora-alpha=16,
+- dropout = 0.1.
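As a minimal sketch, the hyperparameters above could be collected into configuration dicts whose keys mirror the keyword arguments of `peft.LoraConfig` and `transformers.TrainingArguments`; only the numeric values come from this README, and the key names and the tokenization note are assumptions about how the setup would typically be wired up.

```python
# Hypothetical configuration sketch; values taken from the README above,
# key names mirror peft.LoraConfig / transformers.TrainingArguments kwargs.

lora_config = {
    "r": 32,              # rank = 32
    "lora_alpha": 16,     # lora-alpha = 16
    "lora_dropout": 0.1,  # dropout = 0.1
}

training_config = {
    "learning_rate": 2e-4,  # learning-rate = 2e-4
    "num_train_epochs": 5,  # 5 epochs
    # "maximum number of tokens = 512" would usually be applied at
    # tokenization time, e.g. tokenizer(..., max_length=512, truncation=True)
    "max_length": 512,
}

print(lora_config["r"], training_config["learning_rate"])
```

These dicts could then be unpacked into the real `LoraConfig(**lora_config)` and `TrainingArguments(**training_config)` calls (dropping `max_length`, which belongs to the tokenizer rather than the trainer).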