SGaleshchuk committed on
Commit
335e816
1 Parent(s): f36b052

Update README.md

Files changed (1):
  1. README.md +7 -2
README.md CHANGED
@@ -12,5 +12,10 @@ pipeline_tag: text-generation
 ---
 
 The following training arguments were used for Llama-2 finetuning with the Ukrainian corpus of XL-SUM:
-learning-rate=2e-4, warm-up ratio = 0.03, maximum number of tokens = 512, truncate otherwise, 5 epochs. LoRA PEFT arguments:
-rank = 32, lora-alpha=16, dropout = 0.1.
+- learning-rate=2e-4,
+- maximum number of tokens = 512,
+- 5 epochs.
+LoRA PEFT arguments:
+- rank = 32,
+- lora-alpha=16,
+- dropout = 0.1.
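
For reference, the hyperparameters listed in this diff can be collected into a plain configuration sketch. The key names below mirror those commonly used by Hugging Face `peft`/`transformers`-style finetuning code, but that mapping is an assumption — the card does not name the toolkit:

```python
# Hyperparameters from the model card, gathered as plain dicts.
# Key names are illustrative (peft/transformers-style), not confirmed by the card.

lora_config = {
    "r": 32,            # LoRA rank
    "lora_alpha": 16,   # LoRA scaling alpha
    "lora_dropout": 0.1,
}

training_config = {
    "learning_rate": 2e-4,
    "num_train_epochs": 5,
    "max_seq_length": 512,  # inputs longer than this are truncated
    "warmup_ratio": 0.03,   # listed in the earlier revision of the card
}

print(lora_config, training_config)
```

A dict like this can be unpacked into the corresponding toolkit's config objects (e.g. a LoRA config and trainer arguments) when reproducing the finetuning run.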