
The following training arguments were used for fine-tuning Llama-2 on the Ukrainian corpora of XL-Sum (see the configuration sketch after the list):

  • learning rate = 2e-4
  • maximum number of tokens = 512
  • 5 epochs

LoRA (PEFT) arguments:

  • rank = 32
  • lora_alpha = 16
  • dropout = 0.1
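
For reference, here is a minimal sketch of how these hyperparameters map onto the standard peft + transformers stack. Only the numeric values come from this card; the output path, task type, and variable names are illustrative assumptions:

```python
# Minimal sketch, assuming the standard peft + transformers stack.
# Only the numeric values are taken from this card; everything else
# (output_dir, task_type) is an illustrative assumption.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=32,              # LoRA rank, as reported above
    lora_alpha=16,     # LoRA alpha, as reported above
    lora_dropout=0.1,  # LoRA dropout, as reported above
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="llama2-13b-uk-xlsum-ft",  # hypothetical output path
    learning_rate=2e-4,                   # as reported above
    num_train_epochs=5,                   # as reported above
)

# The 512-token maximum is typically enforced at tokenization time, e.g.:
# tokenizer(batch["text"], truncation=True, max_length=512)
```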

Dataset used to train SGaleshchuk/Llama-2-13b-hf_uk_rank-32_ft: the Ukrainian subset of XL-Sum.