This repo contains a QLoRA adapter for Llama-2-7b, trained on 1B tokens of Polish-only text.

The training took 20 days on a single RTX 4090 with the following hyperparameters:

  • context length: 4096
  • batch_size: 128
  • learning_rate: 0.0002, cosine schedule with warmup
  • lora_r: 64
  • lora_alpha: 16
  • lora_modules: all
  • lora_dropout: 0.0
  • weight_decay: 0.1
  • max_grad_norm: 0.3
  • quantization: 4-bit NF4 with double quantization
  • optimizer: paged_adamw_32bit

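For reference, the snippet below is a minimal sketch of how the hyperparameters above map onto the usual QLoRA stack (transformers + peft + bitsandbytes). It is not the exact training script: the compute dtype, warmup fraction, output path, and the per-device batch / gradient-accumulation split behind the effective batch size of 128 are assumptions, and `lora_modules: all` is interpreted here as all linear projection layers of Llama-2.

```python
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

# 4-bit NF4 quantization with double quantization, as listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute dtype is an assumption
)

# LoRA on all linear projections ("lora_modules: all" interpreted as such)
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)

# Optimizer and schedule from the list above; the per-device batch size,
# gradient-accumulation split, and warmup ratio are assumptions.
# Sequences are packed/truncated to the 4096-token context at the data step.
training_args = TrainingArguments(
    output_dir="qlora-llama2-7b-polish",   # hypothetical output path
    per_device_train_batch_size=4,
    gradient_accumulation_steps=32,        # 4 * 32 = effective batch size 128
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,                     # warmup fraction not stated; assumption
    weight_decay=0.1,
    max_grad_norm=0.3,
    optim="paged_adamw_32bit",
    bf16=True,
)
```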
With this adapter, the model produces noticeably more accurate Polish than vanilla Llama-2-7b.
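To try it, the sketch below loads the 4-bit base model and attaches the adapter with peft. The base model id `meta-llama/Llama-2-7b-hf` and the adapter path placeholder are assumptions; substitute this repo's id when loading.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"      # assumed base model id
adapter_id = "path/to/this-adapter-repo"  # placeholder: use this repo's id

# Load the base model in 4-bit NF4, matching the training quantization
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the Polish QLoRA adapter
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "Opowiedz krótko o historii Krakowa."  # "Tell me briefly about the history of Krakow."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```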
