
We continually pre-train meta-llama/Llama-2-7b-hf on the monolingual WURA corpus, which covers 20 languages. All languages are sampled uniformly.
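As a minimal sketch of what uniform language sampling can look like with the `datasets` library; the repository id `castorini/wura`, the config names, and the five-language subset shown are illustrative assumptions, since the card does not include the actual data-loading code:

```python
from datasets import load_dataset, interleave_datasets

# Illustrative subset of the 20 languages (config names are assumptions).
LANGS = ["amh", "hau", "ibo", "swa", "yor"]

streams = [
    load_dataset("castorini/wura", lang, split="train", streaming=True)
    for lang in LANGS
]

# Uniform sampling: each language is drawn with probability 1 / len(LANGS).
mixed = interleave_datasets(
    streams,
    probabilities=[1.0 / len(LANGS)] * len(LANGS),
    seed=42,
    stopping_strategy="all_exhausted",
)
```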

Important Parameters

  • num_gpus: 8
  • max_steps: 8000
  • gradient_accumulation_steps: 16
  • per_device_batch_size: 2
  • learning_rate: 2e-5
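
As a hedged illustration of how these values map onto `transformers.TrainingArguments` (the actual training script is not part of this card, and `output_dir` and the mixed-precision flag are assumptions):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="pretrain-wura",       # assumption: not stated in the card
    max_steps=8000,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    bf16=True,                        # assumption: mixed precision on the 8 GPUs
    logging_steps=100,
)
```

With 8 GPUs, these settings yield an effective batch size of 8 × 2 × 16 = 256 sequences per optimizer step.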
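The checkpoint loads through the standard `transformers` causal-LM interface; the Yoruba prompt below is only an example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("llama-lang-adapt/pretrain-wura")
model = AutoModelForCausalLM.from_pretrained("llama-lang-adapt/pretrain-wura")

inputs = tokenizer("Ìròyìn òwúrọ̀ yìí:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```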
