sft

This model is a LoRA adapter (SouthMemphis/Saiga-lora-2048-2epochs) for IlyaGusev/saiga_mistral_7b_merged, fine-tuned on the gazeta dataset.

Model description

This model was trained on the IlyaGusev/gazeta dataset for summarization in Russian. The context window size is 2048 tokens.
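
Below is a minimal inference sketch, assuming the standard transformers + peft loading flow. The repository ids come from this card; the prompt wording is a hypothetical example, since Saiga models define their own chat format (check the base model card for the exact template):

```python
# Minimal inference sketch: load the base model, attach this LoRA adapter,
# and summarize a Russian article. Prompt wording is an assumption.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "IlyaGusev/saiga_mistral_7b_merged"
ADAPTER = "SouthMemphis/Saiga-lora-2048-2epochs"

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(
    BASE, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, ADAPTER)

article = "..."  # Russian news article to summarize
prompt = f"Суммаризируй следующий текст.\n{article}\nКраткое содержание:"
inputs = tokenizer(
    prompt, return_tensors="pt", truncation=True, max_length=2048
).to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```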

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
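
As noted above, training used the IlyaGusev/gazeta news summarization corpus. A minimal loading sketch for inspecting it; the "text"/"summary" field names follow the public dataset schema and are an assumption here:

```python
# Sketch for inspecting the gazeta training data; field names are assumed
# from the public IlyaGusev/gazeta schema.
from datasets import load_dataset

gazeta = load_dataset("IlyaGusev/gazeta")  # may need trust_remote_code=True
example = gazeta["train"][0]
print(example["text"][:500])  # source article (Russian)
print(example["summary"])     # reference summary
```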

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch mapping these values follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 2
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 2
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 8
  • total_eval_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • num_epochs: 2.0
  • mixed_precision_training: Native AMP
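
A hedged reconstruction, not the author's script: the hyperparameters above expressed as transformers.TrainingArguments. The output_dir and the two-GPU launch (e.g. torchrun --nproc_per_node=2) are assumptions; with 2 devices, the per-device settings reproduce the total batch sizes of 8 (train) and 16 (eval) listed above:

```python
# Hyperparameters from this card mapped onto TrainingArguments.
# output_dir is an assumption; run on 2 GPUs to match the totals.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="sft",
    learning_rate=5e-5,
    per_device_train_batch_size=2,  # x2 GPUs x2 grad accum = total 8
    per_device_eval_batch_size=8,   # x2 GPUs = total 16
    gradient_accumulation_steps=2,
    num_train_epochs=2.0,
    lr_scheduler_type="cosine",
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                      # "Native AMP" mixed precision
)
```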

Training results

Framework versions

  • PEFT 0.10.0
  • Transformers 4.39.1
  • Pytorch 2.1.0+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2
