---
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
  - generated_from_trainer
  - xlm-roberta
model-index:
  - name: xlm-roberta-large-ft-qa-tr-mt-to-kaz
    results: []
datasets:
  - med-alex/qa_mt_tr_to_kaz
language:
  - kk
metrics:
  - exact_match
  - f1
library_name: transformers
pipeline_tag: question-answering
---

# xlm-roberta-large-ft-qa-tr-mt-to-kaz

This model is a fine-tuned version of FacebookAI/xlm-roberta-large on the med-alex/qa_mt_tr_to_kaz dataset.
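Since the card's `pipeline_tag` is `question-answering`, the checkpoint should work with the standard `transformers` pipeline. A minimal sketch, assuming the model is published under the repository id `med-alex/xlm-roberta-large-ft-qa-tr-mt-to-kaz` (inferred from this card, not stated explicitly):

```python
from transformers import pipeline

# Repository id is assumed from the card; replace with a local path if needed.
qa = pipeline(
    "question-answering",
    model="med-alex/xlm-roberta-large-ft-qa-tr-mt-to-kaz",
)

# Example question/context in Kazakh ("Which city is the capital of Kazakhstan?").
result = qa(
    question="Қазақстанның астанасы қай қала?",
    context="Қазақстанның астанасы Астана қаласы болып табылады.",
)
print(result["answer"], result["score"])
```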

## Model description

This model is one of many models created as part of a project studying question answering (QA) for low-resource languages, using Kazakh and Uzbek as examples.

Please see the project description, which explains the approach and reports the results of all models, to choose the best model for Kazakh or Uzbek.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an illustrative `TrainingArguments` sketch follows the list):

- learning_rate: 1e-05
- train_batch_size: 28
- eval_batch_size: 28
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 5.0
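
The values above map roughly onto `transformers.TrainingArguments` as sketched below. The original training script is not part of this card, so treat this as an assumption; in particular, the batch sizes are taken as per-device values and the output directory is made up.

```python
from transformers import TrainingArguments

# Illustrative reconstruction of the reported hyperparameters, not the
# original training script. output_dir and per-device batch sizes are assumed.
training_args = TrainingArguments(
    output_dir="xlm-roberta-large-ft-qa-tr-mt-to-kaz",
    learning_rate=1e-5,
    per_device_train_batch_size=28,
    per_device_eval_batch_size=28,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.2,
    num_train_epochs=5.0,
    adam_beta1=0.9,      # betas and epsilon as reported in the card
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```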

### Framework versions

- Transformers 4.40.1
- Pytorch 2.0.0+cu118
- Datasets 2.18.0
- Tokenizers 0.19.1