med-alex committed on
Commit 1a46c9b
1 Parent(s): e4d97ef

Update README.md

Files changed (1): README.md (+13 -14)
README.md CHANGED
@@ -6,6 +6,15 @@ tags:
 model-index:
 - name: xlm-roberta-large-ft-qa-ru-mt-to-kaz
   results: []
+datasets:
+- med-alex/qa_mt_ru_to_kaz
+language:
+- kk
+metrics:
+- exact_match
+- f1
+library_name: transformers
+pipeline_tag: question-answering
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -13,19 +22,13 @@ should probably proofread and complete it, then remove this comment. -->
 
 # xlm-roberta-large-ft-qa-ru-mt-to-kaz
 
-This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the None dataset.
+This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on the med-alex/qa_mt_ru_to_kaz dataset.
 
 ## Model description
 
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
+This model is one of several trained as part of a project studying the QA task for low-resource languages, using Kazakh and Uzbek as examples.
+
+Please see the project [description](https://github.com/med-alex/turkic_qa?tab=readme-ov-file#добро-пожаловать-на-студенческий-проект-посвященный-решению-задачи-qa-для-низкоресурсных-языков-на-примере-казахского-и-узбекского-языка), which covers the approach and the results of all the models, to choose the best model for Kazakh or Uzbek.
 
 ## Training procedure
 
@@ -41,13 +44,9 @@ The following hyperparameters were used during training:
 - lr_scheduler_warmup_ratio: 0.2
 - num_epochs: 5.0
 
-### Training results
-
-
-
 ### Framework versions
 
 - Transformers 4.40.1
 - Pytorch 2.0.0+cu118
 - Datasets 2.18.0
-- Tokenizers 0.19.1
+- Tokenizers 0.19.1
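Since this commit adds `pipeline_tag: question-answering` and `library_name: transformers` to the card metadata, the checkpoint should be loadable with the standard `transformers` QA pipeline. A minimal sketch follows; the Hub repo id `med-alex/xlm-roberta-large-ft-qa-ru-mt-to-kaz` (inferred from the committer and model name) and the Kazakh question/context pair are assumptions for illustration, not part of the diff.

```python
from transformers import pipeline

# Sketch: extractive QA with the fine-tuned checkpoint.
# NOTE: the repo id is an assumption based on the committer name and the
# model name in the card; adjust it if the model lives elsewhere.
qa = pipeline(
    "question-answering",
    model="med-alex/xlm-roberta-large-ft-qa-ru-mt-to-kaz",
)

# Illustrative Kazakh example ("Which city is the capital of Kazakhstan?").
result = qa(
    question="Қазақстанның астанасы қай қала?",
    context="Астана - Қазақстан Республикасының астанасы.",
)
print(result["answer"], result["score"])
```

The pipeline returns a dict with `answer`, `score`, `start`, and `end` keys; the exact-match and F1 metrics listed in the metadata are the usual SQuAD-style measures computed over such extracted spans.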