med-alex committed on
Commit
06f94a7
1 Parent(s): d5c5137

Update README.md

Files changed (1)
  1. README.md +14 -14
README.md CHANGED
@@ -3,9 +3,19 @@ license: cc-by-4.0
 base_model: deepset/xlm-roberta-large-squad2
 tags:
 - generated_from_trainer
+- xlm-roberta
 model-index:
 - name: xlm-roberta-large-squad-ft-qa-ru-mt-to-kaz
   results: []
+datasets:
+- med-alex/qa_mt_ru_to_kaz
+language:
+- kk
+metrics:
+- exact_match
+- f1
+library_name: transformers
+pipeline_tag: question-answering
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -13,19 +23,13 @@ should probably proofread and complete it, then remove this comment. -->
 
 # xlm-roberta-large-squad-ft-qa-ru-mt-to-kaz
 
-This model is a fine-tuned version of [deepset/xlm-roberta-large-squad2](https://huggingface.co/deepset/xlm-roberta-large-squad2) on the None dataset.
+This model is a fine-tuned version of [deepset/xlm-roberta-large-squad2](https://huggingface.co/deepset/xlm-roberta-large-squad2) on the med-alex/qa_mt_ru_to_kaz dataset.
 
 ## Model description
 
-More information needed
+This model is one of many models created within the framework of a project to study the solution of a QA task for low-resource languages using the example of Kazakh and Uzbek.
 
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
+Please see the [description](https://github.com/med-alex/turkic_qa?tab=readme-ov-file#добро-пожаловать-на-студенческий-проект-посвященный-решению-задачи-qa-для-низкоресурсных-языков-на-примере-казахского-и-узбекского-языка) of the project, where there is a description of the solution and the results of the models in order to choose the best model for the Kazakh or Uzbek language.
 
 ## Training procedure
 
@@ -41,13 +45,9 @@ The following hyperparameters were used during training:
 - lr_scheduler_warmup_ratio: 0.2
 - num_epochs: 5.0
 
-### Training results
-
-
-
 ### Framework versions
 
 - Transformers 4.40.1
 - Pytorch 2.0.0+cu118
 - Datasets 2.18.0
-- Tokenizers 0.19.1
+- Tokenizers 0.19.1
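The commit adds `pipeline_tag: question-answering` and `library_name: transformers`, which is what enables the hosted inference widget and `pipeline` loading. A minimal usage sketch follows; the repo id `med-alex/xlm-roberta-large-squad-ft-qa-ru-mt-to-kaz` is inferred from the card's model name and may differ, and the model is only downloaded when the function is actually called.

```python
def load_qa(model_id: str = "med-alex/xlm-roberta-large-squad-ft-qa-ru-mt-to-kaz"):
    """Build an extractive question-answering pipeline for this checkpoint.

    The repo id default is an assumption inferred from the model name in
    the card; adjust it if the actual Hub path differs.
    """
    from transformers import pipeline  # imported lazily; requires `transformers`
    return pipeline("question-answering", model=model_id)

# Example call (Kazakh question and context), commented out so nothing
# is downloaded on import:
# qa = load_qa()
# qa(question="Астана қай елдің астанасы?",
#    context="Астана Қазақстанның астанасы.")
# returns a dict with "answer", "score", "start", "end"
```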
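The card's new metadata declares `exact_match` and `f1` as the evaluation metrics. As a sketch of what those SQuAD-style metrics measure (a simplified re-implementation for illustration, not the official `evaluate` script, which also aggregates over multiple references):

```python
import re
import string
from collections import Counter

def normalize(text: str) -> str:
    """SQuAD-style normalization: lowercase, drop punctuation,
    drop English articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, reference: str) -> float:
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(reference))

def token_f1(prediction: str, reference: str) -> float:
    """Harmonic mean of token-level precision and recall."""
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The Eiffel Tower", "eiffel tower"))             # 1.0
print(round(token_f1("eiffel tower paris", "the eiffel tower"), 2))  # 0.8
```

Exact match is all-or-nothing after normalization, while token F1 gives partial credit for overlapping answer spans, which is why the two numbers are reported together.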