med-alex committed
Commit 19b52d3
1 Parent(s): 1062451

Update README.md

Files changed (1)
  1. README.md +14 -14
README.md CHANGED
@@ -3,9 +3,19 @@ license: apache-2.0
  base_model: rifkat/uztext-3Gb-BPE-Roberta
  tags:
  - generated_from_trainer
+ - roberta
  model-index:
  - name: uzn-roberta-base-ft-qa-ru-mt-to-uzn
    results: []
+ datasets:
+ - med-alex/qa_mt_ru_to_uzn
+ language:
+ - uz
+ metrics:
+ - exact_match
+ - f1
+ library_name: transformers
+ pipeline_tag: question-answering
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -13,19 +23,13 @@ should probably proofread and complete it, then remove this comment. -->

  # uzn-roberta-base-ft-qa-ru-mt-to-uzn

- This model is a fine-tuned version of [rifkat/uztext-3Gb-BPE-Roberta](https://huggingface.co/rifkat/uztext-3Gb-BPE-Roberta) on the None dataset.
+ This model is a fine-tuned version of [rifkat/uztext-3Gb-BPE-Roberta](https://huggingface.co/rifkat/uztext-3Gb-BPE-Roberta) on the med-alex/qa_mt_ru_to_uzn dataset.

  ## Model description

- More information needed
+ This model is one of many models created within a project studying how to solve the QA task for low-resource languages, using Kazakh and Uzbek as examples.

- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
+ Please see the project [description](https://github.com/med-alex/turkic_qa?tab=readme-ov-file#добро-пожаловать-на-студенческий-проект-посвященный-решению-задачи-qa-для-низкоресурсных-языков-на-примере-казахского-и-узбекского-языка), which presents the approach and the results of all the models, to help you choose the best model for Kazakh or Uzbek.

  ## Training procedure

@@ -41,13 +45,9 @@ The following hyperparameters were used during training:
  - lr_scheduler_warmup_ratio: 0.2
  - num_epochs: 10.0

- ### Training results
-
-
-
  ### Framework versions

  - Transformers 4.40.1
  - Pytorch 2.0.0+cu118
  - Datasets 2.18.0
- - Tokenizers 0.19.1
+ - Tokenizers 0.19.1
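
For orientation, the two training hyperparameters visible in this hunk map onto `transformers.TrainingArguments` roughly as sketched below. This is a sketch only: the card lists further hyperparameters that the diff context does not show, and the `output_dir` name is illustrative.

```python
# Sketch: expressing the two hyperparameters visible in this hunk with
# transformers.TrainingArguments. Other hyperparameters from the card are
# omitted because the diff does not show them; output_dir is illustrative.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="uzn-roberta-base-ft-qa-ru-mt-to-uzn",
    warmup_ratio=0.2,       # card: lr_scheduler_warmup_ratio: 0.2
    num_train_epochs=10.0,  # card: num_epochs: 10.0
)
```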
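
The updated front matter declares `library_name: transformers` and `pipeline_tag: question-answering`, so the model can be loaded with the standard QA pipeline. A minimal usage sketch follows; the repo id is assumed from the commit author and model name, and the Uzbek question/context pair is purely illustrative.

```python
# Minimal usage sketch for the fine-tuned extractive QA model.
# The repo id below is assumed (commit author + model name), not confirmed by the diff.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="med-alex/uzn-roberta-base-ft-qa-ru-mt-to-uzn",
)

# Illustrative Uzbek question/context pair.
result = qa(
    question="O'zbekiston poytaxti qaysi shahar?",
    context="O'zbekistonning poytaxti Toshkent shahridir.",
)
print(result["answer"], result["score"])
```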