asahi417 committed
Commit 49defca
1 Parent(s): d326397

commit files to HF hub

README.md CHANGED
@@ -31,33 +31,33 @@ model-index:
  metrics:
  - name: BLEU4 (Question Answering)
    type: bleu4_question_answering
-   value: 5.66
+   value: 29.71
  - name: ROUGE-L (Question Answering)
    type: rouge_l_question_answering
-   value: 18.23
+   value: 55.07
  - name: METEOR (Question Answering)
    type: meteor_question_answering
-   value: 13.89
+   value: 41.65
  - name: BERTScore (Question Answering)
    type: bertscore_question_answering
-   value: 86.03
+   value: 94.96
  - name: MoverScore (Question Answering)
    type: moverscore_question_answering
-   value: 64.35
+   value: 83.99
  - name: AnswerF1Score (Question Answering)
    type: answer_f1_score__question_answering
-   value: 23.1
+   value: 73.33
  - name: AnswerExactMatch (Question Answering)
    type: answer_exact_match_question_answering
-   value: 5.52
+   value: 51.37
  ---
 
  # Model Card of `vocabtrimmer/mt5-small-trimmed-ru-120000-ruquad-qa`
- This model is a fine-tuned version of [vocabtrimmer/mt5-small-trimmed-ru-120000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-ru-120000) for the question answering task on the [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
+ This model is a fine-tuned version of [ckpts/mt5-small-trimmed-ru-120000](https://huggingface.co/ckpts/mt5-small-trimmed-ru-120000) for the question answering task on the [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
 
 
  ### Overview
- - **Language model:** [vocabtrimmer/mt5-small-trimmed-ru-120000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-ru-120000)
+ - **Language model:** [ckpts/mt5-small-trimmed-ru-120000](https://huggingface.co/ckpts/mt5-small-trimmed-ru-120000)
  - **Language:** ru
  - **Training data:** [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) (default)
  - **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
@@ -93,16 +93,16 @@ output = pipe("question: чем соответствует абсолютная
 
  |                  |   Score | Type    | Dataset                                                          |
  |:-----------------|--------:|:--------|:-----------------------------------------------------------------|
- | AnswerExactMatch |    5.52 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
- | AnswerF1Score    |   23.1  | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
- | BERTScore        |   86.03 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
- | Bleu_1           |   13.04 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
- | Bleu_2           |    9.76 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
- | Bleu_3           |    7.47 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
- | Bleu_4           |    5.66 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
- | METEOR           |   13.89 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
- | MoverScore       |   64.35 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
- | ROUGE_L          |   18.23 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
+ | AnswerExactMatch |   51.37 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
+ | AnswerF1Score    |   73.33 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
+ | BERTScore        |   94.96 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
+ | Bleu_1           |   46.17 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
+ | Bleu_2           |   40.21 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
+ | Bleu_3           |   34.84 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
+ | Bleu_4           |   29.71 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
+ | METEOR           |   41.65 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
+ | MoverScore       |   83.99 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
+ | ROUGE_L          |   55.07 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
 
 
 
@@ -114,12 +114,12 @@ The following hyperparameters were used during fine-tuning:
  - input_types: ['paragraph_question']
  - output_types: ['answer']
  - prefix_types: None
- - model: vocabtrimmer/mt5-small-trimmed-ru-120000
+ - model: ckpts/mt5-small-trimmed-ru-120000
  - max_length: 512
  - max_length_output: 32
- - epoch: 5
+ - epoch: 15
  - batch: 32
- - lr: 0.0005
+ - lr: 0.001
  - fp16: False
  - random_seed: 1
  - gradient_accumulation_steps: 2
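For reference, the card's usage section (visible in the second hunk's context line `output = pipe("question: ...`) drives the model through the standard `transformers` pipeline. Below is a minimal sketch of that pattern, assuming the `question: ..., context: ...` input format that `lmqg` question-answering models expect; the Russian question/paragraph pair is a made-up example, not taken from the card.

```python
from transformers import pipeline

# Load the fine-tuned QA model with the generic text2text-generation pipeline.
pipe = pipeline("text2text-generation", model="vocabtrimmer/mt5-small-trimmed-ru-120000-ruquad-qa")

# lmqg QA models take the question and its supporting paragraph concatenated
# as "question: ..., context: ..."; both strings here are hypothetical.
question = "Когда была основана Казань?"
context = "Казань была основана в 1005 году."
output = pipe(f"question: {question}, context: {context}")

print(output[0]["generated_text"])  # predicted answer span
```

The pipeline returns a list of dicts whose `generated_text` field holds the extracted answer.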
eval/metric.first.answer.paragraph_question.answer.lmqg_qg_ruquad.default.json CHANGED
@@ -1 +1 @@
- {"validation": {"Bleu_1": 0.13580593536793847, "Bleu_2": 0.10180848153479834, "Bleu_3": 0.078104486616958, "Bleu_4": 0.05876059281210598, "METEOR": 0.14077606116271782, "ROUGE_L": 0.1878528155634676, "BERTScore": 0.862298309187552, "MoverScore": 0.6441214873855503, "AnswerF1Score": 23.783942259933223, "AnswerExactMatch": 5.083399523431295}, "test": {"Bleu_1": 0.13035002966352713, "Bleu_2": 0.09760592518211351, "Bleu_3": 0.07470465677821343, "Bleu_4": 0.05661204561255439, "METEOR": 0.13894982525240993, "ROUGE_L": 0.18225659441453088, "BERTScore": 0.8602602633575867, "MoverScore": 0.6435025334899285, "AnswerF1Score": 23.104944192723003, "AnswerExactMatch": 5.520254169976171}}
+ {"validation": {"Bleu_1": 0.48334806955494575, "Bleu_2": 0.42446150972912017, "Bleu_3": 0.37188475189985354, "Bleu_4": 0.32157021715601214, "METEOR": 0.42513802327329575, "ROUGE_L": 0.5669332394759723, "BERTScore": 0.953661481582806, "MoverScore": 0.8476297354040059, "AnswerF1Score": 75.26023674768254, "AnswerExactMatch": 53.4154090548054}, "test": {"Bleu_1": 0.4617225885020593, "Bleu_2": 0.40205062666505914, "Bleu_3": 0.34837286365826947, "Bleu_4": 0.2971343256502367, "METEOR": 0.4164766087640628, "ROUGE_L": 0.5506870736401595, "BERTScore": 0.9496106096470138, "MoverScore": 0.8398631419298846, "AnswerF1Score": 73.32765776541284, "AnswerExactMatch": 51.37013502779984}}
eval/samples.test.hyp.paragraph_question.answer.lmqg_qg_ruquad.default.txt CHANGED
The diff for this file is too large to render. See raw diff
 
eval/samples.validation.hyp.paragraph_question.answer.lmqg_qg_ruquad.default.txt CHANGED
The diff for this file is too large to render. See raw diff