asahi417 committed
Commit: a2a11eb
Parent(s): bd86f4e

commit files to HF hub

README.md CHANGED
@@ -33,27 +33,27 @@ model-index:
   metrics:
   - name: BLEU4 (Question Generation)
     type: bleu4_question_generation
-    value: 0.0
+    value: 18.44
   - name: ROUGE-L (Question Generation)
     type: rouge_l_question_generation
-    value: 1.0
+    value: 33.96
   - name: METEOR (Question Generation)
     type: meteor_question_generation
-    value: 1.6
+    value: 29.21
   - name: BERTScore (Question Generation)
     type: bertscore_question_generation
-    value: 54.59
+    value: 86.35
   - name: MoverScore (Question Generation)
     type: moverscore_question_generation
-    value: 46.64
+    value: 65.04
 ---
 
 # Model Card of `vocabtrimmer/mt5-small-trimmed-ru-30000-ruquad-qg`
-This model is fine-tuned version of [vocabtrimmer/mt5-small-trimmed-ru-30000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-ru-30000) for question generation task on the [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
+This model is fine-tuned version of [ckpts/mt5-small-trimmed-ru-30000](https://huggingface.co/ckpts/mt5-small-trimmed-ru-30000) for question generation task on the [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
 
 
 ### Overview
--- **Language model:** [vocabtrimmer/mt5-small-trimmed-ru-30000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-ru-30000)
+- **Language model:** [ckpts/mt5-small-trimmed-ru-30000](https://huggingface.co/ckpts/mt5-small-trimmed-ru-30000)
 - **Language:** ru
 - **Training data:** [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) (default)
 - **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
@@ -89,14 +89,14 @@ output = pipe("Нелишним будет отметить, что, разви
 
 
 |            |   Score | Type    | Dataset                                                          |
 |:-----------|--------:|:--------|:-----------------------------------------------------------------|
-| BERTScore  |   54.59 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
-| Bleu_1     |    0.94 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
-| Bleu_2     |       0 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
-| Bleu_3     |       0 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
-| Bleu_4     |       0 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
-| METEOR     |     1.6 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
-| MoverScore |   46.64 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
-| ROUGE_L    |       1 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
+| BERTScore  |   86.35 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
+| Bleu_1     |    34.2 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
+| Bleu_2     |   27.36 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
+| Bleu_3     |   22.33 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
+| Bleu_4     |   18.44 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
+| METEOR     |   29.21 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
+| MoverScore |   65.04 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
+| ROUGE_L    |   33.96 | default | [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) |
 
 
 
@@ -108,12 +108,12 @@ The following hyperparameters were used during fine-tuning:
 - input_types: paragraph_answer
 - output_types: question
 - prefix_types: None
-- model: vocabtrimmer/mt5-small-trimmed-ru-30000
+- model: ckpts/mt5-small-trimmed-ru-30000
 - max_length: 512
 - max_length_output: 32
-- epoch: 17
+- epoch: 13
 - batch: 16
-- lr: 0.0005
+- lr: 0.001
 - fp16: False
 - random_seed: 1
 - gradient_accumulation_steps: 4
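For readers who want to relate the hyperparameter list in this diff to a standard training setup, the sketch below maps the values from the new side (epoch 13, batch 16, lr 0.001, gradient_accumulation_steps 4, fp16 False, random_seed 1, max_length_output 32) onto `transformers` `Seq2SeqTrainingArguments`. This is only an illustration of the recorded settings, not the actual fine-tuning script: `lmqg` drives training with its own loop, and the `output_dir` path is a hypothetical placeholder.

```python
# Illustration only: the hyperparameters recorded in this commit expressed as
# transformers Seq2SeqTrainingArguments. lmqg uses its own trainer, so this is
# not the project's training script; output_dir is a hypothetical placeholder.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="ckpts/mt5-small-trimmed-ru-30000-ruquad-qg",  # hypothetical path
    num_train_epochs=13,             # epoch: 13
    per_device_train_batch_size=16,  # batch: 16
    gradient_accumulation_steps=4,   # effective batch size 16 * 4 = 64
    learning_rate=1e-3,              # lr: 0.001
    fp16=False,                      # fp16: False
    seed=1,                          # random_seed: 1
    predict_with_generate=True,
    generation_max_length=32,        # max_length_output: 32
)
# max_length: 512 corresponds to the input truncation length applied at tokenization time.
```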
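The README's own usage snippet is visible here only as the truncated hunk context (`output = pipe("Нелишним будет отметить, что, разви...`), so a self-contained sketch may help. Treat it as an assumption-laden example rather than the model card's code: the `<hl>` answer-highlight convention follows lmqg's `paragraph_answer` input type listed in the hyperparameters, and the short Russian passage is invented for illustration.

```python
# Minimal sketch of question generation with this checkpoint.
# Assumptions: lmqg's paragraph_answer format wraps the answer span in <hl> tokens;
# the sample passage below is illustrative and not taken from this commit.
from transformers import pipeline

pipe = pipeline(
    "text2text-generation",
    model="vocabtrimmer/mt5-small-trimmed-ru-30000-ruquad-qg",
)

# Paragraph with the target answer ("России") highlighted by <hl> ... <hl>.
paragraph_answer = "Москва является столицей <hl> России <hl> и её крупнейшим городом."
output = pipe(paragraph_answer)
print(output[0]["generated_text"])  # a Russian question about the highlighted answer
```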
eval/metric.first.answer.paragraph_answer.question.lmqg_qg_ruquad.default.json CHANGED
@@ -1 +1 @@
- {"validation": {"Bleu_1": 0.009799196787148438, "Bleu_2": 0.0005852737978964647, "Bleu_3": 1.872450066297381e-09, "Bleu_4": 3.4351742662096258e-12}, "test": {"Bleu_1": 0.009271586917116997, "Bleu_2": 1.2719814522167759e-11, "Bleu_3": 1.4573788253750512e-14, "Bleu_4": 5.059486786015269e-16}}
+ {"validation": {"Bleu_1": 0.3363966955215555, "Bleu_2": 0.2685513229212724, "Bleu_3": 0.21805385924412107, "Bleu_4": 0.17879331823675573}, "test": {"Bleu_1": 0.3402116714952928, "Bleu_2": 0.27223546029454754, "Bleu_3": 0.22215274979551314, "Bleu_4": 0.18342740166657898}}
eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_ruquad.default.json CHANGED
@@ -1 +1 @@
- {"validation": {"Bleu_1": 0.00984820074676738, "Bleu_2": 0.0005911220113366271, "Bleu_3": 1.8943038111384596e-09, "Bleu_4": 3.4781834393905875e-12, "METEOR": 0.016419043854511567, "ROUGE_L": 0.010682781811433693, "BERTScore": 0.5459694829158621, "MoverScore": 0.4663293462842938}, "test": {"Bleu_1": 0.00937225422239563, "Bleu_2": 1.2880262413169668e-11, "Bleu_3": 1.4766198831191623e-14, "Bleu_4": 5.127782990981187e-16, "METEOR": 0.01595506138034452, "ROUGE_L": 0.009991331725971244, "BERTScore": 0.5458722341336752, "MoverScore": 0.4663653540034606}}
+ {"validation": {"Bleu_1": 0.33797951395861947, "Bleu_2": 0.26973605882492135, "Bleu_3": 0.2189382598511983, "Bleu_4": 0.17950680759167334, "METEOR": 0.29138918909137734, "ROUGE_L": 0.3394018581752814, "BERTScore": 0.8635220512996968, "MoverScore": 0.6509559853998595}, "test": {"Bleu_1": 0.34198476335205574, "Bleu_2": 0.27362625179438677, "Bleu_3": 0.22330512922189116, "Bleu_4": 0.18435824142429255, "METEOR": 0.29214592246635307, "ROUGE_L": 0.33959203064606674, "BERTScore": 0.8634884899956702, "MoverScore": 0.650355332058682}}
eval/samples.test.hyp.paragraph_answer.question.lmqg_qg_ruquad.default.txt CHANGED
The diff for this file is too large to render. See raw diff
 
eval/samples.validation.hyp.paragraph_answer.question.lmqg_qg_ruquad.default.txt CHANGED
The diff for this file is too large to render. See raw diff