asahi417 committed on
Commit
96fd51a
1 Parent(s): 6641b19

commit files to HF hub

README.md CHANGED
@@ -33,27 +33,27 @@ model-index:
   metrics:
   - name: BLEU4 (Question Generation)
     type: bleu4_question_generation
-    value: 0.0
+    value: 11.41
   - name: ROUGE-L (Question Generation)
     type: rouge_l_question_generation
-    value: 0.05
+    value: 26.96
   - name: METEOR (Question Generation)
     type: meteor_question_generation
-    value: 1.52
+    value: 28.19
   - name: BERTScore (Question Generation)
     type: bertscore_question_generation
-    value: 53.33
+    value: 84.1
   - name: MoverScore (Question Generation)
     type: moverscore_question_generation
-    value: 48.19
+    value: 82.89
 ---
 
 # Model Card of `vocabtrimmer/mt5-small-trimmed-ko-15000-koquad-qg`
-This model is fine-tuned version of [vocabtrimmer/mt5-small-trimmed-ko-15000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-ko-15000) for question generation task on the [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
+This model is fine-tuned version of [ckpts/mt5-small-trimmed-ko-15000](https://huggingface.co/ckpts/mt5-small-trimmed-ko-15000) for question generation task on the [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
 
 
 ### Overview
-- **Language model:** [vocabtrimmer/mt5-small-trimmed-ko-15000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-ko-15000)
+- **Language model:** [ckpts/mt5-small-trimmed-ko-15000](https://huggingface.co/ckpts/mt5-small-trimmed-ko-15000)
 - **Language:** ko
 - **Training data:** [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) (default)
 - **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
@@ -89,14 +89,14 @@ output = pipe("1990년 영화 《 <hl> 남부군 <hl> 》에서 단역으로 영
 
 |            |   Score | Type    | Dataset                                                          |
 |:-----------|--------:|:--------|:-----------------------------------------------------------------|
-| BERTScore  |   53.33 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
-| Bleu_1     |    0.01 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
-| Bleu_2     |    0    | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
-| Bleu_3     |    0    | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
-| Bleu_4     |    0    | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
-| METEOR     |    1.52 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
-| MoverScore |   48.19 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
-| ROUGE_L    |    0.05 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| BERTScore  |   84.1  | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| Bleu_1     |   27.29 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| Bleu_2     |   20.08 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| Bleu_3     |   15.08 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| Bleu_4     |   11.41 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| METEOR     |   28.19 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| MoverScore |   82.89 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| ROUGE_L    |   26.96 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
 
 
 
@@ -108,15 +108,15 @@ The following hyperparameters were used during fine-tuning:
 - input_types: paragraph_answer
 - output_types: question
 - prefix_types: None
-- model: vocabtrimmer/mt5-small-trimmed-ko-15000
+- model: ckpts/mt5-small-trimmed-ko-15000
 - max_length: 512
 - max_length_output: 32
-- epoch: 5
-- batch: 16
-- lr: 0.0005
+- epoch: 15
+- batch: 64
+- lr: 0.001
 - fp16: False
 - random_seed: 1
-- gradient_accumulation_steps: 4
+- gradient_accumulation_steps: 1
 - label_smoothing: 0.15
 
 The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-ko-15000-koquad-qg/raw/main/trainer_config.json).
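Both the old and new configurations train with `label_smoothing: 0.15`. As a rough illustration of what that option means, here is a minimal pure-Python sketch of label-smoothed cross-entropy over one logit vector. This is one common formulation (smoothing mass spread uniformly over all classes); the exact behavior depends on lmqg's trainer, and the function name is mine.

```python
import math

def smoothed_cross_entropy(logits, target, smoothing=0.15):
    """Cross-entropy against a smoothed target distribution:
    (1 - smoothing) on the gold class, plus `smoothing` spread
    uniformly over all classes (a sketch, not lmqg's exact loss)."""
    # Log-softmax with max-subtraction for numerical stability
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    log_probs = [x - log_z for x in logits]
    n = len(logits)
    uniform = smoothing / n
    loss = 0.0
    for i, lp in enumerate(log_probs):
        weight = (1.0 - smoothing) + uniform if i == target else uniform
        loss -= weight * lp
    return loss
```

With `smoothing=0`, this reduces to ordinary cross-entropy; a nonzero value keeps the model from driving the gold-class probability all the way to 1.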
eval/metric.first.answer.paragraph_answer.question.lmqg_qg_koquad.default.json CHANGED
@@ -1 +1 @@
-{"validation": {"Bleu_1": 0.00025834548172205476, "Bleu_2": 5.0175636956104234e-05, "Bleu_3": 2.962297835379666e-10, "Bleu_4": 7.309011298875784e-13}, "test": {"Bleu_1": 7.411112963889285e-05, "Bleu_2": 8.516453261694912e-13, "Bleu_3": 1.95941221532042e-15, "Bleu_4": 9.544535591096194e-17}}
+{"validation": {"Bleu_1": 0.2573962622743056, "Bleu_2": 0.18663320899957148, "Bleu_3": 0.1388819585282331, "Bleu_4": 0.1047906213707409}, "test": {"Bleu_1": 0.2697019760980053, "Bleu_2": 0.1982222326565144, "Bleu_3": 0.14876549480725032, "Bleu_4": 0.1126191285903239}}
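The old file held BLEU scores that were effectively zero (test Bleu_4 around 1e-16), while the new run reaches a test Bleu_4 of about 0.113. For readers unfamiliar with the metric, here is a minimal sentence-level BLEU-4 sketch on whitespace tokens (single reference, uniform n-gram weights, no smoothing); it is an illustration, not the scorer lmqg actually uses.

```python
import math
from collections import Counter

def bleu4(hypothesis, reference):
    """Sentence-level BLEU-4: geometric mean of 1- to 4-gram
    precisions, times a brevity penalty (unsmoothed sketch)."""
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, 5):
        hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped matches
        total = max(sum(hyp_ngrams.values()), 1)
        if overlap == 0:
            return 0.0  # any zero n-gram precision zeroes the geometric mean
        precisions.append(overlap / total)
    # Brevity penalty discourages overly short hypotheses
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 4)
```

The unsmoothed geometric mean explains why a broken model scores near zero: one missing 4-gram match collapses the whole product.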
eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_koquad.default.json CHANGED
@@ -1 +1 @@
-{"validation": {"Bleu_1": 0.00032984753713838534, "Bleu_2": 6.523498336271802e-05, "Bleu_3": 3.87458489407366e-10, "Bleu_4": 9.588408525859448e-13, "METEOR": 0.01634679868665742, "ROUGE_L": 0.0008804828189812416, "BERTScore": 0.5326547119599363, "MoverScore": 0.4813786173822095}, "test": {"Bleu_1": 7.596908058420152e-05, "Bleu_2": 8.729478120255356e-13, "Bleu_3": 2.0083041936305836e-15, "Bleu_4": 9.78206351113566e-17, "METEOR": 0.01520266564946064, "ROUGE_L": 0.0005290375067912147, "BERTScore": 0.5332766564813083, "MoverScore": 0.4819022589874832}}
+{"validation": {"Bleu_1": 0.2905307262569751, "Bleu_2": 0.21507857851291853, "Bleu_3": 0.16229763963337976, "Bleu_4": 0.12363771040093172, "METEOR": 0.28697164611245923, "ROUGE_L": 0.2782579478956691, "BERTScore": 0.8328077109594413, "MoverScore": 0.8304753003503604}, "test": {"Bleu_1": 0.27286551769735373, "Bleu_2": 0.20081309705653386, "Bleu_3": 0.1507696338491648, "Bleu_4": 0.11411980750832641, "METEOR": 0.281893492811955, "ROUGE_L": 0.269631536938204, "BERTScore": 0.8410498351761145, "MoverScore": 0.8289075803783602}}
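The README's metric table and model-index values come from the test split of this file: the raw [0, 1] scores are scaled to percentages and rounded to two decimals (e.g. Bleu_4 0.11411980750832641 becomes 11.41). A small sketch of that conversion; the inlined JSON string simply copies a few test-split values from the new file above.

```python
import json

# A few test-split scores copied from the updated eval JSON
raw = json.loads(
    '{"test": {"Bleu_4": 0.11411980750832641, "METEOR": 0.281893492811955, '
    '"ROUGE_L": 0.269631536938204, "BERTScore": 0.8410498351761145, '
    '"MoverScore": 0.8289075803783602}}'
)

# Model-card values: raw [0, 1] scores scaled to percentages
card_scores = {k: round(v * 100, 2) for k, v in raw["test"].items()}
print(card_scores)  # e.g. Bleu_4 -> 11.41, as shown in the README table
```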
eval/samples.test.hyp.paragraph_answer.question.lmqg_qg_koquad.default.txt CHANGED
The diff for this file is too large to render. See raw diff
 
eval/samples.validation.hyp.paragraph_answer.question.lmqg_qg_koquad.default.txt CHANGED
The diff for this file is too large to render. See raw diff