asahi417 committed on
Commit
02c2ef3
1 Parent(s): 006c74a

commit files to HF hub

README.md CHANGED
@@ -33,27 +33,27 @@ model-index:
   metrics:
   - name: BLEU4 (Question Generation)
     type: bleu4_question_generation
-    value: 4.0
+    value: 7.1
   - name: ROUGE-L (Question Generation)
     type: rouge_l_question_generation
-    value: 15.25
+    value: 26.69
   - name: METEOR (Question Generation)
     type: meteor_question_generation
-    value: 10.2
+    value: 15.72
   - name: BERTScore (Question Generation)
     type: bertscore_question_generation
-    value: 70.25
+    value: 79.0
   - name: MoverScore (Question Generation)
     type: moverscore_question_generation
-    value: 50.2
+    value: 55.18
 ---
 
 # Model Card of `vocabtrimmer/mt5-small-trimmed-fr-90000-frquad-qg`
-This model is fine-tuned version of [vocabtrimmer/mt5-small-trimmed-fr-90000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-fr-90000) for question generation task on the [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
+This model is fine-tuned version of [ckpts/mt5-small-trimmed-fr-90000](https://huggingface.co/ckpts/mt5-small-trimmed-fr-90000) for question generation task on the [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
 
 
 ### Overview
-- **Language model:** [vocabtrimmer/mt5-small-trimmed-fr-90000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-fr-90000)
+- **Language model:** [ckpts/mt5-small-trimmed-fr-90000](https://huggingface.co/ckpts/mt5-small-trimmed-fr-90000)
 - **Language:** fr
 - **Training data:** [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (default)
 - **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
@@ -89,14 +89,14 @@ output = pipe("Créateur » (Maker), lui aussi au singulier, « <hl> le Suprême
 
 |            |   Score | Type    | Dataset                                                          |
 |:-----------|--------:|:--------|:-----------------------------------------------------------------|
-| BERTScore  |   70.25 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| Bleu_1     |   16.27 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| Bleu_2     |    9.5  | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| Bleu_3     |    6.07 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| Bleu_4     |    4    | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| METEOR     |   10.2  | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| MoverScore |   50.2  | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| ROUGE_L    |   15.25 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| BERTScore  |   79    | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| Bleu_1     |   27.02 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| Bleu_2     |   15.4  | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| Bleu_3     |   10.21 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| Bleu_4     |    7.1  | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| METEOR     |   15.72 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| MoverScore |   55.18 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| ROUGE_L    |   26.69 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
 
 
 
@@ -108,12 +108,12 @@ The following hyperparameters were used during fine-tuning:
 - input_types: paragraph_answer
 - output_types: question
 - prefix_types: None
-- model: vocabtrimmer/mt5-small-trimmed-fr-90000
+- model: ckpts/mt5-small-trimmed-fr-90000
 - max_length: 512
 - max_length_output: 32
-- epoch: 15
+- epoch: 13
 - batch: 16
-- lr: 0.0005
+- lr: 0.001
 - fp16: False
 - random_seed: 1
 - gradient_accumulation_steps: 4
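A note on the updated hyperparameters: with batch 16 and gradient_accumulation_steps 4, the effective batch size per optimizer update works out to 64. A minimal sketch (the dict simply mirrors the README list; `effective_batch` is an illustrative name, not an lmqg parameter):

```python
# Fine-tuning hyperparameters from the updated README (values copied verbatim).
hyperparameters = {
    "input_types": "paragraph_answer",
    "output_types": "question",
    "prefix_types": None,
    "model": "ckpts/mt5-small-trimmed-fr-90000",
    "max_length": 512,
    "max_length_output": 32,
    "epoch": 13,
    "batch": 16,
    "lr": 0.001,
    "fp16": False,
    "random_seed": 1,
    "gradient_accumulation_steps": 4,
}

# Effective batch size per optimizer step = per-device batch * accumulation steps.
effective_batch = hyperparameters["batch"] * hyperparameters["gradient_accumulation_steps"]
print(effective_batch)  # 64
```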
eval/metric.first.answer.paragraph_answer.question.lmqg_qg_frquad.default.json CHANGED
@@ -1 +1 @@
-{"validation": {"Bleu_1": 0.15844031547856238, "Bleu_2": 0.09241141858929372, "Bleu_3": 0.05904239820282034, "Bleu_4": 0.03822901721151452}, "test": {"Bleu_1": 0.1610889853794724, "Bleu_2": 0.09400881296452392, "Bleu_3": 0.059994060808414636, "Bleu_4": 0.039535045257377645}}
+{"validation": {"Bleu_1": 0.27194073902600563, "Bleu_2": 0.14896798316369608, "Bleu_3": 0.09799160106047569, "Bleu_4": 0.06776030648066865}, "test": {"Bleu_1": 0.26865804461318615, "Bleu_2": 0.15292376417129053, "Bleu_3": 0.10115797365122045, "Bleu_4": 0.07027613455414455}}
eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_frquad.default.json CHANGED
@@ -1 +1 @@
-{"validation": {"Bleu_1": 0.15992288580673075, "Bleu_2": 0.09327381723108208, "Bleu_3": 0.05955655132495215, "Bleu_4": 0.03853502323406688, "METEOR": 0.09475159659929087, "ROUGE_L": 0.14609194882888013, "BERTScore": 0.6927544384103048, "MoverScore": 0.4993774354052682}, "test": {"Bleu_1": 0.16269895557115005, "Bleu_2": 0.09503009899621674, "Bleu_3": 0.06070472020206799, "Bleu_4": 0.04001954266553894, "METEOR": 0.10198954300292216, "ROUGE_L": 0.15250461385855024, "BERTScore": 0.7025219590436294, "MoverScore": 0.5020140289434077}}
+{"validation": {"Bleu_1": 0.2738904556924366, "Bleu_2": 0.1504603411168633, "Bleu_3": 0.0992515904990098, "Bleu_4": 0.06880046501310207, "METEOR": 0.14769747960785995, "ROUGE_L": 0.2830454361222303, "BERTScore": 0.7790486432401258, "MoverScore": 0.5469365639008806}, "test": {"Bleu_1": 0.27019839207653623, "Bleu_2": 0.15402706392294171, "Bleu_3": 0.1020903970198885, "Bleu_4": 0.07097220930248134, "METEOR": 0.15716465876295369, "ROUGE_L": 0.26689498813211016, "BERTScore": 0.7899962979034313, "MoverScore": 0.5517562897710804}}
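The scores in the updated README appear to be the test-split values from this JSON, scaled by 100 and rounded to two decimal places. A quick sanity check (the JSON literal is an excerpt of the new file contents; `table` is an illustrative name):

```python
import json

# Excerpt of the new test-split metrics from
# eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_frquad.default.json
raw = ('{"test": {"Bleu_4": 0.07097220930248134, "METEOR": 0.15716465876295369, '
       '"ROUGE_L": 0.26689498813211016, "BERTScore": 0.7899962979034313, '
       '"MoverScore": 0.5517562897710804}}')

scores = json.loads(raw)["test"]

# Scale to percentages and round to two decimals, as in the README table.
table = {name: round(value * 100, 2) for name, value in scores.items()}
print(table)  # e.g. Bleu_4 -> 7.1, ROUGE_L -> 26.69, BERTScore -> 79.0
```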
eval/samples.test.hyp.paragraph_answer.question.lmqg_qg_frquad.default.txt CHANGED
The diff for this file is too large to render. See raw diff
 
eval/samples.validation.hyp.paragraph_answer.question.lmqg_qg_frquad.default.txt CHANGED
The diff for this file is too large to render. See raw diff