asahi417 committed
Commit 35fc712
1 Parent(s): bf5c0ae

commit files to HF hub
README.md CHANGED
@@ -33,27 +33,27 @@ model-index:
   metrics:
   - name: BLEU4 (Question Generation)
     type: bleu4_question_generation
-    value: 0.0
+    value: 6.05
   - name: ROUGE-L (Question Generation)
     type: rouge_l_question_generation
-    value: 0.12
+    value: 25.14
   - name: METEOR (Question Generation)
     type: meteor_question_generation
-    value: 0.01
+    value: 14.64
   - name: BERTScore (Question Generation)
     type: bertscore_question_generation
-    value: 63.27
+    value: 78.03
   - name: MoverScore (Question Generation)
     type: moverscore_question_generation
-    value: 44.99
+    value: 54.29
 ---
 
 # Model Card of `vocabtrimmer/mt5-small-trimmed-fr-60000-frquad-qg`
-This model is fine-tuned version of [vocabtrimmer/mt5-small-trimmed-fr-60000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-fr-60000) for question generation task on the [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
+This model is fine-tuned version of [ckpts/mt5-small-trimmed-fr-60000](https://huggingface.co/ckpts/mt5-small-trimmed-fr-60000) for question generation task on the [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
 
 
 ### Overview
-- **Language model:** [vocabtrimmer/mt5-small-trimmed-fr-60000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-fr-60000)
+- **Language model:** [ckpts/mt5-small-trimmed-fr-60000](https://huggingface.co/ckpts/mt5-small-trimmed-fr-60000)
 - **Language:** fr
 - **Training data:** [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (default)
 - **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
@@ -89,14 +89,14 @@ output = pipe("Créateur » (Maker), lui aussi au singulier, « <hl> le Suprême
 
 |            |   Score | Type    | Dataset                                                          |
 |:-----------|--------:|:--------|:-----------------------------------------------------------------|
-| BERTScore  |   63.27 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| Bleu_1     |       0 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| Bleu_2     |       0 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| Bleu_3     |       0 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| Bleu_4     |       0 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| METEOR     |    0.01 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| MoverScore |   44.99 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| ROUGE_L    |    0.12 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| BERTScore  |   78.03 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| Bleu_1     |   25.16 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| Bleu_2     |   13.62 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| Bleu_3     |    8.85 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| Bleu_4     |    6.05 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| METEOR     |   14.64 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| MoverScore |   54.29 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| ROUGE_L    |   25.14 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
 
 
 
@@ -108,12 +108,12 @@ The following hyperparameters were used during fine-tuning:
 - input_types: paragraph_answer
 - output_types: question
 - prefix_types: None
-- model: vocabtrimmer/mt5-small-trimmed-fr-60000
+- model: ckpts/mt5-small-trimmed-fr-60000
 - max_length: 512
 - max_length_output: 32
-- epoch: 12
+- epoch: 15
 - batch: 16
-- lr: 0.0001
+- lr: 0.0005
 - fp16: False
 - random_seed: 1
 - gradient_accumulation_steps: 4
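The hyperparameters above list `input_types: paragraph_answer`: the model reads the full paragraph with the answer span wrapped in `<hl>` tokens (visible in the truncated `pipe(...)` call in the second hunk header). A minimal sketch of that input formatting; the helper name is ours, not part of the `lmqg` API:

```python
def highlight_answer(paragraph: str, answer: str) -> str:
    """Wrap the first occurrence of `answer` in <hl> tokens, the
    paragraph_answer input format lmqg-style QG models expect.
    (Function name is illustrative, not an lmqg API.)"""
    start = paragraph.index(answer)  # raises ValueError if the span is absent
    end = start + len(answer)
    return f"{paragraph[:start]}<hl> {answer} <hl>{paragraph[end:]}"

# Reproduces the highlighted input visible in the hunk header above.
example = highlight_answer(
    "Créateur » (Maker), lui aussi au singulier, « le Suprême Berger »",
    "le Suprême Berger",
)
```

The formatted string would then be passed to a `text2text-generation` pipeline loaded from `vocabtrimmer/mt5-small-trimmed-fr-60000-frquad-qg`, which generates the question as plain text.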
eval/metric.first.answer.paragraph_answer.question.lmqg_qg_frquad.default.json CHANGED
@@ -1 +1 @@
- {"validation": {"Bleu_1": 3.640775730149816e-21, "Bleu_2": 5.148834415136274e-21, "Bleu_3": 8.505895224925895e-17, "Bleu_4": 1.0932653419045678e-14}, "test": {"Bleu_1": 2.9949672108421435e-21, "Bleu_2": 4.235523248435348e-21, "Bleu_3": 6.997101493110517e-17, "Bleu_4": 8.993396172797412e-15}}
+ {"validation": {"Bleu_1": 0.2560221214971848, "Bleu_2": 0.1334999688749848, "Bleu_3": 0.08539753277404635, "Bleu_4": 0.0576964460765076}, "test": {"Bleu_1": 0.2504652561055845, "Bleu_2": 0.13536557523114698, "Bleu_3": 0.08786650993016741, "Bleu_4": 0.06007580753309587}}
eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_frquad.default.json CHANGED
@@ -1 +1 @@
- {"validation": {"Bleu_1": 3.7822024640188776e-21, "Bleu_2": 5.348842020256005e-21, "Bleu_3": 8.765525680516315e-17, "Bleu_4": 1.122114215171343e-14, "METEOR": 0.00016821571452022558, "ROUGE_L": 0.00013995829830148413, "BERTScore": 0.632249019771891, "MoverScore": 0.45022997918805396}, "test": {"Bleu_1": 3.0840474231986844e-21, "Bleu_2": 4.361501692889023e-21, "Bleu_3": 7.130618547236184e-17, "Bleu_4": 9.117437271169e-15, "METEOR": 5.2173408711299115e-05, "ROUGE_L": 0.0012095525450174294, "BERTScore": 0.6327488172401502, "MoverScore": 0.4498655172241212}}
+ {"validation": {"Bleu_1": 0.2576310793047975, "Bleu_2": 0.1347648746601711, "Bleu_3": 0.0863781168057983, "Bleu_4": 0.05835483236081858, "METEOR": 0.13637530589619343, "ROUGE_L": 0.26853968786569693, "BERTScore": 0.768718460474192, "MoverScore": 0.5390085184058023}, "test": {"Bleu_1": 0.2516083338251711, "Bleu_2": 0.1362446692499852, "Bleu_3": 0.08854043563812196, "Bleu_4": 0.06049580671587121, "METEOR": 0.14636413895787062, "ROUGE_L": 0.25144462549993857, "BERTScore": 0.780288153153793, "MoverScore": 0.5429103204494686}}
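The README metrics are these raw sentence-level `test` scores scaled to percentages and rounded to two decimals (e.g. `0.780288…` becomes `78.03`). A minimal sketch of that mapping, with values copied from the `test` object in the JSON above:

```python
# Raw test-split scores from the eval JSON above (truncated to the fields
# reported in the model card's metric table).
raw_test_scores = {
    "Bleu_4": 0.06049580671587121,
    "ROUGE_L": 0.25144462549993857,
    "METEOR": 0.14636413895787062,
    "BERTScore": 0.780288153153793,
    "MoverScore": 0.5429103204494686,
}

# Scale to percentages and round to two decimals, as the card does.
card_values = {name: round(100 * raw, 2) for name, raw in raw_test_scores.items()}
```

Running this reproduces the updated table values: BLEU-4 6.05, ROUGE-L 25.14, METEOR 14.64, BERTScore 78.03, MoverScore 54.29.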
eval/samples.test.hyp.paragraph_answer.question.lmqg_qg_frquad.default.txt CHANGED
The diff for this file is too large to render. See raw diff

eval/samples.validation.hyp.paragraph_answer.question.lmqg_qg_frquad.default.txt CHANGED
The diff for this file is too large to render. See raw diff