asahi417 committed
Commit 97f6834
Parent: 78ea20f

commit files to HF hub

README.md CHANGED
@@ -29,33 +29,33 @@ model-index:
   metrics:
   - name: BLEU4 (Question Answering)
     type: bleu4_question_answering
-    value: 0.0
+    value: 12.72
   - name: ROUGE-L (Question Answering)
     type: rouge_l_question_answering
-    value: 0.0
+    value: 35.29
   - name: METEOR (Question Answering)
     type: meteor_question_answering
-    value: 0.0
+    value: 31.94
   - name: BERTScore (Question Answering)
     type: bertscore_question_answering
-    value: 53.44
+    value: 91.81
   - name: MoverScore (Question Answering)
     type: moverscore_question_answering
-    value: 57.38
+    value: 78.23
   - name: AnswerF1Score (Question Answering)
     type: answer_f1_score__question_answering
-    value: 0.0
+    value: 61.17
   - name: AnswerExactMatch (Question Answering)
     type: answer_exact_match_question_answering
-    value: 0.01
+    value: 45.63
 ---
 
 # Model Card of `vocabtrimmer/mt5-small-trimmed-it-15000-itquad-qa`
-This model is a fine-tuned version of [vocabtrimmer/mt5-small-trimmed-it-15000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-it-15000) for the question answering task on the [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
+This model is a fine-tuned version of [ckpts/mt5-small-trimmed-it-15000](https://huggingface.co/ckpts/mt5-small-trimmed-it-15000) for the question answering task on the [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
 
 
 ### Overview
-- **Language model:** [vocabtrimmer/mt5-small-trimmed-it-15000](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-it-15000)
+- **Language model:** [ckpts/mt5-small-trimmed-it-15000](https://huggingface.co/ckpts/mt5-small-trimmed-it-15000)
 - **Language:** it
 - **Training data:** [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) (default)
 - **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
@@ -91,16 +91,16 @@ output = pipe("question: Quale batterio ha il nome del paese che colpisce di pi
 
 |                  |   Score | Type    | Dataset                                                          |
 |:-----------------|--------:|:--------|:-----------------------------------------------------------------|
-| AnswerExactMatch |    0.01 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
-| AnswerF1Score    |    0    | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
-| BERTScore        |   53.44 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
-| Bleu_1           |    0    | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
-| Bleu_2           |    0    | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
-| Bleu_3           |    0    | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
-| Bleu_4           |    0    | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
-| METEOR           |    0    | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
-| MoverScore       |   57.38 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
-| ROUGE_L          |    0    | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
+| AnswerExactMatch |   45.63 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
+| AnswerF1Score    |   61.17 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
+| BERTScore        |   91.81 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
+| Bleu_1           |   24.63 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
+| Bleu_2           |   19.27 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
+| Bleu_3           |   15.67 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
+| Bleu_4           |   12.72 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
+| METEOR           |   31.94 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
+| MoverScore       |   78.23 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
+| ROUGE_L          |   35.29 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
 
 
 
@@ -112,15 +112,15 @@ The following hyperparameters were used during fine-tuning:
 - input_types: ['paragraph_question']
 - output_types: ['answer']
 - prefix_types: None
-- model: vocabtrimmer/mt5-small-trimmed-it-15000
+- model: ckpts/mt5-small-trimmed-it-15000
 - max_length: 512
 - max_length_output: 32
-- epoch: 2
+- epoch: 14
 - batch: 32
-- lr: 0.0001
+- lr: 0.001
 - fp16: False
 - random_seed: 1
-- gradient_accumulation_steps: 4
+- gradient_accumulation_steps: 2
 - label_smoothing: 0.15
 
 The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-it-15000-itquad-qa/raw/main/trainer_config.json).
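For context, the usage snippet referenced in the second hunk header (`output = pipe(...)`) is unchanged by this commit. Below is a minimal sketch of that call pattern, assuming the standard `transformers` text2text-generation pipeline; the context string is a placeholder, since the card's example paragraph is truncated in the hunk header and not reproduced here.

```python
# A sketch only: assumes the standard transformers text2text-generation
# pipeline; the "question: ... context: ..." prompt format comes from the
# hunk context above.
from transformers import pipeline

pipe = pipeline(
    "text2text-generation",
    model="vocabtrimmer/mt5-small-trimmed-it-15000-itquad-qa",
)

# Placeholder context: the card's actual paragraph is truncated in the
# hunk header, so it is not reproduced here.
output = pipe(
    "question: Quale batterio ha il nome del paese che colpisce di piu? "
    "context: ..."
)
print(output)  # e.g. [{'generated_text': '...'}]
```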
eval/metric.first.answer.paragraph_question.answer.lmqg_qg_itquad.default.json CHANGED
@@ -1 +1 @@
-{"validation": {"Bleu_1": 1.4566504428404137e-20, "Bleu_2": 6.65999673968088e-20, "Bleu_3": 1.266191296030615e-19, "Bleu_4": 2.0793211550277366e-19, "METEOR": 6.297650582729482e-05, "ROUGE_L": 0.0, "BERTScore": 0.538802718337567, "MoverScore": 0.5782300506330317, "AnswerF1Score": 0.0, "AnswerExactMatch": 0.0}, "test": {"Bleu_1": 1.1464762199873616e-20, "Bleu_2": 6.542641457860311e-20, "Bleu_3": 1.34123408700942e-19, "Bleu_4": 2.294806670903765e-19, "METEOR": 7.575901058732173e-06, "ROUGE_L": 0.0, "BERTScore": 0.5343892611675811, "MoverScore": 0.5738221023340756, "AnswerF1Score": 0.0, "AnswerExactMatch": 0.013142331449599158}}
+{"validation": {"Bleu_1": 0.2580067857275919, "Bleu_2": 0.20108730465281174, "Bleu_3": 0.16401300224060814, "Bleu_4": 0.13360125215512206, "METEOR": 0.3419925245916212, "ROUGE_L": 0.3576084751547699, "BERTScore": 0.9314583699373272, "MoverScore": 0.8156556259999925, "AnswerF1Score": 66.06181535700281, "AnswerExactMatch": 53.52871599421737}, "test": {"Bleu_1": 0.24630429838525467, "Bleu_2": 0.19269581000798502, "Bleu_3": 0.15672132801810706, "Bleu_4": 0.12721339743498133, "METEOR": 0.3194144078191241, "ROUGE_L": 0.3529487866544257, "BERTScore": 0.9180507633121745, "MoverScore": 0.7823140510035724, "AnswerF1Score": 61.16997980920731, "AnswerExactMatch": 45.63017479300828}}
eval/samples.test.hyp.paragraph_question.answer.lmqg_qg_itquad.default.txt CHANGED
The diff for this file is too large to render.
 
eval/samples.validation.hyp.paragraph_question.answer.lmqg_qg_itquad.default.txt CHANGED
The diff for this file is too large to render.