commit files to HF hub
README.md
CHANGED
@@ -29,25 +29,25 @@ model-index:
     metrics:
     - name: BLEU4 (Question Answering)
       type: bleu4_question_answering
-      value:
+      value: 9.62
     - name: ROUGE-L (Question Answering)
       type: rouge_l_question_answering
-      value:
+      value: 30.92
     - name: METEOR (Question Answering)
       type: meteor_question_answering
-      value:
+      value: 26.47
     - name: BERTScore (Question Answering)
       type: bertscore_question_answering
-      value:
+      value: 90.14
     - name: MoverScore (Question Answering)
       type: moverscore_question_answering
-      value:
+      value: 74.5
     - name: AnswerF1Score (Question Answering)
       type: answer_f1_score__question_answering
-      value:
+      value: 51.47
     - name: AnswerExactMatch (Question Answering)
       type: answer_exact_match_question_answering
-      value:
+      value: 36.0
 ---
 
 # Model Card of `vocabtrimmer/mt5-small-trimmed-it-itquad-qa`
@@ -91,16 +91,16 @@ output = pipe("question: Quale batterio ha il nome del paese che colpisce di pi
 
 |                  |   Score | Type    | Dataset                                                          |
 |:-----------------|--------:|:--------|:-----------------------------------------------------------------|
-| AnswerExactMatch |
-| AnswerF1Score    |
-| BERTScore        |
-| Bleu_1           |
-| Bleu_2           |
-| Bleu_3           |
-| Bleu_4           |
-| METEOR           |
-| MoverScore       |
-| ROUGE_L          |
+| AnswerExactMatch |   36    | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
+| AnswerF1Score    |   51.47 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
+| BERTScore        |   90.14 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
+| Bleu_1           |   20.22 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
+| Bleu_2           |   15.36 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
+| Bleu_3           |   12.16 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
+| Bleu_4           |    9.62 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
+| METEOR           |   26.47 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
+| MoverScore       |   74.5  | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
+| ROUGE_L          |   30.92 | default | [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) |
 
 
 
@@ -115,12 +115,12 @@ The following hyperparameters were used during fine-tuning:
 - model: vocabtrimmer/mt5-small-trimmed-it
 - max_length: 512
 - max_length_output: 32
-- epoch:
+- epoch: 13
 - batch: 32
-- lr: 0.
+- lr: 0.0005
 - fp16: False
 - random_seed: 1
-- gradient_accumulation_steps:
+- gradient_accumulation_steps: 2
 - label_smoothing: 0.15
 
 The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-it-itquad-qa/raw/main/trainer_config.json).
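For reference, the `pipe` object in the card's usage example (the context line of the second hunk) is not defined anywhere in this diff. A minimal sketch of how it could be constructed with a plain `transformers` text-to-text pipeline follows; the task string and the "question: ..., context: ..." prompt format are assumptions based on typical QA model cards, not something stated in this commit:

```python
# Minimal sketch (assumption, not part of this commit): build the `pipe` object
# referenced by the card's usage example with a standard transformers pipeline.
from transformers import pipeline

pipe = pipeline("text2text-generation", model="vocabtrimmer/mt5-small-trimmed-it-itquad-qa")

# Hypothetical input following the prompt format suggested by the truncated
# example in the card.
question = "Quale batterio ha il nome del paese che colpisce di pi"  # truncated in the card
context = "..."  # the paragraph that contains the answer
output = pipe(f"question: {question}, context: {context}")
print(output[0]["generated_text"])
```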
eval/metric.first.answer.paragraph_question.answer.lmqg_qg_itquad.default.json
CHANGED
@@ -1 +1 @@
-{"validation": {"Bleu_1": 0.
+{"validation": {"Bleu_1": 0.21083394217721843, "Bleu_2": 0.16030694672703755, "Bleu_3": 0.12768239875259021, "Bleu_4": 0.10107736671762473, "METEOR": 0.2817387368332181, "ROUGE_L": 0.3112476321282083, "BERTScore": 0.9142447584623857, "MoverScore": 0.7745192501933702, "AnswerF1Score": 55.97960751263587, "AnswerExactMatch": 42.98856617163885}, "test": {"Bleu_1": 0.2022175606832424, "Bleu_2": 0.15355997720884773, "Bleu_3": 0.12164802976426284, "Bleu_4": 0.09622641132646627, "METEOR": 0.2646909834754643, "ROUGE_L": 0.3092117112734317, "BERTScore": 0.9014100162690597, "MoverScore": 0.7449889153606168, "AnswerF1Score": 51.46633302083642, "AnswerExactMatch": 35.996845840452096}}
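The rounded scores in the README table appear to be derived from the "test" block of this JSON file: metrics reported on a 0-1 scale are multiplied by 100, AnswerF1Score and AnswerExactMatch are already on a 0-100 scale, and everything is rounded to two decimals. A small sketch of that conversion (the "scale up when ≤ 1" rule is an inference from the numbers, not stated in the commit):

```python
# Sketch: reproduce the README table values from the raw test scores in this file.
import json

path = "eval/metric.first.answer.paragraph_question.answer.lmqg_qg_itquad.default.json"
with open(path) as f:
    test_scores = json.load(f)["test"]

for name, value in test_scores.items():
    # Assumption: scores on a 0-1 scale become percentages; AnswerF1Score and
    # AnswerExactMatch are already percentages and are left as-is.
    scaled = value * 100 if value <= 1 else value
    print(f"{name}: {round(scaled, 2)}")  # e.g. Bleu_4 -> 9.62, ROUGE_L -> 30.92
```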
eval/samples.test.hyp.paragraph_question.answer.lmqg_qg_itquad.default.txt
CHANGED
The diff for this file is too large to render. See the raw diff.
eval/samples.validation.hyp.paragraph_question.answer.lmqg_qg_itquad.default.txt
CHANGED
The diff for this file is too large to render. See the raw diff.