commit files to HF hub
README.md CHANGED

@@ -31,25 +31,25 @@ model-index:
     metrics:
     - name: BLEU4 (Question Answering)
       type: bleu4_question_answering
-      value:
+      value: 10.43
     - name: ROUGE-L (Question Answering)
       type: rouge_l_question_answering
-      value:
+      value: 22.59
     - name: METEOR (Question Answering)
       type: meteor_question_answering
-      value:
+      value: 17.44
     - name: BERTScore (Question Answering)
       type: bertscore_question_answering
-      value:
+      value: 86.74
     - name: MoverScore (Question Answering)
       type: moverscore_question_answering
-      value:
+      value: 66.71
     - name: AnswerF1Score (Question Answering)
       type: answer_f1_score__question_answering
-      value:
+      value: 34.34
     - name: AnswerExactMatch (Question Answering)
       type: answer_exact_match_question_answering
-      value:
+      value: 20.01
 ---

 # Model Card of `vocabtrimmer/mt5-small-trimmed-fr-60000-frquad-qa`

@@ -93,16 +93,16 @@ output = pipe("question: En quelle année a-t-on trouvé trace d'un haut fournea

 | | Score | Type | Dataset |
 |:-----------------|--------:|:--------|:-----------------------------------------------------------------|
-| AnswerExactMatch |
-| AnswerF1Score |
-| BERTScore |
-| Bleu_1 |
-| Bleu_2 |
-| Bleu_3 |
-| Bleu_4 |
-| METEOR |
-| MoverScore |
-| ROUGE_L |
+| AnswerExactMatch |   20.01 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| AnswerF1Score    |   34.34 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| BERTScore        |   86.74 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| Bleu_1           |   17.96 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| Bleu_2           |   14.51 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| Bleu_3           |   12.22 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| Bleu_4           |   10.43 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| METEOR           |   17.44 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| MoverScore       |   66.71 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| ROUGE_L          |   22.59 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |

@@ -117,7 +117,7 @@ The following hyperparameters were used during fine-tuning:
 - model: ckpts/mt5-small-trimmed-fr-60000
 - max_length: 512
 - max_length_output: 32
-- epoch:
+- epoch: 24
 - batch: 32
 - lr: 0.0005
 - fp16: False
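The context line of the second hunk cuts off the card's inference example mid-string. For reference, here is a minimal usage sketch, assuming this checkpoint follows the standard lmqg input format ("question: ..., context: ...") served through the transformers text2text-generation pipeline; the question and context strings below are placeholders, not the card's own (truncated) example:

```python
from transformers import pipeline

# Minimal sketch, not the card's verbatim example. Assumption: the model
# expects a single string of the form "question: ..., context: ...".
pipe = pipeline(
    "text2text-generation",
    model="vocabtrimmer/mt5-small-trimmed-fr-60000-frquad-qa",
)

# Placeholder question/context pair for illustration only.
output = pipe(
    "question: En quelle année la tour Eiffel a-t-elle été construite ?, "
    "context: La tour Eiffel a été construite en 1889 pour l'Exposition universelle."
)
print(output[0]["generated_text"])
```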
eval/metric.first.answer.paragraph_question.answer.lmqg_qg_frquad.default.json CHANGED

@@ -1 +1 @@
-{"validation": {"Bleu_1": 0.
+{"validation": {"Bleu_1": 0.18361390524833565, "Bleu_2": 0.14940535626034082, "Bleu_3": 0.12650520009641847, "Bleu_4": 0.10922585702634707, "METEOR": 0.16348777639671275, "ROUGE_L": 0.2188013665475582, "BERTScore": 0.8620781717889731, "MoverScore": 0.6509440679371835, "AnswerF1Score": 32.60951345488031, "AnswerExactMatch": 14.993726474278544}, "test": {"Bleu_1": 0.17964281545464109, "Bleu_2": 0.14512391122940552, "Bleu_3": 0.12217183104550612, "Bleu_4": 0.10427414797071774, "METEOR": 0.17442012749210453, "ROUGE_L": 0.22593770887738, "BERTScore": 0.8673907224663826, "MoverScore": 0.6670935982663652, "AnswerF1Score": 34.342144673201744, "AnswerExactMatch": 20.012547051442912}}
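The percentages added to the model-index and the results table above are the test-split scores from this JSON file. A minimal sketch of the mapping, assuming the file is read from the repo path shown: the fractional metrics are scaled to percentages and rounded to two decimals, while AnswerF1Score and AnswerExactMatch are already stored on a 0-100 scale.

```python
import json

# Sketch: reproduce the rounded values shown in the model card from the
# raw eval JSON committed above.
with open("eval/metric.first.answer.paragraph_question.answer.lmqg_qg_frquad.default.json") as f:
    scores = json.load(f)["test"]

for name, value in scores.items():
    # Answer* metrics are already percentages; the rest are fractions.
    shown = value if name.startswith("Answer") else value * 100
    print(f"{name}: {shown:.2f}")  # e.g. Bleu_4: 10.43, BERTScore: 86.74
```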
eval/samples.test.hyp.paragraph_question.answer.lmqg_qg_frquad.default.txt CHANGED

The diff for this file is too large to render; see the raw diff.

eval/samples.validation.hyp.paragraph_question.answer.lmqg_qg_frquad.default.txt CHANGED

The diff for this file is too large to render; see the raw diff.