asahi417 committed
Commit 2fb0659
1 Parent(s): 68c1333

commit files to HF hub

README.md CHANGED
@@ -31,25 +31,25 @@ model-index:
  metrics:
  - name: BLEU4 (Question Answering)
  type: bleu4_question_answering
- value: 3.43
+ value: 10.43
  - name: ROUGE-L (Question Answering)
  type: rouge_l_question_answering
- value: 9.98
+ value: 22.59
  - name: METEOR (Question Answering)
  type: meteor_question_answering
- value: 11.09
+ value: 17.44
  - name: BERTScore (Question Answering)
  type: bertscore_question_answering
- value: 79.63
+ value: 86.74
  - name: MoverScore (Question Answering)
  type: moverscore_question_answering
- value: 54.54
+ value: 66.71
  - name: AnswerF1Score (Question Answering)
  type: answer_f1_score__question_answering
- value: 13.74
+ value: 34.34
  - name: AnswerExactMatch (Question Answering)
  type: answer_exact_match_question_answering
- value: 2.29
+ value: 20.01
  ---

  # Model Card of `vocabtrimmer/mt5-small-trimmed-fr-60000-frquad-qa`
@@ -93,16 +93,16 @@ output = pipe("question: En quelle année a-t-on trouvé trace d'un haut fournea

  | | Score | Type | Dataset |
  |:-----------------|--------:|:--------|:-----------------------------------------------------------------|
- | AnswerExactMatch | 2.29 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
- | AnswerF1Score | 13.74 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
- | BERTScore | 79.63 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
- | Bleu_1 | 6.85 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
- | Bleu_2 | 5.05 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
- | Bleu_3 | 4.08 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
- | Bleu_4 | 3.43 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
- | METEOR | 11.09 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
- | MoverScore | 54.54 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
- | ROUGE_L | 9.98 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+ | AnswerExactMatch | 20.01 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+ | AnswerF1Score | 34.34 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+ | BERTScore | 86.74 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+ | Bleu_1 | 17.96 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+ | Bleu_2 | 14.51 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+ | Bleu_3 | 12.22 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+ | Bleu_4 | 10.43 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+ | METEOR | 17.44 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+ | MoverScore | 66.71 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+ | ROUGE_L | 22.59 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |



@@ -117,7 +117,7 @@ The following hyperparameters were used during fine-tuning:
  - model: ckpts/mt5-small-trimmed-fr-60000
  - max_length: 512
  - max_length_output: 32
- - epoch: 14
+ - epoch: 24
  - batch: 32
  - lr: 0.0005
  - fp16: False

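For reference, the second hunk's context line shows the model card's example `pipe` call cut off mid-string. Below is a minimal sketch of how such a call is typically wired up; it assumes the card's `pipe` is a standard `transformers` text2text-generation pipeline and that the model expects a "question: ..., context: ..." style prompt. The placeholder strings are illustrative, not the card's verbatim example.

```python
# Minimal sketch (assumptions: `pipe` is a transformers text2text-generation
# pipeline and the input follows a "question: ..., context: ..." format;
# the strings below are placeholders, not the model card's verbatim example).
from transformers import pipeline

pipe = pipeline(
    "text2text-generation",
    model="vocabtrimmer/mt5-small-trimmed-fr-60000-frquad-qa",
)

question = "..."  # a French question, e.g. the truncated example in the hunk header
context = "..."   # the paragraph that contains the answer
output = pipe(f"question: {question}, context: {context}")
print(output)     # a list like [{"generated_text": "..."}]
```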
eval/metric.first.answer.paragraph_question.answer.lmqg_qg_frquad.default.json CHANGED
@@ -1 +1 @@
- {"validation": {"Bleu_1": 0.0776411712340284, "Bleu_2": 0.05919646714023496, "Bleu_3": 0.048886117715481486, "Bleu_4": 0.04161576757902402, "METEOR": 0.11490506330094395, "ROUGE_L": 0.10494095800579178, "BERTScore": 0.7970382315077477, "MoverScore": 0.5483601216756452, "AnswerF1Score": 13.6789676958257, "AnswerExactMatch": 1.9761606022584692}, "test": {"Bleu_1": 0.06851636400424735, "Bleu_2": 0.05048162102637841, "Bleu_3": 0.040817811323531045, "Bleu_4": 0.034250823769442315, "METEOR": 0.11089020375578844, "ROUGE_L": 0.09984820634670778, "BERTScore": 0.796322394082956, "MoverScore": 0.5453837306563544, "AnswerF1Score": 13.743517143818531, "AnswerExactMatch": 2.289836888331242}}
+ {"validation": {"Bleu_1": 0.18361390524833565, "Bleu_2": 0.14940535626034082, "Bleu_3": 0.12650520009641847, "Bleu_4": 0.10922585702634707, "METEOR": 0.16348777639671275, "ROUGE_L": 0.2188013665475582, "BERTScore": 0.8620781717889731, "MoverScore": 0.6509440679371835, "AnswerF1Score": 32.60951345488031, "AnswerExactMatch": 14.993726474278544}, "test": {"Bleu_1": 0.17964281545464109, "Bleu_2": 0.14512391122940552, "Bleu_3": 0.12217183104550612, "Bleu_4": 0.10427414797071774, "METEOR": 0.17442012749210453, "ROUGE_L": 0.22593770887738, "BERTScore": 0.8673907224663826, "MoverScore": 0.6670935982663652, "AnswerF1Score": 34.342144673201744, "AnswerExactMatch": 20.012547051442912}}
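One detail worth noting when reading this JSON: BLEU, METEOR, ROUGE_L, BERTScore, and MoverScore are stored as fractions in [0, 1], while AnswerF1Score and AnswerExactMatch are already on a 0-100 scale. That is why the README table above multiplies the former by 100 (e.g. test Bleu_4 0.1042... becomes 10.43) and carries the latter over unchanged. A small sketch of that conversion for the test split:

```python
# Minimal sketch: print the test-split scores from the updated eval JSON on the
# percentage scale used in the README table. Assumption (visible in the diff):
# only AnswerF1Score and AnswerExactMatch are already stored on a 0-100 scale.
import json

with open("eval/metric.first.answer.paragraph_question.answer.lmqg_qg_frquad.default.json") as f:
    test = json.load(f)["test"]

for name, score in test.items():
    value = score if name in {"AnswerF1Score", "AnswerExactMatch"} else score * 100
    print(f"{name}: {value:.2f}")  # e.g. Bleu_4: 10.43, AnswerExactMatch: 20.01
```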
eval/samples.test.hyp.paragraph_question.answer.lmqg_qg_frquad.default.txt CHANGED
The diff for this file is too large to render. See raw diff
 
eval/samples.validation.hyp.paragraph_question.answer.lmqg_qg_frquad.default.txt CHANGED
The diff for this file is too large to render. See raw diff