asahi417 committed
Commit 64f9302
1 Parent(s): 2dbd926

commit files to HF hub

README.md CHANGED
@@ -31,25 +31,25 @@ model-index:
   metrics:
   - name: BLEU4 (Question Answering)
     type: bleu4_question_answering
-    value: 11.64
+    value: 26.33
   - name: ROUGE-L (Question Answering)
     type: rouge_l_question_answering
-    value: 25.15
+    value: 38.14
   - name: METEOR (Question Answering)
     type: meteor_question_answering
-    value: 18.54
+    value: 31.8
   - name: BERTScore (Question Answering)
     type: bertscore_question_answering
-    value: 87.94
+    value: 92.2
   - name: MoverScore (Question Answering)
     type: moverscore_question_answering
-    value: 67.71
+    value: 77.16
   - name: AnswerF1Score (Question Answering)
     type: answer_f1_score__question_answering
-    value: 36.62
+    value: 60.48
   - name: AnswerExactMatch (Question Answering)
     type: answer_exact_match_question_answering
-    value: 18.88
+    value: 39.34
 ---
 
 # Model Card of `lmqg/mbart-large-cc25-frquad-qa`
@@ -93,16 +93,16 @@ output = pipe("question: En quelle année a-t-on trouvé trace d'un haut fournea
 
 | | Score | Type | Dataset |
 |:-----------------|--------:|:--------|:-----------------------------------------------------------------|
-| AnswerExactMatch | 18.88 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| AnswerF1Score | 36.62 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| BERTScore | 87.94 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| Bleu_1 | 22.45 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| Bleu_2 | 17.59 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| Bleu_3 | 14.24 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| Bleu_4 | 11.64 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| METEOR | 18.54 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| MoverScore | 67.71 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
-| ROUGE_L | 25.15 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| AnswerExactMatch | 39.34 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| AnswerF1Score | 60.48 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| BERTScore | 92.2 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| Bleu_1 | 37.27 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| Bleu_2 | 32.61 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| Bleu_3 | 29.23 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| Bleu_4 | 26.33 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| METEOR | 31.8 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| MoverScore | 77.16 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
+| ROUGE_L | 38.14 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) |
 
 
 
@@ -117,13 +117,13 @@ The following hyperparameters were used during fine-tuning:
 - model: facebook/mbart-large-cc25
 - max_length: 512
 - max_length_output: 32
-- epoch: 8
-- batch: 4
-- lr: 0.0005
+- epoch: 15
+- batch: 32
+- lr: 0.0002
 - fp16: False
 - random_seed: 1
-- gradient_accumulation_steps: 16
-- label_smoothing: 0.0
+- gradient_accumulation_steps: 2
+- label_smoothing: 0.15
 
 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mbart-large-cc25-frquad-qa/raw/main/trainer_config.json).
 
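Two things are worth noting about the README changes above. First, the new hyperparameters keep the effective batch size at 64 (batch 32 × gradient_accumulation_steps 2, versus 4 × 16 before) while training longer (15 epochs instead of 8) with a lower learning rate (0.0002) and label smoothing of 0.15. Second, the second hunk header shows the card's truncated `pipe("question: ...")` usage; the sketch below is a hypothetical reconstruction of that kind of call with the `transformers` text2text-generation pipeline. The `"question: ..., context: ..."` input template and the French example text are assumptions, not taken from this commit.

```python
from transformers import pipeline

# Hypothetical minimal usage sketch for the model this commit updates.
# The "question: ..., context: ..." template and the example text below are
# assumptions; only the truncated `pipe("question: ...")` call is visible in
# the hunk header above.
pipe = pipeline("text2text-generation", model="lmqg/mbart-large-cc25-frquad-qa")

example = (
    "question: Quelle est la capitale de la France?, "
    "context: Paris est la capitale et la plus grande ville de France."
)
output = pipe(example)
print(output[0]["generated_text"])  # expected to be the extracted answer span
```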
eval/metric.first.answer.paragraph_question.answer.lmqg_qg_frquad.default.json CHANGED
@@ -1 +1 @@
-{"validation": {"Bleu_1": 0.2486998514115745, "Bleu_2": 0.19685386062905924, "Bleu_3": 0.15927715488613817, "Bleu_4": 0.13079566202215717, "METEOR": 0.17776344222534668, "ROUGE_L": 0.2612935500550596, "BERTScore": 0.8770172560641278, "MoverScore": 0.6603900426723037, "AnswerF1Score": 36.315470546752344, "AnswerExactMatch": 13.582183186951067}, "test": {"Bleu_1": 0.22451698867420222, "Bleu_2": 0.17588940062174876, "Bleu_3": 0.14239704098395764, "Bleu_4": 0.11637303657440054, "METEOR": 0.18537576969401506, "ROUGE_L": 0.25151608670509873, "BERTScore": 0.879374634011776, "MoverScore": 0.6771203356899227, "AnswerF1Score": 36.6193739874486, "AnswerExactMatch": 18.88331242158093}}
+{"validation": {"Bleu_1": 0.4045265721243085, "Bleu_2": 0.35834231464728317, "Bleu_3": 0.3224689757382265, "Bleu_4": 0.2904498301545984, "AnswerF1Score": 60.947732333638, "AnswerExactMatch": 34.78670012547052, "METEOR": 0.29381562220907614, "ROUGE_L": 0.38980645663087565, "BERTScore": 0.9229774555118948, "MoverScore": 0.7561687416234889}, "test": {"Bleu_1": 0.37274341521157345, "Bleu_2": 0.3261271410730797, "Bleu_3": 0.2922922705608965, "Bleu_4": 0.2632611604554427, "AnswerF1Score": 60.483258743906056, "AnswerExactMatch": 39.33500627352572, "METEOR": 0.31795391099657566, "ROUGE_L": 0.3813527939689909, "BERTScore": 0.9220344831645862, "MoverScore": 0.77162094614282}}
eval/samples.test.hyp.paragraph_question.answer.lmqg_qg_frquad.default.txt CHANGED
The diff for this file is too large to render. See raw diff
 
eval/samples.validation.hyp.paragraph_question.answer.lmqg_qg_frquad.default.txt CHANGED
The diff for this file is too large to render. See raw diff