asahi417 committed
Commit d4d19fb
1 Parent(s): 62f67e6

commit files to HF hub

README.md CHANGED
@@ -31,25 +31,25 @@ model-index:
     metrics:
     - name: BLEU4 (Question Answering)
       type: bleu4_question_answering
-      value: 31.34
+      value: 37.41
     - name: ROUGE-L (Question Answering)
       type: rouge_l_question_answering
-      value: 70.66
+      value: 75.9
     - name: METEOR (Question Answering)
       type: meteor_question_answering
-      value: 50.53
+      value: 54.68
     - name: BERTScore (Question Answering)
       type: bertscore_question_answering
-      value: 96.27
+      value: 97.07
     - name: MoverScore (Question Answering)
       type: moverscore_question_answering
-      value: 90.04
+      value: 91.88
     - name: AnswerF1Score (Question Answering)
       type: answer_f1_score__question_answering
-      value: 74.71
+      value: 80.37
     - name: AnswerExactMatch (Question Answering)
       type: answer_exact_match_question_answering
-      value: 68.16
+      value: 73.69
 ---
 
 # Model Card of `vocabtrimmer/mt5-small-trimmed-ko-30000-koquad-qa`
@@ -93,16 +93,16 @@ output = pipe("question: 매드 클라운이 참가해 큰 화제를 모았던
 
 |                  |   Score | Type    | Dataset                                                          |
 |:-----------------|--------:|:--------|:-----------------------------------------------------------------|
-| AnswerExactMatch |   68.16 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
-| AnswerF1Score    |   74.71 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
-| BERTScore        |   96.27 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
-| Bleu_1           |   64.73 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
-| Bleu_2           |   56.16 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
-| Bleu_3           |   45.24 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
-| Bleu_4           |   31.34 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
-| METEOR           |   50.53 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
-| MoverScore       |   90.04 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
-| ROUGE_L          |   70.66 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| AnswerExactMatch |   73.69 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| AnswerF1Score    |   80.37 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| BERTScore        |   97.07 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| Bleu_1           |   70.24 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| Bleu_2           |   61.81 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| Bleu_3           |   51.41 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| Bleu_4           |   37.41 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| METEOR           |   54.68 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| MoverScore       |   91.88 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| ROUGE_L          |   75.9  | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
 
 
@@ -118,11 +118,11 @@ The following hyperparameters were used during fine-tuning:
 - max_length: 512
 - max_length_output: 32
 - epoch: 5
-- batch: 64
+- batch: 32
 - lr: 0.001
 - fp16: False
 - random_seed: 1
-- gradient_accumulation_steps: 1
+- gradient_accumulation_steps: 2
 - label_smoothing: 0.15
 
 The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mt5-small-trimmed-ko-30000-koquad-qa/raw/main/trainer_config.json).
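Note that the hyperparameter change is batch-size neutral: the new configuration trains with batch 32 and gradient_accumulation_steps 2, so the effective batch size is 32 × 2 = 64, identical to the previous batch 64 with no accumulation. The improved scores are therefore not explained by a change in effective batch size.

For reference, a minimal sketch of querying the updated model with the `transformers` pipeline, following the `output = pipe("question: ...")` call visible in the second hunk header. The combined `question: ..., context: ...` input layout is an assumption based on the lmqg question-answering convention; the full example string is truncated in this diff.

```python
from transformers import pipeline

# Load the trimmed mT5 QA model from the Hub.
pipe = pipeline(
    "text2text-generation",
    model="vocabtrimmer/mt5-small-trimmed-ko-30000-koquad-qa",
)

# lmqg QA models take the question and its paragraph in a single string;
# the "question: <q>, context: <c>" layout here is an assumption, since the
# diff only shows the truncated beginning of the example input.
output = pipe("question: <question text>, context: <paragraph text>")
print(output[0]["generated_text"])
```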
eval/metric.first.answer.paragraph_question.answer.lmqg_qg_koquad.default.json CHANGED
@@ -1 +1 @@
-{"validation": {"Bleu_1": 0.6537011097941328, "Bleu_2": 0.5735758572983447, "Bleu_3": 0.4767258223922961, "Bleu_4": 0.3453850019905082, "METEOR": 0.5026406437140047, "ROUGE_L": 0.6970146393881299, "BERTScore": 0.9623686538113102, "MoverScore": 0.8978806396080128, "AnswerF1Score": 73.82546289414114, "AnswerExactMatch": 66.89212625737079}, "test": {"Bleu_1": 0.6472518457751592, "Bleu_2": 0.5615554084794614, "Bleu_3": 0.4523617968173102, "Bleu_4": 0.31339085747018447, "METEOR": 0.5052834772276389, "ROUGE_L": 0.7066270889175926, "BERTScore": 0.9626775134407808, "MoverScore": 0.9004137617009108, "AnswerF1Score": 74.70867716705372, "AnswerExactMatch": 68.15816857440167}}
+{"validation": {"Bleu_1": 0.6992903294141829, "Bleu_2": 0.6127222228962876, "Bleu_3": 0.5083515309130512, "Bleu_4": 0.3635940594823205, "METEOR": 0.5462332421668936, "ROUGE_L": 0.7541467112803363, "BERTScore": 0.9710218722812839, "MoverScore": 0.9181111629442422, "AnswerF1Score": 79.72134499367644, "AnswerExactMatch": 73.06625043357613}, "test": {"Bleu_1": 0.7024203558129186, "Bleu_2": 0.6180556794660192, "Bleu_3": 0.5140638613931836, "Bleu_4": 0.37413409334289116, "METEOR": 0.5468391612861541, "ROUGE_L": 0.7589999956094918, "BERTScore": 0.970729525474637, "MoverScore": 0.9188335398437054, "AnswerF1Score": 80.36954056464982, "AnswerExactMatch": 73.69060006937218}}
eval/samples.test.hyp.paragraph_question.answer.lmqg_qg_koquad.default.txt CHANGED
The diff for this file is too large to render. See raw diff
 
eval/samples.validation.hyp.paragraph_question.answer.lmqg_qg_koquad.default.txt CHANGED
The diff for this file is too large to render. See raw diff