asahi417 committed on
Commit 7fb3d4d
1 Parent(s): 8bcb135

model update

Files changed (1)
  1. README.md +37 -22
README.md CHANGED
@@ -26,7 +26,7 @@ widget:
 - text: "extract answers: <hl> Los estudiosos y los historiadores están divididos en cuanto a qué evento señala el final de la era helenística. <hl> El período helenístico se puede ver que termina con la conquista final del corazón griego por Roma en 146 a. C. tras la guerra aquea, con la derrota final del reino ptolemaico en la batalla de Actium en 31 a. C. Helenístico se distingue de helénico en que el primero abarca toda la esfera de influencia griega antigua directa, mientras que el segundo se refiere a la propia Grecia."
   example_title: "Answer Extraction Example 2"
 model-index:
-- name: lmqg/mt5-base-esquad-multitask
+- name: lmqg/mt5-base-esquad-qg-ae
   results:
   - task:
       name: Text2text Generation
@@ -51,34 +51,49 @@ model-index:
     - name: MoverScore (Question Generation)
       type: moverscore_question_generation
       value: 59.15
-    - name: QAAlignedF1Score-BERTScore
-      type: qa_aligned_f1_score_bertscore
+    - name: QAAlignedF1Score-BERTScore (Question & Answer Generation)
+      type: qa_aligned_f1_score_bertscore_question_answer_generation
       value: 79.67
-    - name: QAAlignedRecall-BERTScore
-      type: qa_aligned_recall_bertscore
+    - name: QAAlignedRecall-BERTScore (Question & Answer Generation)
+      type: qa_aligned_recall_bertscore_question_answer_generation
       value: 82.44
-    - name: QAAlignedPrecision-BERTScore
-      type: qa_aligned_precision_bertscore
+    - name: QAAlignedPrecision-BERTScore (Question & Answer Generation)
+      type: qa_aligned_precision_bertscore_question_answer_generation
       value: 77.14
-    - name: QAAlignedF1Score-MoverScore
-      type: qa_aligned_f1_score_moverscore
+    - name: QAAlignedF1Score-MoverScore (Question & Answer Generation)
+      type: qa_aligned_f1_score_moverscore_question_answer_generation
       value: 54.82
-    - name: QAAlignedRecall-MoverScore
-      type: qa_aligned_recall_moverscore
+    - name: QAAlignedRecall-MoverScore (Question & Answer Generation)
+      type: qa_aligned_recall_moverscore_question_answer_generation
       value: 56.56
-    - name: QAAlignedPrecision-MoverScore
-      type: qa_aligned_precision_moverscore
+    - name: QAAlignedPrecision-MoverScore (Question & Answer Generation)
+      type: qa_aligned_precision_moverscore_question_answer_generation
       value: 53.27
+    - name: BLEU4 (Answer Extraction)
+      type: bleu4_answer_extraction
+      value: 25.75
+    - name: ROUGE-L (Answer Extraction)
+      type: rouge_l_answer_extraction
+      value: 49.61
+    - name: METEOR (Answer Extraction)
+      type: meteor_answer_extraction
+      value: 43.74
+    - name: BERTScore (Answer Extraction)
+      type: bertscore_answer_extraction
+      value: 90.04
+    - name: MoverScore (Answer Extraction)
+      type: moverscore_answer_extraction
+      value: 80.94
     - name: AnswerF1Score (Answer Extraction)
-      type: answer_f1_score_answer_extraction
+      type: answer_f1_score__answer_extraction
       value: 75.33
     - name: AnswerExactMatch (Answer Extraction)
       type: answer_exact_match_answer_extraction
       value: 57.98
 ---
 
-# Model Card of `lmqg/mt5-base-esquad-multitask`
-This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) for the question generation task and answer extraction jointly on the [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
+# Model Card of `lmqg/mt5-base-esquad-qg-ae`
+This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) for question generation and answer extraction jointly on the [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
 
 
 ### Overview
@@ -95,7 +110,7 @@ This model is fine-tuned version of [google/mt5-base](https://huggingface.co/goo
 from lmqg import TransformersQG
 
 # initialize model
-model = TransformersQG(language="es", model="lmqg/mt5-base-esquad-multitask")
+model = TransformersQG(language="es", model="lmqg/mt5-base-esquad-qg-ae")
 
 # model prediction
 question_answer_pairs = model.generate_qa("a noviembre , que es también la estación lluviosa.")
@@ -106,7 +121,7 @@ question_answer_pairs = model.generate_qa("a noviembre , que es también la esta
 ```python
 from transformers import pipeline
 
-pipe = pipeline("text2text-generation", "lmqg/mt5-base-esquad-multitask")
+pipe = pipeline("text2text-generation", "lmqg/mt5-base-esquad-qg-ae")
 
 # answer extraction
 answer = pipe("generate question: del <hl> Ministerio de Desarrollo Urbano <hl> , Gobierno de la India.")
@@ -119,7 +134,7 @@ question = pipe("extract answers: <hl> En la diáspora somalí, múltiples event
 ## Evaluation
 
 
-- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-esquad-multitask/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_esquad.default.json)
+- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-esquad-qg-ae/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_esquad.default.json)
 
 |            |   Score | Type    | Dataset                                                           |
 |:-----------|--------:|:--------|:------------------------------------------------------------------|
@@ -133,7 +148,7 @@ question = pipe("extract answers: <hl> En la diáspora somalí, múltiples event
 | ROUGE_L    |   24.82 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad)  |
 
 
-- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-esquad-multitask/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_esquad.default.json)
+- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-esquad-qg-ae/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_esquad.default.json)
 
 |                                 |   Score | Type    | Dataset                                                           |
 |:--------------------------------|--------:|:--------|:------------------------------------------------------------------|
@@ -145,7 +160,7 @@ question = pipe("extract answers: <hl> En la diáspora somalí, múltiples event
 | QAAlignedRecall (MoverScore)    |   56.56 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad)  |
 
 
-- ***Metric (Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-esquad-multitask/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_esquad.default.json)
+- ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-esquad-qg-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_esquad.default.json)
 
 |                  |   Score | Type    | Dataset                                                           |
 |:-----------------|--------:|:--------|:------------------------------------------------------------------|
@@ -181,7 +196,7 @@ The following hyperparameters were used during fine-tuning:
 - gradient_accumulation_steps: 2
 - label_smoothing: 0.15
 
-The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-base-esquad-multitask/raw/main/trainer_config.json).
+The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-base-esquad-qg-ae/raw/main/trainer_config.json).
 
 ## Citation
 ```
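The pipeline snippets in the card both rely on highlight-style inputs: a task prefix (`generate question:` or `extract answers:`) plus `<hl>` tokens marking either the answer span to ask about or the sentence to pull answers from. As a sketch of how such prompt strings can be assembled (the helper names below are illustrative, not part of `lmqg` or `transformers`):

```python
def question_generation_prompt(paragraph: str, answer: str) -> str:
    """Wrap the answer span in <hl> tokens and prepend the QG task prefix."""
    highlighted = paragraph.replace(answer, f"<hl> {answer} <hl>", 1)
    return f"generate question: {highlighted}"


def answer_extraction_prompt(paragraph: str, sentence: str) -> str:
    """Wrap the target sentence in <hl> tokens and prepend the AE task prefix."""
    highlighted = paragraph.replace(sentence, f"<hl> {sentence} <hl>", 1)
    return f"extract answers: {highlighted}"


paragraph = "del Ministerio de Desarrollo Urbano , Gobierno de la India."
prompt = question_generation_prompt(paragraph, "Ministerio de Desarrollo Urbano")
print(prompt)
# generate question: del <hl> Ministerio de Desarrollo Urbano <hl> , Gobierno de la India.
```

The resulting string matches the input format shown in the card's `pipe(...)` calls, so it can be passed straight to the `text2text-generation` pipeline.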
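Among the answer-extraction metrics added in this commit, AnswerExactMatch (57.98 above) is the simplest: the percentage of predicted answers that match the gold answer string after normalization. A minimal sketch of that computation, assuming plain lowercase-and-whitespace normalization (the actual lmqg evaluation may normalize differently):

```python
def answer_exact_match(gold: list[str], pred: list[str]) -> float:
    """Percentage of predictions exactly matching their gold answer."""
    assert len(gold) == len(pred) and gold, "need aligned, non-empty lists"

    def norm(s: str) -> str:
        # Illustrative normalization: case-fold and collapse whitespace.
        return " ".join(s.lower().split())

    hits = sum(norm(g) == norm(p) for g, p in zip(gold, pred))
    return 100.0 * hits / len(gold)


print(answer_exact_match(["la estación lluviosa"], ["La estación  lluviosa"]))
# 100.0
```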