asahi417 committed
Commit 2490a0e
1 Parent(s): 752b1d9

model update

Files changed (1)
  1. README.md +52 -62
README.md CHANGED
@@ -33,62 +33,43 @@ model-index:
  metrics:
  - name: BLEU4
  type: bleu4
- value: 0.32156776073917387
  - name: ROUGE-L
  type: rouge-l
- value: 0.5294969429504184
  - name: METEOR
  type: meteor
- value: 0.2997311570800795
  - name: BERTScore
  type: bertscore
- value: 0.8225831409256842
  - name: MoverScore
  type: moverscore
- value: 0.5987761972106114
- - name: QAAlignedF1Score (BERTScore)
- type: qa_aligned_f1_score_bertscore
- value: 0.8716389786998847
- - name: QAAlignedRecall (BERTScore)
- type: qa_aligned_recall_bertscore
- value: 0.8715950043633907
- - name: QAAlignedPrecision (BERTScore)
- type: qa_aligned_precision_bertscore
- value: 0.8717014384777142
- - name: QAAlignedF1Score (MoverScore)
- type: qa_aligned_f1_score_moverscore
- value: 0.6308136328891667
- - name: QAAlignedRecall (MoverScore)
- type: qa_aligned_recall_moverscore
- value: 0.630617968288573
- - name: QAAlignedPrecision (MoverScore)
- type: qa_aligned_precision_moverscore
- value: 0.6310307133348039
  ---

  # Model Card of `lmqg/mbart-large-cc25-jaquad`
- This model is fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) for question generation task on the
- [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).


- Please cite our paper if you use the model ([https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)).
-
- ```
-
- @inproceedings{ushio-etal-2022-generative,
- title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
- author = "Ushio, Asahi and
- Alva-Manchego, Fernando and
- Camacho-Collados, Jose",
- booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
- month = dec,
- year = "2022",
- address = "Abu Dhabi, U.A.E.",
- publisher = "Association for Computational Linguistics",
- }
-
- ```
-
  ### Overview
  - **Language model:** [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25)
  - **Language:** ja
@@ -100,42 +81,52 @@ Please cite our paper if you use the model ([https://arxiv.org/abs/2210.03992](h
  ### Usage
  - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
  ```python
-
  from lmqg import TransformersQG

  # initialize model
- model = TransformersQG(language='ja', model='lmqg/mbart-large-cc25-jaquad')

  # model prediction
- question = model.generate_q(list_context=["フェルメールの作品では、17世紀のオランダの画家、ヨハネス・フェルメールの作品について記述する。フェルメールの作品は、疑問作も含め30数点しか現存しない。現存作品はすべて油彩画で、版画、下絵、素描などは残っていない。"], list_answer=["30数点"])

  ```

  - With `transformers`
  ```python
-
  from transformers import pipeline
- # initialize model
- pipe = pipeline("text2text-generation", 'lmqg/mbart-large-cc25-jaquad')
- # question generation
- question = pipe('ゾフィーは貴族出身ではあったが王族出身ではなく、ハプスブルク家の皇位継承者であるフランツ・フェルディナントとの結婚は貴賤結婚となった。皇帝フランツ・ヨーゼフは、2人の間に生まれた子孫が皇位を継がないことを条件として結婚を承認していた。視察が予定されている<hl>6月28日<hl>は2人の14回目の結婚記念日であった。')

  ```

- ## Evaluation Metrics


- ### Metrics

- | Dataset | Type | BLEU4 | ROUGE-L | METEOR | BERTScore | MoverScore | Link |
- |:--------|:-----|------:|--------:|-------:|----------:|-----------:|-----:|
- | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | default | 0.322 | 0.529 | 0.3 | 0.823 | 0.599 | [link](https://huggingface.co/lmqg/mbart-large-cc25-jaquad/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_jaquad.default.json) |


- ### Metrics (QAG)

- | Dataset | Type | QA Aligned F1 Score (BERTScore) | QA Aligned F1 Score (MoverScore) | Link |
- |:--------|:-----|--------------------------------:|---------------------------------:|-----:|
- | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | default | 0.872 | 0.631 | [link](https://huggingface.co/lmqg/mbart-large-cc25-jaquad/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_jaquad.default.json) |
-

@@ -162,7 +153,6 @@ The full configuration can be found at [fine-tuning config file](https://hugging

  ## Citation
  ```
-
  @inproceedings{ushio-etal-2022-generative,
  title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
  author = "Ushio, Asahi and
 
  metrics:
  - name: BLEU4
  type: bleu4
+ value: 32.16
  - name: ROUGE-L
  type: rouge-l
+ value: 52.95
  - name: METEOR
  type: meteor
+ value: 29.97
  - name: BERTScore
  type: bertscore
+ value: 82.26
  - name: MoverScore
  type: moverscore
+ value: 59.88
+ - name: QAAlignedF1Score (BERTScore) [Gold Answer]
+ type: qa_aligned_f1_score_bertscore_gold_answer
+ value: 87.16
+ - name: QAAlignedRecall (BERTScore) [Gold Answer]
+ type: qa_aligned_recall_bertscore_gold_answer
+ value: 87.16
+ - name: QAAlignedPrecision (BERTScore) [Gold Answer]
+ type: qa_aligned_precision_bertscore_gold_answer
+ value: 87.17
+ - name: QAAlignedF1Score (MoverScore) [Gold Answer]
+ type: qa_aligned_f1_score_moverscore_gold_answer
+ value: 63.08
+ - name: QAAlignedRecall (MoverScore) [Gold Answer]
+ type: qa_aligned_recall_moverscore_gold_answer
+ value: 63.06
+ - name: QAAlignedPrecision (MoverScore) [Gold Answer]
+ type: qa_aligned_precision_moverscore_gold_answer
+ value: 63.1
  ---

  # Model Card of `lmqg/mbart-large-cc25-jaquad`
+ This model is a fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) for the question generation task on the [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) (dataset_name: default) dataset via [`lmqg`](https://github.com/asahi417/lm-question-generation).


  ### Overview
  - **Language model:** [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25)
  - **Language:** ja
 
  ### Usage
  - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
  ```python
  from lmqg import TransformersQG
+
  # initialize model
+ model = TransformersQG(language="ja", model="lmqg/mbart-large-cc25-jaquad")
+
  # model prediction
+ questions = model.generate_q(list_context="フェルメールの作品では、17世紀のオランダの画家、ヨハネス・フェルメールの作品について記述する。フェルメールの作品は、疑問作も含め30数点しか現存しない。現存作品はすべて油彩画で、版画、下絵、素描などは残っていない。", list_answer="30数点")

  ```

  - With `transformers`
  ```python
  from transformers import pipeline
+
+ pipe = pipeline("text2text-generation", "lmqg/mbart-large-cc25-jaquad")
+ output = pipe("ゾフィーは貴族出身ではあったが王族出身ではなく、ハプスブルク家の皇位継承者であるフランツ・フェルディナントとの結婚は貴賤結婚となった。皇帝フランツ・ヨーゼフは、2人の間に生まれた子孫が皇位を継がないことを条件として結婚を承認していた。視察が予定されている<hl>6月28日<hl>は2人の14回目の結婚記念日であった。")

  ```
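
Note that the `transformers` example marks the answer span with `<hl>` tokens inside the paragraph. Below is a minimal sketch of building that highlighted input programmatically; the helper `add_highlight` is illustrative only and not part of this card or of `lmqg`.

```python
from transformers import pipeline

# hypothetical helper: wrap the answer span inside the context with <hl> tokens,
# matching the highlighted-input format used in the example above
def add_highlight(context: str, answer: str, hl_token: str = "<hl>") -> str:
    assert answer in context, "the answer span must appear verbatim in the context"
    return context.replace(answer, f"{hl_token}{answer}{hl_token}", 1)

pipe = pipeline("text2text-generation", "lmqg/mbart-large-cc25-jaquad")
context = "フェルメールの作品は、疑問作も含め30数点しか現存しない。現存作品はすべて油彩画で、版画、下絵、素描などは残っていない。"
output = pipe(add_highlight(context, "30数点"))
print(output)  # a list with the generated question text
```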

+ ## Evaluation


+ - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-jaquad/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_jaquad.default.json) (a short sketch for loading this file follows the table)

+ | | Score | Type | Dataset |
+ |:-----------|--------:|:--------|:-----------------------------------------------------------------|
+ | BERTScore | 82.26 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
+ | Bleu_1 | 57.05 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
+ | Bleu_2 | 45.45 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
+ | Bleu_3 | 37.81 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
+ | Bleu_4 | 32.16 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
+ | METEOR | 29.97 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
+ | MoverScore | 59.88 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
+ | ROUGE_L | 52.95 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |

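
The scores above are read from the linked raw metric file. A minimal sketch for downloading and inspecting it; the JSON key layout is not documented in this card, so the snippet only prints the parsed content rather than assuming specific fields.

```python
import json
import urllib.request

# URL of the raw question-generation metric file linked above
URL = ("https://huggingface.co/lmqg/mbart-large-cc25-jaquad/raw/main/eval/"
       "metric.first.sentence.paragraph_answer.question.lmqg_qg_jaquad.default.json")

with urllib.request.urlopen(URL) as response:
    metrics = json.load(response)

# print the whole structure; exact keys (e.g. per-split scores) are not assumed here
print(json.dumps(metrics, indent=2))
```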
 
+ - ***Metric (Question & Answer Generation)***: QAG metrics are computed from *the gold answer* and the question generated on it, since this model does not generate answers itself (a sketch of the QA-aligned scoring idea follows the table). [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-jaquad/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_jaquad.default.json)

+ | | Score | Type | Dataset |
+ |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
+ | QAAlignedF1Score (BERTScore) | 87.16 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
+ | QAAlignedF1Score (MoverScore) | 63.08 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
+ | QAAlignedPrecision (BERTScore) | 87.17 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
+ | QAAlignedPrecision (MoverScore) | 63.1 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
+ | QAAlignedRecall (BERTScore) | 87.16 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |
+ | QAAlignedRecall (MoverScore) | 63.06 | default | [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) |

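
For intuition, the QA-aligned scores match each generated question-answer pair to its most similar gold pair (and vice versa) under a similarity such as BERTScore or MoverScore, then average; see the paper for the exact definition. The sketch below only illustrates that pairing idea with a placeholder similarity function, not the actual `lmqg` implementation.

```python
from typing import Callable, List

def qa_aligned_scores(
    predicted: List[str],
    gold: List[str],
    similarity: Callable[[str, str], float],  # placeholder, e.g. a BERTScore wrapper
) -> dict:
    """Toy QA-aligned precision/recall/F1 over 'question, answer' strings."""
    # precision: each predicted pair is scored against its closest gold pair
    precision = sum(max(similarity(p, g) for g in gold) for p in predicted) / len(predicted)
    # recall: each gold pair is scored against its closest predicted pair
    recall = sum(max(similarity(g, p) for p in predicted) for g in gold) / len(gold)
    f1 = 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall, "f1": f1}

# toy usage with a trivial token-overlap similarity, purely for illustration
def toy_similarity(a: str, b: str) -> float:
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(len(ta | tb), 1)

print(qa_aligned_scores(["what year, 1667"], ["which year, 1667"], toy_similarity))
```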
 
 

  ## Citation
  ```
  @inproceedings{ushio-etal-2022-generative,
  title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
  author = "Ushio, Asahi and