asahi417 committed on
Commit 7b281c3
1 Parent(s): b31852b

model update

Files changed (1):
  1. README.md (+39, -55)
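The `transformers` usage updated in this diff returns raw generated text rather than structured question-answer pairs. A minimal, hypothetical sketch of post-processing that output (assuming the `question: ..., answer: ...` pair format joined by `|` that lmqg-style qag models are typically trained to emit; the helper name and the sample string below are illustrative, not taken from this repository):

```python
# Hypothetical helper: split raw text2text output into (question, answer) pairs.
# Assumption: pairs are formatted as "question: ..., answer: ..." and joined
# by " | ", as in lmqg qag training targets (not verified for this checkpoint).
def parse_qa_pairs(generated_text: str) -> list[tuple[str, str]]:
    pairs = []
    for chunk in generated_text.split(" | "):
        if "question:" not in chunk or "answer:" not in chunk:
            continue  # skip malformed fragments
        question, answer = chunk.split(", answer:", 1)
        question = question.replace("question:", "", 1).strip()
        pairs.append((question, answer.strip()))
    return pairs

example = ("question: who was william turner?, answer: an english painter"
           " | question: what did he specialise in?, answer: watercolour landscapes")
print(parse_qa_pairs(example))
```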
README.md CHANGED
@@ -29,61 +29,42 @@ model-index:
  metrics:
  - name: BLEU4
  type: bleu4
- value: 0.13399772044578695
+ value: 13.4
  - name: ROUGE-L
  type: rouge-l
- value: 0.3723033124655649
+ value: 37.23
  - name: METEOR
  type: meteor
- value: 0.3113835606745017
+ value: 31.14
  - name: BERTScore
  type: bertscore
- value: 0.9079808960852451
+ value: 90.8
  - name: MoverScore
  type: moverscore
- value: 0.6226022717045362
+ value: 62.26
  - name: QAAlignedF1Score (BERTScore)
  type: qa_aligned_f1_score_bertscore
- value: 0.9239789708710728
+ value: 92.4
  - name: QAAlignedRecall (BERTScore)
  type: qa_aligned_recall_bertscore
- value: 0.9203316206004332
+ value: 92.03
  - name: QAAlignedPrecision (BERTScore)
  type: qa_aligned_precision_bertscore
- value: 0.9277822063983406
+ value: 92.78
  - name: QAAlignedF1Score (MoverScore)
  type: qa_aligned_f1_score_moverscore
- value: 0.6483101074792541
+ value: 64.83
  - name: QAAlignedRecall (MoverScore)
  type: qa_aligned_recall_moverscore
- value: 0.6406891596773278
+ value: 64.07
  - name: QAAlignedPrecision (MoverScore)
  type: qa_aligned_precision_moverscore
- value: 0.656827668947676
+ value: 65.68
  ---

  # Model Card of `lmqg/t5-base-tweetqa-qag-np`
- This model is fine-tuned version of [t5-base](https://huggingface.co/t5-base) for question generation task on the
- [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
- This model is fine-tuned on the end-to-end question and answer generation.
-
- Please cite our paper if you use the model ([https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)).
-
- ```
-
- @inproceedings{ushio-etal-2022-generative,
- title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
- author = "Ushio, Asahi and
- Alva-Manchego, Fernando and
- Camacho-Collados, Jose",
- booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
- month = dec,
- year = "2022",
- address = "Abu Dhabi, U.A.E.",
- publisher = "Association for Computational Linguistics",
- }
-
- ```
+ This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) for the question & answer pair generation task on the [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
+ This model is fine-tuned without a task prefix.

  ### Overview
  - **Language model:** [t5-base](https://huggingface.co/t5-base)
@@ -96,42 +77,46 @@ Please cite our paper if you use the model ([https://arxiv.org/abs/2210.03992](h
  ### Usage
  - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
  ```python
-
  from lmqg import TransformersQG
+
  # initialize model
- model = TransformersQG(language='en', model='lmqg/t5-base-tweetqa-qag-np')
+ model = TransformersQG(language="en", model="lmqg/t5-base-tweetqa-qag-np")
+
  # model prediction
- question = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes")
-
+ question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes")
+
  ```

  - With `transformers`
  ```python
-
  from transformers import pipeline

- # initialize model
- pipe = pipeline("text2text-generation", 'lmqg/t5-base-tweetqa-qag-np')
- # question generation
- question = pipe('Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.')
-
- ```
-
- ## Evaluation Metrics
-
- ### Metrics
-
- | Dataset | Type | BLEU4 | ROUGE-L | METEOR | BERTScore | MoverScore | Link |
- |:--------|:-----|------:|--------:|-------:|----------:|-----------:|-----:|
- | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) | default | 0.134 | 0.372 | 0.311 | 0.908 | 0.623 | [link](https://huggingface.co/lmqg/t5-base-tweetqa-qag-np/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_tweetqa.default.json) |
-
- ### Metrics (QAG)
-
- | Dataset | Type | QA Aligned F1 Score (BERTScore) | QA Aligned F1 Score (MoverScore) | Link |
- |:--------|:-----|--------------------------------:|---------------------------------:|-----:|
- | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) | default | 0.924 | 0.648 | [link](https://huggingface.co/lmqg/t5-base-tweetqa-qag-np/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_tweetqa.default.json) |
+ pipe = pipeline("text2text-generation", "lmqg/t5-base-tweetqa-qag-np")
+ output = pipe("Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
+
+ ```
+
+ ## Evaluation
+
+ - ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-base-tweetqa-qag-np/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_tweetqa.default.json)
+
+ |                                 |   Score | Type    | Dataset                                                              |
+ |:--------------------------------|--------:|:--------|:---------------------------------------------------------------------|
+ | BERTScore                       |   90.8  | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
+ | Bleu_1                          |   40.49 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
+ | Bleu_2                          |   27.77 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
+ | Bleu_3                          |   19.18 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
+ | Bleu_4                          |   13.4  | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
+ | METEOR                          |   31.14 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
+ | MoverScore                      |   62.26 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
+ | QAAlignedF1Score (BERTScore)    |   92.4  | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
+ | QAAlignedF1Score (MoverScore)   |   64.83 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
+ | QAAlignedPrecision (BERTScore)  |   92.78 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
+ | QAAlignedPrecision (MoverScore) |   65.68 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
+ | QAAlignedRecall (BERTScore)     |   92.03 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
+ | QAAlignedRecall (MoverScore)    |   64.07 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |

@@ -158,7 +143,6 @@ The full configuration can be found at [fine-tuning config file](https://hugging

  ## Citation
  ```
-
  @inproceedings{ushio-etal-2022-generative,
  title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
  author = "Ushio, Asahi and