asahi417 committed on
Commit
b22c83f
1 Parent(s): 3639466

model update

Files changed (1)
  1. README.md +25 -40
README.md CHANGED
@@ -33,44 +33,25 @@ model-index:
  metrics:
  - name: BLEU4
    type: bleu4
-   value: 2.1631297696229156e-06
  - name: ROUGE-L
    type: rouge-l
-   value: 0.19768945312207964
  - name: METEOR
    type: meteor
-   value: 0.1851683947377195
  - name: BERTScore
    type: bertscore
-   value: 0.9239521706915651
  - name: MoverScore
    type: moverscore
-   value: 0.6146001367099585
  ---
 
  # Model Card of `lmqg/t5-small-subjqa-books`
- This model is fine-tuned version of [lmqg/t5-small-squad](https://huggingface.co/lmqg/t5-small-squad) for question generation task on the
- [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (dataset_name: books) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
  This model is continuously fine-tuned with [lmqg/t5-small-squad](https://huggingface.co/lmqg/t5-small-squad).
 
- Please cite our paper if you use the model ([https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)).
-
- ```
-
- @inproceedings{ushio-etal-2022-generative,
-     title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
-     author = "Ushio, Asahi and
-         Alva-Manchego, Fernando and
-         Camacho-Collados, Jose",
-     booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
-     month = dec,
-     year = "2022",
-     address = "Abu Dhabi, U.A.E.",
-     publisher = "Association for Computational Linguistics",
- }
-
- ```
-
  ### Overview
  - **Language model:** [lmqg/t5-small-squad](https://huggingface.co/lmqg/t5-small-squad)
  - **Language:** en
@@ -82,35 +63,40 @@ Please cite our paper if you use the model ([https://arxiv.org/abs/2210.03992](h
  ### Usage
  - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
  ```python
-
  from lmqg import TransformersQG
  # initialize model
- model = TransformersQG(language='en', model='lmqg/t5-small-subjqa-books')
  # model prediction
- question = model.generate_q(list_context=["William Turner was an English painter who specialised in watercolour landscapes"], list_answer=["William Turner"])
 
  ```
 
  - With `transformers`
  ```python
-
  from transformers import pipeline
- # initialize model
- pipe = pipeline("text2text-generation", 'lmqg/t5-small-subjqa-books')
- # question generation
- question = pipe('generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.')
 
- ```
 
- ## Evaluation Metrics
 
- ### Metrics
 
- | Dataset | Type | BLEU4 | ROUGE-L | METEOR | BERTScore | MoverScore | Link |
- |:--------|:-----|------:|--------:|-------:|----------:|-----------:|-----:|
- | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | books | 0.0 | 0.198 | 0.185 | 0.924 | 0.615 | [link](https://huggingface.co/lmqg/t5-small-subjqa-books/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.books.json) |
 
@@ -137,7 +123,6 @@ The full configuration can be found at [fine-tuning config file](https://hugging
 
  ## Citation
  ```
-
  @inproceedings{ushio-etal-2022-generative,
      title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
      author = "Ushio, Asahi and
 
  metrics:
  - name: BLEU4
    type: bleu4
+   value: 0.0
  - name: ROUGE-L
    type: rouge-l
+   value: 19.77
  - name: METEOR
    type: meteor
+   value: 18.52
  - name: BERTScore
    type: bertscore
+   value: 92.4
  - name: MoverScore
    type: moverscore
+   value: 61.46
  ---
 
  # Model Card of `lmqg/t5-small-subjqa-books`
+ This model is fine-tuned version of [lmqg/t5-small-squad](https://huggingface.co/lmqg/t5-small-squad) for question generation task on the [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (dataset_name: books) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
  This model is continuously fine-tuned with [lmqg/t5-small-squad](https://huggingface.co/lmqg/t5-small-squad).
 
  ### Overview
  - **Language model:** [lmqg/t5-small-squad](https://huggingface.co/lmqg/t5-small-squad)
  - **Language:** en
 
  ### Usage
  - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
  ```python
  from lmqg import TransformersQG
+
  # initialize model
+ model = TransformersQG(language="en", model="lmqg/t5-small-subjqa-books")
+
  # model prediction
+ questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
 
  ```
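The earlier revision of this snippet (removed above) passed lists to `generate_q`. A minimal batched sketch along those lines, assuming list inputs are still accepted by the current `lmqg` release:

```python
from lmqg import TransformersQG

# same checkpoint as in the snippet above
model = TransformersQG(language="en", model="lmqg/t5-small-subjqa-books")

# batched prediction: one generated question per (context, answer) pair
# (list inputs mirror the previous revision of this card; treat this as an assumption)
contexts = [
    "William Turner was an English painter who specialised in watercolour landscapes",
    "William Turner was an English painter who specialised in watercolour landscapes",
]
answers = ["William Turner", "watercolour landscapes"]
questions = model.generate_q(list_context=contexts, list_answer=answers)
print(questions)
```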
 
  - With `transformers`
  ```python
  from transformers import pipeline
 
+ pipe = pipeline("text2text-generation", "lmqg/t5-small-subjqa-books")
+ output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
 
+ ```
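The `transformers` snippet hard-codes the `<hl>`-highlighted answer in the prompt. A short sketch of building that input from a plain context and answer string; the `highlight_answer` helper is illustrative and not part of the card:

```python
from transformers import pipeline

# illustrative helper: wrap the answer span in <hl> tokens and prepend the task prefix,
# matching the input format used in the snippet above
def highlight_answer(context: str, answer: str) -> str:
    highlighted = context.replace(answer, f"<hl> {answer} <hl>", 1)
    return f"generate question: {highlighted}"

pipe = pipeline("text2text-generation", "lmqg/t5-small-subjqa-books")
prompt = highlight_answer(
    "Beyonce further expanded her acting career, starring as blues singer Etta James "
    "in the 2008 musical biopic, Cadillac Records.",
    "Beyonce",
)
output = pipe(prompt)
print(output[0]["generated_text"])
```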
 
+ ## Evaluation
 
+ - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-small-subjqa-books/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.books.json)
 
+ |            |   Score | Type   | Dataset                                                          |
+ |:-----------|--------:|:-------|:-----------------------------------------------------------------|
+ | BERTScore  |   92.4  | books  | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
+ | Bleu_1     |   18.61 | books  | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
+ | Bleu_2     |   9.85  | books  | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
+ | Bleu_3     |   2.33  | books  | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
+ | Bleu_4     |   0     | books  | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
+ | METEOR     |   18.52 | books  | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
+ | MoverScore |   61.46 | books  | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
+ | ROUGE_L    |   19.77 | books  | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
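For programmatic access to these scores, a minimal sketch that downloads the raw metric file linked above via `huggingface_hub`; the JSON key layout is not documented here, so the file is simply printed:

```python
import json

from huggingface_hub import hf_hub_download

# fetch the raw evaluation file referenced in the card
path = hf_hub_download(
    repo_id="lmqg/t5-small-subjqa-books",
    filename="eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.books.json",
)

with open(path) as f:
    metrics = json.load(f)

# the exact key layout is not documented in the card, so just inspect it
print(json.dumps(metrics, indent=2))
```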
 
  ## Citation
  ```
  @inproceedings{ushio-etal-2022-generative,
      title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
      author = "Ushio, Asahi and