asahi417 committed on
Commit
0d04487
1 Parent(s): 41c00bf

model update

Files changed (1)
  1. README.md +25 -40
README.md CHANGED
@@ -33,44 +33,25 @@ model-index:
   metrics:
   - name: BLEU4
     type: bleu4
-    value: 0.12184665382055122
+    value: 12.18
   - name: ROUGE-L
     type: rouge-l
-    value: 0.2856948017709817
+    value: 28.57
   - name: METEOR
     type: meteor
-    value: 0.29623847263524816
+    value: 29.62
   - name: BERTScore
     type: bertscore
-    value: 0.8451586993172961
+    value: 84.52
   - name: MoverScore
     type: moverscore
-    value: 0.8335888774638588
+    value: 83.36
 ---
 
 # Model Card of `lmqg/mt5-base-koquad`
-This model is fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) for question generation task on the
-[lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
+This model is fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) for question generation task on the [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
 
 
-Please cite our paper if you use the model ([https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)).
-
-```
-
-@inproceedings{ushio-etal-2022-generative,
-    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
-    author = "Ushio, Asahi and
-        Alva-Manchego, Fernando and
-        Camacho-Collados, Jose",
-    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
-    month = dec,
-    year = "2022",
-    address = "Abu Dhabi, U.A.E.",
-    publisher = "Association for Computational Linguistics",
-}
-
-```
-
 ### Overview
 - **Language model:** [google/mt5-base](https://huggingface.co/google/mt5-base)
 - **Language:** ko
@@ -82,35 +63,40 @@ Please cite our paper if you use the model ([https://arxiv.org/abs/2210.03992](h
 ### Usage
 - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
 ```python
-
 from lmqg import TransformersQG
+
 # initialize model
-model = TransformersQG(language='ko', model='lmqg/mt5-base-koquad')
+model = TransformersQG(language="ko", model="lmqg/mt5-base-koquad")
+
 # model prediction
-question = model.generate_q(list_context=["1990년 영화 《 남부군 》에서 단역으로 영화배우 첫 데뷔에 이어 같은 해 KBS 드라마 《지구인》에서 단역으로 출연하였고 이듬해 MBC 《여명의 눈동자》를 통해 단역으로 출연하였다."], list_answer=["남부군"])
+questions = model.generate_q(list_context="1990년 영화 《 남부군 》에서 단역으로 영화배우 첫 데뷔에 이어 같은 해 KBS 드라마 《지구인》에서 단역으로 출연하였고 이듬해 MBC 《여명의 눈동자》를 통해 단역으로 출연하였다.", list_answer="남부군")
 
 ```
 
 - With `transformers`
 ```python
-
 from transformers import pipeline
-# initialize model
-pipe = pipeline("text2text-generation", 'lmqg/mt5-base-koquad')
-# question generation
-question = pipe('1990년 영화 《 <hl> 남부군 <hl> 》에서 단역으로 영화배우 첫 데뷔에 이어 같은 해 KBS 드라마 《지구인》에서 단역으로 출연하였고 이듬해 MBC 《여명의 눈동자》를 통해 단역으로 출연하였다.')
 
-```
+pipe = pipeline("text2text-generation", "lmqg/mt5-base-koquad")
+output = pipe("1990년 영화 《 <hl> 남부군 <hl> 》에서 단역으로 영화배우 첫 데뷔에 이어 같은 해 KBS 드라마 《지구인》에서 단역으로 출연하였고 이듬해 MBC 《여명의 눈동자》를 통해 단역으로 출연하였다.")
 
-## Evaluation Metrics
+```
 
+## Evaluation
 
-### Metrics
+- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-koquad/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_koquad.default.json)
 
-| Dataset | Type | BLEU4 | ROUGE-L | METEOR | BERTScore | MoverScore | Link |
-|:--------|:-----|------:|--------:|-------:|----------:|-----------:|-----:|
-| [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | default | 0.122 | 0.286 | 0.296 | 0.845 | 0.834 | [link](https://huggingface.co/lmqg/mt5-base-koquad/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_koquad.default.json) |
+| | Score | Type | Dataset |
+|:-----------|--------:|:--------|:-----------------------------------------------------------------|
+| BERTScore | 84.52 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| Bleu_1 | 28.54 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| Bleu_2 | 21.05 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| Bleu_3 | 15.92 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| Bleu_4 | 12.18 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| METEOR | 29.62 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| MoverScore | 83.36 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
+| ROUGE_L | 28.57 | default | [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) |
 
 
 
@@ -137,7 +123,6 @@ The full configuration can be found at [fine-tuning config file](https://hugging
 
 ## Citation
 ```
-
 @inproceedings{ushio-etal-2022-generative,
     title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
     author = "Ushio, Asahi and
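
Every metric value change in the first hunk is the same transformation: the raw 0–1 score reported in the old card is rescaled to a percentage and rounded to two decimals. A minimal sketch of that conversion, using the old and new values from this diff (the `rescale` helper is illustrative, not part of `lmqg`):

```python
# Raw 0-1 metric scores from the old model card (the "-" lines above).
raw_scores = {
    "BLEU4": 0.12184665382055122,
    "ROUGE-L": 0.2856948017709817,
    "METEOR": 0.29623847263524816,
    "BERTScore": 0.8451586993172961,
    "MoverScore": 0.8335888774638588,
}

def rescale(score: float) -> float:
    """Express a 0-1 score as a percentage, rounded to two decimals."""
    return round(score * 100, 2)

percent_scores = {name: rescale(s) for name, s in raw_scores.items()}
print(percent_scores)
# Matches the "+" lines above: BLEU4 -> 12.18, ROUGE-L -> 28.57,
# METEOR -> 29.62, BERTScore -> 84.52, MoverScore -> 83.36
```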