---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: en
datasets:
- lmqg/qg_subjqa
pipeline_tag: text2text-generation
tags:
- question generation
widget:
- text: "<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
  example_title: "Question Generation Example 1"
- text: "Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records."
  example_title: "Question Generation Example 2"
- text: "Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> ."
  example_title: "Question Generation Example 3"
model-index:
- name: lmqg/bart-large-subjqa-books-qg
  results:
  - task:
      name: Text2text Generation
      type: text2text-generation
    dataset:
      name: lmqg/qg_subjqa
      type: books
      args: books
    metrics:
    - name: BLEU4 (Question Generation)
      type: bleu4_question_generation
      value: 0.0
    - name: ROUGE-L (Question Generation)
      type: rouge_l_question_generation
      value: 23.71
    - name: METEOR (Question Generation)
      type: meteor_question_generation
      value: 20.6
    - name: BERTScore (Question Generation)
      type: bertscore_question_generation
      value: 92.84
    - name: MoverScore (Question Generation)
      type: moverscore_question_generation
      value: 62.45
---
# Model Card of `lmqg/bart-large-subjqa-books-qg`
This model is a fine-tuned version of [lmqg/bart-large-squad](https://huggingface.co/lmqg/bart-large-squad) for the question generation task on the [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (dataset_name: books) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [lmqg/bart-large-squad](https://huggingface.co/lmqg/bart-large-squad)
- **Language:** en
- **Training data:** [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (books)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="lmqg/bart-large-subjqa-books-qg")
# model prediction
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
```
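`list_context` and `list_answer` also accept parallel lists for batch prediction; a minimal sketch, assuming the i-th answer appears in the i-th context (the second example pair below is illustrative):
```python
from lmqg import TransformersQG

# initialize model
model = TransformersQG(language="en", model="lmqg/bart-large-subjqa-books-qg")
# batch prediction: contexts and answers are aligned element-wise
questions = model.generate_q(
    list_context=[
        "William Turner was an English painter who specialised in watercolour landscapes",
        "Beyonce starred as blues singer Etta James in the 2008 musical biopic, Cadillac Records.",
    ],
    list_answer=["William Turner", "Etta James"],
)
```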
- With `transformers`
```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/bart-large-subjqa-books-qg")
# the answer span is wrapped in <hl> tokens inside the input passage
output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
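The `<hl>` tokens mark the answer span the question should target. A small sketch of building that input programmatically; the `highlight` helper is illustrative, not part of either library:
```python
from transformers import pipeline

def highlight(context: str, answer: str) -> str:
    # wrap the first occurrence of the answer span in <hl> tokens
    return context.replace(answer, f"<hl> {answer} <hl>", 1)

pipe = pipeline("text2text-generation", "lmqg/bart-large-subjqa-books-qg")
context = "William Turner was an English painter who specialised in watercolour landscapes."
print(pipe(highlight(context, "William Turner"))[0]["generated_text"])
```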
## Evaluation
- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/bart-large-subjqa-books-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.books.json)
| Metric     |   Score | Type  | Dataset                                                          |
|:-----------|--------:|:------|:-----------------------------------------------------------------|
| BERTScore  |   92.84 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| BLEU-1     |   22.32 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| BLEU-2     |   13.74 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| BLEU-3     |    4.78 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| BLEU-4     |     0.0 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| METEOR     |    20.6 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| MoverScore |   62.45 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
| ROUGE-L    |   23.71 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_subjqa
- dataset_name: books
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: None
- model: lmqg/bart-large-squad
- max_length: 512
- max_length_output: 32
- epoch: 2
- batch: 8
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 8
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/bart-large-subjqa-books-qg/raw/main/trainer_config.json).
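The hyperparameters above mirror fields in the linked `trainer_config.json`; assuming the JSON keys match the names listed here (e.g. `lr`, `epoch`, `batch`), they can be read back directly:
```python
import json
from urllib.request import urlopen

# fetch the exact fine-tuning configuration for this checkpoint
url = "https://huggingface.co/lmqg/bart-large-subjqa-books-qg/raw/main/trainer_config.json"
config = json.loads(urlopen(url).read())
# key names assumed to match the list above
print(config["lr"], config["epoch"], config["batch"])
```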
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```