---
language:
- en
tags:
- question generation
license: mit
datasets:
- asahi417/qg_squad
metrics:
- bleu
- meteor
- rouge
- bertscore
- moverscore
widget:
- text: "<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
  example_title: "Example 1"
- text: "Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records."
  example_title: "Example 2"
- text: "Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> ."
  example_title: "Example 3"
---

# BART-large fine-tuned for English Question Generation

BART-large model fine-tuned on the English question generation dataset (SQuAD) with an extensive hyper-parameter search.

- [Online Demo](https://autoqg.net/)
- [Project Repository](https://github.com/asahi417/lm-question-generation)

## Overview

**Language model:** facebook/bart-large

**Language:** English (en)

**Downstream task:** Question Generation

**Training data:** SQuAD

**Eval data:** SQuAD

**Code:** See [our repository](https://github.com/asahi417/lm-question-generation)

## Usage

### In Transformers
|
```python
from transformers import pipeline

model_path = 'asahi417/lmqg-bart-large-squad'
pipe = pipeline("text2text-generation", model_path)

paragraph = 'Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.'
# highlight an answer in the paragraph to generate a question about it
answer = 'Etta James'
highlight_token = '<hl>'
input_text = paragraph.replace(answer, '{0} {1} {0}'.format(highlight_token, answer))
generation = pipe(input_text)
print(generation)
>>> [{'generated_text': 'What is the name of the biopic that Beyonce starred in?'}]
```
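To generate one question per answer span in a paragraph, the same `<hl>` convention can be applied to each span before calling the pipeline. A minimal sketch of the input preparation (pure string handling, so no model download is needed; the `highlight` helper is ours, not part of the library):

```python
def highlight(paragraph, answer, highlight_token='<hl>'):
    """Wrap the answer span with highlight tokens, as the model expects."""
    return paragraph.replace(answer, '{0} {1} {0}'.format(highlight_token, answer))

paragraph = ('Beyonce further expanded her acting career, starring as blues singer '
             'Etta James in the 2008 musical biopic, Cadillac Records.')
answers = ['Beyonce', 'Etta James', 'Cadillac Records']
inputs = [highlight(paragraph, a) for a in answers]
# each element of `inputs` can be fed to pipe(...) to get one question per answer
```

These three inputs correspond to the three widget examples in the header above.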

## Evaluations

Evaluation on the test set of the [SQuAD QG dataset](https://huggingface.co/datasets/asahi417/qg_squad).
The results are comparable with the [leaderboard](https://paperswithcode.com/sota/question-generation-on-squad11) and previous works.
All evaluations were done using our [evaluation script](https://github.com/asahi417/lm-question-generation).

| BLEU 4 | ROUGE L | METEOR | BERTScore | MoverScore |
| ------ | ------- | ------ | --------- | ---------- |
| 26.16  | 53.84   | 27.07  | 91.00     | 64.99      |

- [metric file](https://huggingface.co/asahi417/lmqg-bart-large-squad/raw/main/eval/metric.first.sentence.paragraph_answer.question.asahi417_qg_squad.default.json)

## Fine-tuning Parameters

We ran a grid search to find the best hyper-parameters and continued fine-tuning until the validation metric stopped improving.
The best hyper-parameters can be found [here](https://huggingface.co/asahi417/lmqg-bart-large-squad/raw/main/trainer_config.json), and the fine-tuning script is released in [our repository](https://github.com/asahi417/lm-question-generation).
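In outline, a grid search enumerates every combination of candidate hyper-parameter values and launches one fine-tuning run per combination. A small sketch; the parameter names and ranges below are hypothetical placeholders, and the values actually used are in the linked `trainer_config.json`:

```python
from itertools import product

# Hypothetical search space -- the real ranges are in trainer_config.json.
search_space = {
    'lr': [1e-5, 5e-5, 1e-4],
    'batch': [32, 64],
    'label_smoothing': [0.0, 0.15],
}
configs = [dict(zip(search_space, values))
           for values in product(*search_space.values())]
print(len(configs))  # one fine-tuning run per combination
```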

## Citation

TBA