asahi417 committed on
Commit
2f4073c
1 Parent(s): 5317188

Update README.md

Files changed (1)
  1. README.md +54 -2
README.md CHANGED
@@ -11,6 +11,8 @@ metrics:
  - bleu
  - meteor
  - rouge
+ - bertscore
+ - moverscore
  widget:
  - text: "<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
  example_title: "Example 1"
@@ -20,5 +22,55 @@ widget:
  example_title: "Example 3"
  ---

- # T5 finetuned on Question Generation
- T5 model for question generation. Please visit [our repository](https://github.com/asahi417/t5-question-generation) for more detail.
+ # BART LARGE fine-tuned for English Question Generation
+ BART LARGE model fine-tuned on the English question generation dataset (SQuAD) with an extensive hyper-parameter search.
+
+ - [Project Repository](https://github.com/asahi417/lm-question-generation)
+
+ ## Overview
+
+ **Language model:** facebook/bart-large
+ **Language:** English (en)
+ **Downstream task:** Question Generation
+ **Training data:** SQuAD
+ **Eval data:** SQuAD
+ **Code:** See [our repository](https://github.com/asahi417/lm-question-generation)
+
+ ## Usage
+ ### In Transformers
+ ```python
+ from transformers import pipeline
+
+ model_path = 'asahi417/lmqg-bart-large-squad'
+ pipe = pipeline("text2text-generation", model_path)
+
+ paragraph = 'Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.'
+ # highlight the answer span in the paragraph to control which question is generated
+ answer = 'Etta James'
+ highlight_token = '<hl>'
+ input_text = paragraph.replace(answer, '{0} {1} {0}'.format(highlight_token, answer))
+ # NOTE: the 'generate question:' task prefix is only needed for the T5 checkpoints, not BART
+ generation = pipe(input_text)
+ print(generation)
+ >>> [{'generated_text': 'What is the name of the biopic that Beyonce starred in?'}]
+ ```
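+
+ For finer control over decoding (beam size, output length), the checkpoint can also be loaded directly. A minimal sketch, reusing the `model_path` and `input_text` from above; the decoding parameters here are illustrative, not the authors' settings:
+ ```python
+ from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
+
+ tokenizer = AutoTokenizer.from_pretrained(model_path)
+ model = AutoModelForSeq2SeqLM.from_pretrained(model_path)
+
+ # encode the highlighted paragraph and generate a question with beam search
+ inputs = tokenizer(input_text, return_tensors='pt')
+ outputs = model.generate(**inputs, num_beams=4, max_length=64)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```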
+
+ ## Evaluations
+
+ Evaluation on the test set of the [SQuAD QG dataset](https://huggingface.co/datasets/asahi417/qg_squad).
+ The results are comparable with the [leaderboard](https://paperswithcode.com/sota/question-generation-on-squad11) and previous works.
+ All evaluations were done using our [evaluation script](https://github.com/asahi417/lm-question-generation).
+
+ | BLEU 4 | ROUGE L | METEOR | BERTScore | MoverScore |
+ | ------ | ------- | ------ | --------- | ---------- |
+ | 21.75  | 50.48   | 25.12  | 90.78     | 64.80      |
+
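+ The [evaluation script](https://github.com/asahi417/lm-question-generation) is the reference implementation; as a rough sketch (not the authors' exact pipeline), the first four metrics can be computed with the `evaluate` library, while MoverScore needs the separate `moverscore` package. The gold question below is hypothetical:
+ ```python
+ import evaluate
+
+ predictions = ['What is the name of the biopic that Beyonce starred in?']
+ references = ['In which biopic did Beyonce star as Etta James?']  # hypothetical gold question
+
+ # corpus-level BLEU-4 (max_order=4 is the default)
+ bleu = evaluate.load('bleu')
+ print(bleu.compute(predictions=predictions, references=[references], max_order=4)['bleu'])
+
+ rouge = evaluate.load('rouge')
+ print(rouge.compute(predictions=predictions, references=references)['rougeL'])
+
+ meteor = evaluate.load('meteor')
+ print(meteor.compute(predictions=predictions, references=references)['meteor'])
+
+ # BERTScore returns lists of per-example precision/recall/F1
+ bertscore = evaluate.load('bertscore')
+ print(bertscore.compute(predictions=predictions, references=references, lang='en')['f1'])
+ ```
+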
+ ## Fine-tuning Parameters
+ We ran a grid search to find the best hyper-parameters and continued fine-tuning until the validation metric started to decrease.
+ The best hyper-parameters can be found [here](https://huggingface.co/asahi417/lmqg-bart-large-squad/raw/main/trainer_config.json), and the fine-tuning script is released in [our repository](https://github.com/asahi417/lm-question-generation).
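+
+ The linked configuration is a plain JSON file, so the selected hyper-parameters can be inspected directly; a minimal sketch using only the standard library:
+ ```python
+ import json
+ import urllib.request
+
+ # fetch the released fine-tuning configuration linked above
+ url = 'https://huggingface.co/asahi417/lmqg-bart-large-squad/raw/main/trainer_config.json'
+ with urllib.request.urlopen(url) as response:
+     config = json.load(response)
+
+ # print every hyper-parameter chosen by the grid search
+ for name, value in config.items():
+     print(f'{name}: {value}')
+ ```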
+
+ ## Citation
+ TBA
+