---
license: cc-by-4.0
---

# BART-base fine-tuned on NaturalQuestions for **Question Generation**

[BART Model](https://arxiv.org/pdf/1910.13461.pdf) fine-tuned on [Google NaturalQuestions](https://ai.google.com/research/NaturalQuestions/) for **Question Generation**, treating the long answer as the input and the question as the output.

## Details of BART

The **BART** model was presented in [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) by *Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, Luke Zettlemoyer*. Here is the abstract:

We present BART, a denoising autoencoder for pretraining sequence-to-sequence models. BART is trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text. It uses a standard Transformer-based neural machine translation architecture which, despite its simplicity, can be seen as generalizing BERT (due to the bidirectional encoder), GPT (with the left-to-right decoder), and many other more recent pretraining schemes. We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token. BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks. It matches the performance of RoBERTa with comparable training resources on GLUE and SQuAD, achieves new state-of-the-art results on a range of abstractive dialogue, question answering, and summarization tasks, with gains of up to 6 ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system for machine translation, with only target language pretraining. We also report ablation experiments that replicate other pretraining schemes within the BART framework, to better measure which factors most influence end-task performance.

## Details of the downstream task (QG) - Dataset 📚 🧐

Dataset: ```NaturalQuestions``` from Google (https://ai.google.com/research/NaturalQuestions/)

| Dataset          | Split | # samples |
| ---------------- | ----- | --------- |
| NaturalQuestions | train | 97650     |
| NaturalQuestions | valid | 10850     |
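
Each training example pairs a long answer (the input passage) with its question (the target). As a rough illustration of that format, here is a minimal tokenization sketch using the `facebook/bart-base` tokenizer; the passage and question below are made up for illustration, and the exact preprocessing used for training may differ.

```python
from transformers import BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")

# A hypothetical (long answer, question) pair in the NaturalQuestions style.
long_answer = (
    "The Amazon rainforest covers much of the Amazon basin of South America, "
    "an area of roughly 5,500,000 square kilometres shared by nine countries."
)
question = "how big is the amazon rainforest"

# The long answer is the encoder input; the question is the decoder target.
# BART uses the same tokenizer for inputs and targets.
model_inputs = tokenizer(long_answer, max_length=512, truncation=True)
model_inputs["labels"] = tokenizer(question, max_length=64, truncation=True)["input_ids"]
```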

## Model fine-tuning 🏋️

The training script can be found [here](https://github.com/McGill-NLP/MLQuestions/blob/main/QG/train.py).
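
For orientation only, the block below is a minimal sketch of seq2seq fine-tuning with the Hugging Face `Seq2SeqTrainer`. It is not the linked training script: the hyperparameters, the output directory name, and the one-example toy dataset are all illustrative assumptions.

```python
from transformers import (
    BartForConditionalGeneration,
    BartTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

# Illustrative sketch only -- not the McGill-NLP/MLQuestions training script.
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Toy dataset: one tokenized (long answer -> question) pair, built as in the
# tokenization sketch above. A list of feature dicts works as a map-style dataset.
passage = "The Amazon rainforest covers roughly 5,500,000 square kilometres."
target = "how big is the amazon rainforest"
example = tokenizer(passage, max_length=512, truncation=True)
example["labels"] = tokenizer(target, max_length=64, truncation=True)["input_ids"]
train_dataset = [example]

training_args = Seq2SeqTrainingArguments(
    output_dir="bart-qg-nq",        # assumed output path
    per_device_train_batch_size=8,  # illustrative hyperparameters
    num_train_epochs=3,
    learning_rate=3e-5,
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```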

## Model in Action 🚀

```python
from transformers import AutoModel, BartTokenizer

# Load the tokenizer
tokenizer = BartTokenizer.from_pretrained('facebook/bart-base')

# Load the model
model = AutoModel.from_pretrained("McGill-NLP/bart-qg-nq-checkpoint")
```
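
`AutoModel` returns the encoder-decoder backbone without the language-modeling head, so for actually generating questions the checkpoint would typically be loaded with a conditional-generation class instead. The snippet below is a small usage sketch along those lines; the example passage and the generation settings (beam size, length limit) are illustrative choices, not values prescribed by the authors.

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("McGill-NLP/bart-qg-nq-checkpoint")

# An example "long answer" passage (made up for illustration).
passage = (
    "The Amazon rainforest covers most of the Amazon basin of South America, "
    "an area of roughly 5,500,000 square kilometres shared by nine countries."
)

inputs = tokenizer(passage, return_tensors="pt", max_length=512, truncation=True)

# Beam search with illustrative settings; tune for your use case.
output_ids = model.generate(
    **inputs,
    num_beams=4,
    max_length=64,
    early_stopping=True,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```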

## Citation

If you want to cite this model, you can use this:

```bibtex
@inproceedings{kulshreshtha-etal-2021-back,
    title = "Back-Training excels Self-Training at Unsupervised Domain Adaptation of Question Generation and Passage Retrieval",
    author = "Kulshreshtha, Devang and
      Belfer, Robert and
      Serban, Iulian Vlad and
      Reddy, Siva",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.566",
    pages = "7064--7078",
    abstract = "In this work, we introduce back-training, an alternative to self-training for unsupervised domain adaptation (UDA). While self-training generates synthetic training data where natural inputs are aligned with noisy outputs, back-training results in natural outputs aligned with noisy inputs. This significantly reduces the gap between target domain and synthetic data distribution, and reduces model overfitting to source domain. We run UDA experiments on question generation and passage retrieval from the Natural Questions domain to machine learning and biomedical domains. We find that back-training vastly outperforms self-training by a mean improvement of 7.8 BLEU-4 points on generation, and 17.6{\%} top-20 retrieval accuracy across both domains. We further propose consistency filters to remove low-quality synthetic data before training. We also release a new domain-adaptation dataset - MLQuestions containing 35K unaligned questions, 50K unaligned passages, and 3K aligned question-passage pairs.",
}
```
59
+
60
+ > Created by [Devang Kulshreshtha](https://geekydevu.netlify.app/)
61
+
62
+ > Made with <span style="color: #e25555;">&hearts;</span> in Spain