---
license: cc-by-4.0
---

# BART-base fine-tuned on NaturalQuestions for **Question Generation**

[BART Model](https://arxiv.org/pdf/1910.13461.pdf) trained for Question Generation in an unsupervised manner using the [Self-Training](https://arxiv.org/pdf/2104.08801.pdf) algorithm (Kulshreshtha et al., EMNLP 2021). The training data consists of unaligned questions and passages from the [MLQuestions dataset](https://github.com/McGill-NLP/MLQuestions/tree/main/data).

## Details of Self-Training

The Self-Training algorithm was presented as a baseline to the proposed Back-Training algorithm in [Back-Training excels Self-Training at Unsupervised Domain Adaptation of Question Generation and Passage Retrieval](https://arxiv.org/pdf/2104.08801.pdf) by *Devang Kulshreshtha, Robert Belfer, Iulian Vlad Serban, and Siva Reddy*. Here is the abstract:

In this work, we introduce back-training, an alternative to self-training for unsupervised domain adaptation (UDA) from source to target domain. While self-training generates synthetic training data where natural inputs are aligned with noisy outputs, back-training results in natural outputs aligned with noisy inputs. This significantly reduces the gap between the target domain and synthetic data distribution, and reduces model overfitting to the source domain. We run UDA experiments on question generation and passage retrieval from the Natural Questions domain to machine learning and biomedical domains. We find that back-training vastly outperforms self-training by a mean improvement of 7.8 BLEU-4 points on generation, and 17.6% top-20 retrieval accuracy across both domains. We further propose consistency filters to remove low-quality synthetic data before training. We also release a new domain-adaptation dataset, MLQuestions, containing 35K unaligned questions, 50K unaligned passages, and 3K aligned question-passage pairs.
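
To make the self-training procedure concrete, below is a minimal, illustrative sketch of one self-training round for question generation: a source-trained model pseudo-labels unaligned target-domain passages with synthetic questions and is then fine-tuned on those noisy pairs. The base checkpoint, example passage, and hyperparameters are placeholders rather than the settings used in the paper; refer to the authors' training script (linked below) for the actual procedure.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder source-domain checkpoint; the paper starts from a QG model trained on NaturalQuestions.
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

# Unaligned target-domain passages (a single toy example here).
target_passages = ["Gradient descent is an optimization algorithm used to minimize a loss function."]

# Step 1: pseudo-label the passages with synthetic questions using the current model.
inputs = tokenizer(target_passages, return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    generated_ids = model.generate(**inputs, max_length=32, num_beams=4)
synthetic_questions = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)

# Step 2: fine-tune on the (natural passage, noisy question) pairs.
# (The paper additionally applies consistency filters to drop low-quality pairs before this step.)
labels = tokenizer(synthetic_questions, return_tensors="pt", padding=True, truncation=True).input_ids
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
model.train()
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()
```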

## Model training 🏋️‍

The training script can be found [here](https://github.com/McGill-NLP/MLQuestions/blob/main/UDA-SelfTraining.sh).

## Model in Action 🚀

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("geekydevu/bart-qg-mlquestions-selftraining")
# Load the model
model = AutoModelForSeq2SeqLM.from_pretrained("geekydevu/bart-qg-mlquestions-selftraining")
```
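
As a quick usage sketch continuing the snippet above, the loaded model can generate a question from a passage. The example passage and decoding settings below are illustrative assumptions, not values prescribed by the authors.

```python
# Encode a target-domain passage and generate a question (beam-search settings are illustrative).
passage = "Backpropagation computes the gradient of the loss with respect to each network weight using the chain rule."
inputs = tokenizer(passage, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```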

## Citation

If you want to cite this model, you can use:

```bibtex
@inproceedings{kulshreshtha-etal-2021-back,
    title = "Back-Training excels Self-Training at Unsupervised Domain Adaptation of Question Generation and Passage Retrieval",
    author = "Kulshreshtha, Devang  and
      Belfer, Robert  and
      Serban, Iulian Vlad  and
      Reddy, Siva",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.566",
    pages = "7064--7078",
    abstract = "In this work, we introduce back-training, an alternative to self-training for unsupervised domain adaptation (UDA). While self-training generates synthetic training data where natural inputs are aligned with noisy outputs, back-training results in natural outputs aligned with noisy inputs. This significantly reduces the gap between target domain and synthetic data distribution, and reduces model overfitting to source domain. We run UDA experiments on question generation and passage retrieval from the Natural Questions domain to machine learning and biomedical domains. We find that back-training vastly outperforms self-training by a mean improvement of 7.8 BLEU-4 points on generation, and 17.6{\%} top-20 retrieval accuracy across both domains. We further propose consistency filters to remove low-quality synthetic data before training. We also release a new domain-adaptation dataset - MLQuestions containing 35K unaligned questions, 50K unaligned passages, and 3K aligned question-passage pairs.",
}
```

> Created by [Devang Kulshreshtha](https://geekydevu.netlify.app/)

> Made with <span style="color: #e25555;">&hearts;</span> in Spain