alisawuffles committed
Commit 22d3c34
Parent: 99e83b4

Update README.md

Files changed (1)
  1. README.md +9 -4
README.md CHANGED
@@ -29,14 +29,19 @@ prediction = model.config.id2label[label_id]
 
 ### Citation
 ```
-@misc{liu-etal-2022-wanli,
-    title = "WANLI: Worker and AI Collaboration for Natural Language Inference Dataset Creation",
+@inproceedings{liu-etal-2022-wanli,
+    title = "{WANLI}: Worker and {AI} Collaboration for Natural Language Inference Dataset Creation",
     author = "Liu, Alisa and
       Swayamdipta, Swabha and
       Smith, Noah A. and
       Choi, Yejin",
-    month = jan,
+    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2022",
+    month = dec,
     year = "2022",
-    url = "https://arxiv.org/pdf/2201.05955",
+    address = "Abu Dhabi, United Arab Emirates",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2022.findings-emnlp.508",
+    pages = "6826--6847",
+    abstract = "A recurring challenge of crowdsourcing NLP datasets at scale is that human writers often rely on repetitive patterns when crafting examples, leading to a lack of linguistic diversity. We introduce a novel approach for dataset creation based on worker and AI collaboration, which brings together the generative strength of language models and the evaluative strength of humans. Starting with an existing dataset, MultiNLI for natural language inference (NLI), our approach uses dataset cartography to automatically identify examples that demonstrate challenging reasoning patterns, and instructs GPT-3 to compose new examples with similar patterns. Machine generated examples are then automatically filtered, and finally revised and labeled by human crowdworkers. The resulting dataset, WANLI, consists of 107,885 NLI examples and presents unique empirical strengths over existing NLI datasets. Remarkably, training a model on WANLI improves performance on eight out-of-domain test sets we consider, including by 11{\%} on HANS and 9{\%} on Adversarial NLI, compared to training on the 4x larger MultiNLI. Moreover, it continues to be more effective than MultiNLI augmented with other NLI datasets. Our results demonstrate the promise of leveraging natural language generation techniques and re-imagining the role of humans in the dataset creation process.",
 }
 ```
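
For context, the hunk above anchors on the README's usage snippet (`prediction = model.config.id2label[label_id]`). Below is a minimal sketch of that usage, assuming the standard transformers sequence-classification pattern; the model ID `alisawuffles/roberta-large-wanli` and the example premise/hypothesis pair are assumptions for illustration, not taken from this commit.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed model ID: the committer's WANLI-trained RoBERTa; not named in this diff.
model_name = "alisawuffles/roberta-large-wanli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Hypothetical NLI example pair for illustration.
premise = "A recurring challenge of crowdsourcing NLP datasets at scale is repetitive patterns."
hypothesis = "Crowdsourced NLP datasets tend to lack linguistic diversity."

# Encode the premise/hypothesis pair and pick the highest-scoring class.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
logits = model(**inputs).logits
label_id = logits.argmax(dim=-1).item()

# config.id2label maps the integer class index to its string name,
# e.g. "entailment" / "neutral" / "contradiction".
prediction = model.config.id2label[label_id]
print(prediction)
```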