sileod committed
Commit 81483a2
Parent: 9280f38

Update README.md

Files changed (1)
  1. README.md +18 -1
README.md CHANGED
@@ -10,4 +10,21 @@ task_ids:
 Imppres, but it works
 https://github.com/facebookresearch/Imppres
 
-https://arxiv.org/abs/2004.03066
+```
+@inproceedings{jeretic-etal-2020-natural,
+    title = "Are Natural Language Inference Models {IMPPRESsive}? {L}earning {IMPlicature} and {PRESupposition}",
+    author = "Jereti\v{c}, Paloma  and
+      Warstadt, Alex  and
+      Bhooshan, Suvrat  and
+      Williams, Adina",
+    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
+    month = jul,
+    year = "2020",
+    address = "Online",
+    publisher = "Association for Computational Linguistics",
+    url = "https://www.aclweb.org/anthology/2020.acl-main.768",
+    doi = "10.18653/v1/2020.acl-main.768",
+    pages = "8690--8705",
+    abstract = "Natural language inference (NLI) is an increasingly important task for natural language understanding, which requires one to infer whether a sentence entails another. However, the ability of NLI models to make pragmatic inferences remains understudied. We create an IMPlicature and PRESupposition diagnostic dataset (IMPPRES), consisting of 32K semi-automatically generated sentence pairs illustrating well-studied pragmatic inference types. We use IMPPRES to evaluate whether BERT, InferSent, and BOW NLI models trained on MultiNLI (Williams et al., 2018) learn to make pragmatic inferences. Although MultiNLI appears to contain very few pairs illustrating these inference types, we find that BERT learns to draw pragmatic inferences. It reliably treats scalar implicatures triggered by {``}some{''} as entailments. For some presupposition triggers like {``}only{''}, BERT reliably recognizes the presupposition as an entailment, even when the trigger is embedded under an entailment canceling operator like negation. BOW and InferSent show weaker evidence of pragmatic reasoning. We conclude that NLI training encourages models to learn some, but not all, pragmatic inferences.",
+}
+```