arxiv:2207.14251

Measuring Causal Effects of Data Statistics on Language Model's 'Factual' Predictions

Published on Jul 28, 2022

Abstract

Large amounts of training data are one of the major reasons for the high performance of state-of-the-art NLP models. But what exactly in the training data causes a model to make a certain prediction? We seek to answer this question by providing a language for describing how training data influences predictions, through a causal framework. Importantly, our framework bypasses the need to retrain expensive models and allows us to estimate causal effects based on observational data alone. Addressing the problem of extracting factual knowledge from pretrained language models (PLMs), we focus on simple data statistics such as co-occurrence counts and show that these statistics do influence the predictions of PLMs, suggesting that such models rely on shallow heuristics. Our causal framework and our results demonstrate the importance of studying datasets and the benefits of causality for understanding NLP models.
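The co-occurrence statistic the abstract mentions can be illustrated with a minimal sketch. This is a naive sentence-level pair count over a hypothetical toy corpus, intended only to show what "co-occurrence counts" means here, not the paper's actual measurement pipeline:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(corpus):
    """Count how often each pair of distinct words appears in the same sentence."""
    counts = Counter()
    for sentence in corpus:
        tokens = sorted(set(sentence.lower().split()))
        for a, b in combinations(tokens, 2):
            counts[(a, b)] += 1
    return counts

# Hypothetical toy corpus; a real study would use the model's pretraining data.
corpus = [
    "Dante was born in Florence",
    "Dante wrote the Divine Comedy in Florence",
    "Shakespeare was born in Stratford",
]
counts = cooccurrence_counts(corpus)
print(counts[("dante", "florence")])  # 2
```

The paper's question is whether a statistic like `counts[("dante", "florence")]` causally influences how confidently a PLM predicts "Florence" as Dante's birthplace, which would indicate reliance on a shallow heuristic rather than genuine factual knowledge.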


Models citing this paper: 84
