|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T01:08:56.065738Z" |
|
}, |
|
"title": "Do Language Embeddings capture Scales?", |
|
"authors": [ |
|
{ |
|
"first": "Xikun", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "xikunz2@cs.stanford.edu" |
|
}, |
|
{ |
|
"first": "Deepak", |
|
"middle": [], |
|
"last": "Ramachandran", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "ramachandrand@google.com" |
|
}, |
|
{ |
|
"first": "Ian", |
|
"middle": [], |
|
"last": "Tenney", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "iftenney@google.com" |
|
}, |
|
{ |
|
"first": "Yanai", |
|
"middle": [], |
|
"last": "Elazar", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "yanaiela@gmail.com" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "danroth@seas.upenn.edu" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Pretrained Language Models (LMs) have been shown to possess significant linguistic, common sense and factual knowledge. One form of knowledge that has not been studied yet in this context is information about the scalar magnitudes of objects. We show that pretrained language models capture a significant amount of this information but are short of the capability required for general common-sense reasoning. We identify contextual information in pre-training and numeracy as two key factors affecting their performance, and show that a simple method of canonicalizing numbers can have a significant effect on the results. 1 * Both authors contributed equally. \u2020 Work done during an internship at Google Research.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Pretrained Language Models (LMs) have been shown to possess significant linguistic, common sense and factual knowledge. One form of knowledge that has not been studied yet in this context is information about the scalar magnitudes of objects. We show that pretrained language models capture a significant amount of this information but are short of the capability required for general common-sense reasoning. We identify contextual information in pre-training and numeracy as two key factors affecting their performance, and show that a simple method of canonicalizing numbers can have a significant effect on the results. 1 * Both authors contributed equally. \u2020 Work done during an internship at Google Research.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The success of contextualized pretrained Language Models like BERT (Devlin et al., 2018) and ELMo (Peters et al., 2018) on tasks like Question Answering and Natural Language Inference, has led to speculation that they are good at Common Sense Reasoning (CSR).", |
|
"cite_spans": [ |
|
{ |
|
"start": 67, |
|
"end": 88, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 93, |
|
"end": 119, |
|
"text": "ELMo (Peters et al., 2018)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "On one hand, recent work has approached this question by measuring the ability of LMs to answer questions about physical common sense (Bisk et al., 2020) (\"How to separate egg whites from yolks?\"), temporal reasoning (Zhou et al., 2020) (\"How long does a basketball game take?\"), and numerical common sense (Lin et al., 2020) . On the other hand, after realizing some high-level reasoning skills like this may be difficult to learn from a language-modeling objective only, (Geva et al., 2020) injects numerical reasoning skills into LMs by additional pretraining on automatically generated data. All of these skills are prerequisites for CSR. Figure 1 : Scalar probing example. The mass of \"dog\" is a distribution (gray histogram) concentrated around 10-100kg. We train a linear model over a frozen (shown by the snowflake in the figure) encoder to predict this distribution (orange histogram) using either a dense cross-entropy or a regression loss (Section 3).", |
|
"cite_spans": [ |
|
{ |
|
"start": 134, |
|
"end": 153, |
|
"text": "(Bisk et al., 2020)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 307, |
|
"end": 325, |
|
"text": "(Lin et al., 2020)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 473, |
|
"end": 492, |
|
"text": "(Geva et al., 2020)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 643, |
|
"end": 651, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Here, we address a simpler task which is another pre-requisite for CSR: the prediction of scalar attributes, a task we call Scalar Probing. Given an object (such as a \"wedding ring\") and an attribute with continuous numeric values (such as Mass or Price), can an LM's representation of the object predict the value of that attribute? Since in general, there may not be a single correct value for such attributes due to polysemy (\"crane\" as a bird, versus construction equipment) or natural variation (e.g. different breeds of dogs), we interpret this as a task of predicting a distribution of possible values for this attribute, and compare it to a ground truth distribution of such values. An overview of this scalar probing is shown in Figure 1 . Examples of ground-truth distributions and model predictions for different objects and attributes are shown in Figure 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 738, |
|
"end": 746, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 860, |
|
"end": 868, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our analysis shows that contextual encoders, like BERT and ELMo, perform better than noncontextual ones, like Word2Vec, on scalar probing de-spite the task being non-contextual (Mikolov et al., 2013) . Further, we show that using scientific notation to represent numbers in pre-training can have a significant effect on results (though sensitive to the evaluation metric used). Put together, these results imply that scale representation in contextual encoders is mediated by transfer of magnitude information from numbers to nouns in pre-training and making this mechanism more robust could improve performance on this and other CSR tasks. We also show improvements by zero-shot transfer from our probes to 2 related tasks: relative comparisons (Forbes and Choi, 2017) and product price prediction (Jianmo Ni, 2019) , indicating that our results are robust across datasets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 177, |
|
"end": 199, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 746, |
|
"end": 769, |
|
"text": "(Forbes and Choi, 2017)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 799, |
|
"end": 816, |
|
"text": "(Jianmo Ni, 2019)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We define the scalar probing task (see Figure 1 ) as the problem of predicting a distribution over values of a scalar attribute of an object. We map these values into 12 logarithmically-spaced buckets, so that our task is equivalent to predicting (the distribution of) the order of magnitude of the target value. We explore both models that predict the full distribution and models that predict a point estimate of the value, which is essentially a distribution with all the mass concentrating on one bucket.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 39, |
|
"end": 47, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Problem Definition and Data", |
|
"sec_num": "2" |
|
}, |
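
{

"text": "As an illustrative sketch (assuming NumPy; not necessarily the exact pipeline used for our experiments), the bucketing above amounts to rounding the base-10 logarithm of each observed value to one of 12 integer buckets (see Appendix B) and normalizing the counts:\n\nimport numpy as np\n\ndef bucket_distribution(values, low=-2, high=9):\n    # Round log10 of each value to the nearest integer bucket in [low, high]\n    # (12 buckets) and return the normalized histogram as the soft label.\n    exps = np.clip(np.round(np.log10(values)).astype(int), low, high)\n    counts = np.bincount(exps - low, minlength=high - low + 1)\n    return counts / counts.sum()\n\n# e.g. observed masses (in grams) for one object\nprint(bucket_distribution(np.array([800.0, 12000.0, 20000.0, 35000.0])))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Problem Definition and Data",

"sec_num": "2"

},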
|
{ |
|
"text": "Our primary resource for the scalar probing task is Distributions over Quantities (DoQ; Elazar et al., 2019) which consists of empirical counts of scalar attribute values associated with >350K nouns, adjectives, and verbs over 10 different attributes, collected from web data. In this work, we focus only on nouns (which we refer to as objects) over the scalar attributes (or scales) of MASS (in grams), LENGTH (in meters) and PRICE (in US Dollars). For each object and scale, DoQ provides an empirical distribution over possible values (e.g. Figure 2) that we map into the 12 afore-mentioned buckets and treat it as \"ground truth\". We note that DoQ itself is derived heuristically from web text and itself contains noise; however, we use it as a starting point to evaluate the performance of different models. Moreover, we validate our findings with transfer experiments shown in Section 6, using DoQ to train a probe that is evaluated on the ground-truth data of Forbes and Choi (2017) and Jianmo Ni (2019) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 82, |
|
"end": 87, |
|
"text": "(DoQ;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 88, |
|
"end": 108, |
|
"text": "Elazar et al., 2019)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 965, |
|
"end": 987, |
|
"text": "Forbes and Choi (2017)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 992, |
|
"end": 1008, |
|
"text": "Jianmo Ni (2019)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 543, |
|
"end": 549, |
|
"text": "Figure", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Problem Definition and Data", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "To explore the role of context in scalar probing, we also trained specialized probing models on a subset of DoQ data in narrow domains: MASS of Animals and PRICE of Household products.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Definition and Data", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We probe three different embedding models: Word2vec (Mikolov et al., 2013) , ELMo (Peters et al., 2018) and BERT (Devlin et al., 2018 ) (the latter two of which are contextualized encoders). For each encoder, the input layer extracts an embedding of the object and the probing layer predicts the scalar magnitude. 2 Input representations For Word2vec, we follow the standard practice of averaging the embeddings of each word in the object's name. If an object name is a full phrase in the dictionary, we use its embedding instead. As BERT and ELMo are contextual text encoders operating on full sentences, we generate artificial sentences with the following templates:", |
|
"cite_spans": [ |
|
{ |
|
"start": 52, |
|
"end": 74, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 77, |
|
"end": 103, |
|
"text": "ELMo (Peters et al., 2018)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 113, |
|
"end": 133, |
|
"text": "(Devlin et al., 2018", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 314, |
|
"end": 315, |
|
"text": "2", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probing Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 MASS: The X is heavy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probing Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 PRICE: The X is expensive.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probing Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 LENGTH: The X is big.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probing Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "and use the CLS token emebedding (for BERT) or final state embedding (for ELMo) as the input representation. For LENGTH, We use \"big\" instead of \"long\", since LENGTH measurements in DoQ can be widths or heights as well. Variations of these templates with different adjectives and sentence structures (e.g. \"The X is small.\" or \"What is the length of X?\" for LENGTH) led to very similar performance in our evaluations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probing Model", |
|
"sec_num": "3" |
|
}, |
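
{

"text": "For concreteness, a minimal sketch of extracting such a frozen input representation, assuming the HuggingFace transformers library (our experiments do not depend on this particular toolkit):\n\nimport torch\nfrom transformers import BertModel, BertTokenizer\n\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\nencoder = BertModel.from_pretrained(\"bert-base-uncased\")\nencoder.eval()  # the encoder stays frozen; only the probe on top is trained\n\ndef object_embedding(obj, template=\"The {} is heavy.\"):\n    # Fill the MASS template and take the CLS vector as the object representation.\n    inputs = tokenizer(template.format(obj), return_tensors=\"pt\")\n    with torch.no_grad():\n        outputs = encoder(**inputs)\n    return outputs.last_hidden_state[:, 0]  # shape: (1, hidden_size)\n\nembedding = object_embedding(\"dog\")",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Probing Model",

"sec_num": "3"

},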
|
{ |
|
"text": "Probes We use linear probes in all cases following many previous probing work (Shi et al., 2016; Ettinger et al., 2016; Pimentel et al., 2020) since we want to use a simple probe to find easily accessible information in a representation. Hewitt and Liang (2019) also demonstrates that linear probes achieve relatively high selectivity compared to non-linear ones like MLP.", |
|
"cite_spans": [ |
|
{ |
|
"start": 78, |
|
"end": 96, |
|
"text": "(Shi et al., 2016;", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 97, |
|
"end": 119, |
|
"text": "Ettinger et al., 2016;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 120, |
|
"end": 142, |
|
"text": "Pimentel et al., 2020)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 238, |
|
"end": 261, |
|
"text": "Hewitt and Liang (2019)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probing Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We experiment with two different approaches for predicting scales:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probing Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Regression (rgr) For the point estimate, we use a standard Linear Regression model trained to predict log of the median of all values for each object for the scale attribute under consideration.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probing Model", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We take a non-parametric approach to modeling the full distribution of scalar values and treat the prediction of which bucket a measurement will fall under as a multi-class classification task, with one class per bucket. A similar approach was shown by (Van Oord et al., 2016) to perform well for modeling image pixel values. This approach discards the relationship between adjacent bucket values, but it allows us to use the full empirical distribution as soft labels. We train a linear model with softmax output, using a dense cross-entropy loss against the empirical distribution from DoQ.", |
|
"cite_spans": [ |
|
{ |
|
"start": 253, |
|
"end": 276, |
|
"text": "(Van Oord et al., 2016)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-class Classification (mcc)", |
|
"sec_num": null |
|
}, |
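
{

"text": "A minimal sketch of this probe (assuming PyTorch, with placeholder data; the actual hyperparameters are listed in Appendix A): a single linear layer over the frozen embeddings, trained with a dense cross-entropy loss against the empirical bucket distributions used as soft labels.\n\nimport torch\nimport torch.nn.functional as F\n\nn, d, buckets = 256, 768, 12\nX = torch.randn(n, d)                # frozen encoder embeddings (placeholder)\nY = torch.rand(n, buckets)\nY = Y / Y.sum(dim=-1, keepdim=True)  # empirical DoQ bucket distributions (placeholder)\n\nprobe = torch.nn.Linear(d, buckets)\noptimizer = torch.optim.Adam(probe.parameters(), lr=1e-3, weight_decay=0.01)\n\nfor _ in range(100):\n    optimizer.zero_grad()\n    log_probs = F.log_softmax(probe(X), dim=-1)\n    loss = -(Y * log_probs).sum(dim=-1).mean()  # dense (soft-label) cross-entropy\n    loss.backward()\n    optimizer.step()",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Multi-class Classification (mcc)",

"sec_num": null

},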
|
{ |
|
"text": "More details of the model and training procedure are in the Appendix.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multi-class Classification (mcc)", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Wallace et al. 2019showed that BERT and ELMo had a limited amount of numeracy or numerical reasoning ability, when restricted to numbers of small magnitude. Intuitively, it seems that significant model capacity is expended in parsing the natural representation of numbers as Arabic numerals, where higher and lower order digits are given equal prominence. As further evidence of this, it is shown in Appendix B of Wallace et al. (2019) that the simple intervention of left-padding numbers in ELMo instead of the default right-padding used in Char-CNNs greatly improves accuracy on these tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 414, |
|
"end": 435, |
|
"text": "Wallace et al. (2019)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Numeracy through Scientific Notation", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "To examine the effect of numerical representations on scalar probing, we trained a new version of the BERT model (which we call NumBERT) by replacing every instance of a number in the training data with its representation in scientific notation, a combination of an exponent and mantissa (for example 314.1 is represented as 3141[EXP]2 where [EXP] is a new token introduced into the vocabulary). This enables the BERT model to more easily associate objects in the sentence directly with the magnitude expressed in the exponent, ignoring the relatively insignificant mantissa. This model converged to a similar loss on the original BERT Masked LM+NSP pre-training task and a standard suite of NLP tasks (See Appendix) as BERT-base, demonstrating that it was not over-specialized for numerical reasoning tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Numeracy through Scientific Notation", |
|
"sec_num": "4" |
|
}, |
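
{

"text": "A rough sketch of this rewriting (the exact rules used to build the NumBERT pre-training corpus may differ, e.g. in how signs, trailing zeros and dates are handled):\n\nimport math\nimport re\n\ndef to_scientific(match):\n    token = match.group(0)\n    value = float(token)\n    if value == 0:\n        return \"0[EXP]0\"\n    exponent = math.floor(math.log10(abs(value)))\n    mantissa = token.replace(\".\", \"\").lstrip(\"0\") or \"0\"\n    return \"{}[EXP]{}\".format(mantissa, exponent)\n\ndef convert_numbers(text):\n    # Replace each number with mantissa[EXP]exponent notation.\n    return re.sub(r\"\\d+\\.?\\d*\", to_scientific, text)\n\nprint(convert_numbers(\"The bridge is 314.1 meters long.\"))\n# -> The bridge is 3141[EXP]2 meters long.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Numeracy through Scientific Notation",

"sec_num": "4"

},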
|
{ |
|
"text": "We offer the following aggregate baseline to help interpret our results: For each attribute, we compute the empirical distribution over buckets across all objects in the training set, and use that as a predicted distribution for all objects in the test set (this is a stronger version of the majority baseline used in classification tasks). Since we are comparing results from regression and classification models, we report results on 3 disparate metrics that give a full picture of performance:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Accuracy For mcc we use the highest scoring bucket from the predicted distribution as the predicted bucket, while for rgr we map the predicted scalar to the single containing bucket and use that as the predicted bucket. Then the accuracy is calculated between the predicted bucket and the groundtruth bucket, which is the highest scoring bucket in the empirical distribution in DoQ.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Mean Square Error (MSE) When used to compare distributions, this is also known as the Cramer-Von Mises distance (Baringhaus and Henze, 2017) . It ignores the difference in magnitude between different buckets (a difference in probability mass between buckets i and i + 1 is equivalent to the same difference between buckets i and any other), but is upper-bounded by 1, making it easier to interpret. To calculate MSE for rgr, we assume that it assigns a probability of 1 to the single containing bucket. 3", |
|
"cite_spans": [ |
|
{ |
|
"start": 112, |
|
"end": 140, |
|
"text": "(Baringhaus and Henze, 2017)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Earth Mover's Distance (EMD) Also known as the Wasserstein distance (Rubner et al., 1998) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 68, |
|
"end": 89, |
|
"text": "(Rubner et al., 1998)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Given two probability densities p 1 and p 2 on \u2126, and some distance measure d on \u2126, the Earth Mover's Distance is defined as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "D(p 1 , p 2 ) = inf \u03c0 \u2126 \u2126 d(x, y)d\u03c0(x, y)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5" |
|
}, |
|
{

"text": "where the infimum is over all non-negative measures $\\pi$ on $\\Omega \\times \\Omega$ satisfying $\\pi(E \\times \\Omega) - \\pi(\\Omega \\times E) = \\int_E p_1(x) \\, dx - \\int_E p_2(x) \\, dx$ for measurable subsets $E \\subset \\Omega$.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluation",

"sec_num": "5"

},
|
{ |
|
"text": "Intuitively, EMD measures how much \"work\" needs to be done to move the probability mass of p 1 to p 2 , while MSE measures pointwise what the difference in densities is. So EMD accounts for the distance between buckets, and predictions to neighboring buckets are penalized less than those further away. EMD is favored in the statistics literature because of its better convergence properties (Rubner et al., 1998) , and there is evidence that it is more robust to adversarial perturbations of the data distribution (Liu et al., 2019) , which is relevant for our transfer tasks described below.", |
|
"cite_spans": [ |
|
{ |
|
"start": 392, |
|
"end": 413, |
|
"text": "(Rubner et al., 1998)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 515, |
|
"end": 533, |
|
"text": "(Liu et al., 2019)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5" |
|
}, |
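
{

"text": "Concretely, for two bucket distributions p and q over ordered buckets with unit spacing, the two distribution metrics can be computed as follows (a sketch of one natural reading of the definitions above, assuming NumPy):\n\nimport numpy as np\n\ndef mse(p, q):\n    # Pointwise squared difference between the bucket probabilities.\n    return np.mean((p - q) ** 2)\n\ndef emd(p, q):\n    # For 1-D histograms with unit spacing, the Earth Mover's Distance reduces\n    # to the L1 distance between the two cumulative distribution functions.\n    return np.sum(np.abs(np.cumsum(p) - np.cumsum(q)))\n\np = np.array([0.0, 0.1, 0.6, 0.3])\nq = np.array([0.1, 0.6, 0.3, 0.0])\nprint(mse(p, q), emd(p, q))  # EMD penalizes mass that must travel across more buckets",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evaluation",

"sec_num": "5"

},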
|
{ |
|
"text": "Transfer experiments We also evaluate models trained on DoQ on 2 datasets containing ground truth labels of scalar attributes. The first is a humanlabeled dataset of relative comparisons (e.g. (person, fox, weight, bigger)) (Forbes and Choi, 2017) . Predictions for this task are made by comparing the point estimates for rgr and highest-scoring buckets for mcc. The second is an empirical distribution of product price data extracted from the Amazon Review Dataset (Jianmo Ni, 2019) . We retrained a model on DoQ prices using 12 power-of-4 buckets to support finer grained predictions. Table 1 shows results of scalar probing on DoQ data. 4 For MSE and EMD the best possible score is 0, while for accuracy we take a loose upper bound to be the performance of a model that samples from the ground-truth distribution and is evaluated against the mode. This method achieves accuracies of 0.570 for lengths, 0.537 for masses, and 0.476 for prices. Compared to the baseline, we can see that mcc over the best encoders capture about half (as measured by accuracy) to a third (by MSE and EMD) of the distance to the upper bound, suggesting that while a significant amount of scalar information is available, there is a long way to go to support robust commonsense reasoning. From Table 1 , we see that the more expressive models using mcc consistently beat rgr, with the latter frequently unable to improve upon the Aggregate baseline. This shows that scale information is present in the embeddings, but training on the median alone is not enough to reliably extract it; the full data distribution is needed.", |
|
"cite_spans": [ |
|
{ |
|
"start": 224, |
|
"end": 247, |
|
"text": "(Forbes and Choi, 2017)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 474, |
|
"end": 483, |
|
"text": "Ni, 2019)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 640, |
|
"end": 641, |
|
"text": "4", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 587, |
|
"end": 594, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 1274, |
|
"end": 1281, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Comparing results by encoders, we see that Word2Vec performs significantly worse than the contextual encoders -even though the task is noncontextual -indicating that contextual information during pre-training improves the representation of scales.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Despite being weaker than BERT on downstream NLP tasks, ELMo does better on scalar probing, consistent with it being better at numeracy (Wallace et al., 2019 ) due to its character-level tokenization.", |
|
"cite_spans": [ |
|
{ |
|
"start": 136, |
|
"end": 157, |
|
"text": "(Wallace et al., 2019", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "NumBERT does consistently better than ELMo and BERT on the EMD metric, but worse on MSE and Accuracy. This is in contrast to other standard benchmarks such as Q/A and NLI, where Num-BERT made no difference relative to BERT. Our key takeaway is that the numerical representation has an impact on scale prediction (see Figure 2 for qualitative differences), but the direction is sensitive to the choice of evaluation metric. As discussed in Section 5, we believe EMD to be the most robust metric a priori, but this finding highlights the need to still examine the full range of metrics.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 317, |
|
"end": 325, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Results on Animal Masses (Table 1) show that training models only on objects in a narrow domain can significantly improve scalar prediction, underscoring the importance of context. For example, while \"crane\" in general can refer to either a bird or a piece of construction equipment, only the former is relevant in the animal domain, giving the model a simpler distribution of masses to predict.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 25, |
|
"end": 34, |
|
"text": "(Table 1)", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Note that, despite significant differences in the raw numbers for each scale (mass/length/price), the relative behavior of encoders, metrics and probes are the same, indicating that our conclusions are broadly applicable.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Transfer experiments On the F&C relative comparison task (Table 2) , rgr+NumBERT performed best, approaching the performance of using DoQ as an oracle, though short of specialized models for this task (Yang et al., 2018) . Scalar probes trained with mcc perform poorly, possibly because a finer-grained model of predicted distribution is not useful for the 3-class comparative task. On the Amazon price dataset (Table 3) which is a full distribution prediction task, mcc+NumBERT did best on all three metrics. On both zero-shot transfer tasks, NumBERT was the best encoder on all configurations of metric/objective, suggesting that manipulating numeric representations can signifi- cantly improve performance on scalar prediction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 201, |
|
"end": 220, |
|
"text": "(Yang et al., 2018)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 57, |
|
"end": 66, |
|
"text": "(Table 2)", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 411, |
|
"end": 420, |
|
"text": "(Table 3)", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "From our novel scalar probing experiments, we find there is a significant amount of scale information in object embeddings, but still a sizable gap to overcome before LMs achieve this prerequisite of CSR. We conclude that although we observe some non-trivial signal to extract scale information from language embedding, the weak signals suggest these models are far from satisfying common sense scale understanding. Our analysis points to improvements in modeling context and numeracy as directions in which progress can be made, mediated by the transfer of scale information from numbers to nouns. The NumBERT intervention has a measurable impact on scalar probing results, and transfer experiments suggest that it is an improvement. For future work we would like to extend our models to predict scales for sentences bearing relevant context about scalar magnitudes, e.g. \"I saw a baby elephant\". ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Here we provide the model hyperparameters we use for reproducibility.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A Model Hyperparameters", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For the regression model, we use a ridge regression with regularization strength of 1. For the multiclass classification model, we use a linear classifier with a softmax activation function and regularization strength of 0.01. For experiments on the narrow domains with smaller datasets, we first use PCA to reduce embeddings down to 150 dimensions before training the probing model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "A.1 Probing Layer of the Scalar Probing Model", |
|
"sec_num": null |
|
}, |
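
{

"text": "For illustration, the rgr probe on a small narrow-domain subset could be set up as follows (a sketch assuming scikit-learn, with placeholder data, mirroring the hyperparameters above):\n\nimport numpy as np\nfrom sklearn.decomposition import PCA\nfrom sklearn.linear_model import Ridge\nfrom sklearn.pipeline import make_pipeline\n\nX = np.random.randn(500, 768)  # frozen embeddings (placeholder)\ny = np.random.randn(500)       # log10 of the median attribute value (placeholder)\n\n# PCA down to 150 dimensions, then ridge regression with regularization strength 1.\nprobe = make_pipeline(PCA(n_components=150), Ridge(alpha=1.0))\nprobe.fit(X, y)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "A.1 Probing Layer of the Scalar Probing Model",

"sec_num": null

},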
|
{ |
|
"text": "NumBERT is pretrained on the Wikipedia and Books corpora used by the original BERT paper (Devlin et al., 2018) . The BERT configuration is the same as BERT-Base (L=12, H=768, A=12, Total Parameters=110M). The language model masking is applied after WordPiece tokenization with a uniform masking rate of 15%. Maximum sequence length (number of tokens) is 128. We train with batch size of 64 sequences for 1,000,000 steps, which is approximately 40 epochs over the 3.3 billion word corpus. All the other hyperparameters and implementation details (optimizer, warm-up steps, etc.) are the same as the original BERT implementation. Table 4 shows a comparison of Num-BERT vs a re-implementation of BERT-base with identical settings as above, on a suite of standard NLP benchmarks, and we conclude that the two models reach similar performance on these tasks. Table 5 shows the statistics of 3 datasets/resources we use in this paper. For DoQ, we take the original resource and get each subset by filtering using the corresponding dimensions and/or object types (e.g. all objects, animals, product categories, etc.). Also, only objects with more than 100 values collected in the resource are used. For F&C Cleaned dataset, we use the data and the train/dev/test splits from (Elazar et al., 2019) . We model the distributions of those scalar attributes as categorical distributions over 12 categories. We first take the base-10 logarithm of all the values and then round them to the nearest integer (between -2 and 9 for all scales). We treat each integer as a bucket and use the normalized counts in each bucket as the true distribution for that scalar attribute of the object.", |
|
"cite_spans": [ |
|
{ |
|
"start": 89, |
|
"end": 110, |
|
"text": "(Devlin et al., 2018)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 1268, |
|
"end": 1289, |
|
"text": "(Elazar et al., 2019)", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 628, |
|
"end": 635, |
|
"text": "Table 4", |
|
"ref_id": "TABREF7" |
|
}, |
|
{ |
|
"start": 854, |
|
"end": 861, |
|
"text": "Table 5", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "A.2 NumBERT", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To explore the effect of ambiguity, we divide all the data in DoQ for each scale into 2 sets, Unimodal where the distribution has one well-defined peak and Multimodal, where multiple peaks are present. The number of peaks were identified by a simple hill-climbing algorithm.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Data Statistics", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "As words often have more than one meaning in different contexts or even modifiers, their corresponding distribution from DoQ should reflect the different senses if they appeared enough in the data. When the objects are different enough (e.g. an ice-cream have mainly one meaning and its size doesn't vary much, as opposed to a truck which can be a toy truck, which is very small, or an actual vehicle, which is very big), they may have different modalities. In order to better understand our results, we wish to separate between objects of different modalities to objects with a single modality.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Data Statistics", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In order to estimate a multi-modal function, we take the bucketed DoQ distribution and smooth it into a probability density function. Then, by finding local maxima over the fitted density function, we estimate a distribution to be multi-modal if we find more than one maximum, otherwise we determine it to be a single-modal distribution.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "B Data Statistics", |
|
"sec_num": null |
|
}, |
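
{

"text": "A minimal sketch of this peak-counting step (assuming NumPy and a simple moving-average smoother; the exact smoothing used in our analysis may differ):\n\nimport numpy as np\n\ndef is_multimodal(bucket_probs, window=3):\n    # Smooth the bucketed distribution, then count strict local maxima.\n    kernel = np.ones(window) / window\n    smooth = np.convolve(bucket_probs, kernel, mode=\"same\")\n    peaks = [i for i in range(1, len(smooth) - 1)\n             if smooth[i] > smooth[i - 1] and smooth[i] > smooth[i + 1]]\n    return len(peaks) > 1\n\n# A clearly bimodal bucket distribution (e.g. toy trucks vs. real trucks).\nprobs = np.array([0.0, 0.05, 0.3, 0.05, 0.0, 0.0, 0.05, 0.35, 0.1, 0.0, 0.0, 0.1])\nprint(is_multimodal(probs))  # True",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "B Data Statistics",

"sec_num": null

},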
|
{ |
|
"text": "The complete experiment results including the mutlimodal experiments are in Table 6 . ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 76, |
|
"end": 83, |
|
"text": "Table 6", |
|
"ref_id": "TABREF11" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "B Data Statistics", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We use Word2Vec embeddings of dimension size 500 trained on Wikipedia, BERT-Base (L=12, H=768, A=12, Total Parameters=110M) trained on Wikipedia+Books and ELMo-Small (LSTM Hidden Size=1024, Output Size=128, #Highway Layers=1, Total Parameters=13.6M) trained on the 1 Billion Word Benchmark, approximately 800M tokens of news crawl data from WMT 2011.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This is distinguished from the MSE loss used to train regression models, as it is a distance measure over pairs of distributions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The full set of experimental results are shown inTable 6in the Appendix.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We want to thank Daniel Spokoyny for the idea of using scientific notation for numbers and Jeremiah Liu for helpful discussions on statistical distance measures.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Cram\u00e9r-von mises distance: probabilistic interpretation, confidence intervals, and neighbourhood-of-model validation", |
|
"authors": [ |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Baringhaus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Henze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Journal of Nonparametric Statistics", |
|
"volume": "29", |
|
"issue": "2", |
|
"pages": "167--188", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "L Baringhaus and N Henze. 2017. Cram\u00e9r-von mises distance: probabilistic interpretation, confidence intervals, and neighbourhood-of-model validation. Journal of Nonparametric Statistics, 29(2):167-188.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Piqa: Reasoning about physical commonsense in natural language", |
|
"authors": [ |
|
{ |
|
"first": "Yonatan", |
|
"middle": [], |
|
"last": "Bisk", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rowan", |
|
"middle": [], |
|
"last": "Zellers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianfeng", |
|
"middle": [], |
|
"last": "Ronan Le Bras", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yejin", |
|
"middle": [], |
|
"last": "Gao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "Thirty-Fourth AAAI Conference on Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jian- feng Gao, and Yejin Choi. 2020. Piqa: Reasoning about physical commonsense in natural language. In Thirty-Fourth AAAI Conference on Artificial Intelli- gence.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Devlin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming-Wei", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kristina", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1810.04805" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "How large are lions? inducing distributions over quantitative attributes", |
|
"authors": [ |
|
{ |
|
"first": "Yanai", |
|
"middle": [], |
|
"last": "Elazar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abhijit", |
|
"middle": [], |
|
"last": "Mahabal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deepak", |
|
"middle": [], |
|
"last": "Ramachandran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tania", |
|
"middle": [], |
|
"last": "Bedrax-Weiss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Roth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Association for Computational Linguistics (ACL)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yanai Elazar, Abhijit Mahabal, Deepak Ramachandran, Tania Bedrax-Weiss, and Dan Roth. 2019. How large are lions? inducing distributions over quanti- tative attributes. In Association for Computational Linguistics (ACL).", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Probing for semantic evidence of composition by means of simple classification tasks", |
|
"authors": [ |
|
{ |
|
"first": "Allyson", |
|
"middle": [], |
|
"last": "Ettinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ahmed", |
|
"middle": [], |
|
"last": "Elgohary", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philip", |
|
"middle": [], |
|
"last": "Resnik", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "134--139", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Allyson Ettinger, Ahmed Elgohary, and Philip Resnik. 2016. Probing for semantic evidence of composition by means of simple classification tasks. In Proceed- ings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 134-139.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Verb physics: Relative physical knowledge of actions and objects", |
|
"authors": [ |
|
{ |
|
"first": "Maxwell", |
|
"middle": [], |
|
"last": "Forbes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yejin", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "266--276", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/P17-1025" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maxwell Forbes and Yejin Choi. 2017. Verb physics: Relative physical knowledge of actions and objects. In Proceedings of the 55th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 266-276, Vancouver, Canada. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Injecting numerical reasoning skills into language models", |
|
"authors": [ |
|
{ |
|
"first": "Mor", |
|
"middle": [], |
|
"last": "Geva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ankit", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Berant", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2004.04487" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mor Geva, Ankit Gupta, and Jonathan Berant. 2020. Injecting numerical reasoning skills into language models. arXiv preprint arXiv:2004.04487.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Designing and interpreting probes with control tasks", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [], |
|
"last": "Hewitt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1909.03368" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John Hewitt and Percy Liang. 2019. Designing and in- terpreting probes with control tasks. arXiv preprint arXiv:1909.03368.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Justifying recommendations using distantly-labeled reviews and fined-grained aspects", |
|
"authors": [ |
|
{ |
|
"first": "Julian Mcauley Jianmo", |
|
"middle": [], |
|
"last": "Ni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jiacheng", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Empirical Methods in Natural Language Processing (EMNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Julian McAuley Jianmo Ni, Jiacheng Li. 2019. Jus- tifying recommendations using distantly-labeled re- views and fined-grained aspects. In Empirical Meth- ods in Natural Language Processing (EMNLP).", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Birds have four legs?! numersense: Probing numerical commonsense knowledge of pre-trained language models", |
|
"authors": [ |
|
{ |
|
"first": "Seyeon", |
|
"middle": [], |
|
"last": "Bill Yuchen Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rahul", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiang", |
|
"middle": [], |
|
"last": "Khanna", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Ren", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2005.00683" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bill Yuchen Lin, Seyeon Lee, Rahul Khanna, and Xiang Ren. 2020. Birds have four legs?! nu- mersense: Probing numerical commonsense knowl- edge of pre-trained language models. arXiv preprint arXiv:2005.00683.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Transferable adversarial training: A general approach to adapting deep classifiers", |
|
"authors": [ |
|
{ |
|
"first": "Hong", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mingsheng", |
|
"middle": [], |
|
"last": "Long", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jianmin", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [ |
|
"I" |
|
], |
|
"last": "Jordan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 36th International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hong Liu, Mingsheng Long, Jianmin Wang, and Michael I. Jordan. 2019. Transferable adversarial training: A general approach to adapting deep classi- fiers. In Proceedings of the 36th International Con- ference on Machine Learning.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Advances in neural information processing systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Deep contextualized word representations", |
|
"authors": [ |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Matthew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohit", |
|
"middle": [], |
|
"last": "Neumann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Iyyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Gardner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kenton", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:1802.05365" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. arXiv preprint arXiv:1802.05365.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "2020. Information-theoretic probing for linguistic structure", |
|
"authors": [ |
|
{ |
|
"first": "Tiago", |
|
"middle": [], |
|
"last": "Pimentel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Josef", |
|
"middle": [], |
|
"last": "Valvoda", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rowan", |
|
"middle": [], |
|
"last": "Hall Maudslay", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ran", |
|
"middle": [], |
|
"last": "Zmigrod", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adina", |
|
"middle": [], |
|
"last": "Williams", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Cotterell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": null, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": { |
|
"arXiv": [ |
|
"arXiv:2004.03061" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tiago Pimentel, Josef Valvoda, Rowan Hall Maudslay, Ran Zmigrod, Adina Williams, and Ryan Cotterell. 2020. Information-theoretic probing for linguistic structure. arXiv preprint arXiv:2004.03061.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "A metric for distributions with applications to image databases", |
|
"authors": [ |
|
{ |
|
"first": "Yossi", |
|
"middle": [], |
|
"last": "Rubner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Carlo", |
|
"middle": [], |
|
"last": "Tomasi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leonidas", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Guibas", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Sixth International Conference on Computer Vision (IEEE Cat. No. 98CH36271)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "59--66", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yossi Rubner, Carlo Tomasi, and Leonidas J Guibas. 1998. A metric for distributions with applications to image databases. In Sixth International Conference on Computer Vision (IEEE Cat. No. 98CH36271), pages 59-66. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Does string-based neural mt learn source syntax?", |
|
"authors": [ |
|
{ |
|
"first": "Xing", |
|
"middle": [], |
|
"last": "Shi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Inkit", |
|
"middle": [], |
|
"last": "Padhi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Knight", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1526--1534", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Xing Shi, Inkit Padhi, and Kevin Knight. 2016. Does string-based neural mt learn source syntax? In Pro- ceedings of the 2016 Conference on Empirical Meth- ods in Natural Language Processing, pages 1526- 1534.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Pixel recurrent neural networks", |
|
"authors": [ |
|
{ |
|
"first": "Aaron", |
|
"middle": [], |
|
"last": "Van Oord", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nal", |
|
"middle": [], |
|
"last": "Kalchbrenner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Koray", |
|
"middle": [], |
|
"last": "Kavukcuoglu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1747--1756", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aaron Van Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. 2016. Pixel recurrent neural net- works. In International Conference on Machine Learning, pages 1747-1756.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Do NLP models know numbers? probing numeracy in embeddings", |
|
"authors": [ |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Wallace", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yizhong", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sujian", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sameer", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Matt", |
|
"middle": [], |
|
"last": "Gardner", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "5306--5314", |
|
"other_ids": { |
|
"DOI": [ |
|
"10.18653/v1/D19-1534" |
|
] |
|
}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, and Matt Gardner. 2019. Do NLP models know numbers? probing numeracy in embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 5306- 5314, Hong Kong, China. Association for Computa- tional Linguistics.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "Empirical DoQ distributions and scalar probe predictions for MCC+BERT and MCC+NumBERT (Section 4). The left plot shows length for the term 'runner', showing two peaks corresponding to the length of runner cloths and distances run in races. The right plot shows price for the term 'bill', with counts corresponding to popular denominations and the volumes of larger currency transactions.", |
|
"num": null, |
|
"uris": null, |
|
"type_str": "figure" |
|
}, |
|
"TABREF0": { |
|
"num": null, |
|
"content": "<table><tr><td>Accuracy</td><td>MSE</td><td>EMD</td></tr><tr><td colspan=\"2\">mcc rgr mcc</td><td>rgr mcc</td><td>rgr</td></tr></table>", |
|
"text": "Lengths Aggregate .24 .24 .027 .027 .077 .077 word2vec .30 .12 .026 .099 .079 .072 ELMo .43 .23 .019 .084 .055 .072 BERT .42 .24 .020 .084 .056 .072 NumBERT .40 .22 .021 .086 .052 .072 Masses Aggregate .15 .15 .026 .026 .076 .076 word2vec .26 .20 .025 .088 .082 .077 ELMo .36 .21 .021 .087 .061 .077 BERT .33 .22 .021 .085 .062 .077 NumBERT .32 .20 .021 .088 .057 .077 Prices Aggregate .24 .24 .019 .019 .057 .057 word2vec .26 .14 .019 .090 .063 .087 ELMo .37 .21 .016 .081 .051 .087 BERT .33 .19 .017 .083 .054 .087 NumBERT .32 .17 .017 .085 .051 .087 Animal Masses Aggregate .30 .30 .022 .022 .059 .059 word2vec .33 .35 .021 .069 .069 .077 ELMo .43 .28 .016 .079 .057 .077 BERT .41 .26 .017 .079 .058 .077 NumBERT .42 .23 .018 .083 .053 .077", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF1": { |
|
"num": null, |
|
"content": "<table/>", |
|
"text": "", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"4\">: Accuracy on VerbPhysics (Forbes and Choi,</td></tr><tr><td>2017).</td><td/><td/><td/></tr><tr><td/><td>Accuracy</td><td>MSE</td><td>EMD</td></tr><tr><td/><td colspan=\"3\">mcc rgr mcc rgr mcc rgr</td></tr><tr><td>Aggregate</td><td>.04 .04</td><td>.02 .02</td><td>.06 .06</td></tr><tr><td>word2vec</td><td>.09 .23</td><td>.02 .07</td><td>.07 .08</td></tr><tr><td>BERT</td><td>.14 .25</td><td>.02 .07</td><td>.06 .08</td></tr><tr><td>NumBERT</td><td>.18 .27</td><td>.02 .07</td><td>.05 .08</td></tr></table>", |
|
"text": "", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"content": "<table/>", |
|
"text": "", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF7": { |
|
"num": null, |
|
"content": "<table/>", |
|
"text": "NumBERT vs BERT-base on a suite of standard NLP benchmarks.", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF9": { |
|
"num": null, |
|
"content": "<table><tr><td>: Statistics of Datasets/Resources used in our</td></tr><tr><td>paper</td></tr><tr><td>C Complete Experimental Results</td></tr></table>", |
|
"text": "", |
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF11": { |
|
"num": null, |
|
"content": "<table/>", |
|
"text": "", |
|
"html": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |