A Implementation Details

A.1 SumGNN Parameter Setup
We use the following hyperparameter settings for SumGNN, chosen by random search on the validation set. We use a 1,024-bit Morgan fingerprint for drug featurization. We set the subgraph to the 2-hop neighborhood (i.e., k = 2). In the subgraph summarization module, we use weight matrices of size d = 32 for W_1 and W_2. The hidden dimension h_v^k is set to d = 32. The dimension of the relation matrix r is set to 32. The edge pruning threshold is set to 0. The input hidden representation of each node has dimension d = 32. The number of bases B in Eq. (3) is set to 8, as performance does not change much for values from 4 to 16 and the model suffers from over-fitting with B > 16. We study the effect of the key parameters d and k in our experiments (Section 4).
A.2 Training Details
Training Parameters. For both our method and the baselines, the training parameters are set as follows unless otherwise specified. We train the model for 50 epochs with batch size 256. Our model is optimized with the Adam optimizer (Kingma and Ba, 2014) with learning rate 5 × 10⁻³ and gradient clipping set to 10 under the L2 norm. We set the L2 weight decay to 1 × 10⁻⁵, the number of GNN layers to 2, and the dropout rate to 0.3 for each GNN layer.

Model Implementation and Computing Infrastructure. All methods are implemented in PyTorch (https://pytorch.org/) and the graph neural network modules are built on the Deep Graph Library (DGL, https://www.dgl.ai/). The system we use is Ubuntu 18.04.3 LTS with Python 3.6, PyTorch 1.2, and DGL 0.4.3. Our code is run on an Intel(R) Core(TM) i7-5930K CPU @ 3.50GHz and a GeForce GTX TITAN X GPU.
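For illustration, a minimal sketch of this optimization setup in PyTorch follows; the stand-in model, label count, and dummy loader are ours, and only the optimizer settings come from the text above:

```python
import torch
import torch.nn as nn

# Stand-in pieces; the real SumGNN network and DDI data loader would go here.
model = nn.Linear(1024, 86)  # e.g., fingerprint features -> relation logits
loader = [(torch.randn(256, 1024), torch.randint(0, 2, (256, 86)).float())]

optimizer = torch.optim.Adam(
    model.parameters(),
    lr=5e-3,            # learning rate 5 x 10^-3
    weight_decay=1e-5,  # L2 weight decay 1 x 10^-5
)
criterion = nn.BCEWithLogitsLoss()

for epoch in range(50):      # 50 epochs
    for x, y in loader:      # batch size 256
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        # gradient clipping to 10 under the L2 norm
        nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0)
        optimizer.step()
```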
A.3 The Range for Tuning Hyper-parameters
We use grid search to determine hyper-parameters and list the search space of key hyper-parameters as follows.
A.4 Baseline Setup
For the baselines, the settings are described as follows:
• MLP: We implement the MLP in PyTorch with the Morgan fingerprint as input. We use a two-layer MLP and set the hidden dimension to 100 with dropout 0.3.
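For illustration, a minimal PyTorch sketch of such a baseline follows; treating the input as the concatenated 1,024-bit fingerprints of the drug pair and the class count are our assumptions, not details from the paper:

```python
import torch.nn as nn

class MLPBaseline(nn.Module):
    """Two-layer MLP baseline over Morgan-fingerprint features."""
    def __init__(self, in_dim=2 * 1024, hidden=100, n_classes=86, p=0.3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),     # layer 1, hidden dimension 100
            nn.ReLU(),
            nn.Dropout(p),                 # dropout 0.3
            nn.Linear(hidden, n_classes),  # layer 2
        )

    def forward(self, fingerprints):
        return self.net(fingerprints)
```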
• Node2vec: We follow the officially released implementation from the authors (https://github.com/aditya-grover/node2vec) and set the embedding dimension to 64.
• Decagon: We use DGL to implement the model. Following (Zitnik et al., 2018a), we set the number of GNN layers to 2 and the hidden dimensions to 64 and 32 for the two layers, with a dropout rate of 0.1 and a minibatch size of 512.
• GAT: We use DGL to implement the model and set the hidden dimension to 64 with 4 attention heads, as we find that increasing the number of heads hurts performance. We set the activation function to LeakyReLU with α = 0.2.
• Others: We follow the officially released implementations from the authors, listed as follows:
Table 3: The range for tuning hyper-parameters. Defaults: learning rate 5 × 10⁻³, weight decay 1 × 10⁻⁵, dropout 0.3, 2 GNN layers, d = 32, k = 2, B = 8.

Parameters       Range
Learning Rate    [5 × 10⁻⁴, 1 × 10⁻³, 5 × 10⁻³, 1 × 10⁻²]
Weight Decay     [1 × 10⁻⁶, 1 × 10⁻⁵, 1 × 10⁻⁴, 1 × 10⁻³]
Dropout          [0.3, 0.4, 0.5]
Layers of GNN    [1, 2, 3]
d                [8, 16, 32, 64]
k                [1, 2, 3, 4]
B                [4, 8, 12, 16, 24, 32]
- SkipGNN: https://github.com/kexinhuang12345/SkipGNN
- KG-DDI: the neural model is based on code in https://github.com/rezacsedu/Drug-Drug-Interaction-Prediction, and the KG embeddings are trained via the OpenKE toolbox: https://github.com/thunlp/OpenKE
- GraIL: https://github.com/kkteru/grail
- KGNN: https://github.com/xzenglab/KGNN
Mathematical Language Processing Project

Robert Pagel
Moritz Schubotz (schubotz@tu-berlin.de)
Database Systems and Information Management Group
Technische Universität Berlin
Einsteinufer 17, 10587 Berlin, Germany

Keywords: definition discovery, text mining, parallel computing
In natural language, words and phrases themselves imply the semantics. In contrast, the meaning of identifiers in mathematical formulae is undefined. Thus scientists must study the context to decode the meaning. The Mathematical Language Processing (MLP) project aims to support that process. In this paper, we compare two approaches to discover identifier-definition tuples. First, we use a simple pattern matching approach. Second, we present the MLP approach, which uses part-of-speech-tag-based distances as well as sentence positions to calculate identifier-definition probabilities. The evaluation of our prototypical system, applied to the Wikipedia text corpus, shows that our approach augments the user experience substantially. While hovering over the identifiers in a formula, tool-tips with the most probable definitions appear. Tests with random samples show that the displayed definitions provide a good match with the actual meaning of the identifiers.
Introduction
Mathematical formulae are viable sources of information for a wide range of scientists. Often, they contain identifiers whose meaning might be at first unknown or at least ambiguous to the reader (depending on their knowledge). Therefore, one usually needs to study the surrounding text to find the relevant definition. An automatic information retrieval system can be used to reduce the reader's effort by displaying the most relevant definition relation found to the reader. Students and scientists of other disciplines would especially profit from a system that helps them to understand formulae more quickly. In the long term, the extracted identifier definition tuples contribute to an increased machine readability of scientific publications. This builds a foundation for added value services such as search, clustering and improved accessibility.
To build such a system, a labelled text corpus that annotates identifiers and their definition is desirable. At the project start, such a corpus was not available. Consequently we had to start manual investigation of individual articles. Our first observation was that many identifier definitions use a fixed string pattern to explain the definition to the reader. Furthermore, most definitions usually appear very close to the related identifier in the sentences. Thus, we calculate the probabilities for correct identifier definition tuples based on distance metrics for certain part-of-speech (POS) tagged words. This correlates to the experience that readers usually extract identifier definitions from context that is given by the surrounding text.
We chose Wikipedia as the target text corpus for two reasons. First, most articles make use of <math/> tags (texvc as an input language) for formulae. The identification of <math/> tags is trivial, and from the MathML output, it is easy to extract the identifiers. Second, the articles are already annotated with mark-up. In particular, hyperlinks to other articles within Wikipedia are of interest, as they typically wrap around any number of words and indicate that these words in combination are relevant in the given context or (respectively) sentence.
The English Wikipedia contains roughly four million articles. Even if we only pick articles containing <math/> tags, our processor still needs to compute with tens of thousands of articles. Especially when using text annotators (e.g., POS tagger [8]), like Stanford's NLP framework, one can make use of a parallel processing system to speed up computation. We implement the proposed strategy with the Stratosphere system [3]. It is based on the PACT programming model [2], which enables us to rapidly generate a large amount of definition relation candidates with only minimal implementation overhead for the parallelization.
Related Work. Quoc et al. [7] proposed an approach for relating whole formulas to sentences and their describing paragraphs. Yokoi et al. [10] trained a support vector machine to extract natural language descriptions for mathematical expressions. Further work in this field was done by [4] and [5].
Pattern-based Definition Discovery
At first, we implemented a simple identifier definition extractor that is based on a set of static patterns. As this is a fairly robust approach and easy to implement, it serves as a good reference point in terms of performance. It simply iterates through the text, trying to find word groups that are matched by a pattern. The patterns used to discover description terms are depicted in Table 1. Because we already tokenized and annotated the articles in a previous step of the MLP system, we can make use of POS tags here as well.

Note that determiners include not only articles, but also quantifiers and distributives. The last pattern in Table 1 contains '*/DT'. This is a shorthand for every word that has the POS tag 'DT' (determiner); otherwise this pattern would be rather large, as it would need to contain every possible determiner. <identifier> and <description> are placeholders that mark the positions of the entities of a possible definition relation.
Pattern
<description> <identifier>
<identifier> is <description>
<identifier> is the <description>
let <identifier> be the <description>
<description> is|are denoted by <identifier>
<identifier> denotes */DT <description>

Table 1. Implemented static patterns
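To illustrate, a minimal sketch of such a matcher follows; the patterns are simplified regular expressions over plain lowercased text, whereas the actual system matches against POS-tagged tokens, so all names and simplifications here are ours:

```python
import re

# Simplified versions of the static patterns from Table 1; {id} is filled
# with the identifier under consideration.
PATTERNS = [
    r"(?P<desc>\w+(?: \w+)?) (?P<id>{id})\b",              # <description> <identifier>
    r"\b(?P<id>{id}) is (?:the )?(?P<desc>\w+(?: \w+)?)",  # <identifier> is (the) <description>
    r"let (?P<id>{id}) be the (?P<desc>\w+(?: \w+)?)",
    r"(?P<desc>\w+(?: \w+)?) (?:is|are) denoted by (?P<id>{id})\b",
]

def match_definitions(sentence, identifier):
    """Return (identifier, description) tuples found by the static patterns."""
    hits = []
    for template in PATTERNS:
        pattern = template.format(id=re.escape(identifier))
        for m in re.finditer(pattern, sentence):
            hits.append((identifier, m.group("desc")))
    return hits

# e.g. match_definitions("the energy E is the capacity to do work", "E")
```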
Statistical Definition Discovery
We detect relations between identifiers and their description in two steps. First, we extract the identifiers from the formulae found in an article, and second we determine their description from the surrounding text.
Extracting relevant identifiers from the article relies on the assumption that the author will use <math/> tags for all formulae. This said, a formula that is written in the running text cannot be recognized, and therefore, cannot be extracted by our system.
The fact that we can estimate all relevant identifiers for an article (see Section 4.1), combined with some common assumptions about definition relations, can be exploited to largely reduce the set of candidates that need to be ranked. Please note that this reduction is essential for retrieving the correct relations for our approach. Otherwise almost any word would be ranked and the precision of the retrieval would drop significantly.
The basic assumption of our approach is that the two entities of a definition relation co-occur in the same sentence. In other words, if we want to retrieve the description for an identifier, only sentences containing the identifier can include the definition relation. Having said this, all other sentences can be ignored. Furthermore, we assume that it is more likely to find the description in earlier sentences than in later ones. This is based on the idea that authors introduce the meaning of an identifier and then subsequently use the identifier without necessarily repeating its definition.
Another assumption can be made about the lexical class of the definition relation we want to rank. The descriptions are nouns or even noun phrases (e.g., 'the effective spring constant k' or 'mass m of something'). We discard all other words (according to their POS tag) except noun phrases and Wikipedia hyperlinks. These are the candidate descriptions for a definition relation. Noun phrases and hyperlinks may consist of multiple words. For all intents and purposes, it is not necessary to treat noun phrases and hyperlinks as a set of words, and therefore, they will subsequently be treated as if they were one word. This is important, because the overall ranking is greatly influenced by the distance of candidates to the position of the identifier.
Numerical Statistics
Each description candidate is ranked with the weighted sum

$$R(n, \Delta, t, d) = \frac{\alpha R_{\sigma_d}(\Delta) + \beta R_{\sigma_s}(n) + \gamma\,\mathrm{tf}(t, s)}{\alpha + \beta + \gamma} \in [0, 1]. \tag{1}$$

The weighted sum depends on the distance $\Delta$ (number of word tokens) between the identifier and the description term $t$, the sentence number $n$, counting (from the beginning of the article) all sentences containing the identifier, and the term frequency $\mathrm{tf}(t, s)$ in the set of sentences $s$. The distance was normalized with

$$R_\sigma(\Delta) = \exp\left(-\frac{1}{2}\,\frac{\Delta^2 - 1}{\sigma^2}\right).$$

We assume that the probability of finding a relation at $\Delta = 1$ is maximal, as for example in the text fragment 'the energy E, and the mass m'. In order to determine the full width at half maximum of our distribution, we evaluated some articles manually and found $R_{\sigma_d}(1) \approx 2 R_{\sigma_d}(5)$, and thus $\sigma_d = \sqrt{12/\ln 2}$. The probability of finding a correct definition decays to 50% within three sentences; consequently $\sigma_s = 2 (\ln 2)^{-1/2}$.
Robustness. The classic tf-idf [9] statistic reflects the importance of a term to a document. For our task, the inverse document frequency (idf) assigns high penalties to frequent words like 'length', as opposed to words seldom seen, such as 'Hamiltonian'. Both are valid definitions for identifiers. As the influence of tf(t, s) on the sensitivity of the overall ranking (1) seems to be very high, we reduce its impact with the tuning parameter γ = 0.1 and keep α = β = 1. Please note that the algorithm currently only takes into account sentences found in a single article. In the future, the MLP system will examine sets of closely related articles. This will alleviate the problem that distributional properties are volatile on term universes with very few members (e.g., term frequencies in a single sentence).
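The ranking translates directly into code. The sketch below assumes candidate extraction, distances, and term frequencies are computed elsewhere; all names are ours:

```python
import math

# Constants from the discussion above.
SIGMA_D = math.sqrt(12 / math.log(2))   # from R(1) ~ 2 R(5)
SIGMA_S = 2 / math.sqrt(math.log(2))    # 50% decay within three sentences
ALPHA, BETA, GAMMA = 1.0, 1.0, 0.1

def gaussian(x, sigma):
    """The normalization R_sigma from the text, maximal at x = 1."""
    return math.exp(-0.5 * (x ** 2 - 1) / sigma ** 2)

def rank(delta, n, tf):
    """Eq. (1). delta: token distance identifier -> candidate; n: index of
    the sentence among those containing the identifier; tf: term frequency
    of the candidate within those sentences."""
    return (ALPHA * gaussian(delta, SIGMA_D)
            + BETA * gaussian(n, SIGMA_S)
            + GAMMA * tf) / (ALPHA + BETA + GAMMA)
```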
Implementation. We implemented the MLP processing system [6] as a Stratosphere data flow in Java, which allows for scalable execution, application of complex higher-order functions, and easy integration of third-party tools such as Stanford NLP and the Mylyn framework for mark-up parsing.
(Fig. 2 depicts the data flow of the Stratosphere program: Wiki Dumps → Parser (Map) → Tagger (Map) → Kernel (CoGroup) → Filter (Reduce) → Raw Candidates.)
Identifier Retrieval
Throughout our experiments, we made some observations that had an impact on the accuracy of retrieving the correct set of identifiers. First of all, people tend to use texvc only as a typesetting language and neglect its semantic capabilities. For example, \text{log} is more often used than the correct operator \log.
Another problem is that people sometimes use indices as a form of 'in field' annotation, like T_before and T_after. The identifier T is defined in the surrounding text, but neither T_before nor T_after. There are more ambiguities. For example, the superscripted 2 in x^2 and σ^2 can be interpreted as a power or as a part of the identifier. Another ambiguity is that the multiplication sign can be omitted, so that it is undecidable for a naive program whether ab^2 contains one or two identifiers. We took a very conservative approach and preprocessed all formulas. The TeX \text{} blocks, along with subscripts containing more than a single character, are removed before analysis. Superscripts are also ignored as parts of identifiers. Moreover, we created a comprehensive blacklist to improve the results further. Identifiers like 'a', 'A', and 'I', which are also very common in the English language, would easily be matched by our processor in the surrounding text, and are therefore blacklisted. Additionally, we blacklist common mathematical operators, constants, and functions.
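The following sketch illustrates this conservative preprocessing; the regular expressions are our approximation of the described rules, and the blacklist shown is a small illustrative subset:

```python
import re

# Small illustrative subset of the comprehensive blacklist described above.
BLACKLIST = {"a", "A", "I", "e", "\\log", "\\sin", "\\cos", "\\pi"}

def clean_formula(tex):
    tex = re.sub(r"\\text\{[^}]*\}", " ", tex)   # drop \text{} blocks
    tex = re.sub(r"_\{[^}]{2,}\}", "", tex)      # drop multi-character subscripts
    tex = re.sub(r"\^(\{[^}]*\}|.)", "", tex)    # ignore superscripts entirely
    return tex

def extract_identifiers(tex):
    # Single Latin letters plus TeX commands (e.g. \sigma for Greek letters).
    tokens = re.findall(r"\\[A-Za-z]+|[A-Za-z]", clean_formula(tex))
    return {t for t in tokens if t not in BLACKLIST}

# e.g. extract_identifiers(r"E = m c^2") -> {"E", "m", "c"}
```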
We took a sample of 30 random articles and counted all matches by hand. The resulting estimates for the identifier retrieval performance are Recall: 0.99 and Precision: 0.86, which satisfy our information needs, as we are mostly interested in recall at this stage.
Description Retrieval
We ran our program on a dataset of about 20,000 articles, all containing <math/> tags, and retrieved about 550,000 candidate relations. The most common definition relations are listed in Table 2.

Observations. We observed some poorly ranked relations. For example, in the fragment 'where φ(r_i) is the electrostatic potential', the distance is Δ(φ, electrostatic potential) = 6. This is due to counting brackets and function arguments as words. Also, words that are wrongly tagged, such as 'Hamiltonian' tagged as an adjective, lead to false negatives.
Comparative Evaluation

At the start of our project, there were no gold standard datasets available to measure the performance of identifier definition extractors. Thus, we created one on our own. This is a very time-consuming job. At the moment, the dataset only contains two large articles (revision ids are included) with around 100 identifier definitions. This dataset is also available in the project repository. As in many articles, those in the evaluation dataset contain identifiers whose description cannot be retrieved. This is due to two reasons. First and foremost, an identifier found in a formula may never be mentioned in the surrounding text, and therefore, no description can be extracted. Second, the identifier may be ambiguous (see Section 4.1) and has been dropped. Most notably, identifiers like I_xx are discarded because of an ambiguous index that contains multiple letters.
Unfortunately, 32 out of 99 identifiers from our dataset fall into that category. We decided to evaluate the performance on the remainder, as those 32 cases do not reflect conceptual flaws. From the user's standpoint, the overall performance (in terms of recall) of such a system would be rather annoying. As we are only interested in evaluating the performance of the MLP ranking algorithm itself, it is safe to ignore those 32 identifiers. Our results show that the unoptimized MLP approach keeps up with the performance of the simple pattern matcher. Furthermore, we observed that it is more robust in terms of recall, as it is less vulnerable to small changes in the sentence structure.
Further work
Our original intuition was to discover grammatical patterns like '<identifier> indicates/stands for/denotes <description>' based on the statistical findings. However, our impression is that this would not lead to a significant performance gain.
The distance measure $R_{\sigma_d}$ fails for the example of Fig. 1, since Δ(energy, E) = Δ(mass, m) = 2. Unfortunately, one cannot simply detect punctuation marks and introduce some kind of directed associativity (e.g., inflicting a penalty on the ranking if the candidate relation spans over a comma). This leads to whole classes of relation 'types' (in terms of the grammatical structure) never being retrieved. We plan to mitigate this problem by taking more closely related scientific articles (based on their specific fields) into consideration and counting the frequencies of the candidate relations. The intuition behind this is that articles of the same scientific field will likely use the same definitions for the identifiers. Moreover, we also hope to resolve the problem of 'dangling' identifiers (those not mentioned in the article itself), as they might be described in related articles.
Currently, we use the ranking $R$ to identify the most probable description-identifier tuple in each article, even if it occurs multiple times on the page. For example, in the 'Mass-energy equivalence' article, 21 sentences contain the combination of the identifier E and the noun 'energy'. A promising approach is to use $R_\Sigma = \sum_{i=1}^{n} 2^{-i} R_i$, where $(R_i)$ is a sorted list; here, $R_1$ is the highest-ranked definition for that relation according to the current measure $R$. A systematic approach for determining a wise choice of the ranking parameters should significantly improve the overall performance of our system.
Conclusion
Our experiments showed that selecting candidates according to their POS tags combined with numerical statistics about the text surface, can lead to quality results. However, this approach is only applicable under certain conditions. For identifiers which are seldom seen, our statistical approach tends to fail. In that situation, other methods, especially supervised ones, are preferred. Unfortunately, many of them require a labelled test corpus to measure the performance of a classifier that could be trained with our generated data. Currently, we are planning to use the NTCIR-Math-10 Task, Math Understanding Subtask gold standard dataset [1] for a comparable evaluation.
During this project we had the impression that one could discover 'namespaces' (sets of documents that use the same definitions for identifiers) to aid in the retrieval process. Robert Pagel is currently working on this topic for his diploma thesis.
Fig. 1. Screenshot of the 'Mass-energy equivalence' page while hovering over the letter 'E'.
Fig. 2. Data flow of the Stratosphere program.
Identifier  Description   Count
n           number        1709
t           time          1480
M           mass          1042
r           radius        752
T           temperature   666
θ           angle         639
G           group         635

Table 2. Most common definition relations
Table 3. Evaluation results. Note: k equals the number of top-ranked candidate definitions.
Acknowledgments. Thanks to Howard Cohl for proofreading the paper and to Holmer Hemsen, the course instructor of the database project course at TU-Berlin in Fall 2012. The implementation and a first draft of this paper was completed in the duration of this course.
References

[1] Aizawa, A., Kohlhase, M., and Ounis, I. (2013). NTCIR-10 Math Pilot Task Overview. In Proceedings of the 10th NTCIR Conference on Evaluation of Information Access Technologies, pages 654-661, Tokyo, Japan.
[2] Alexandrov, A., Battré, D., Ewen, S., Heimel, M., Hueske, F., Kao, O., Markl, V., Nijkamp, E., and Warneke, D. (2010). Massively parallel data analysis with PACTs on Nephele. Proceedings of the VLDB Endowment, 3:1625-1628.
[3] Alexandrov, A., Bergmann, R., Ewen, S., Freytag, J.-C., Hueske, F., Heise, A., Kao, O., Leich, M., Leser, U., Markl, V., et al. (2014). The Stratosphere platform for big data analytics. The VLDB Journal, pages 1-26.
[4] Ganesalingam, M. (2013). The Language of Mathematics. Springer.
[5] Kamareddine, F. and Wells, J. B. (2008). Computerizing mathematical text with MathLang. Electronic Notes in Theoretical Computer Science, 205:5-30.
[6] Pagel, R. (2013). MLP project repository. https://github.com/rbzn/project-mlp.
[7] Quoc, M. N., Yokoi, K., Matsubayashi, Y., and Aizawa, A. (2010). Mining coreference relations between formulas and text using Wikipedia. pages 69-74.
[8] Ratnaparkhi, A. (1996). A maximum entropy model for part-of-speech tagging.
[9] Salton, G. and McGill, M. J. (1986). Introduction to Modern Information Retrieval. McGraw-Hill, Inc., New York, NY, USA.
[10] Yokoi, K., Nghiem, M.-Q., Matsubayashi, Y., and Aizawa, A. (2010). Contextual Analysis of Mathematical Expressions for Advanced Mathematical Search.
Explainable Prediction of Medical Codes from Clinical Text

James Mullenbach (jmullenbach3@gatech.edu)
Sarah Wiegreffe (swiegreffe6@gatech.edu)
Jon Duke (jon.duke@gatech.edu)
Jimeng Sun (jsun@cc.gatech.edu)
Jacob Eisenstein (jacobe@gatech.edu)
Georgia Institute of Technology

Proceedings of NAACL-HLT 2018, New Orleans, Louisiana, June 1-6, 2018. Association for Computational Linguistics.
Clinical notes are text documents that are created by clinicians for each patient encounter. They are typically accompanied by medical codes, which describe the diagnosis and treatment. Annotating these codes is labor intensive and error prone; furthermore, the connection between the codes and the text is not annotated, obscuring the reasons and details behind specific diagnoses and treatments. We present an attentional convolutional network that predicts medical codes from clinical text. Our method aggregates information across the document using a convolutional neural network, and uses an attention mechanism to select the most relevant segments for each of the thousands of possible codes. The method is accurate, achieving precision@8 of 0.71 and a Micro-F1 of 0.54, which are both better than the prior state of the art. Furthermore, through an interpretability evaluation by a physician, we show that the attention mechanism identifies meaningful explanations for each code assignment.
Introduction
Clinical notes are free text narratives generated by clinicians during patient encounters. They are typically accompanied by a set of metadata codes from the International Classification of Diseases (ICD), which present a standardized way of indicating diagnoses and procedures that were performed during the encounter. ICD codes have a variety of uses, ranging from billing to predictive modeling of patient state (Choi et al., 2016; Ranganath et al., 2015; Denny et al., 2010; Avati et al., 2017). Because manual coding is time-consuming and error-prone, automatic coding has been studied since at least the 1990s (de Lima et al., 1998). The task is difficult for two main reasons. First, the label space is very high-dimensional, with over 15,000 codes in the ICD-9 taxonomy, and over 140,000 codes combined in the newer ICD-10-CM and ICD-10-PCS taxonomies (World Health Organization, 2016). Second, clinical text includes irrelevant information, misspellings and non-standard abbreviations, and a large medical vocabulary. These features combine to make the prediction of ICD codes from clinical notes an especially difficult task, for computers and human coders alike (Birman-Deych et al., 2005).
In this application paper, we develop convolutional neural network (CNN)-based methods for automatic ICD code assignment based on text discharge summaries from intensive care unit (ICU) stays. To better adapt to the multi-label setting, we employ a per-label attention mechanism, which allows our model to learn distinct document representations for each label. We call our method Convolutional Attention for Multi-Label classification (CAML). Our model design is motivated by the conjecture that important information correlated with a code's presence may be contained in short snippets of text which could be anywhere in the document, and that these snippets likely differ for different labels. To cope with the large label space, we exploit the textual descriptions of each code to guide our model towards appropriate parameters: in the absence of many labeled examples for a given code, its parameters should be similar to those of codes with similar textual descriptions.
We evaluate our approach on two versions of MIMIC (Johnson et al., 2016), an open dataset of ICU medical records. Each record includes a variety of narrative notes describing a patient's stay, including diagnoses and procedures. Our approach substantially outperforms previous results on medical code prediction on both MIMIC-II and MIMIC-III datasets.
We consider applications of this work in a decision support setting. Interpretability is important for any decision support system, especially in the medical domain. The system should be able to explain why it predicted each code; even if the codes are manually annotated, it is desirable to explain what parts of the text are most relevant to each code. These considerations further motivate our per-label attention mechanism, which assigns importance values to n-grams in the input document, and which can therefore provide explanations for each code, in the form of extracted snippets of text from the input document. We perform a human evaluation of the quality of the explanations provided by the attention mechanism, asking a physician to rate the informativeness of a set of automatically generated explanations.¹
Method
We treat ICD-9 code prediction as a multilabel text classification problem (McCallum, 1999).² Let $\mathcal{L}$ represent the set of ICD-9 codes; the labeling problem for instance $i$ is to determine $y_{i,\ell} \in \{0, 1\}$ for all $\ell \in \mathcal{L}$. We train a neural network which passes text through a convolutional layer to compute a base representation of the text of each document (Kim, 2014), and makes $|\mathcal{L}|$ binary classification decisions. Rather than aggregating across this representation with a pooling operation, we apply an attention mechanism to select the parts of the document that are most relevant for each possible code. These attention weights are then applied to the base representation, and the result is passed through an output layer, using a sigmoid transformation to compute the likelihood of each code. We employ a regularizer to encourage each code's parameters to be similar to those of codes with similar textual descriptions. We now describe each of these elements in more detail.

¹ Our code, data splits, and pre-trained models are available at github.com/jamesmullenbach/caml-mimic.
² We focus on codes from the ICD-9 taxonomy, rather than the more recent ICD-10, for the simple reason that this is the version of ICD used in the MIMIC datasets.
Convolutional architecture
At the base layer of the model, we have $d_e$-dimensional pre-trained embeddings for each word in the document, which are horizontally concatenated into the matrix $X = [x_1, x_2, \ldots, x_N]$, where $N$ is the length of the document. Adjacent word embeddings are combined using a convolutional filter $W_c \in \mathbb{R}^{k \times d_e \times d_c}$, where $k$ is the filter width, $d_e$ the size of the input embedding, and $d_c$ the size of the filter output. At each step $n$, we compute

$$h_n = g(W_c * x_{n:n+k-1} + b_c), \tag{1}$$

where $*$ denotes the convolution operator, $g$ is an element-wise nonlinear transformation, and $b_c \in \mathbb{R}^{d_c}$ is the bias. We additionally pad each side of the input with zeros so that the resulting matrix $H$ has dimension $\mathbb{R}^{d_c \times N}$.
Attention
After convolution, the document is represented by the matrix $H \in \mathbb{R}^{d_c \times N}$. It is typical to reduce such a matrix to a vector by applying pooling across the length of the document, selecting the maximum or average value at each row (Kim, 2014). However, our goal is to assign multiple labels (i.e., medical codes) to each document, and different parts of the base representation may be relevant for different labels. For this reason, we apply a per-label attention mechanism. An additional benefit is that it selects the n-grams from the text that are most relevant to each predicted label.

Formally, for each label $\ell$, we compute the matrix-vector product $H^\top u_\ell$, where $u_\ell \in \mathbb{R}^{d_c}$ is a vector parameter for label $\ell$. We then pass the resulting vector through a softmax operator, obtaining a distribution over locations in the document,

$$\alpha_\ell = \mathrm{SoftMax}(H^\top u_\ell), \tag{2}$$

where $\mathrm{SoftMax}(x) = \frac{\exp(x)}{\sum_i \exp(x_i)}$, and $\exp(x)$ is the element-wise exponentiation of the vector $x$. The attention vector $\alpha_\ell$ is then used to compute vector representations for each label,

$$v_\ell = \sum_{n=1}^{N} \alpha_{\ell,n} h_n. \tag{3}$$

As a baseline model, we instead use max-pooling to compute a single vector $v$ for all labels,

$$v_j = \max_n h_{n,j}. \tag{4}$$
Classification
Given the vector document representation $v_\ell$, we compute a probability for label $\ell$ using another linear layer and a sigmoid transformation:

$$\hat{y}_\ell = \sigma(\beta_\ell^\top v_\ell + b_\ell), \tag{5}$$

where $\beta_\ell \in \mathbb{R}^{d_c}$ is a vector of prediction weights, and $b_\ell$ is a scalar offset. The overall model is illustrated in Figure 1.
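To make the architecture concrete, here is a minimal PyTorch sketch of Eqs. (1)-(5) under our reading of the text; the default dimensions, the tanh nonlinearity, and the initialization are assumptions, and this is not the released implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CAMLSketch(nn.Module):
    """Convolution (Eq. 1), per-label attention (Eqs. 2-3), and
    per-label linear output (Eq. 5)."""
    def __init__(self, vocab_size, n_labels, d_e=100, d_c=50, k=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_e)
        self.conv = nn.Conv1d(d_e, d_c, kernel_size=k, padding=k // 2)
        self.U = nn.Parameter(torch.randn(n_labels, d_c))     # u_l
        self.beta = nn.Parameter(torch.randn(n_labels, d_c))  # beta_l
        self.bias = nn.Parameter(torch.zeros(n_labels))       # b_l

    def forward(self, docs):                   # docs: (batch, N) token ids
        x = self.embed(docs).transpose(1, 2)   # (batch, d_e, N)
        H = torch.tanh(self.conv(x))           # Eq. (1): (batch, d_c, N')
        alpha = F.softmax(self.U @ H, dim=2)   # Eq. (2): (batch, L, N')
        v = alpha @ H.transpose(1, 2)          # Eq. (3): (batch, L, d_c)
        logits = (self.beta * v).sum(dim=2) + self.bias  # Eq. (5)
        return torch.sigmoid(logits)

# e.g. CAMLSketch(vocab_size=50000, n_labels=8922)
```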
Training
The training procedure minimizes the binary cross-entropy loss,

$$L_{\mathrm{BCE}}(X, y) = -\sum_{\ell=1}^{|\mathcal{L}|} y_\ell \log(\hat{y}_\ell) + (1 - y_\ell) \log(1 - \hat{y}_\ell), \tag{6}$$

plus the L2 norm of the model weights, using the Adam optimizer (Kingma and Ba, 2015).
Embedding label descriptions
Due to the dimensionality of the label space, many codes are rarely observed in the labeled data. To improve performance on these codes, we use text descriptions of each code from the World Health Organization (2016). Examples can be found in Table 1, next to the code numbers. We use these descriptions to build a secondary module in our network that learns to embed them as vectors. These vectors are then used as the target of regularization on the model parameters $\beta_\ell$. If code $\ell$ is rarely observed in the training data, this regularizer will encourage its parameters to be similar to those of other codes with similar descriptions.
The code embedding module consists of a max-pooling CNN architecture. Let $z_\ell$ be a max-pooled vector, obtained by passing the description for code $\ell$ into the module. Let $p$ be the number of true labels in a training example. We add the following regularizing objective to our loss $L_{\mathrm{BCE}}$,

$$L(X, y) = L_{\mathrm{BCE}} + \lambda \frac{1}{p} \sum_{\ell : y_\ell = 1} \| z_\ell - \beta_\ell \|_2, \tag{7}$$

where $\lambda$ is a tradeoff hyperparameter that calibrates the performance of the two objectives. We call this model variant Description Regularized CAML (DR-CAML).
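A sketch of this combined objective follows; shapes and names are ours, and averaging the penalty over each example's true labels is our reading of Eq. (7):

```python
import torch
import torch.nn.functional as F

def dr_caml_loss(y_hat, y, z, beta, lam):
    """Eq. (7): BCE plus the description regularizer.
    y_hat, y: (batch, L) probabilities and float 0/1 targets;
    z: (L, d_c) description embeddings; beta: (L, d_c) prediction
    weights; lam: tradeoff lambda."""
    bce = F.binary_cross_entropy(y_hat, y)
    diff = torch.norm(z - beta, p=2, dim=1)   # ||z_l - beta_l||_2 per label
    p = y.sum(dim=1).clamp(min=1)             # number of true labels
    reg = (y * diff).sum(dim=1) / p           # average over true labels
    return bce + lam * reg.mean()
```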
Evaluation of code prediction
This section evaluates the accuracy of code prediction, comparing our models against several competitive baselines.

Datasets

MIMIC-III (Johnson et al., 2016) is an open-access dataset of text and structured records from a hospital ICU. Following previous work, we focus on discharge summaries, which condense information about a stay into a single document. In MIMIC-III, some admissions have addenda to their summary, which we concatenate to form one document. Each admission is tagged by human coders with a set of ICD-9 codes, describing both diagnoses and procedures which occurred during the patient's stay. There are 8,921 unique ICD-9 codes present in our datasets, including 6,918 diagnosis codes and 2,003 procedure codes. Some patients have multiple admissions and therefore multiple discharge summaries; we split the data by patient ID, so that no patient appears in both the training and test sets.

In this full-label setting, we use a set of 47,724 discharge summaries from 36,998 patients for training, with 1,632 summaries and 3,372 summaries for validation and testing, respectively.
Secondary evaluations
For comparison with prior work, we also follow Shi et al. (2017) and train and evaluate on a label set consisting of the 50 most frequent labels. In this setting, we filter each dataset down to the instances that have at least one of the top 50 most frequent codes, and subset the training data to equal the size of the training set of Shi et al. (2017), resulting in 8,067 summaries for training, 1,574 for validation, and 1,730 for testing.
We also run experiments with the MIMIC-II dataset, to compare with prior work by Baumel et al. (2018) and Perotte et al. (2013). We use the train/test split of Perotte et al. (2013), which consists of 20,533 training examples and 2,282 testing examples. Detailed statistics for the three settings are summarized in Table 2.
Preprocessing. We remove tokens that contain no alphabetic characters (e.g., removing "500" but keeping "250mg"), lowercase all tokens, and replace tokens that appear in fewer than three training documents with an 'UNK' token. We pretrain word embeddings of size $d_e = 100$ using the word2vec CBOW method (Mikolov et al., 2013) on the preprocessed text from all discharge summaries. All documents are truncated to a maximum length of 2,500 tokens.
Systems
We compare against the following baselines:
• a single-layer one-dimensional convolutional neural network (Kim, 2014);
• a bag-of-words logistic regression model;
• a bidirectional gated recurrent unit (Bi-GRU).³
For the CNN and Bi-GRU, we initialize the embedding weights using the same pretrained word2vec vectors that we use for the CAML models. All neural models are implemented using PyTorch.⁴ The logistic regression model consists of $|\mathcal{L}|$ binary one-vs-rest classifiers acting on unigram bag-of-words features for all labels present in the training data. If a label is not present in the training data, the model will never predict it on the held-out data.
Parameter tuning
We tune the hyperparameters of the CAML model and the neural baselines using the Spearmint Bayesian optimization package (Snoek et al., 2012; Swersky et al., 2013).⁵ We allow Spearmint to sample parameter values for the L2 penalty on the model weights and the learning rate, as well as the filter size $k$, the number of filters $d_c$, and the dropout probability for the convolutional models, and the number and dimension of hidden layers for the Bi-GRU, using precision@8 on the MIMIC-III full-label validation set as the performance measure. We use these parameters for DR-CAML as well, and port the optimized parameters to the MIMIC-II full-label and MIMIC-III 50-label models, manually fine-tuning the learning rate in these settings. We select $\lambda$ for DR-CAML based on pilot experiments on the validation sets. Hyperparameter tuning is summarized in Table 3. Convolutional models are trained with dropout after the embedding layer. We use a fixed batch size of 16 for all models and datasets. Models are trained with early stopping on the validation set; training terminates when precision@8 has not improved for 10 epochs, and the model at the time of the highest precision@8 is used on the test set.

³ Our pilot experiments found that GRU was stronger than long short-term memory (LSTM) for this task.
⁴ https://github.com/pytorch/pytorch
⁵ https://github.com/HIPS/Spearmint
Evaluation Metrics
To facilitate comparison with both future and prior work, we report a variety of metrics, focusing on the micro-averaged and macro-averaged F1 and area under the ROC curve (AUC). Micro-averaged values are calculated by treating each (text, code) pair as a separate prediction. Macro-averaged values, while less frequently reported in the multilabel classification literature, are calculated by averaging metrics computed per-label. For recall, the metrics are distinguished as follows:
$$\text{Micro-R} = \frac{\sum_{\ell=1}^{|\mathcal{L}|} \mathrm{TP}_\ell}{\sum_{\ell=1}^{|\mathcal{L}|} (\mathrm{TP}_\ell + \mathrm{FN}_\ell)} \tag{8}$$

$$\text{Macro-R} = \frac{1}{|\mathcal{L}|} \sum_{\ell=1}^{|\mathcal{L}|} \frac{\mathrm{TP}_\ell}{\mathrm{TP}_\ell + \mathrm{FN}_\ell}, \tag{9}$$

where $\mathrm{TP}_\ell$ denotes true positive examples and $\mathrm{FN}_\ell$ denotes false negative examples for label $\ell$. Precision is computed analogously. The macro-averaged metrics place much more emphasis on rare label prediction.
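The two averaging schemes are easy to confuse; the following plain-NumPy sketch (not the paper's evaluation code) computes both from binary prediction matrices:

```python
import numpy as np

def micro_macro_recall(y_true, y_pred):
    """y_true, y_pred: binary arrays of shape (num_instances, num_labels).
    Implements Eqs. (8) and (9)."""
    tp = ((y_true == 1) & (y_pred == 1)).sum(axis=0).astype(float)
    fn = ((y_true == 1) & (y_pred == 0)).sum(axis=0).astype(float)
    micro = tp.sum() / (tp.sum() + fn.sum())      # pool over all (text, code) pairs
    with np.errstate(divide="ignore", invalid="ignore"):
        per_label = np.where(tp + fn > 0, tp / (tp + fn), 0.0)
    macro = per_label.mean()                      # average of per-label recalls
    return micro, macro
```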
We also report precision at $n$ (denoted as 'P@n'), which is the fraction of the $n$ highest-scored labels that are present in the ground truth. This is motivated by the potential use case as a decision support application, in which a user is presented with a fixed number of predicted codes to review. In such a case, it is more suitable to select a model with high precision than high recall. We choose $n = 5$ and $n = 8$ to compare with prior work (Vani et al., 2017; Prakash et al., 2017). For the MIMIC-III full-label setting, we also compute precision@15, which roughly corresponds to the average number of codes in MIMIC-III discharge summaries (Table 2).
Results
Our main quantitative evaluation involves predicting the full set of ICD-9 codes based on the text of the MIMIC-III discharge summaries. These results are shown in Table 4. The CAML model gives the strongest results on all metrics. Attention yields substantial improvements over the "vanilla" convolutional neural network (CNN). The recurrent Bi-GRU architecture is comparable to the vanilla CNN, and the logistic regression baseline is substantially worse than all neural architectures. The best-performing CNN model has 9.86M tunable parameters, compared with 6.14M tunable parameters for CAML. This is due to the hyperparameter search preferring a larger number of filters for the CNN. Finally, we observe that DR-CAML performs worse on most metrics than CAML, with a tuned regularization coefficient of λ = 0.01. Among prior work, only Scheurwegs et al.
(2017) evaluate on the full ICD-9 code set for MIMIC-III. Their reported results distinguish between diagnosis codes and procedure codes. The CAML models are stronger on both sets. Additionally, our method does not make use of any external information or structured data, while Scheurwegs et al. use structured data and various medical ontologies in their text representation. We feel that precision@8 is the most informative of the metrics, as it measures the ability of the system to return a small high-confidence subset of codes. Even with a space of thousands of labels, our models achieve relatively high precision: of the eight most confident predictions, on average 5.5 are correct. It is also apparent how difficult it is to achieve high Macro-F1 scores, due to the metric's emphasis on rare-label performance. To put these results in context, a hypothetical system that performs perfectly on the 500 most common labels, and ignores all others, would achieve a Macro-F1 of 0.052 and a Micro-F1 of 0.842.
Secondary evaluations
To compare with prior published work, we also evaluate on the 50 most common codes in MIMIC-III (Table 5), and on MIMIC-II (Table 6). We report DR-CAML results on the 50-label setting of MIMIC-III with $\lambda = 10$, and on MIMIC-II with $\lambda = 0.1$, which were determined by grid search on a validation set. The other hyperparameters were left at the settings for the main MIMIC-III evaluation, as described in Table 3. In the 50-label setting of MIMIC-III, we see strong improvement over prior work on all reported metrics, as well as against the baselines, with the exception of precision@5, on which the CNN baseline performs best. We hypothesize that this is because the relatively large filter size $k = 10$ for CAML leads to a larger network that is more suited to larger datasets; tuning CAML's hyperparameters on this dataset would be expected to improve performance on all metrics. Baumel et al. (2018) additionally report a micro-F1 score of 0.407 by training on MIMIC-III and evaluating on MIMIC-II. Our model achieves better performance using only the (smaller) MIMIC-II training set; we leave this alternative training protocol for future work.
Evaluation of Interpretability
We now evaluate the explanations generated by CAML's attention mechanism, in comparison with three alternative heuristics. A physician was presented with explanations from four methods, using a random sample of 100 predicted codes from the MIMIC-III full-label test set. The most important n-gram from each method was extracted, along with a window of five words on either side for context. We select n = 4 in this setting to emulate a span of attention over words likely to be given by a human reader. Examples can be found in Table 1. Observe that the snippets may overlap in multiple words. We prompted the evaluator to select all text snippets which he felt adequately explained the presence of a given code, provided the code and its description, with the option to distinguish snippets as "highly informative" should they be found particularly informative over others.
Extracting informative text snippets
CAML. The attention mechanism allows us to extract n-grams from the text that are most influential in the prediction of each label, by taking the argmax of the SoftMax output $\alpha_\ell$.

Max-pooling CNN. We select the n-grams that provide the maximum value selected by max-pooling at least once, weighting by the final layer weights. Defining an argmax vector $m$ that results from the max-pooling step as

$$m_j = \arg\max_{n \in \{1,\ldots,N-k+1\}} (h_{n,j}), \tag{10}$$

we can compute the importance of position $n$ for label $\ell$,

$$a_{\ell,n} = \sum_{j : m_j = n} \beta_{\ell,j}. \tag{11}$$

We then select the most important n-gram for a given label as $\arg\max_n a_{\ell,n}$.
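A sketch of this extraction for one document and label, following our reconstruction of Eqs. (10) and (11); the tensor layout is our choice:

```python
import torch

def maxpool_importance(H, beta_l):
    """H: (d_c, N') post-convolution features for one document;
    beta_l: (d_c,) final-layer weights for label l.
    Returns a_{l,n} for every window position n."""
    m = H.argmax(dim=1)            # Eq. (10): argmax position for each filter j
    a = torch.zeros(H.size(1))
    a.index_add_(0, m, beta_l)     # Eq. (11): sum beta_{l,j} over filters with m_j = n
    return a                       # most important n-gram: a.argmax()
```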
Logistic regression
The informativeness of each n-gram with respect to label $\ell$ is scored by the sum of the coefficients of the weight matrix for $\ell$ over the words in the n-gram. The top-scoring n-gram is then returned as the explanation.
Code descriptions. Finally, we calculate a word similarity metric between each stemmed n-gram and the stemmed ICD-9 code description. We compute the idf-weighted cosine similarity, with idf weights calculated on the corpus consisting of all notes and relevant code descriptions. We then select the argmax over n-grams in the document, breaking ties by selecting the first occurrence. We remove those note-label pairs for which no n-gram has a score greater than 0, which gives an "unfair" advantage to this baseline.
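A simple sketch of this scoring; token lists are assumed pre-stemmed, and the idf mapping is computed elsewhere:

```python
import numpy as np

def idf_cosine(ngram_tokens, desc_tokens, idf):
    """Idf-weighted cosine similarity between a stemmed n-gram and a
    stemmed code description; `idf` maps token -> idf weight."""
    vocab = sorted(set(ngram_tokens) | set(desc_tokens))
    a = np.array([idf.get(t, 0.0) * ngram_tokens.count(t) for t in vocab])
    b = np.array([idf.get(t, 0.0) * desc_tokens.count(t) for t in vocab])
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0
```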
Results
The results of the interpretability evaluation are presented in Table 7. Our model selects the greatest number of "highly informative" explanations, and selects more "informative" explanations than both the CNN baseline and the logistic regression model. While the cosine similarity metric also performs well, the examples in Table 1 demonstrate the strengths of CAML in extracting text snippets in line with more intuitive explanations for the presence of a code. As noted above, there exist some cases, which we exclude, where the cosine similarity method is unable to provide any explanation, because no n-grams in a note have a nonzero similarity for a given label description. This occurs for about 12% of all note-label pairs in the test set.
Related Work
Attentional Convolution for NLP CNNs have been successfully applied to tasks such as sentiment classification (Kim, 2014) and language modeling (Dauphin et al., 2017). Our work combines convolution with attention (Bahdanau et al., 2015;Yang et al., 2016) to select the most relevant parts of the discharge summary. Other recent work has combined convolution and attention (e.g., Allamanis et al., 2016;Yin et al., 2016;dos Santos et al., 2016;Yin and Schütze, 2017). Our attention mechanism is most similar to those of Yang et al. (2016) and Allamanis et al. (2016), in that we use context vectors to compute attention over specific locations in the text. Our work differs in that we compute separate attention weights for each label in our label space, which is better tuned to our goal of selecting locations in a document which are most important for predicting specific labels.
Automatic ICD coding  ICD coding is a longstanding task in the medical informatics community, which has been approached with machine learning and handcrafted methods (Scheurwegs et al., 2015). Many recent approaches, like ours, use unstructured text data as the only source of information (e.g., Kavuluru et al., 2015; Subotin and Davis, 2014), though some incorporate structured data as well (e.g., Scheurwegs et al., 2017; Wang et al., 2016). Most previous methods have either evaluated only on a strict subset of the full ICD label space (Wang et al., 2016), relied on datasets that focus on a subset of medical scenarios (Zhang et al., 2017), or evaluated on data that are not publicly available, making direct comparison difficult (Subotin and Davis, 2016). A recent shared task for ICD-10 coding focused on coding of death certificates in English and French (Névéol et al., 2017). This dataset also contains shorter documents than those we consider, with an average of 18 tokens per certificate in the French corpus.
We use the open-access MIMIC datasets containing de-identified, general-purpose records of intensive care unit stays at a single hospital. Perotte et al. (2013) use "flat" and "hierarchical" SVMs; the former treats each code as an individual prediction, while the latter trains on child codes only if the parent code is present, and predicts on child codes only if the parent code was positively predicted. Scheurwegs et al. (2017) use a feature selection approach to ICD-9 and ICD-10 classification, incorporating structured and unstructured text information from EHRs. They evaluate over various medical specialties and on the MIMIC-III dataset. We compare directly to their results on the full label set of MIMIC-III.
Other recent approaches have employed neural network architectures. Baumel et al. (2018) apply recurrent networks with hierarchical sentence and word attention (the HA-GRU) to classify ICD-9 diagnosis codes while providing insights into the model decision process. Similarly, Shi et al. (2017) applied character-aware LSTMs to generate sentence representations from specific subsections of discharge summaries, and apply attention to form a soft matching between the representations and the top 50 codes. Prakash et al. (2017) use memory networks that draw from discharge summaries as well as Wikipedia, to predict top-50 and top-100 codes. Another recent neural architecture is the Grounded Recurrent Neural Network (Vani et al., 2017), which employs a modified GRU with dimensions dedicated to predicting the presence of individual labels. We compare directly with published results from all of these papers, except Vani et al. (2017), who evaluate on only a 5,000-code subset of ICD-9. Empirically, the CAML architecture proposed in this paper yields stronger results across all experimental conditions. We attribute these improvements to the attention mechanism, which focuses on the most critical features for each code, rather than applying a uniform pooling operation for all codes. We also observed that convolution-based models are at least as effective, and significantly more computationally efficient, than recurrent neural networks such as the Bi-GRU.
Explainable text classification  A goal of this work is that the code predictions be explainable from features of the text. Prior work has also emphasized explainability. Lei et al. (2016) model "rationales" through a latent variable, which tags each word as relevant to the document label. Li et al. (2016) compute the salience of individual words by the derivative of the label score with respect to the word embedding. Ribeiro et al. (2016) use submodular optimization to select a subset of features that closely approximate a specific classification decision (this work is also notable for extensive human evaluations). In comparison to these approaches, we employ a relatively simple attentional architecture; this simplicity is motivated by the challenge of scaling to multi-label classification with thousands of possible labels. Other prior work has emphasized the use of attention for highlighting salient features of the text (e.g., Rush et al., 2015; Rocktäschel et al., 2016), although these papers did not perform human evaluations of the interpretability of the features selected by the attention mechanism.
Conclusions and Future Work
We present CAML, a convolutional neural network for multi-label document classification, which employs an attention mechanism to adaptively pool the convolution output for each label, learning to identify highly-predictive locations for each label. CAML yields strong improvements over previous metrics on several formulations of the ICD-9 code prediction task, while providing satisfactory explanations for its predictions. Although we focus on a clinical setting, CAML is extensible without modification to other multi-label document tagging tasks, including ICD-10 coding. We see a number of directions for future work. From the linguistic side, we plan to integrate the document structure of discharge summaries in MIMIC-III, and to better handle non-standard writing and other sources of out-of-vocabulary tokens. From the application perspective, we plan to build models that leverage hierarchy of ICD codes (Choi et al., 2016), and to attempt the more difficult task of predicting diagnosis and treatment codes for future visits from discharge summaries.
Figure 1: CAML architecture with per-label attention shown for one label. In a max-pooling architecture, $H$ is mapped directly to the vector $v$ by maximizing over each dimension.
934.1: "Foreign body in main bronchus" CAML (HI) ...line placed bronchoscopy performed showing large mucus plug on the left on transfer to... Cosine Sim ...also needed medication to help your body maintain your blood pressure after receiving iv... CNN ...found to have a large lll lingular pneumonia on chest x ray he was... Logistic Regression ...impression confluent consolidation involving nearly the entire left lung with either bronchocentric or vascular... Logistic Regression (HI) ...anticoagulation monitored on tele pump systolic dysfunction with ef of seen on recent echo...442.84: "Aneurysm of other visceral artery"
CAML (I)
...and gelfoam embolization of right hepatic artery branch pseudoaneurysm coil embolization
of the gastroduodenal...
Cosine Sim
...coil embolization of the gastroduodenal artery history of present illness the pt is a...
CNN
...foley for hemodynamic monitoring and serial hematocrits angio was performed and his gda
was...
Logistic Regression (I)
...and gelfoam embolization of right hepatic artery branch pseudoaneurysm coil embolization
of the gastroduodenal...
428.20: "Systolic heart failure, unspecified"
CAML
...no mitral valve prolapse moderate to severe mitral regurgitation is seen the tricuspid valve...
Cosine Sim
...is seen the estimated pulmonary artery systolic pressure is normal there is no pericardial...
CNN
...and suggested starting hydralazine imdur continue aspirin arg admitted at baseline cr ap-
pears patient...
Table 2: Descriptive statistics for MIMIC discharge summary training sets.

Table 3: Hyperparameter ranges and optimal values for each neural model selected by Spearmint.

Range                          CAML    CNN    Bi-GRU
50-500                         50      500    -
2-10                           10      4      -
0.2-0.8                        0.2     0.2    -
0, 0.001, 0.01, 0.1            0       0      0
0.0001, 0.0003, 0.001, 0.003   0.0001  0.003  0.003
1-4                            -       -      1
32-512                         -       -      512
Table 4: Results on MIMIC-III full, 8,922 labels. Here, "Diag" denotes Micro-F1 performance on diagnosis codes only, and "Proc" denotes Micro-F1 performance on procedure codes only. Here and in all tables, (*) by the bold (best) result indicates significantly improved results compared to the next best result, p < 0.001.
Scheurwegs et al. use structured data and various medical ontologies in their text representation. Other approaches have incorporated structured data as well (e.g., Scheurwegs et al., 2017; Wang et al., 2016). Most previous methods have evaluated only on a strict subset of the full ICD label space.

Table 5: Results on MIMIC-III, 50 labels.

Table 6: Results on MIMIC-II full, 5,031 labels.

Table 7: Qualitative evaluation results. The columns show the number of examples (out of 100) for which each method was selected as "informative" or "highly informative".

Method               Informative  Highly informative
CAML                 46           22
Code Descriptions    48           20
Logistic Regression  41           18
CNN                  36           13
Acknowledgments Helpful feedback was provided by the anonymous reviewers, and by the members of the Georgia Tech Computational Linguistics lab. The project was partially supported by project HDTRA1-15-1-0019 from the Defense Threat Reduction Agency, by the National Science Foundation under awards IIS-1418511 and CCF-1533768, by the National Institutes of Health under awards 1R01MD011682-01 and R56HL138415, by Children's Healthcare of Atlanta, and by UCB.
References

Miltiadis Allamanis, Hao Peng, and Charles Sutton. 2016. A convolutional attention network for extreme summarization of source code. In International Conference on Machine Learning, pages 2091-2100.
Anand Avati, Kenneth Jung, Stephanie Harman, Lance Downing, Andrew Ng, and Nigam H. Shah. 2017. Improving palliative care with deep learning. arXiv preprint arXiv:1711.06402.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations.
Tal Baumel, Jumana Nassour-Kassis, Michael Elhadad, and Noémie Elhadad. 2018. Multi-label classification of patient notes: a case study on ICD code assignment. In AAAI Workshop on Health Intelligence.
Elena Birman-Deych, Amy D. Waterman, Yan Yan, David S. Nilasena, Martha J. Radford, and Brian F. Gage. 2005. Accuracy of ICD-9-CM codes for identifying cardiovascular and stroke risk factors. Medical Care 43(5):480-485.
Edward Choi, Mohammad Taha Bahadori, Andy Schuetz, Walter F. Stewart, and Jimeng Sun. 2016. Doctor AI: Predicting clinical events via recurrent neural networks. In Machine Learning for Healthcare Conference, pages 301-318.
Yann N. Dauphin, Angela Fan, Michael Auli, and David Grangier. 2017. Language modeling with gated convolutional networks. In Proceedings of the 34th International Conference on Machine Learning, pages 933-941. http://proceedings.mlr.press/v70/dauphin17a.html.
Luciano R.S. de Lima, Alberto H.F. Laender, and Berthier A. Ribeiro-Neto. 1998. A hierarchical approach to the automatic categorization of medical documents. In Proceedings of the Seventh International Conference on Information and Knowledge Management, ACM, pages 132-139.
Joshua C. Denny, Marylyn D. Ritchie, Melissa A. Basford, Jill M. Pulley, Lisa Bastarache, Kristin Brown-Gentry, Deede Wang, Dan R. Masys, Dan M. Roden, and Dana C. Crawford. 2010. PheWAS: demonstrating the feasibility of a phenome-wide scan to discover gene-disease associations. Bioinformatics 26(9):1205-1210.
Cıcero Nogueira dos Santos, Ming Tan, Bing Xiang, and Bowen Zhou. 2016. Attentive pooling networks. CoRR abs/1602.03609.
Alistair E.W. Johnson, Tom J. Pollard, Lu Shen, Liwei H. Lehman, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G. Mark. 2016. MIMIC-III, a freely accessible critical care database. Scientific Data 3.
Ramakanth Kavuluru, Anthony Rios, and Yuan Lu. 2015. An empirical evaluation of supervised learning approaches in assigning diagnosis codes to electronic medical records. Artificial Intelligence in Medicine 65(2):155-166.
Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1746-1751.
Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations.
Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, Texas, pages 107-117. https://aclweb.org/anthology/D16-1011.
Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2016. Visualizing and understanding neural models in NLP. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, California, pages 681-691. http://www.aclweb.org/anthology/N16-1082.
Andrew McCallum. 1999. Multi-label text classification with a mixture model trained by EM. In AAAI Workshop on Text Learning, pages 1-7.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111-3119.
Aurélie Névéol, Robert N. Anderson, K. Bretonnel Cohen, Cyril Grouin, Thomas Lavergne, Grégoire Rey, Aude Robert, Claire Rondet, and Pierre Zweigenbaum. 2017. CLEF eHealth 2017 multilingual information extraction task overview: ICD10 coding of death certificates in English and French. In CLEF 2017 Evaluation Labs and Workshop: Online Working Notes, CEUR-WS, page 17.
Adler Perotte, Rimma Pivovarov, Karthik Natarajan, Nicole Weiskopf, Frank Wood, and Noémie Elhadad. 2013. Diagnosis code assignment: models and evaluation metrics. Journal of the American Medical Informatics Association.
Michael Subotin and Anthony R. Davis. 2016. A method for modeling co-occurrence propensity of clinical codes with application to ICD-10-PCS autocoding. Journal of the American Medical Informatics Association 23(5):866-871.
Kevin Swersky, Jasper Snoek, and Ryan P. Adams. 2013. Multi-task Bayesian optimization. In Advances in Neural Information Processing Systems 26, Curran Associates, Inc., pages 2004-2012.
Ankit Vani, Yacine Jernite, and David Sontag. 2017. Grounded recurrent neural networks. arXiv preprint arXiv:1705.08557.
Sen Wang, Xiaojun Chang, Xue Li, Guodong Long, Lina Yao, and Quan Z. Sheng. 2016. Diagnosis code assignment using sparsity-based disease correlation embedding. IEEE Transactions on Knowledge and Data Engineering 28(12):3191-3202.
World Health Organization. 2016. International statistical classification of diseases and related health problems, 10th revision. http://apps.who.int/classifications/icd10/browse/2016/en.
Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alexander J. Smola, and Eduard H. Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, San Diego, California, pages 1480-1489.
Wenpeng Yin and Hinrich Schütze. 2017. Attentive convolution. arXiv preprint arXiv:1710.00519.
Wenpeng Yin, Hinrich Schütze, Bing Xiang, and Bowen Zhou. 2016. ABCNN: Attention-based convolutional neural network for modeling sentence pairs. Transactions of the Association for Computational Linguistics 4:259-272.
Danchen Zhang, Daqing He, Sanqiang Zhao, and Lei Li. 2017. Enhancing automatic ICD-9-CM code assignment for medical texts with PubMed. In BioNLP 2017, pages 263-271.
| [
"https://github.com/pytorch/pytorch",
"https://github.com/HIPS/Spearmint"
] |
[
"Latent Template Induction with Gumbel-CRFs",
"Latent Template Induction with Gumbel-CRFs"
] | [
"Yao Fu yao.fu@ed.ac.uk@chuanqi.tcq \nILCC\nUniversity of Edinburgh\n\n",
"Chuanqi Tan \nAlibaba Group\n\n",
"Mosha Chen chenmosha.cms@alibaba-inc.com \nAlibaba Group\n\n",
"Yansong Feng fengyansong@pku.edu.cn \nAlibaba Group\n\n\nWICT, Peking Univeristy\n\n",
"Alexander M Rush arush@cornell.edu \nCornell University\n\n"
] | [
"ILCC\nUniversity of Edinburgh\n",
"Alibaba Group\n",
"Alibaba Group\n",
"Alibaba Group\n",
"WICT, Peking Univeristy\n",
"Cornell University\n"
] | [] | Learning to control the structure of sentences is a challenging problem in text generation. Existing work either relies on simple deterministic approaches or RL-based hard structures. We explore the use of structured variational autoencoders to infer latent templates for sentence generation using a soft, continuous relaxation in order to utilize reparameterization for training. Specifically, we propose a Gumbel-CRF, a continuous relaxation of the CRF sampling algorithm using a relaxed Forward-Filtering Backward-Sampling (FFBS) approach. As a reparameterized gradient estimator, the Gumbel-CRF gives more stable gradients than score-function based estimators. As a structured inference network, we show that it learns interpretable templates during training, which allows us to control the decoder during testing. We demonstrate the effectiveness of our methods with experiments on data-to-text generation and unsupervised paraphrase generation. * Work done during an internship at Alibaba DAMO Academy, in collaboration with PKU and Cornell. | null | [
"https://arxiv.org/pdf/2011.14244v1.pdf"
] | 227,228,098 | 2011.14244 | 2841c420bad3c1e41939b4492fed717f107e286c |
Latent Template Induction with Gumbel-CRFs
Yao Fu yao.fu@ed.ac.uk
ILCC
University of Edinburgh
Chuanqi Tan
Alibaba Group
Mosha Chen chenmosha.cms@alibaba-inc.com
Alibaba Group
Yansong Feng fengyansong@pku.edu.cn
Alibaba Group
WICT, Peking University
Alexander M Rush arush@cornell.edu
Cornell University
Latent Template Induction with Gumbel-CRFs
Learning to control the structure of sentences is a challenging problem in text generation. Existing work either relies on simple deterministic approaches or RL-based hard structures. We explore the use of structured variational autoencoders to infer latent templates for sentence generation using a soft, continuous relaxation in order to utilize reparameterization for training. Specifically, we propose a Gumbel-CRF, a continuous relaxation of the CRF sampling algorithm using a relaxed Forward-Filtering Backward-Sampling (FFBS) approach. As a reparameterized gradient estimator, the Gumbel-CRF gives more stable gradients than score-function based estimators. As a structured inference network, we show that it learns interpretable templates during training, which allows us to control the decoder during testing. We demonstrate the effectiveness of our methods with experiments on data-to-text generation and unsupervised paraphrase generation. * Work done during an internship at Alibaba DAMO Academy, in collaboration with PKU and Cornell.
Introduction
Recent work in NLP has focused on model interpretability and controllability [63,34,24,55,16], aiming to add transparency to black-box neural networks and control model outputs with task-specific constraints. For tasks such as data-to-text generation [50,63] or paraphrasing [37,16], interpretability and controllability are especially important as users are interested in what linguistic properties -e.g., syntax [4], phrases [63], main entities [49] and lexical choices [16] -are controlled by the model and which part of the model controls the corresponding outputs.
Most existing work in this area relies on non-probabilistic approaches or on complex Reinforcement Learning (RL)-based hard structures. Non-probabilistic approaches include using attention weights as sources of interpretability [26,61], or building specialized network architectures like entity modeling [49] or the copy mechanism [22]. These approaches take advantage of differentiability and end-to-end training, but do not incorporate the expressiveness and flexibility of probabilistic approaches [45,6,30]. On the other hand, approaches using probabilistic graphical models usually involve non-differentiable sampling [29,65,34]. Although these structures exhibit better interpretability and controllability [34], it is challenging to train them in an end-to-end fashion.
In this work, we aim to combine the advantages of relaxed training and graphical models, focusing on conditional random field (CRF) models. Previous work in this area primarily utilizes the score function estimator (a.k.a. REINFORCE) [62,52,29,32] to obtain Monte Carlo (MC) gradient estimation for simple categorical models [44,43]. However, given the combinatorial search space, these approaches suffer from high variance [20] and are notoriously difficult to train [29]. Furthermore, in a linear-chain CRF setting, score function estimators can only provide gradients for the whole sequence, while it would be ideal if we could derive fine-grained pathwise gradients [44] for each step of the sequence. In light of this, one would naturally turn to reparameterized estimators with pathwise gradients, which are known to be more stable with lower variance [30,44].
Our simple approach to reparameterizing CRF inference is to directly relax the sampling process itself. Gumbel-Softmax [27,38] has become a popular method for relaxing categorical sampling. We propose to utilize this method to relax each step of CRF sampling under the forward-filtering backward-sampling (FFBS) algorithm [45]. Just as with Gumbel-Softmax, this approach becomes exact as the temperature goes to zero, and provides a soft relaxation in other cases. We call this approach Gumbel-CRF. As discussed in previous work, a structured latent variable may have a better inductive bias for capturing the discrete nature of sentences [28,29,16], so we apply Gumbel-CRF as the inference model in a structured variational autoencoder for learning latent templates that control sentence structure. Templates are defined as a sequence of states where each state controls the content (e.g., properties of the entities being discussed) of the word to be generated.
Experiments explore the properties and applications of the Gumbel-CRF approach. As a reparameterized gradient estimator, compared with score-function-based estimators, Gumbel-CRF not only gives lower-variance and fine-grained gradients for each sampling step, which leads to better text modeling performance, but also brings practical advantages, with significantly fewer parameters to tune and faster convergence ( § 6.1). As a structured inference network, like other hard models trained with REINFORCE, Gumbel-CRF also induces interpretable and controllable templates for generation. We demonstrate this interpretability and controllability on unsupervised paraphrase generation and data-to-text generation ( § 6.2). Our code is available at https://github.com/FranxYao/Gumbel-CRF.
Related Work
Latent Variable Models and Controllable Text Generation. Broadly, our model follows the line of work on deep latent variable models [14,30,54,23,11] for text generation [28,42,16,67]. At an intersection of graphical models and deep learning, these works aim to embed interpretability and controllability into neural networks with continuous [7,67], discrete [27,38], or structured latent variables [29]. One typical template model is the Hidden Semi-Markov Model (HSMM) proposed by Wiseman et al. [63], who use a neural generative HSMM for jointly learning the latent template and the sentence with exact inference. Li and Rush [34] further equip a Semi-Markov CRF with posterior regularization [18]. While they focus on regularizing the inference network, we focus on reparameterizing it. Other works on controllability include [55,24,33], but many of them stay at the word level [17], while we focus on structure-level controllability.
Monte Carlo Gradient Estimation and Continuous Relaxation of Discrete Structures. Within the range of MC gradient estimation [44], our Gumbel-CRF is closely related to reparameterization and continuous relaxation techniques for discrete structures [64,36,31]. To get the MC gradient for discrete structures, many previous works use score-function estimators [52,43,29,65]. This family of estimators is generally hard to train, especially for structured models [29], while reparameterized estimators [30,54] like Gumbel-Softmax [27,38] give more stable gradient estimation. In terms of continuous relaxation, the closest work is the differentiable dynamic programming proposed by Mensch and Blondel [40]. However, their approach takes an optimization perspective, and it is not straightforward to combine it with probabilistic models. Compared with their work, our Gumbel-CRF is a specific differentiable DP tailored for FFBS with Gumbel-Softmax. In terms of reparameterization, the closest work is the Perturb-and-MAP Markov Random Field (PM-MRF) proposed by Papandreou and Yuille [46]. However, when used for sampling from CRFs, PM-MRF is a biased sampler, while FFBS is unbiased. We will use a continuously relaxed PM-MRF as our baseline and compare the gradient structures in detail in the Appendix.
Gumbel-CRF: Relaxing the FFBS Algorithm
In this section, we discuss how to relax a CRF with Gumbel-Softmax to allow for reparameterization. In particular, we are interested in optimizing φ for an expectation under a parameterized CRF distribution, e.g.,
$$\mathbb{E}_{p_\phi(z|x)}[f(z)] \tag{1}$$
Algorithm 1 Forward Filtering Backward Sampling
1: Input: Φ(z_{t-1}, z_t, x_t), t ∈ {1, .., T}, α_{1:T}, Z
2: Calculate p(z_T | x) = α_T / Z
3: Sample ẑ_T ∼ p(z_T | x)
4: for t ← T − 1, 1 do
5:   p(z_t | ẑ_{t+1}, x) = Φ(z_t, ẑ_{t+1}, x_{t+1}) α_t(z_t) / α_{t+1}(ẑ_{t+1})
6:   Sample ẑ_t ∼ p(z_t | ẑ_{t+1}, x)
7: end for
8: Return ẑ_{1:T}

Algorithm 2 Gumbelized Forward Filtering Backward Sampling
1: Input: Φ(z_{t-1}, z_t, x_t), t ∈ {1, .., T}, α_{1:T}, Z
2: (z̃ is a relaxation of ẑ)
3: π_T = α_T / Z
4: z̃_T = softmax((log π_T + g)/τ), g ∼ G(0)
5: ẑ_T = argmax(z̃_T)
6: for t ← T − 1, 1 do
7:   π_t = Φ(z_t, ẑ_{t+1}, x_{t+1}) α_t(z_t) / α_{t+1}(ẑ_{t+1})
8:   z̃_t = softmax((log π_t + g)/τ), g ∼ G(0)
9:   ẑ_t = argmax(z̃_t)
10: end for
11: Return ẑ_{1:T}, z̃_{1:T}

We start by reviewing Gumbel-Max [27], a technique for sampling from a categorical distribution. Let Y be a categorical random variable with domain {1, .., K} parameterized by the probability vector π = [π_1, .., π_K], denoted y ∼ Cat(π). Let G(0) denote the standard Gumbel distribution, and let g_i ∼ G(0), i ∈ {1, .., K} be i.i.d. Gumbel noise. Gumbel-Max sampling of Y can be performed as y = argmax_i(log π_i + g_i). Then the Gumbel-Softmax reparameterization is a continuous relaxation of Gumbel-Max, replacing the hard argmax operation with the softmax:

$$\tilde{y} = \mathrm{softmax}([\log \pi_1 + g_1, .., \log \pi_K + g_K]) = \frac{\exp((\log \pi + g)/\tau)}{\sum_i \exp((\log \pi_i + g_i)/\tau)} \tag{2}$$
where ỹ can be viewed as a relaxed one-hot vector of y. As τ → 0, we have ỹ → y.
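A minimal sketch of this relaxed categorical sampling in PyTorch (the function name is ours; torch.nn.functional.gumbel_softmax provides an equivalent built-in):

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(log_pi, tau=1.0):
    """Draw a relaxed one-hot sample from Cat(pi) given (unnormalized) log-probs.

    As tau -> 0, the output approaches the one-hot of a Gumbel-Max sample.
    """
    # Gumbel(0, 1) noise via inverse CDF; epsilons guard against log(0)
    g = -torch.log(-torch.log(torch.rand_like(log_pi) + 1e-20) + 1e-20)
    return F.softmax((log_pi + g) / tau, dim=-1)
```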
Now we turn our focus to CRFs, which generalize softmax to combinatorial structures. Given a sequence of inputs x = [x_1, .., x_T], a linear-chain CRF is parameterized by the factorized potential function Φ(z, x) and defines the probability of a state sequence z = [z_1, .., z_T], z_t ∈ {1, 2, ..., K}, over x:

$$p(z|x) = \frac{\Phi(z, x)}{Z}, \qquad \Phi(z, x) = \prod_t \Phi(z_{t-1}, z_t, x_t) \tag{3}$$

$$\alpha_{1:T} = \mathrm{Forward}(\Phi(z, x)), \qquad Z = \sum_i \alpha_T(i) \tag{4}$$
The conditional probability of z is given by the Gibbs distribution with a factorized potential (equation 3). The partition function Z can be calculated with the Forward algorithm [58] (equation 4), where α_t is the forward variable summarizing the potentials up to step t.
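A log-space sketch of this forward recursion (our own indexing convention: log_phi[t, i, j] = log Φ(z_{t-1} = i, z_t = j, x_t), with step 0 holding the scores out of a dedicated start state in row 0):

```python
import torch

def crf_log_partition(log_phi):
    """Forward algorithm for a linear-chain CRF in log space.

    log_phi: (T, K, K) log-potentials. Returns (log Z, log_alpha of shape (T, K)).
    """
    T, K, _ = log_phi.shape
    log_alpha = [log_phi[0, 0]]  # alpha_1(j): transition out of the start state
    for t in range(1, T):
        # alpha_t(j) = sum_i alpha_{t-1}(i) * Phi(i, j, x_t), done with logsumexp
        log_alpha.append(torch.logsumexp(log_alpha[-1].unsqueeze(1) + log_phi[t], dim=0))
    log_alpha = torch.stack(log_alpha)
    return torch.logsumexp(log_alpha[-1], dim=0), log_alpha
```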
To sample from a linear-chain CRF, the standard approach is to use the forward-filtering backward-sampling (FFBS) algorithm (Algorithm 1). This algorithm takes α and Z as inputs and samples z with a backward pass utilizing the local conditional independence. The hard operation comes from the backward sampling operation ẑ_t ∼ p(z_t | ẑ_{t+1}, x) at each step (line 6). This is the operation that we focus on.
We observe that p(z_t | ẑ_{t+1}, x) is a categorical distribution, which can be directly relaxed with Gumbel-Softmax. This leads to our derivation of the Gumbelized FFBS algorithm (Algorithm 2). The backbone of Algorithm 2 is the same as the original FFBS except for two key modifications: (1) Gumbel-Max (lines 8-9) recovers the unbiased sample ẑ_t and the same sampling path as Algorithm 1; (2) the continuous relaxation of argmax with softmax (line 8) gives the differentiable² (but biased) z̃.
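A sketch of the Gumbelized backward pass of Algorithm 2, reusing crf_log_partition and gumbel_softmax_sample from the sketches above (again with our (T, K, K) log-potential convention):

```python
import torch

def gumbel_ffbs(log_phi, tau=1.0):
    """Relaxed FFBS: returns hard samples z_hat and coupled soft samples z_tilde."""
    log_Z, log_alpha = crf_log_partition(log_phi)
    T, _, _ = log_phi.shape
    z_hat, z_tilde = [0] * T, [None] * T
    log_pi = log_alpha[-1] - log_Z                       # p(z_T | x)
    for t in range(T - 1, -1, -1):
        z_tilde[t] = gumbel_softmax_sample(log_pi, tau)  # relaxed one-hot
        z_hat[t] = int(z_tilde[t].argmax())              # Gumbel-Max: exact sample
        if t > 0:  # p(z_{t-1} | z_hat_t, x) = Phi(., z_hat_t) alpha_{t-1} / alpha_t(z_hat_t)
            log_pi = log_phi[t, :, z_hat[t]] + log_alpha[t - 1] - log_alpha[t, z_hat[t]]
    return z_hat, torch.stack(z_tilde)
```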
We can apply this approach in any setting requiring structured sampling. For instance, let p_φ(z|x) denote the sampled distribution and f(z) be a downstream model. We achieve a reparameterized gradient estimator for the sampled model with z̃:

$$\nabla_\phi \, \mathbb{E}_{p_\phi(z|x)}[f(z)] \approx \mathbb{E}_{g \sim G(0)}\big[\nabla_\phi f(\tilde{z}(\phi, g))\big] \tag{5}$$

Figure 2: Architecture of our model (CRF transition and emission factors feeding the decoder). Note that the structure of gradients induced by Gumbel-CRF differs significantly from score-function approaches: the score function receives a gradient for the whole sampled sequence, ∇_φ log q_φ(z|x), while Gumbel-CRF allows the model to backprop gradients along each sample step, ∇_φ z̃_t (red dashed arrows), without explicit factorization of the generative model.
We further consider a straight-through (ST) version [27] of this estimator, where we use the hard sample ẑ in the forward pass and back-propagate through each of the soft z̃_t.
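The ST variant amounts to the usual straight-through trick (a sketch; z_tilde is the relaxed sample from the snippet above):

```python
import torch.nn.functional as F

# Forward pass sees the hard one-hot; gradients flow through the soft sample.
z_hard = F.one_hot(z_tilde.argmax(dim=-1), num_classes=z_tilde.size(-1)).float()
z_st = z_hard + z_tilde - z_tilde.detach()
```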
We highlight one additional advantage of this reparameterized estimator (and its ST version) compared with the score-function estimator. Gumbel-CRF uses z̃, which receives direct fine-grained gradients for each step from f itself. As illustrated in Figure 1 (here f is the generative model), z̃_t is a differentiable function of the potential and the noise: z̃_t = z̃_t(φ, g). So during back-propagation, we can take stepwise gradients ∇_φ z̃_t(φ, g) weighted by the uses of the state. On the other hand, with a score-function estimator, we only observe the reward for the whole sequence, so the gradient is at the sequence level, ∇_φ log p_φ(z|x). The lack of intermediate reward, i.e., stepwise gradients, is an essential challenge in reinforcement learning [56,66]. While one could derive a model-specific factorization for the score function estimator, this challenge is circumvented by the structure of Gumbel-CRF, significantly reducing modeling complexity in practice (detailed demonstrations in the experiments).
Gumbel-CRF VAE
An appealing use of reparameterizable CRF models is to learn variational autoencoders (VAEs) with a structured inference network. Past work has shown that such models (trained with RL) [29,34] can learn to induce useful latent structure while remaining accurate. We introduce a VAE for learning latent templates for text generation. This model uses a fully autoregressive generative model with latent control states. To train these control states, it uses a CRF variational posterior as the inference model; Gumbel-CRF is used to reduce the variance of this procedure.
Generative Model We assume a simple generative process where each word x_t of a sentence x = [x_1, .., x_T] is controlled by a sequence of latent states z = [z_1, .., z_T], i.e., a template, similar to Li and Rush [34]:
$$p_\theta(x, z) = \prod_t p(x_t | z_t, z_{1:t-1}, x_{1:t-1}) \cdot p(z_t | z_{1:t-1}, x_{1:t-1}) \tag{6}$$
$$h_t = \mathrm{Dec}([z_{t-1}; x_{t-1}], h_{t-1}) \tag{7}$$
$$p(z_t | z_{1:t-1}, x_{1:t-1}) = \mathrm{softmax}(\mathrm{FF}(h_t)) \tag{8}$$
$$p(x_t | z_t, z_{1:t-1}, x_{1:t-1}) = \mathrm{softmax}(\mathrm{FF}([e(z_t); h_t])) \tag{9}$$
where Dec(·) denotes the decoder, FF(·) denotes a feed-forward network, h_t denotes the decoder state, [·; ·] denotes vector concatenation, and e(·) denotes the embedding function. Under this formulation, the generative model is autoregressive w.r.t. both x and z. Intuitively, it generates the control states and words in turn, and the current word x_t is primarily controlled by its corresponding z_t.
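A sketch of one decoder step implementing equations 6-9 (module names and the LSTM-cell choice are ours):

```python
import torch
import torch.nn as nn

class TemplateDecoderStep(nn.Module):
    """One autoregressive step over (z_t, x_t), following eqs. 6-9."""
    def __init__(self, vocab_size, num_states, emb_dim, hid_dim):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.state_emb = nn.Embedding(num_states, emb_dim)
        self.cell = nn.LSTMCell(2 * emb_dim, hid_dim)
        self.state_out = nn.Linear(hid_dim, num_states)           # eq. 8
        self.word_out = nn.Linear(hid_dim + emb_dim, vocab_size)  # eq. 9

    def forward(self, z_prev, x_prev, z_t, state):
        inp = torch.cat([self.state_emb(z_prev), self.word_emb(x_prev)], dim=-1)
        h, c = self.cell(inp, state)                              # eq. 7
        log_p_z = torch.log_softmax(self.state_out(h), dim=-1)    # eq. 8
        log_p_x = torch.log_softmax(
            self.word_out(torch.cat([self.state_emb(z_t), h], dim=-1)), dim=-1)  # eq. 9
        return log_p_z, log_p_x, (h, c)
```

With relaxed samples z̃_t, the state embedding lookups become soft mixtures (e.g., z̃_t @ state_emb.weight), which is what lets gradients flow into the inference network.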
Inference Model Since exact inference of the posterior p_θ(z|x) is intractable, we approximate it with a variational posterior q_φ(z|x) and optimize the following form of the ELBO objective:
$$\mathrm{ELBO} = \mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x, z)] + \mathrm{H}[q_\phi(z|x)] \tag{10}$$
where H denotes the entropy. The key use of Gumbel-CRF is for reparameterizing the inference model q_φ(z|x) to learn control-state templates. Following past work [25], we parameterize q_φ(z|x) as a linear-chain CRF whose potentials are predicted by a neural encoder:
$$h^{(\mathrm{enc})}_{1:T} = \mathrm{Enc}(x_{1:T}) \tag{11}$$
$$\Phi(z_t, x_t) = W_\Phi h^{(\mathrm{enc})}_t + b_\Phi \tag{12}$$
$$\Phi(z_{t-1}, z_t, x_t) = \Phi(z_{t-1}, z_t) \cdot \Phi(z_t, x_t) \tag{13}$$
where Enc(·) denotes the encoder and h^{(enc)}_t is the encoder state. With this formulation, the entropy term in equation 10 can be computed efficiently with dynamic programming, which is differentiable [39].
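A sketch of the potential computation in equations 11-13, with a BiLSTM encoder and a learned transition matrix (all names and sizes are ours):

```python
import torch
import torch.nn as nn

class CRFInferenceNet(nn.Module):
    def __init__(self, vocab_size, num_states, emb_dim, hid_dim):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.enc = nn.LSTM(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        self.emission = nn.Linear(2 * hid_dim, num_states)           # eq. 12
        self.transition = nn.Parameter(torch.randn(num_states, num_states))

    def log_potentials(self, x):
        h, _ = self.enc(self.emb(x))                                 # eq. 11
        log_emit = self.emission(h)                                  # (B, T, K)
        # eq. 13 in log space: log Phi(z_{t-1}, z_t, x_t) = log trans + log emit
        return self.transition[None, None] + log_emit.unsqueeze(2)   # (B, T, K, K)
```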
Training and Testing
The key challenge for training is to maximize the first term of the ELBO under the expectation of the inference model, i.e., E_{q_φ(z|x)}[log p_θ(x, z)].
Here we use the Gumbel-CRF gradient estimator with relaxed samples z̃ from the Gumbelized FFBS (Algorithm 2) in both the forward and backward passes. For the ST version of Gumbel-CRF, we use the exact sample ẑ in the forward pass and back-propagate gradients through the relaxed z̃. During testing, for evaluating paraphrasing and data-to-text, we use greedy decoding for both z and x. For experiments controlling the structure of generated sentences, we sample a fixed MAP z from the training set (i.e., the aggregated variational posterior), feed it to each decoder step, and use it to control the generated x.
Extension to Conditional Settings For conditional applications such as paraphrasing and data-to-text, we make a conditional extension where the generative model is conditioned on a source data structure s, formulated as p_θ(x, z|s). Specifically, for paraphrase generation, s = [s_1, ..., s_N] is the bag of words (a set, N being the size of the BOW) of the source sentence, similar to Fu et al. [16]. We aim to generate a different sentence x with the same meaning as the input sentence. In addition to being autoregressive on x and z, the decoder also attends to [1] and copies [22] from s. For data-to-text, the source data is a table of key-value pairs, s = [(k_1, v_1), ..., (k_N, v_N)], N being the size of the table. We aim to generate a sentence x that best describes the table. Again, we condition the generative model on s by attending to and copying from it. Note that our formulation effectively becomes a neural version of slot-filling: for paraphrase generation we fill the BOW into the neural templates, and for data-to-text we fill the values into the neural templates. We assume the inference model is independent of the source s and keep it unchanged, i.e., q_φ(z|x, s) = q_φ(z|x). The ELBO objective in this conditional setting is:
$$\mathrm{ELBO} = \mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x, z|s)] + \mathrm{H}[q_\phi(z|x)] \tag{14}$$
Experimental Setup
Our experiments are in two parts. First, we compare Gumbel-CRF to other common gradient estimators on a standard text modeling task. Then we integrate Gumbel-CRF into real-world models, specifically for paraphrase generation and data-to-text generation.
Datasets We focus on two datasets. For text modeling and data-to-text generation, we use the E2E dataset [50], a common dataset for learning structured templates for text [63,34]. This dataset contains approximately 42K training, 4.6K validation and 4.6K testing sentences. The vocabulary size is 945. For paraphrase generation we follow the same setting as Fu et al. [16], and use the common MSCOCO dataset. This dataset has 94K training and 23K testing instances. The vocabulary size is 8K.
Metrics For evaluating gradient estimator performance, we follow common practice and primarily compare the test negative log-likelihood (NLL), estimated with importance sampling. We also report related metrics: ELBO, perplexity (PPL), and the entropy of the inference network. Importantly, to make all estimates unbiased, all models are evaluated in a discrete setting with unbiased hard samples. For paraphrase task performance, we follow Liu et al. [37] and Fu et al. [16] and use BLEU (bigram to 4-gram) [47] and ROUGE [35] (R1, R2 and RL) to measure generation quality. We note that, although widely used, these two metrics do not penalize similarity between the generated sentence and the input sentence (and we do not want the model to simply copy the input). So we adopt iBLEU [57], a variant of BLEU that penalizes n-gram overlap between the generated sentence and the input sentence, and use it as our primary metric. The iBLEU score is defined as iB(i, o, r) = αB(o, r) − (1 − α)B(o, i), where iB(·) denotes the iBLEU score, B(·) denotes the BLEU score, and i, o, r denote the input, output, and reference sentences respectively. We follow Liu et al. [37] and set α = 0.9. For data-to-text generation performance, we follow Li and Rush [34] and use the official E2E evaluation script, which measures BLEU, NIST [5], ROUGE, CIDEr [60], and METEOR [3]. More experimental details are in the Appendix.
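A sketch of the iBLEU computation with NLTK's sentence-level BLEU (the smoothing choice is ours):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def ibleu(inp, out, ref, alpha=0.9):
    """iB(i, o, r) = alpha * B(o, r) - (1 - alpha) * B(o, i), on token lists."""
    smooth = SmoothingFunction().method1
    b_ref = sentence_bleu([ref], out, smoothing_function=smooth)  # adequacy
    b_inp = sentence_bleu([inp], out, smoothing_function=smooth)  # copy penalty
    return alpha * b_ref - (1 - alpha) * b_inp
```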
VAE Training Details At the beginning of training, to prevent the decoder from ignoring z, we apply word dropout [7], i.e., we randomly set the input word embeddings at certain steps to 0. After z converges to a meaningful local optimum, we gradually decrease the word dropout ratio to 0 and recover the full model. For optimization, we add a β coefficient to the entropy term, as in the β-VAE [23]. As in many VAE works [7,10], we observe the posterior collapse problem, where q(z|x) converges to a meaningless local optimum. We observe two types of collapsed posteriors in our case: a constant posterior (q outputs a fixed z no matter what x is; this happens when β is too weak), and a uniform posterior (when β is too strong). To prevent posterior collapse, β should be carefully tuned to achieve a balance.
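Word dropout here can be sketched as zeroing whole input embeddings at randomly chosen steps (the implementation detail is our assumption):

```python
import torch

def word_dropout(emb, p):
    """Zero each input word embedding independently with probability p.

    emb: (B, T, D) input word embeddings; p is annealed toward 0 during training.
    """
    keep = (torch.rand(emb.size(0), emb.size(1), 1, device=emb.device) > p).float()
    return emb * keep
```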
Results
Gumbel-CRF as Gradient Estimator: Text Modeling
We compare our Gumbel-CRF (original and ST variant) with two sets of gradient estimators: score-function-based and reparameterized. For score function estimators, we compare our model with REINFORCE using the mean reward of other samples (MS) as the baseline. We further find that adding a carefully tuned constant baseline helps with the scale of the gradient (REINFORCE MS-C). For reparameterized estimators, we use a tailored Perturb-and-MAP Markov Random Field (PM-MRF) estimator [46] with the continuous relaxation introduced in Corro and Titov [9]. Compared to our Gumbel-CRF, PM-MRF adds Gumbel noise to local potentials, then runs a relaxed structured argmax algorithm [40]. We further consider a straight-through (ST) version of PM-MRF. The basics of these estimators can be characterized along four dimensions, as listed in Figure 3(A). The appendix provides a further theoretical comparison of the gradient structures of these estimators.

Table 1 shows the main results comparing different gradient estimators on text modeling. Our Gumbel-CRF-ST outperforms other estimators in terms of NLL and PPL. With fewer samples required, the reparameterization and continuous relaxation used in Gumbel-CRF are particularly effective for learning structured inference networks. We also see that the PM-MRF estimators perform worse than the other estimators. Due to the biased nature of the Perturb-and-MAP sampler, during optimization PM-MRF is not optimizing the actual model. As a Monte Carlo sampler (in the forward pass, rather than as a gradient estimator), Gumbel-CRF is less biased than PM-MRF. We further observe that the ST versions of both Gumbel-CRF and PM-MRF perform better than the non-ST versions. We posit that this is because of the consistency of using hard samples at both training and testing time (although non-ST has other advantages).
Variance Analysis To show that reparameterized estimators have lower variance, we compare the log variance ratio of Gumbel-CRF and REINFORCE-MS-C (Figure 3 B), defined as r = log(Var(∇_φL)/|∇_φL|), where ∇_φL denotes the gradients of the inference model.³ We see that Gumbel-CRF has a lower training curve of r than REINFORCE-MS, showing that it is more stable to train.
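The variance ratio can be estimated from repeated gradient draws on the same batch; a sketch (the aggregation over parameters is our assumption):

```python
import torch

def log_variance_ratio(grads):
    """r = log(Var(grad) / |grad|) from a list of flattened gradient estimates."""
    g = torch.stack(grads)                  # (num_draws, num_params)
    var = g.var(dim=0).mean()               # average per-parameter variance
    scale = g.mean(dim=0).abs().mean()      # average gradient magnitude
    return torch.log(var / (scale + 1e-20))
```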
Gumbel-CRF for Control: Data-to-Text and Paraphrase Generation
Data-to-Text Generation Data-to-text generation models generate descriptions for tabular information. Classical approaches use rule-based templates with better interpretability, while recent approaches use neural models for better performance. Here we aim to study the interpretability and controllability of latent templates. We compare our model with neural and template-based models. Neural models include: D&J [13] (a basic seq2seq model) and KV2Seq [15] (a SOTA neural memory architecture). Template models include: SUB [13] (rule-based templates); the hidden semi-Markov model (HSMM) [63] (neural templates); and another semi-Markov CRF model (SM-CRF-PC) [34] (neural templates with posterior regularization [18]). Table 2 shows the results of data-to-text generation. As expected, neural models like KV2Seq with advanced architectures achieve the best performance. Template-related models all come with a certain performance cost in exchange for better controllability. Among the template-related models, SM-CRF PC performs best. However, it utilizes multiple sources of weak supervision to achieve better template-data alignment, while our model is fully unsupervised for the template. Our model, whether trained with REINFORCE or Gumbel-CRF, outperforms the baseline HSMM model. We further see that in this case, models trained with Gumbel-CRF give better end performance than REINFORCE.
To see how the learned templates induce controllability, we conduct a qualitative study. To use the templates, after convergence, we collect and store the MAP z for all training sentences. During testing, given an input table, we retrieve a template z and use it as the control states for the decoder. Figure 4 shows sentences generated from templates. We can see that sentences with different templates exhibit different structures. E.g., the first sentence for the Clowns coffee shop starts with the location, while the second starts with the price. We also observe a state-word correlation. E.g., state 44 always corresponds to the name of a restaurant and state 8 always corresponds to the rating.
To see how the learned latent states encode sentence segments, we associate frequent z-state n-grams with their corresponding segments (Figure 5). Specifically, after training converges, we: (a) collect the MAP templates for all training cases, (b) collapse consecutive states of the same class into a single state (e.g., a state sequence [1, 1, 2, 2, 3] would be collapsed to [1, 2, 3]), (c) gather the top 100 most frequent state n-grams and their top-5 corresponding sentence segments, and (d) pick the mutually most different segments (because the same state n-gram may correspond to very similar sentence segments, and the same sentence segment may correspond to different state n-grams). A certain level of cherry-picking happens in step (d). We see that state n-grams have a loose correlation with sentence meaning. In cases (A, B, D, E), a state n-gram encodes semantically similar segments (e.g., all segments in case A are about location, and all segments in case E are about food and price). But the same state n-gram may not correspond to the same sentence meaning (cases C, F). For example, while (C1) and (C2) both correspond to state bigram 20-12, (C1) is about location but (C2) is about comments.
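Steps (a)-(c) above reduce to counting collapsed state n-grams; a sketch:

```python
from collections import Counter
from itertools import groupby

def collapse(states):
    """Collapse consecutive repeats, e.g. [1, 1, 2, 2, 3] -> [1, 2, 3]."""
    return [s for s, _ in groupby(states)]

def top_state_ngrams(templates, n=2, k=100):
    """Return the k most frequent state n-grams over collapsed MAP templates."""
    counts = Counter()
    for z in templates:
        z = collapse(z)
        counts.update(tuple(z[i:i + n]) for i in range(len(z) - n + 1))
    return counts.most_common(k)
```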
Unsupervised Paraphrase Generation
Unsupervised paraphrase generation is defined as generating different sentences that convey the same meaning as an input sentence, without parallel training instances. To show the effectiveness of Gumbel-CRF as a gradient estimator, we compare results when our model is trained with REINFORCE. To show the overall performance of our structured model, we compare it with other unsupervised models, including: a Gaussian VAE for paraphrasing [7]; CGMH [41], a general-purpose MCMC method for controllable generation; and UPSA [37], a strong paraphrasing model with simulated annealing. To better position our template model, we also report the supervised performance of a state-of-the-art latent bag-of-words model (LBOW) [16]. Table 3 shows the results. As expected, the supervised model LBOW performs better than all unsupervised models. Among unsupervised models, the best iB4 results come from our model trained with REINFORCE. In this task, when trained with Gumbel-CRF, our model performs worse than with REINFORCE (though better than the other paraphrasing models). We note that this inconsistency between gradient estimation performance and end-task performance involves multiple gaps between ELBO, NLL, and BLEU. The relationship between these metrics may be an interesting direction for future research.
Practical Benefits
Although our model can be trained with either REINFORCE or Gumbel-CRF, we emphasize that training structured variables with REINFORCE is notoriously difficult [34], and Gumbel-CRF substantially reduces this complexity. Table 4 demonstrates this empirically. Gumbel-CRF requires fewer hyperparameters to tune, fewer MC samples, less GPU memory, and less training time. These advantages mean significantly less training time and resource consumption for practitioners.
Conclusion
In this work, we propose a pathwise gradient estimator for sampling from CRFs, which exhibits lower variance and more stable training than existing baselines. We apply this gradient estimator to text modeling, where we use a CRF-based structured inference network to learn latent templates. Just as with REINFORCE, models trained with Gumbel-CRF learn meaningful latent templates that successfully encode lexical and structural information of sentences, inducing interpretability and controllability for text generation. Furthermore, Gumbel-CRF brings significant practical benefits over REINFORCE, making it more applicable to real-world tasks.
Broader Impact
Generally, this work is about controllable text generation. When applied to chatbots, it may improve generation quality. This could potentially improve accessibility [8,53] for people who need a voice assistant to use an electronic device, e.g., people with visual, intellectual, and other disabilities [51,2]. However, if not properly tuned, a model may generate improper sentences, such as fake information, putting the user at a disadvantage. Like many other text generation models, if trained with improper data (fake news [21], words of hatred [12]), the model could generate such sentences as well. In fact, one of the motivations for controllable generation is to avoid these situations [63,12]. But still, researchers and engineers need to be careful when facing these challenges.
Algorithm 4 Viterbi with Relaxed Back-tracking
1: Input: perturbed potentials Φ̃(z_{t-1}, z_t, x_t), t ∈ {1, .., T}
2: s_1(i) = log Φ̃(i, x_1)
3: for t ← 2, T do
4:   s_t(i) = max_j {s_{t-1}(j) + log Φ̃(z_{t-1} = j, z_t = i, x_t)}
5:   b_t(i) = Softmax_j(s_{t-1}(j) + log Φ̃(z_{t-1} = j, z_t = i, x_t))
6: end for
7: Back-tracking:
8: z̃_T = Softmax(s_T)
9: ẑ_T = Argmax_i(s_T(i))
10: for t ← T − 1, 1 do
11:   ẑ_{t+1} = Argmax_i(z̃_{t+1}(i))
12:   z̃_t = b_{t+1}(ẑ_{t+1})
13: end for
14: Return ẑ, z̃
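A sketch of Algorithm 4, reusing the (T, K, K) log-potential convention from the earlier sketches (placing the temperature τ in both softmaxes is our choice):

```python
import torch

def relaxed_viterbi(log_phi, tau=1.0):
    """Perturb-and-MAP with relaxed back-tracking; returns biased hard and soft samples."""
    # Perturb each local potential with i.i.d. Gumbel noise
    noise = -torch.log(-torch.log(torch.rand_like(log_phi) + 1e-20) + 1e-20)
    log_phi = log_phi + noise
    T, K, _ = log_phi.shape
    s, back = log_phi[0, 0], []                          # scores out of the start state
    for t in range(1, T):
        scores = s.unsqueeze(1) + log_phi[t]             # (prev state j, current state i)
        back.append(torch.softmax(scores / tau, dim=0))  # soft backpointers b_t(i)
        s = scores.max(dim=0).values
    z_tilde = [torch.softmax(s / tau, dim=0)]
    for b in reversed(back):                             # relaxed back-tracking
        z_tilde.append(b[:, int(z_tilde[-1].argmax())])
    z_tilde = list(reversed(z_tilde))
    return [int(z.argmax()) for z in z_tilde], torch.stack(z_tilde)
```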
$$\nabla_\phi L_{\text{REINFORCE}} \approx \sum_t \underbrace{f(x_{1:n}, \hat{z}_{1:n})}_{\text{reward term}} \cdot \underbrace{\nabla_\phi \log q_\phi(\hat{z}_t | \hat{z}_{t-1}, x)}_{\text{stepwise term}} \tag{18}$$

$$\nabla_\phi L_{\text{Gumbel-CRF-ST}} \approx \sum_t \underbrace{\nabla_{\tilde{z}_t} f(x_{1:n}, \hat{z}_{1:n})}_{\text{pathwise term}} \cdot \underbrace{\nabla_\phi \tilde{z}_t(\hat{z}_{t+1}, w_{1:n}, g_t)}_{\text{stepwise term}} \tag{19}$$

$$\nabla_\phi L_{\text{PM-MRF-ST}} \approx \sum_t \underbrace{\nabla_{\tilde{z}'_t} f(x_{1:n}, \hat{z}'_{1:n})}_{\text{pathwise term}} \cdot \underbrace{\nabla_\phi \tilde{z}'_t(\hat{z}'_{t+1}, w_{1:n}, g_t)}_{\text{stepwise term}} \tag{20}$$

In equation 18, we decompose q(z|x) with its Markov property, leading to a summation over the chain where the same reward f is distributed to all steps. Equations 19 and 20 use the chain rule to get the gradients. ∇_{z̃_t} f(x_{1:n}, ẑ_{1:n}) denotes the gradient of f evaluated on the hard sample ẑ_{1:n} and taken w.r.t. the soft sample z̃_t. ∇_φ z̃_t(ẑ_{t+1}, w_{1:n}, g_t) denotes the Jacobian matrix of z̃_t (note z̃_t is a vector) taken w.r.t. the parameters φ (note φ is also a vector, so taking gradients of z̃_t w.r.t. φ gives a Jacobian matrix). Consequently, the product is a special vector-matrix summation resulting in a vector (note this is different from equation 18, since the latter is a scalar-vector product). We further write z̃_t(ẑ_{t+1}, w_{1:n}, g_t) to denote that z̃_t is a function of the previous hard sample ẑ_{t+1}, all CRF weights w_{1:n}, and the local Gumbel noise g_t. Similar notation applies to equation 20, where ẑ' and z̃' denote the biased hard and soft samples from the relaxed Viterbi algorithm.
All gradients are formed as a summation over the steps. Inside the summation is a scalar-vector product or a vector-matrix product. The REINFORCE estimator can be decomposed into a reward term and a "stepwise" term, where the stepwise term comes from the "transition" probability. The Gumbel-CRF and PM-MRF estimators can be decomposed into a pathwise term, where we take the gradient of f w.r.t. each sample step z̃_t or z̃'_t, and a "stepwise" term, where we take the Jacobian w.r.t. φ.

To compare the three estimators, we see that:

• Using the hard sample ẑ. Like REINFORCE, Gumbel-CRF-ST uses the hard sample ẑ for the forward pass, as indicated by the term f(x_{1:n}, ẑ_{1:n}).
  - The advantage of using the hard sample is that one can best explore the search space of the inference network, i.e., search for effective latent codes using Monte Carlo samples.
  - Gumbel-CRF-ST preserves this advantage of REINFORCE, while PM-MRF-ST cannot fully search the space because its sample ẑ' is biased.

• Coupled sample path. The soft sample z̃_t of Gumbel-CRF-ST is based on the hard, exact sample path ẑ_{t+1}, as indicated by the term z̃_t(ẑ_{t+1}, w_{1:n}, g_t).
  - The coupling of the hard ẑ and the soft z̃ is ensured by our Gumbelized FFBS algorithm, which applies Gumbel noise g_t to each transitional distribution: z̃_t = Softmax(log q(z_t | ẑ_{t+1}, x) + g_t).
  - Consequently, we can recover the hard sample with the Argmax function, ẑ_t = Argmax(z̃_t).
  - This property allows us to use continuous relaxation to obtain pathwise gradients ∇_φ z̃_t(ẑ_{t+1}, w_{1:n}, g_t) without losing the advantage of using the hard exact sample ẑ.
  - PM-MRF with relaxed Viterbi also has the advantage of continuous relaxation, as shown by the term ∇_φ z̃'_t(ẑ'_{t+1}, w_{1:n}, g_t), but it does not have the advantage of unbiased samples, since ẑ'_{t+1} is biased.

• "Fine-grained" gradients. The stepwise term ∇_φ log q_φ(ẑ_t | ẑ_{t-1}, x) in the REINFORCE estimator is scaled by the same reward term f(x_{1:n}, ẑ_{1:n}), while the stepwise terms ∇_φ z̃_t(ẑ_{t+1}, w_{1:n}, g_t) in the other two estimators are summed with different pathwise terms ∇_{z̃_t} f(x_{1:n}, ẑ_{1:n}).
  - To make REINFORCE achieve similarly fine-grained gradients for each step, the reward function (generative model) f must exhibit certain structure that makes it decomposable. This is not always possible, and one always needs to manually derive such a decomposition.
  - The fine-grained gradients of Gumbel-CRF are agnostic to the structure of the generative model. No matter what f is, the gradients decompose automatically with AutoDiff libraries.
D Experiment Details
D.1 Data Processing
For the E2E dataset, we follow a similar processing pipeline to Wiseman et al. [63]. Specifically, given the key-value pairs and the sentences, we substitute each value token in the sentence with its corresponding key token. For the MSCOCO dataset, we follow a similar processing pipeline to Fu et al. [16]. Since the official test set is not publicly available, we use the same training/validation/test split as Fu et al. [16]. We were unable to find the implementation of Liu et al. [37] and are thus unsure of their exact data processing pipeline, making our unsupervised paraphrase generation results not strictly comparable with theirs. However, we have tested different splits of the validation dataset, and the validation performance does not change significantly with the split. This indicates that although not strictly comparable, we can assume their test set is just another random split, and their performance should not change much under our split.
D.2 Model Architecture
For the inference model, we use a bi-directional LSTM to predict the CRF emission potentials. The dropout ratio is 0.2. The number of latent states of the CRF is 50. The decoder is a uni-directional LSTM. We perform attention over the BOW and also let the decoder copy [22] from the BOW. For text modeling and data-to-text, we set the number of LSTM layers to 1 (both encoder and decoder) and the hidden state size to 300; this setting is comparable to [63]. For paraphrase generation, we set the number of LSTM layers (both encoder and decoder) to 2 and the hidden state size to 500; this setting is comparable to [16]. The embedding size for the words and the latent states is the same as the hidden state size in both settings.
D.3 Hyperparameters, Training and Evaluation Details
Hyperparameters For the score function estimators, we conduct more than 40 different runs searching for the best hyperparameters and architecture, and choose the best model according to validation performance. The hyperparameters we searched include: (a) number of MC samples (3, 5); (b) value of the constant baseline (0, 0.1, 1.0); (c) β value (5 × 10⁻⁶, 10⁻⁴, 10⁻³); (d) scaling factor of the surrogate loss of the score function estimator (1, 10², 10⁴). For the reparameterized estimators, we conduct more than 20 different runs for architecture and hyperparameter search. The hyperparameters we searched include: (a) the temperature in the Softmax (1.0, 0.01); (b) β value (5 × 10⁻⁶, 10⁻⁴, 10⁻³). Other parameters/architectures we considered include: (a) number of latent states (10, 20, 25, 50); (b) use/not use the copy mechanism; (c) dropout ratio; (d) different word dropout schedules. Although we considered a large range of hyperparameters, we have not tested all combinations. For the settings we have tested, all settings are repeated 2 times to check the sensitivity to random initialization. If we find a hyperparameter setting is sensitive to initialization, we run this setting 2 more times and choose the best.
Training We find that the convergence of score-function estimators is generally less stable than that of the reparameterized estimators: they are (a) more sensitive to random initialization and (b) more prone to converging to a collapsed posterior. For the reparameterized estimators, the ST versions generally converge faster than the original versions.
Figure 1: Gumbel-CRF FFBS algorithm and visualization for sequence-level vs. stepwise gradients. Solid arrows show the forward pass and dashed arrows show the backward pass.

Figure 3: Text Modeling. (A) Characteristics of gradient estimators. (B) Variance comparison, Gumbel-CRF vs. REINFORCE-MS, training curve.

Figure 5: Analysis of state n-grams. State n-grams correlate with sentence meaning. In cases (A, B, D, E), semantically similar sentence segments are clustered to the same state n-grams: (A) "location", (B) "rating", (D) "location", (E) "food" and "price". Yet there are also cases where state n-grams correspond to sentence segments with different meanings: (C1) "location" vs. (C2) "comments"; (F1) "price" vs. (F2) "price" and "comments".
Table 1: Density Estimation Results. NLL is estimated with 100 importance samples. Models are selected from 3 different random seeds based on validation NLL. All metrics are evaluated on the discrete (exact) model.

Model                  Neg. ELBO  NLL    PPL    Ent.  #sample
RNNLM                  -          34.69  4.94   -     -
PM-MRF                 69.15      50.22  10.41  4.11  1
PM-MRF-ST              53.16      37.03  5.48   2.04  1
REINFORCE-MS           35.11      34.50  4.84   3.48  5
REINFORCE-MS-C         34.35      33.82  4.71   3.34  5
Gumbel-CRF (ours)      38.00      35.41  4.71   3.03  1
Gumbel-CRF-ST (ours)   34.18      33.13  4.54   3.26  1
Figure 3 (A): Characteristics of the estimators we compare.

Estimators       Score/Reparam.  Seq. Level/Stepwise  Unbiased MC Sample  Unbiased Grad.
REINFORCE-MS     Score           Seq.                 Unbiased            Unbiased
REINFORCE-MS-C   Score           Seq.                 Unbiased            Unbiased
PM-MRF           Reparam.        Step                 Biased              Biased
PM-MRF-ST        Reparam.        Step                 Biased              Biased
Gumbel-CRF       Reparam.        Step                 Biased              Biased
Gumbel-CRF-ST    Reparam.        Step                 Unbiased            Biased

Figure 3 (B): Log variance ratio, training curve.
Table 2: Data-to-text generation results. Upper: neural models. Lower: template-related models. Models are selected from 5 different random seeds based on validation BLEU.

Model            BLEU   NIST  ROUGE  CIDEr  METEOR
D&J [13]         65.93  8.59  68.50  2.23   44.83
KV2Seq [15]      74.72  9.30  70.69  2.23   46.15
SUB [13]         43.78  6.88  54.64  1.39   37.35
HSMM [63]        55.17  7.14  65.70  1.70   41.91
HSMM-AR [63]     59.80  7.56  65.01  1.95   38.75
SM-CRF PC [34]   67.12  8.52  68.70  2.24   45.40
REINFORCE        60.41  7.99  62.54  1.78   38.04
Gumbel-CRF       65.83  8.43  65.06  1.98   41.44
Table 3: Paraphrase Generation. Upper: supervised models, Lower: unsupervised models. Models are selected from 5 random seeds based on validation iB4 (iBLEU 4-gram).

Model | iB4 | B2 | B3 | B4 | R1 | R2 | RL
LBOW [16] | - | 51.14 | 35.66 | 25.27 | 42.08 | 16.13 | 38.16

Gaussian VAE [7] | 7.48 | 24.90 | 13.04 | 7.29 | 22.05 | 4.64 | 26.05
CGMH [41] | 7.84 | - | - | 11.45 | 32.19 | 8.67 | -
UPSA [37] | 9.26 | - | - | 14.16 | 37.18 | 11.21 | -
REINFORCE | 11.20 | 41.29 | 26.54 | 17.10 | 32.57 | 10.20 | 34.97
Gumbel-CRF | 10.20 | 38.98 | 24.65 | 15.75 | 31.10 | 9.24 | 33.60
Table 4: Practical benefits of using Gumbel-CRF. Typically, REINFORCE has a long list of parameters to tune: h entropy regularization, b0 constant baseline, b baseline model, r reward scaling, #s number of MC samples. Gumbel-CRF reduces the engineering complexity with significantly fewer parameters (h entropy regularization, τ temperature annealing), fewer samples required (thus less memory consumption), and less time consumption. Models tested on an Nvidia P100 with batch size 100.

Model | Hyperparams. | #s | GPU mem | Sec. per batch
REINFORCE | h, b0, b, r, #s | 5 | 1.8G | 1.42
Gumbel-CRF | h, τ | 1 | 1.1G | 0.48
Note that argmax is also differentiable almost everywhere; however, its gradient is almost 0 everywhere and is not well-defined at the jumping points [48]. Our relaxed z̃ does not have these problems.
Previous works compare variance, rather than the variance ratio [59, 19]. We think that simply comparing the variance is only reasonable when the scales of the gradients are approximately the same, which may not hold across different estimators. In our experiments, we observe that the gradient scale of Gumbel-CRF is significantly smaller than that of REINFORCE, so the variance ratio may be a better proxy for measuring stability.
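As an illustration, a scale-invariant variance-ratio proxy can be estimated from repeated gradient draws as in the sketch below (a simplified recipe under our own assumptions; estimate_grad is a stand-in for one stochastic gradient evaluation on a fixed minibatch):

```python
import torch

def log_variance_ratio(estimate_grad, n_draws: int = 20) -> float:
    """Estimate log(Var[g] / Mean[g]^2), a scale-invariant stability proxy.

    estimate_grad: callable returning a flat gradient tensor for one
                   stochastic draw (same minibatch, fresh noise).
    """
    grads = torch.stack([estimate_grad() for _ in range(n_draws)])  # (n, d)
    var = grads.var(dim=0)                # per-parameter variance
    mean_sq = grads.mean(dim=0).pow(2)    # per-parameter squared mean
    ratio = var / (mean_sq + 1e-12)       # variance ratio, scale-invariant
    return ratio.mean().log().item()
```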
A CRF Entropy Calculation

The entropy of the inference network can be calculated by another forward-styled DP algorithm. Algorithm 3 gives the details.

Algorithm 3 Linear-chain CRF Entropy
1: Input: Φ(zt−1, zt, xt), t ∈ {1, .., T}, α1:T, Z
2: H1(i) = 0   ▷ We assume a deterministic special start state
3: for t ← 1, T − 1 do
4:   wt+1(i, j) = Φ(zt = i, zt+1 = j, xt+1) αt(i) / αt+1(j)
5:   Ht+1(j) = Σi wt+1(i, j) (Ht(i) − log wt+1(i, j))
6: Return H = Σj p(zT = j|x) (HT(j) − log p(zT = j|x))   ▷ We assume all states end with a special end state with probability 1.

B PM-MRF

As noted in the main paper, the baseline estimator PM-MRF also involves in-depth exploitation of the structure of models and gradients, thus being quite competitive. Here we give a detailed discussion.

Papandreou and Yuille [46] proposed the Perturb-and-MAP Random Field, an efficient sampling method for general Markov Random Fields. Specifically, they propose to use Gumbel noise to perturb each local potential Φi of an MRF, then run a MAP algorithm (if applicable) on the perturbed MRF to get a MAP ẑ. This MAP ẑ from the perturbed Φ̃ can be viewed as a biased sample from the original MRF. This method is much faster than an MCMC sampler when an efficient MAP algorithm exists. Applied to a CRF, this means perturbing the potential at every step with i.i.d. Gumbel noise gt and then running Viterbi, i.e., Φ̃(zt−1, zt, xt) = Φ(zt−1, zt, xt) · exp(gt(zt)).

However, when tracing back along the Viterbi path, we still get ẑ as a sequence of indices. For continuous relaxation, we would like to relax ẑt to be a relaxed one-hot vector instead of an index. One natural choice is to use the Softmax function. The relaxed back-tracking algorithm is listed in Algorithm 4.

In our experiments, for the PM-MRF estimator, we use z̃ for both the forward pass and back-propagation. For the PM-MRF-ST estimator, we use ẑ for the forward pass and z̃ for the back-propagation pass.

It is easy to verify that PM-MRF is a biased sampler by checking the sample probability of the first step ẑ1. With PM-MRF, the biased z1 is essentially drawn from a categorical distribution parameterized by π, where πj is the probability that the perturbed Viterbi path starts at state j. With forward sampling, however, the unbiased z1 should come from the marginal distribution p(z1 = j | x) = α1(j) β1(j) / Z, where β denotes the backward variable from the backward algorithm [58]. The inequality in equation 17 shows that PM-MRF gives a biased sample.

C Theoretical Comparison of Gradient Structures

We compare the detailed structure of the gradients of each estimator. We denote f(x1:n, z1:n) = log pθ(x1:n, z1:n). We use ẑ to denote an unbiased hard sample, z̃ to denote the soft sample coupled with ẑ, ž to denote a biased hard sample from PM-MRF, and z̄ to denote the soft sample coupled with ž output by the relaxed Viterbi algorithm. We use w1:n to denote the "emission" weights of the CRF. The gradients of all estimators are:
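Returning to Algorithm 3 above, a NumPy sketch of the entropy recursion follows (assuming the same log-space inputs as the FFBS sketch earlier; the recursion Ht+1(j) = Σi wt+1(i, j)(Ht(i) − log wt+1(i, j)) is the standard forward entropy DP):

```python
import numpy as np

def crf_entropy(log_phi, log_alpha):
    """Entropy of a linear-chain CRF posterior via a forward-style DP.

    log_phi:   (T, K, K) log potentials.
    log_alpha: (T, K) log forward variables; log_Z = logsumexp(log_alpha[-1]).
    """
    T, K = log_alpha.shape
    H = np.zeros(K)  # H_1(i) = 0: deterministic start state
    for t in range(T - 1):
        # w_{t+1}(i, j) = Phi(z_t=i, z_{t+1}=j, x_{t+1}) alpha_t(i) / alpha_{t+1}(j)
        log_w = log_phi[t + 1] + log_alpha[t][:, None] - log_alpha[t + 1][None, :]
        w = np.exp(log_w)  # columns sum to 1: p(z_t=i | z_{t+1}=j, x)
        H = (w * (H[:, None] - log_w)).sum(axis=0)
    # Final step: marginal over the last state, p(z_T=j | x) = alpha_T(j) / Z.
    log_Z = np.logaddexp.reduce(log_alpha[-1])
    log_pT = log_alpha[-1] - log_Z
    pT = np.exp(log_pT)
    return float((pT * (H - log_pT)).sum())
```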
References

[1] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR, 2015.
[2] Saminda Sundeepa Balasuriya, Laurianne Sitbon, Andrew A. Bayor, Maria Hoogstrate, and Margot Brereton. Use of voice activated interfaces by people with intellectual disability. In Proceedings of the 30th Australian Conference on Computer-Human Interaction, 2018.
[3] Satanjeev Banerjee and Alon Lavie. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72, 2005.
[4] Yu Bao, Hao Zhou, Shujian Huang, Lei Li, Lili Mou, Olga Vechtomova, Xin-yu Dai, and Jiajun Chen. Generating sentences from disentangled syntactic and semantic spaces. In ACL, pages 6008-6019, 2019.
[5] Anja Belz and Ehud Reiter. Comparing automatic and human evaluation of NLG systems. In EACL, 2006.
[6] David M. Blei, Alp Kucukelbir, and Jon D. McAuliffe. Variational inference: A review for statisticians. arXiv:1601.00670, 2016.
[7] Samuel R. Bowman, Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz, and Samy Bengio. Generating sentences from a continuous space. arXiv:1511.06349, 2015.
[8] Eric Corbett and Astrid Weber. What can I say? Addressing user experience challenges of a mobile voice user interface for accessibility. In Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services, 2016.
[9] Caio Corro and Ivan Titov. Differentiable perturb-and-parse: Semi-supervised parsing with a structured variational autoencoder. arXiv:1807.09875, 2018.
[10] Adji B. Dieng, Yoon Kim, Alexander M. Rush, and David M. Blei. Avoiding latent variable collapse with generative skip models. arXiv:1807.04863, 2018.
[11] Carl Doersch. Tutorial on variational autoencoders. arXiv:1606.05908, 2016.
[12] Cícero Nogueira dos Santos, Igor Melnyk, and Inkit Padhi. Fighting offensive language on social media with unsupervised text style transfer. In ACL, 2018.
[13] Ondřej Dušek and Filip Jurčíček. Sequence-to-sequence generation for spoken dialogue via deep syntax trees and strings. arXiv:1606.05491, 2016.
[14] Yao Fu. Deep generative models for natural language processing. https://github.com/FranxYao/Deep-Generative-Models-for-Natural-Language-Processing.
[15] Yao Fu and Yansong Feng. Natural answer generation with heterogeneous memory. In NAACL-HLT (Volume 1: Long Papers), pages 185-195, 2018.
[16] Yao Fu, Yansong Feng, and John P. Cunningham. Paraphrase generation with latent bag of words. In NeurIPS, 2019.
[17] Yao Fu, Hao Zhou, Jiaze Chen, and Lei Li. Rethinking text attribute transfer: A lexical analysis. arXiv:1909.12335, 2019.
[18] Kuzman Ganchev, Joao Graca, Jennifer Gillenwater, and Ben Taskar. Posterior regularization for structured latent variable models. Journal of Machine Learning Research, 11(67):2001-2049, 2010.
[19] Will Grathwohl, Dami Choi, Yuhuai Wu, Geoffrey Roeder, and David Duvenaud. Backpropagation through the void: Optimizing control variates for black-box gradient estimation. arXiv:1711.00123, 2018.
[20] Evan Greensmith, Peter L. Bartlett, and Jonathan Baxter. Variance reduction techniques for gradient estimates in reinforcement learning. Journal of Machine Learning Research, 5:1471-1530, 2004.
[21] Nir Grinberg, Kenneth Joseph, Lisa Friedland, Briony Swire-Thompson, and David Lazer. Fake news on Twitter during the 2016 U.S. presidential election. Science, 363:374-378, 2019.
[22] Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O. K. Li. Incorporating copying mechanism in sequence-to-sequence learning. arXiv:1603.06393, 2016.
[23] Irina Higgins, Loïc Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-VAE: Learning basic visual concepts with a constrained variational framework. In ICLR, 2017.
[24] Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. Toward controlled generation of text. In ICML, pages 1587-1596, 2017.
[25] Zhiheng Huang, Wei Xu, and Kai Yu. Bidirectional LSTM-CRF models for sequence tagging. arXiv:1508.01991, 2015.
[26] Sarthak Jain and Byron C. Wallace. Attention is not explanation. arXiv:1902.10186, 2019.
[27] Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with Gumbel-Softmax. arXiv:1611.01144, 2016.
[28] Yoon Kim, Sam Wiseman, and Alexander M. Rush. A tutorial on deep latent variable models of natural language. arXiv:1812.06834, 2018.
[29] Yoon Kim, Alexander M. Rush, Lei Yu, Adhiguna Kuncoro, Chris Dyer, and Gábor Melis. Unsupervised recurrent neural network grammars. arXiv:1904.03746, 2019.
[30] Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. arXiv:1312.6114, 2013.
[31] Wouter Kool, Herke van Hoof, and Max Welling. Stochastic beams and where to find them: The Gumbel-top-k trick for sampling sequences without replacement. arXiv:1903.06059, 2019.
[32] Bowen Li, Jianpeng Cheng, Yang Liu, and Frank Keller. Dependency grammar induction with a neural variational transition-based parser. In AAAI, pages 6658-6665, 2019.
[33] Juncen Li, Robin Jia, He He, and Percy Liang. Delete, retrieve, generate: A simple approach to sentiment and style transfer. arXiv:1804.06437, 2018.
[34] Xiang Lisa Li and Alexander M. Rush. Posterior control of blackbox generation. arXiv:2005.04560, 2020.
[35] Chin-Yew Lin and Eduard Hovy. Manual and automatic evaluation of summaries. In Proceedings of the ACL-02 Workshop on Automatic Summarization, pages 45-51, 2002.
[36] Scott W. Linderman, Gonzalo E. Mena, Hal Cooper, Liam Paninski, and John P. Cunningham. Reparameterizing the Birkhoff polytope for variational permutation inference. arXiv:1710.09508, 2017.
[37] Xianggen Liu, Lili Mou, Fandong Meng, Hao Zhou, Jie Zhou, and Sen Song. Unsupervised paraphrasing by simulated annealing. arXiv:1909.03588, 2019.
[38] Chris J. Maddison, Andriy Mnih, and Yee Whye Teh. The Concrete distribution: A continuous relaxation of discrete random variables. arXiv:1611.00712, 2016.
[39] Gideon S. Mann and Andrew McCallum. Efficient computation of entropy gradient for semi-supervised conditional random fields. Technical report, University of Massachusetts Amherst, Department of Computer Science, 2007.
[40] Arthur Mensch and Mathieu Blondel. Differentiable dynamic programming for structured prediction and attention. arXiv:1802.03676, 2018.
[41] Ning Miao, Hao Zhou, Lili Mou, Rui Yan, and Lei Li. CGMH: Constrained sentence generation by Metropolis-Hastings sampling. In AAAI, 2018.
[42] Yishu Miao. Deep generative models for natural language processing. PhD thesis, University of Oxford, 2017.
[43] Andriy Mnih and Danilo Jimenez Rezende. Variational inference for Monte Carlo objectives. In ICML, 2016.
[44] Shakir Mohamed, Mihaela Rosca, Michael Figurnov, and Andriy Mnih. Monte Carlo gradient estimation in machine learning. arXiv:1906.10652, 2019.
[45] Kevin P. Murphy. Machine Learning: A Probabilistic Perspective. 2012.
[46] George Papandreou and Alan L. Yuille. Perturb-and-MAP random fields: Using discrete optimization to learn and sample from energy models. In ICCV, pages 193-200, 2011.
[47] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: A method for automatic evaluation of machine translation. In ACL, pages 311-318, 2002.
[48] Max B. Paulus, Dami Choi, Daniel Tarlow, Andreas Krause, and Chris J. Maddison. Gradient estimation with stochastic softmax tricks. arXiv:2006.08063, 2020.
[49] Ratish Puduppully, Li Dong, and Mirella Lapata. Data-to-text generation with entity modeling. In ACL, 2019.
[50] Yevgeniy Puzikov and Iryna Gurevych. E2E NLG challenge: Neural models vs. templates. In Proceedings of the 11th International Conference on Natural Language Generation, pages 463-471, 2018.
[51] Uvais Qidwai and Mohamed Shakir. Ubiquitous Arabic voice control device to assist people with disabilities. In 4th International Conference on Intelligent and Advanced Systems (ICIAS), 1:333-338, 2012.
[52] Rajesh Ranganath, Sean Gerrish, and David M. Blei. Black box variational inference. arXiv:1401.0118, 2013.
[53] Arsénio Reis, Dennis Paulino, Hugo Paredes, Isabel Barroso, Maria João Monteiro, Vitor Rodrigues, and João Barroso. Using intelligent personal assistants to assist the elderly: An evaluation of Amazon Alexa, Google Assistant, Microsoft Cortana, and Apple Siri. In 2nd International Conference on Technology and Innovation in Sports, Health and Wellbeing (TISHW), pages 1-5, 2018.
[54] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv:1401.4082, 2014.
[55] Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi Jaakkola. Style transfer from non-parallel text by cross-alignment. In NeurIPS, pages 6830-6841, 2017.
[56] David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484, 2016.
[57] Hong Sun and Ming Zhou. Joint learning of a dual SMT system for paraphrase generation. In ACL (Volume 2: Short Papers), pages 38-42, 2012.
[58] Charles Sutton and Andrew McCallum. An introduction to conditional random fields. Foundations and Trends in Machine Learning, 4(4):267-373, 2012.
[59] George Tucker, Andriy Mnih, Chris J. Maddison, John Lawson, and Jascha Sohl-Dickstein. REBAR: Low-variance, unbiased gradient estimates for discrete latent variable models. arXiv:1703.07370, 2017.
[60] Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. CIDEr: Consensus-based image description evaluation. In CVPR, pages 4566-4575, 2015.
[61] Sarah Wiegreffe and Yuval Pinter. Attention is not not explanation. In EMNLP-IJCNLP, 2019.
[62] Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992.
[63] Sam Wiseman, Stuart M. Shieber, and Alexander M. Rush. Learning neural templates for text generation. arXiv:1808.10122, 2018.
[64] Sang Michael Xie and Stefano Ermon. Differentiable subset sampling. arXiv:1901.10517, 2019.
[65] Pengcheng Yin, Chunting Zhou, Junxian He, and Graham Neubig. StructVAE: Tree-structured latent variable models for semi-supervised semantic parsing. arXiv:1806.07832, 2018.
[66] Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. SeqGAN: Sequence generative adversarial nets with policy gradient. arXiv:1609.05473, 2017.
[67] Jake Zhao, Yoon Kim, Kelly Zhang, Alexander M. Rush, and Yann LeCun. Adversarially regularized autoencoders. arXiv:1706.04223, 2017.
| [
"https://github.com/FranxYao/Gumbel-CRF."
] |
NLP as a Lens for Causal Analysis and Perception Mining to Infer Mental Health on Social Media

Muskan Garg (Mayo Clinic, Rochester, MN, USA; garg.muskan@mayo.edu), Chandni Saxena (The Chinese University of Hong Kong, Hong Kong SAR; csaxena@cse.cuhk.edu.hk), Usman Naseem (The University of Sydney, NSW 2006, Australia; usman.naseem@sydney.edu.au), Bonnie J. Dorr (University of Florida, FL, USA; bonniejdorr@ufl.edu)

arXiv:2301.11004

Index Terms—depression, mental health, social media, suicide risk, discourses, explainability, interpretability, pragmatics

Abstract—Interactions among humans on social media often convey the intentions behind their actions, yielding a psychological language resource for Mental Health Analysis (MHA) of online users. The success of Computational Intelligence Techniques (CIT) for inferring mental illness from such social media resources points to NLP as a lens for causal analysis and perception mining. However, we argue that more consequential and explainable research is required for optimal impact on clinical psychology practice and personalized mental healthcare. To bridge this gap, we posit two significant dimensions: 1) causal analysis, to illustrate a cause-and-effect relationship in user-generated text; and 2) perception mining, to infer psychological perspectives of social effects on online users' intentions. Within the scope of Natural Language Processing (NLP), we further explore critical areas of inquiry associated with these two dimensions, specifically through recent advancements in discourse analysis. This position paper guides the community to explore solutions in this space and advance the state of practice in developing conversational agents for inferring mental health from social media. We advocate for a more explainable approach toward modeling computational psychology problems through the lens of language, as we observe an increased number of research contributions in dataset and problem formulation for causal relation extraction and perception enhancements while inferring mental states.
I. INTRODUCTION
According to World Health Organization (WHO) reports,1 the prevalence of anxiety and depression increased by 25% in the first year of the COVID-19 pandemic, yet many such cases have gone undetected. Traditionally, multiple in-person sessions with clinical psychologists are required to examine and infer a mental state, yet pandemic lockdowns affected the convenience of open sessions with mental health practitioners. Reports released in August 20212 indicate that 1.6 million people in England were on waiting lists for mental health care. An estimated 8 million people were unable to obtain assistance from a specialist, as they were not considered sick enough to qualify. This situation underscores the need for automation of mental health detection from social media data, where people express themselves and their thoughts, beliefs, and emotions with ease. Motivated by [1], we observe continuously growing trends and patterns in this area of research, which would benefit substantially from convenient pathways and directions toward explainability of AI models for mental health analysis on social media.3

Social media platforms are frequently relied upon as open fora for honest disclosure. Social NLP researchers analyze social media posts to obtain useful insights for behavioral therapy. Facebook, a leading social media platform, uses artificial intelligence and pattern recognition to find users at risk.4 The use of social media data for Mental Health Analysis (MHA) is bolstered by users' propensity for self-disclosure, which often serves as a therapeutic component of social well-being [2].

Motivation: Motivated by the need to understand, process, and generate human behavior for modeling real-time conversational AI agents, we examine the social NLP literature to process self-reported articles beyond syntactic and semantic analysis. To this end, we examine discourse and pragmatics in self-reported texts for inferring empathetic and perceptual understanding of mental states. Moreover, according to an official 2019 report by the Department of Veterans Affairs, 6,261 veterans died by suicide, 7% fewer than the previous year. However, as per the national strategy for preventing veteran suicide 2018-2028, the Department of Veterans Affairs (2018) targets a national goal of reducing this number by 20% by 2025. The adverse effect of isolation on veterans demands personalized therapy through automated conversational AI agents.

Scope of this study: Social media posts are cause-and-effect expositions which may or may not contain signs of the reason behind the intent of a user. Existing studies cover the significance of causal explanations inferred from writing style [3]. However, studies on the pragmatic use of language and discourse analysis on social media are limited. To bridge this gap between Computational Intelligence Techniques (CIT) and clinical psychology, we pose causal analysis and perception mining, emphasizing discourse and pragmatics in social media texts, as shown in Figure 1. We carried out collaborative discussions among the NLP research community and a senior clinical psychologist to maintain the integrity of this position paper. The scope of our work is limited to the presentation of a new perspective. Our position paper moves this line of research in CIT one step closer to automation for real-time applications of clinical psychology.
We identify a window of opportunity for the NLP community to extract more nuances about human behavior, highlighting key challenges and future vision with an in-depth analysis of mental illness on social media.
A. Current Position of Community
Clinical psychologists conduct in-person sessions to understand human behavior for mental health analysis. This activity is simulated using Natural Language Understanding (NLU) over social media texts. However, research and publications in this area are still limited, hindering rapid community access to the resources needed to support such simulations. Some interesting surveys and reviews are available on machine learning and deep learning for identifying and predicting the intent of affected users [4], [5]. The results have been convincing enough to embrace computational studies for MHA, projecting prominent key findings and their limitations to bridge the gap between the two disciplines.
Comprehensive studies of learning-based techniques have been applied to digital data [6], medical records [7], complex and large (original) reports [8], social media text [9]-[11], and multimodal data [12]. Systematic studies and reviews of this research field recapitulate the problems of demographic bias in data collection mechanisms, managing user consent for data, and theoretical underpinnings of human behavior in user-generated information [13]. Although such studies have made headway in the human understanding of causes and perceptions associated with user-generated social media posts, automated identification of mental-health-related causes and perceptions is an area that has not yet come to fruition.
We classify this field of research into two levels of scientific study for inferring mental health from social media language, as shown in Figure 2: Level 0 is the most prevalent position of the NLU community. Researchers in this realm build algorithms for mental health identification and prediction, often relying on handcrafted or automated features for building AI models [14], [15]. Contributions toward this endeavor have emerged through exclusive studies on computational intelligence with ethical research protocols, as it is mandatory to address ethical concerns due to the sensitive nature of datasets from which such algorithms develop [16]- [18].
Level 1, the perspective we rely upon in our position, assumes an in-depth analysis using perception-and-affect and cause-and-effect relationships. We view this as a new research direction to build real-time explainable AI models. This line of research is aligned with the initial signs of causal explanation analysis on Facebook data [19], which has opened up new research directions to identify the reasons behind mental illness.
B. Our Position
As outlined above, Level 0 studies contribute to a well-established line of research, but we posit that Level 1 studies provide a basis for an in-depth analysis of human behavior via causal analysis and perception mining. In light of these considerations, this position paper presents NLP as a lens through which to infer mental health on social media via two paradigmatic approaches:

1) Causal Analysis: Users' social media posts may express their grief and reasons, thus providing background for their mental illness or justification for actions under consideration. Causal analysis is further classified as cause detection, causal inference, and cause categorization. Thus, causal analysis is a cross-sectional study that identifies reasons behind the intent of a user.

2) Perception Mining: The mental state of users may be inferred from online postings of perspectives expressed in a social media post. Identifying beliefs, morals, and identity in the user's arguments lays a foundation for identifying and predicting mental states. Thus, perception mining deals with the way a user interprets sensory information that affects the social attitude of a person. As such, perception mining acts as a backbone for causal analysis.
We investigate the problem formulation, current position, and open research directions for causal analysis and perception mining in Sections 2 and 3, respectively.
II. CAUSAL ANALYSIS
A data-driven approach uses Pointwise Mutual Information (PMI) to find correlations between two verb phrases acquired from data [20]. Causal inference, in place of correlation, gives better and directional insights among different phrases [21]. Causal analysis is an untapped area of research for MHA, probably due to its perceived difficulty. In this position paper, we adopt the view that cause-and-effect relationships have significance for MHA and, moreover, that exploration of cause-and-effect requires discourse parsing beyond textual features. Researchers in psychology have found that the human mind has a very complex mechanism for identifying and attributing the cause of mental disturbance [22]. Inferring cause-effect relations between the intent of chronic problems, such as depression and suicide risk, and statements specifying reasons, such as isolation and unemployment, has also been found to be an important part of user-generated text comprehension, especially for narrative text.

Terminology: We introduce the intent as an argument made by users on social media platforms while expressing their feelings, beliefs, and circumstances. For this position paper, we further restrict the use of the term intent of a user to arguments containing information about users' mental state only. Consider an example A for a post written by a social media user:

A: I hate my job .. I cant stand living with my dad. I'm afraid to apply to any developer jobs or show my skills off to employers. I don't even own a car. I just feel like a failure.

We then classify causal analysis into three sub-tasks as indicated below, with the psychologist's input on concrete questions to constrain the nature of the reason behind the intent of a user:

• Cause Detection: A classification technique to identify whether text inferring a user's mental health contains any reason or cause behind the user's intent. Example A shows that there exists at least one reason behind the poor mental state of the user (e.g., job, family issues, finances). Cause detection answers the question: "Does the text contain any indicator of the cause behind the mental condition, such as a job loss or a death in the family?"

• Causal Inference: An NLP task to obtain abstractive or extractive explanations from a user's intent after cause detection. Example A reveals a causal inference as "hate my job, dont even have a car, feels like failure." Causal inference answers the question: "Which parts of the text segments explain the reason behind the mental illness?"

• Cause Categorization: Considering cause as a topic/concept, a topic-specific categorization of the user's intent using causal inference. A dominant cause in example A is categorized as something related to Jobs and Career. Cause categorization answers the question: "Among given causal categories, which causal category does this text belong to?"

We briefly describe a working instance of a corpus, exploring the three sub-tasks of causal analysis, as shown in Figure 3. We use social NLP as a lens through which to conduct a Level 1 study of intent; a minimal pipeline sketch follows.
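To make the decomposition concrete, the Python sketch below wires the three sub-tasks into a toy pipeline; the keyword lexicon, category names, and all function names are illustrative assumptions, not an existing system (on example A, it flags "Jobs and Career" as the dominant category):

```python
from typing import List, Optional

# Illustrative cause lexicon per category (assumed, not from this paper).
CAUSE_LEXICON = {
    "Jobs and Career": ["job", "employer", "unemployed", "career"],
    "Relationship": ["dad", "mother", "marriage", "breakup", "proposal"],
    "Alienation": ["alone", "lonely", "isolat"],
}

def detect_cause(post: str) -> bool:
    """Cause detection: does the post mention any candidate cause?"""
    text = post.lower()
    return any(kw in text for kws in CAUSE_LEXICON.values() for kw in kws)

def infer_cause(post: str) -> List[str]:
    """Causal inference: return the sentence fragments carrying a cause."""
    text = post.lower()
    return [seg.strip() for seg in text.replace("..", ".").split(".")
            if any(kw in seg for kws in CAUSE_LEXICON.values() for kw in kws)]

def categorize_cause(spans: List[str]) -> Optional[str]:
    """Cause categorization: pick the dominant category by keyword hits."""
    scores = {cat: sum(kw in s for s in spans for kw in kws)
              for cat, kws in CAUSE_LEXICON.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```

In practice each stage would be a learned model (a binary classifier, a span extractor, and a multi-class classifier, respectively); the keyword heuristics only illustrate the interfaces.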
A. Psychological Theories
We use NLP as a lens for the investigation of AI models developed for NLP tasks that categorize social media posts into their associated mental states. A neural Rhetorical Structure Theory (RST) parsing system is publicly available to examine discourse relations and perceived persuasiveness of social media data [23]. Potential signs of the cause behind an imbalanced mindset are given in posts, such as insomnia, weight gain, or other indicators of worthlessness or excessive or inappropriate guilt. Underlying reasons may include: bias or abuse [24], loss of jobs or career [25], physical/emotional illness leading to, or induced by, medication use [26], [27], relationship dysfunction, e.g., marital issues [28], and alienation [29]. This list is not exhaustive, but it is a starting point for a Level 1 study of mental health analysis.
B. Thinking beyond Social Features
People with depression exhibit differences with respect to linguistic styles, such as the distribution of nouns, verbs, and adverbs [30], resulting in the unconscious conceptualization of complex sentences. We advocate the use of behavioral features identified in past work, such as first-person language, present tense, and anger-based terms [31]. Most existing language processing for MHA on social media data is associated with surface-level linguistic features and semantic-level aspects.
Neural information processing uses automatic feature transformation in end-to-end models. Word embedding techniques such as Word2Vec, GloVe, and FastText [32] encode a token of text in a dense vector representation. More recently, pretrained language models such as BERT, GPT, Sentence-BERT, and BART use attention mechanisms to embed sentences and achieve state-of-the-art performance for cross-sectional studies [33]. Although neural information processing is suitable for some rapid assessments of mental state classification and categorization, data representation methods lack the information necessary to examine in-depth nuances of users' intent. We briefly describe some possible solutions for the task of in-depth text analytics.
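For instance, a minimal post-level classifier on top of pretrained sentence embeddings might be assembled as below (a sketch assuming the sentence-transformers and scikit-learn libraries are available; the model name and the toy posts/labels are illustrative, not a dataset from this paper):

```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

# Encode posts with a pretrained sentence encoder.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
posts = ["I hate my job and I feel like a failure.",
         "Had a great day hiking with friends!"]
labels = [1, 0]  # 1: shows signs of mental disturbance (toy labels)

X = encoder.encode(posts)            # one dense vector per post
clf = LogisticRegression().fit(X, labels)
print(clf.predict(encoder.encode(["no point of living alone"])))
```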
C. Discourses for Causal Analysis
The information extracted from textual features is in the form of the morphological, syntactic, and semantic meaning of words from the intent of a user. Causal analysis motivates the community to think beyond existing surface-level linguistic features and semantic-level aspects, yielding the need for Level 1 studies of mental health analysis.
Knowledge Graph: To "inject" mental disturbance expressed through self-reported text into AI assistants such as Amazon Alexa, the utilization of cross-domain knowledge of social interactions, emotions, and linguistic variations of natural language is critical. A Knowledge Graph represents a network of real-world entities, namely (i) objects, as aspects of mental well-being, such as the social, vocational, and emotional aspects; (ii) events triggering mental disturbance, such as death, breakup, and isolation; and (iii) situations, such as human-user advocacy, domain knowledge, and common-sense knowledge. The relationships between them are visualized as a graph structure through a graph database. We map events from self-reported texts that indicate objects/key aspects of mental disturbance through environmental situations, suggesting the need for discourse analysis for mental healthcare. The complex nature of language processing tasks requires the construction of Knowledge Graphs (KGs) to capture text semantics [34]. KGs support the discovery of cause-and-effect relationships to reveal a reason behind suicidal intention [35]. We lay down a tuple to represent triplets as <event, object, relation>, where event is a reason that triggers mental disturbance and object is the aspect of mental well-being thus affected through a given situational relation; a small sketch is given below. Such cause-and-effect relationships induce discourse relations to examine the reliability, and hence trustworthiness, of decision making by AI models.

Discourse Relations: There has been a recent surge in the use of discourse analysis, and its potential is demonstrated in a recent survey [36]. Discourse analysis determines the connectivity among different text segments to map cause-and-effect relationships. Son et al. [19] conducted a recent experiment with Facebook data to extract causal explanations, which yields research insights for exploring discourse relations in the field of mental health analysis. More recent studies indicate that discourse relations support pragmatic inference worthy of future investigation in this domain as well [37]. Son et al. [19] thus introduce ground-breaking research using discourse relations to detect mental health from Facebook data: the authors propose an approach to cause detection and causal inference and show promising results. However, their dataset is publicly unavailable, which highlights the limitations of dataset availability and the complexity of the problem.
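The <event, object, relation> tuples mentioned above can be represented and queried with a few lines of Python; the sketch below uses illustrative triples of our own, not entries from an existing resource:

```python
from typing import NamedTuple, List

class Triple(NamedTuple):
    event: str      # reason that triggers mental disturbance
    object: str     # aspect of mental well-being affected
    relation: str   # situational relation linking them

KG: List[Triple] = [
    Triple("job loss", "vocational aspect", "triggers"),
    Triple("death of a parent", "emotional aspect", "triggers"),
    Triple("isolation", "social aspect", "aggravates"),
]

def effects_of(event: str) -> List[str]:
    """Return the well-being aspects linked to a given event."""
    return [t.object for t in KG if t.event == event]

print(effects_of("isolation"))  # ['social aspect']
```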
Thus, discourse parsing and KG-based methods extract information from a given social media post, but without regard to its complexity. The longer the post, the more potential there is for introducing inconsistencies, further complicating the ability to understand user-generated language. A potential solution for complex self-reported text is sentence simplification, i.e., rephrasing the sentence in a simplified form [38]. Many existing simplification approaches rephrase the text without considering semantic information. However, to keep the essence of the cause-and-effect relationship, we argue that semantic dependency is required for this task. Semantic Dependency Information guided Sentence Simplification (SISS) is a neural sentence simplification system of this kind [39].
III. PERCEPTION MINING
Clinical psychologists use their judgements to glean the psychological perception of a person via in-person offline sessions. Human judgements are more than common sense and regular language understanding. As a result, an in-depth analysis of self-reported social media posts is required to simulate human judgements about the psychological perspective of a user.
In social media platforms, a historical timeline of a user's posts reflects an overall attitude towards life. This attitude evolves from the user's perceptions. A time-varying study, referred to as longitudinal, is used for identifying behavioral patterns to determine the extent to which a user is socially affected. Longitudinal studies enable the exploration of solutions to important research questions such as quantifying the social effect on a user, detecting suicidal ideation over a period of time, inferring changing patterns of mental health, and early risk prediction. Perception mining supports solutions to these research questions. For this task, a given social media post is analyzed using the advanced stages of NLP: pragmatics and discourse.
Working Instance: Consider the timeline for a user as shown in Figure 4. In the given user's timeline, the purple and blue colored phrases indicate the cause and intent of the user, respectively. Our senior clinical psychologist suggests that the user's perceptions are FREEDOM, ATTRACTION, and ASSET, as evident from organizing a technical fest, refused proposal/unable to concentrate, and no job/lost my dad, respectively. From this, we see that perceptions uncover users' beliefs and morals, which often underlie their intent as revealed through cross-sectional evaluation of causal analysis. Thus, we posit the need for perception mining to develop real-time explainable AI models such as conversational AI agents or AI chatbots for automatically handling mental health disorders.
A. Psychological Theories
A person perceives through the five senses, and sometimes through common sense as well, to behave in a certain way. Self-perception theory infers attitude and behavior via a user's verbal and non-verbal actions [40]. We use NLP as a lens and consider written verbal communication. Structural balance theory is a basic theory of cognitive consistency in social networks and examines the consistency between social behavior and a user's attitude [41]. There are many controversial topics, such as legal abortion, live-in relationships, early/late marriages, and joint/nuclear families, on which people have different thoughts, beliefs, and morals. An interesting theory on moral behavior sheds light on what people think about their identity [42]. With this background, we glean deep nuances and theoretical underpinnings of users' perceptions to understand them through AI models.
Taylor and Brown [43] suggest that the social world and cognitive-processing mechanisms impose filters on incoming information and impact the psychological perspective of the user's well-being. A well-established study shows that a person's perception is a matter of pragmatics, depending largely on interpersonal relationships and their impact on quality of life [44]. Correspondingly, a recent surge of pragmatics-inspired research on emotions has paved the way for new solutions to the problem of perception mining [45], [46].
B. Investigating Personality in Text
Although perception mining is closely related to psychological theories of personality, there is a significant difference between perception and personality detection. Personality is a set of qualities/characteristics which differentiates two persons and explains how they behave in society, but perception is a way of organizing, identifying, and interpreting sensory information.5 Because perception directly affects thoughts, actions, and behavior, it is helpful for recognizing situations and patterns. A user has a change in perception due to perceptual aberrations, for instance, problems in perceiving cognitive information due to neurological disorders that affect mental states.
We further point to recently introduced datasets and path-breaking models for examining the perception of the author through language in social media. As evident from past studies, moralization in social networks provides useful insights into social phenomena such as protest dynamics, message dissemination in networks, and social distancing [47]. Recent studies [48] have investigated 10 categories of moral sentiment (care/harm, purity/degradation, etc.) in order to correlate stances in social media posts with both online and offline phenomena, including protest dynamics and social distancing. Furthermore, perception mining supports extensive studies of discordant knowing [49] and the exploration of social perceptions of health and diseases using social media data [50].
C. Pragmatics for Perception Mining
Pragmatics deals with real-time situations sensibly and realistically, in a way that is based on practicality rather than theoretical considerations. State-of-the-art NLP models acquire their knowledge of syntax, semantics, and pragmatics from large amounts of text, on the order of billions of words, and store this knowledge in layers of artificial neural networks, thereby addressing multiple long-standing problems in psychiatry [51]. Existing well-equipped studies in the pragmatic analysis of mental healthcare include empathetic conversation modeling, suggesting real-time applications of online mental health support [52]-[55], and the infusion of commonsense knowledge [56].
IV. DISCUSSION
The analysis above supports the critical need for automated analysis at a level that helps mental health experts understand the reasons and causes for mental-health states as expressed in social media posts. We posit that this is precisely where interpretable AI models are necessary for seeing beyond the simple assignment of text snippets to mental-health categories, i.e., explainability.
A. Explainability
Our perspective on Level 1 studies encourages the NLP research community to find explanations behind the reflection of neuropsychiatric behavior in personal writings. Major challenges in advancing explainable AI for modeling a user's behavior are (i) the availability of limited datasets, (ii) quantitative and qualitative performance evaluation measures for the user's perspective, and (iii) investigating discourse-specific explanations from long texts. We now depict a proposed representation as output for explainability, with two examples for further illustration:
Text 1: ...no point of living alone, my mother has no time for me!
Text 2: Feeling low, she refused my proposal. Unable to concentrate on work.
In Text 1, the mentions 'no point of living' and 'mother has no time' enable an inference that feeling neglected is a suicide risk indicator, and a perception of alienation may lead to suicidal tendencies. The corresponding explainable representations are:

• causal relationship(neglect, suicide risk)
• perception mining(alienation, suicide risk)

In Text 2, the mentions 'feeling low' and 'concentration problem' enable an inference that the author is depressed due to rejection, and a perception of attraction leading to depression. The corresponding explainable representations are:

• causal relationship(rejection, depressed)
• perception mining(attraction, depressed)
We posit that cause detection must first be applied to determine whether there exists any cause, with binary output [0: does not exist, 1: does exist] for the author's intent expressed in bold text. We further posit that causal inference must then be applied to extract the cause as an explanation in the author's phrases shown in italicized text. A final step categorizes the text into appropriate causes. We work with an in-depth analysis of users' perception to support causal inference and categorization. Beyond these must-haves, we leave as a good-to-have aspect the task of finding correlations/patterns between causal analysis and perception mining.
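The predicate-style outputs above can be serialized as structured objects for downstream evaluation; below is a minimal Python sketch with field names of our own choosing:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Explanation:
    kind: str       # "causal_relationship" or "perception_mining"
    trigger: str    # extracted cause or inferred perception
    intent: str     # inferred mental state

text1_explanations: List[Explanation] = [
    Explanation("causal_relationship", "neglect", "suicide risk"),
    Explanation("perception_mining", "alienation", "suicide risk"),
]
text2_explanations: List[Explanation] = [
    Explanation("causal_relationship", "rejection", "depressed"),
    Explanation("perception_mining", "attraction", "depressed"),
]
print(text1_explanations[0])
```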
B. Available Resources and Future Scope
We list initial tasks/entities as a resourceful compilation of references for causal analysis and perception mining in Table I. We observe (i) publicly available datasets such as CEASE,6 CAMS,7 RHMD8 and empathetic conversations,9 and (ii) datasets available on request, such as MotiVAte [52]. Datasets curated in the past can be expanded with additional annotations, including annotations for perception mining such as morals, values and beliefs. On the other hand, we come across three different problems formulated for causal analysis in the past, as explained in Section 2.1. We suggest problem formulations and data annotations for extending existing datasets to place causal analysis on top of perception mining and thus reduce the gap between the two.
Enriched with elements of (i) commonsense knowledge, (ii) domain-specific knowledge and (iii) other semantic enhancements for developing context-aware AI models for identifying, categorizing and predicting mental disorders, we move towards real-time applications such as developing conversational AI models through empathetic and personality analysis. We witness this low-level analysis through empathetic response generation [52], moral foundations [48], semantic health mentions [59], personality analysis [53], [54], human beliefs [53], [55] and cause-and-effect relationships in a given text [19], [57], [61], [62]. However, this high-level analysis misses key components needed to develop responsible AI models, such as explainability, fairness, transparency, and accountability, required to deploy real-time applications in mental healthcare.
V. CONCLUSION
We posit causal analysis and perception mining for MHA on social media through the lens of NLP. The concept of causal analysis is described in three different stages: cause detection, extracting inference, and cause categorization. We examine existing textual features and introduce the need to exploit discourse relations and Knowledge Graphs (KG) for causal analysis. Perception mining is an explainable feature for both AI models and causal analysis. The contribution of this work derives from the potential for tackling different use cases at a deeper, interpretable level than that of most existing approaches while addressing the ethical considerations required for developing real-time systems. We endeavor to disseminate this position widely in the research community and urge researchers to develop richer, explainable models for inferring mental illness on social media.
TABLE I: A LIST OF INITIAL TASKS/ENTITIES AS A RESOURCEFUL COMPILATION OF REFERENCES FOR CAUSAL ANALYSIS AND PERCEPTION MINING OF EMOTIONS, SENTIMENTS AND, THUS, MENTAL HEALTH. HERE, Out MEANS "OUTCOME", CA REFERS TO "CAUSAL ANALYSIS", PM REFERS TO "PERCEPTION MINING".

Out. | Dataset  | Task                       | Description
CA   | CAMS     | Causal Categorization [57] | Handling unstructured long texts to find the reason behind intent.
CA   | CAMS     | Explainable NLP [58]       | Explainable causal categorization of mental health.
PM   | Twitter  | Moral Foundation [48]      | Moral sentiment classification from Twitter data.
CA   | Facebook | Causal Explanation [19]    | Causal explanation identification and extraction on social media.
PM   | RHMD     | Classification [59]        | Perception-based health mention classification in Reddit posts.
PM   | News     | Empathy analysis [53]      | Personality and belief driven empathetic conversation modeling.
PM   | Twitter  | Personality analysis [60]  | Language-based personality assessment of regional users.
PM   | Twitter  | Beliefs [55]               | Modeling latent dimensions of human beliefs.
CA   | CEASE    | Causal Recognition [61]    | Cause identification and extraction for emotions in suicide notes.
CA   | Docs     | ECPE [62]                  | Emotion-cause pair extraction (ECPE) from text documents.
PM   | CEASE    | Personality analysis [54]  | Personality subtyping from suicide notes.
PM   | MotiVAte | Dialogue system [52]       | Empathetic response generation in online mental health support.
PM   | Curated  | Classify perception [63]   | Interpersonal conflict types for classifying perception.
ETHICAL CONSIDERATIONS
Although many anticipated research benefits are associated with our position above, the ethical implications of using NLP on social media text reveal a wide range of issues and concerns [64]. Conway et al. [65] introduce a taxonomy of ethical principles for using Twitter in public health research; these principles apply to many social media platforms. In this section, we briefly highlight ethical considerations for this line of research by examining the different stakeholders involved and focusing on some important ethical principles, including Privacy, Responsibility, Transparency and Fairness.
• Privacy: The research community experiences ethical challenges in ensuring data privacy on social media [65], [66]. We adopt the guidelines of Benton et al. [17], which extend ethical protocols to guide NLP research from a healthcare perspective by framing privacy concerns for using social media data. Specifically, it becomes a privacy risk when personal attributes such as the identity of a person are revealed using publicly available data. For example, protecting users' (farmers'/veterans') data privacy is essential, as it is connected to their autonomy, personal identity, and well-being [67].
• Responsibility: Responsibility implies honesty and accountability in the application of CIT in mental health. For example, it is the responsibility of healthcare professionals to ensure that CIT-based mental health applications provide benefits to users/patients, and it is the researcher's responsibility to design AI models that ensure "traceability" of decision-making processes. According to WHO10 guidance on Ethics and Governance of Artificial Intelligence for Health, reliance on AI technologies in clinical care requires collective responsibility, accountability and liability among numerous stakeholders. Thus, mindful practice has immense scope for trustworthy and efficient exposition in mental health and improves clinical outcomes using such applications [64], [68], [69].
• Transparency: The regulations on data transparency provided by a guidance note from the United Nations Development Group address data collection challenges with due diligence.11 In this case, policymakers are concerned with developing a transparent data collection process that ensures the confidentiality of users' data. NLP researchers are responsible for transparency about computational research with sensitive data accessed during model design and deployment.
• Fairness: Researchers are responsible for ensuring that the collected data are unbiased, balanced, and sufficient. They are also accountable for better outcomes of NLP research for values like justice and equity. The development of fair AI technologies in mental healthcare supports unbiased clinical decision-making. Moreover, interpretation [70], [71] and explanation [72], [73] are possible means for detecting bias so that it may be addressed. Furthermore, healthcare practitioners and researchers must collectively ensure effective evaluation mechanisms for AI technologies in mental health in support of trustworthy and fair decision-making.
In the future, we encourage the practical deployment of explainable and responsible AI models that adhere to these ethical considerations.
LIMITATIONS
The scope of our work is limited to an abstract study and a theoretical perspective on causal analysis and perception mining. We acknowledge the absence of implementation/empirical studies in this position paper and plan them for future work. Although there are no direct conclusions on the responsibility of AI from this perspective, we give cues about explainable AI in this work [74] and plan to carry out integrated studies with discourse in the near future. We limit our investigation to mining language in social media and avoid extending it to an in-depth study of clinical symptoms and diagnoses.
Fig. 1. To bridge the gap between computational intelligence techniques and clinical psychology, we pose causal analysis and perception mining by emphasizing discourse and pragmatics for social media analysis using NLP as a lens.
Fig. 3. Working instance for causal analysis of a user's intent in three steps: Cause Detection (CD), Causal Inference (CI) and Cause Categorization (CC). We give five categories as an example of cause categorization.
Fig. 4. A user's historical timeline as an instance for perception mining. Overall attitude can be examined by tracking the user's historical timeline. The user's perceptions of FREEDOM, ATTRACTION and ASSET are evident from organizing a technical fest, a refused proposal/inability to concentrate, and no job/losing his dad, respectively.
[Fig. 3 content: example posts, e.g., T2: "I am done with my life"; T3: "I want to die because I cannot deal with this breakup"; T4: "I have no job, want to end my life". CD output: T1: 0, T2: 0, T3: 1, T4: 1. CI output: T3: "cannot deal with breakup", T4: "no job". CC output: T3: 4, T4: 2. Categories: 0: No reason, 1: Bias or Abuse, 2: Jobs/Careers, 3: Medication, 4: Relationships, 5: Alienation.]
[Fig. 4 content: a user's timeline from 03/04/2017 ("College life is really fun...", "We are organizing a technical fest...") through 14 May 2019 ("Feeling low, she refused my proposal."), 26 May 2019 ("Unable to concentrate on exams, nothing is good."), 30/05/2020 ("Finally graduated!!! but no job :("), 27/07/2020 ("Lost my dad :( Worst day of my life."), to 29/07/2020 ("free me from all this or I will do something myself").]
1 https://www.who.int/news/item/02-03-2022-covid-19-pandemic-triggers-25-increase-in-prevalence-of-anxiety-and-depression-worldwide
2 https://www.theguardian.com/society/2021/aug/29/strain-on-mentalhealth-care-leaves-8m-people-without-help-say-nhs-leaders
3 Explainability refers to the ability to determine reasons behind an algorithm's output through generation of an explanation for a particular decision, e.g., classification of a social media post as an indicator of depression.
https://wikidiff.com/perception/personality
6 https://www.iitp.ac.in/~ai-nlp-ml/resources.html
7 https://github.com/drmuskangarg/CAMS
8 https://github.com/usmaann/RHMD-Health-Mention-Dataset
9 https://github.com/wwbp/empathic_reactions
10 https://www.who.int/publications/i/item/9789240029200
11 https://unsdg.un.org/sites/default/files/UNDG_BigData_final_web.pdf
REFERENCES
[1] G. Coppersmith, "Digital life data in the clinical whitespace," Current Directions in Psychological Science, vol. 31, no. 1, pp. 34-40, 2022.
[2] S. M. Jourard, "Healthy personality and self-disclosure," Mental Hygiene, New York, 1959.
[3] C. Peterson, M. E. Seligman, and G. E. Vaillant, "Pessimistic explanatory style is a risk factor for physical illness: a thirty-five-year longitudinal study," Journal of Personality and Social Psychology, vol. 55, no. 1, p. 23, 1988.
[4] M. Rezapour and L. Hansen, "A machine learning analysis of COVID-19 mental health data," Scientific Reports, vol. 12, no. 1, pp. 1-16, 2022.
[5] T. Zhang, A. M. Schoene, S. Ji, and S. Ananiadou, "Natural language processing applied to mental illness detection: a narrative review," NPJ Digital Medicine, vol. 5, no. 1, pp. 1-13, 2022.
[6] S. Graham, C. Depp, E. E. Lee, C. Nebeker, X. Tu, H.-C. Kim, and D. V. Jeste, "Artificial intelligence for mental health and mental illnesses: an overview," Current Psychiatry Reports, vol. 21, no. 11, pp. 1-18, 2019.
[7] J. C. Eichstaedt, R. J. Smith, R. M. Merchant, L. H. Ungar, P. Crutchley, D. Preoţiuc-Pietro, D. A. Asch, and H. A. Schwartz, "Facebook language predicts depression in medical records," Proceedings of the National Academy of Sciences, vol. 115, no. 44, pp. 11203-11208, 2018.
[8] R. A. Bernert, A. M. Hilberg, R. Melia, J. P. Kim, N. H. Shah, and F. Abnousi, "Artificial intelligence and suicide prevention: a systematic review of machine learning investigations," International Journal of Environmental Research and Public Health, vol. 17, no. 16, p. 5929, 2020.
[9] J. Kim, D. Lee, and E. Park, "Machine learning for mental health in social media: Bibliometric study," Journal of Medical Internet Research, vol. 23, no. 3, p. e24870, 2021.
[10] S. D'Alfonso, "AI in mental health," Current Opinion in Psychology, vol. 36, pp. 112-117, 2020.
[11] R. A. Calvo, D. N. Milne, M. S. Hussain, and H. Christensen, "Natural language processing in mental health applications using non-clinical texts," Natural Language Engineering, vol. 23, no. 5, pp. 649-685, 2017.
[12] A. H. Yazdavar, M. S. Mahdavinejad, G. Bajaj, W. Romine, A. Sheth, A. H. Monadjemi, K. Thirunarayan, J. M. Meddar, A. Myers, J. Pathak et al., "Multimodal mental health analysis in social media," PLoS ONE, vol. 15, no. 4, p. e0226248, 2020.
[13] W. F. Heckler, J. V. de Carvalho, and J. L. V. Barbosa, "Machine learning for suicidal ideation identification: A systematic literature review," Computers in Human Behavior, p. 107095, 2021.
[14] R. Sawhney, H. Joshi, L. Flek, and R. Shah, "PHASE: Learning emotional phase-aware representations for suicide ideation detection on social media," in Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, 2021, pp. 2415-2428.
[15] S. Ji, T. Zhang, L. Ansari, J. Fu, P. Tiwari, and E. Cambria, "MentalBERT: Publicly available pretrained language models for mental healthcare," arXiv preprint arXiv:2110.15621, 2021.
[16] K. Harrigian, C. Aguirre, and M. Dredze, "On the state of social media data for mental health research," NAACL HLT 2021, p. 15, 2021.
[17] A. Benton, G. Coppersmith, and M. Dredze, "Ethical research protocols for social media health research," in Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, 2017, pp. 94-102.
[18] M. Conway and D. O'Connor, "Social media, big data, and mental health: current advances and ethical implications," Current Opinion in Psychology, vol. 9, pp. 77-82, 2016.
[19] Y. Son, N. Bayas, and H. A. Schwartz, "Causal explanation analysis on social media," arXiv preprint arXiv:1809.01202, 2018.
[20] N. Chambers and D. Jurafsky, "Unsupervised learning of narrative event chains," in Proceedings of ACL-08: HLT, 2008, pp. 789-797.
[21] N. Weber, R. Rudinger, and B. Van Durme, "Causal inference of script knowledge," in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020, pp. 7583-7596.
[22] C. Khoo, S. Chan, and Y. Niu, "The many facets of the cause-effect relation," in The Semantics of Relationships. Springer, 2002, pp. 51-70.
[23] J. Li and L. Xiao, "Neural-based RST parsing and analysis in persuasive discourse," in Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021), 2021, pp. 274-283.
[24] M. L. Radell, E. G. Abo Hamza, W. H. Daghustani, A. Perveen, and A. A. Moustafa, "The impact of different types of abuse on depression," Depression Research and Treatment, vol. 2021, 2021.
[25] B. Mandal, P. Ayyagari, and W. T. Gallo, "Job loss and depression: The role of subjective expectations," Social Science & Medicine, vol. 72, no. 4, pp. 576-583, 2011.
[26] H. R. Smith, "Depression in cancer patients: Pathogenesis, implications and treatment," Oncology Letters, vol. 9, no. 4, pp. 1509-1514, 2015.
[27] B. X. Tran, R. Ho, C. S. Ho, C. A. Latkin, H. T. Phan, G. H. Ha, G. T. Vu, J. Ying, and M. W. Zhang, "Depression among patients with HIV/AIDS: research development and effective interventions (GAP-RESEARCH)," International Journal of Environmental Research and Public Health, vol. 16, no. 10, p. 1772, 2019.
[28] S. R. Beach and D. J. Jones, "Marital and family therapy for depression in adults," 2002.
[29] "Diagnostic and statistical manual of mental disorders," 5th ed., Am Psychiatric Assoc, vol. 21, 2013.
[30] G. Gkotsis, A. Oellrich, T. Hubbard, R. Dobson, M. Liakata, S. Velupillai, and R. Dutta, "The language of mental health problems in social media," in Proceedings of the Third Workshop on Computational Linguistics and Clinical Psychology, 2016, pp. 63-73.
[31] B. O'Dea, M. E. Larsen, P. J. Batterham, A. L. Calear, and H. Christensen, "A linguistic analysis of suicide-related Twitter posts," Crisis: The Journal of Crisis Intervention and Suicide Prevention, vol. 38, no. 5, p. 319, 2017.
[32] L. Cao, H. Zhang, L. Feng, Z. Wei, X. Wang, N. Li, and X. He, "Latent suicide risk detection on microblog via suicide-oriented word embeddings and layered attention," in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019, pp. 1718-1728.
[33] T. Lin, Y. Wang, X. Liu, and X. Qiu, "A survey of transformers," arXiv preprint arXiv:2106.04554, 2021.
[34] A. Hogan, E. Blomqvist, M. Cochez, C. d'Amato, G. de Melo, C. Gutierrez, S. Kirrane, J. E. L. Gayo, R. Navigli, S. Neumaier et al., "Knowledge graphs," Synthesis Lectures on Data, Semantics, and Knowledge, vol. 12, no. 2, pp. 1-257, 2021.
[35] L. Cao, H. Zhang, and L. Feng, "Building and using personal knowledge graph to improve suicidal ideation detection on social media," IEEE Transactions on Multimedia, 2020.
[36] B. Drury, H. G. Oliveira, and A. de Andrade Lopes, "A survey of the extraction and applications of causal relations," Natural Language Engineering, pp. 1-40, 2021.
[37] Y. Son and H. A. Schwartz, "Discourse relation embeddings: Representing the relations between discourse segments in social media," arXiv preprint arXiv:2105.01306, 2021.
[38] M. Schwarzer, T. Tanprasert, and D. Kauchak, "Improving human text simplification with sentence fusion," in Proceedings of the Fifteenth Workshop on Graph-Based Methods for Natural Language Processing (TextGraphs-15), 2021, pp. 106-114.
[39] Z. Lin and X. Wan, "Neural sentence simplification with semantic dependency information," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 15, 2021, pp. 13371-13379.
[40] D. J. Bem, "Self-perception theory," in Advances in Experimental Social Psychology, vol. 6. Elsevier, 1972, pp. 1-62.
[41] N. P. Hummon and P. Doreian, "Some dynamics of social balance processes: bringing Heider back into balance theory," Social Networks, vol. 25, no. 1, pp. 17-49, 2003.
[42] R. Bénabou and J. Tirole, "Identity, morals, and taboos: Beliefs as assets," The Quarterly Journal of Economics, vol. 126, no. 2, pp. 805-855, 2011.
[43] S. E. Taylor and J. D. Brown, "Illusion and well-being: a social psychological perspective on mental health," Psychological Bulletin, vol. 103, no. 2, p. 193, 1988.
[44] W. B. Swann, "Quest for accuracy in person perception: A matter of pragmatics," Psychological Review, vol. 91, no. 4, p. 457, 1984.
[45] T. Wharton and L. de Saussure, "Pragmatics and emotion," 2022.
[46] Y. Zhao, D. Liu, C. Wan, X. Liu, X. Qiu, and J. Nie, "Find supports for the post about mental issues: More than semantic matching," Transactions on Asian and Low-Resource Language Information Processing, 2022.
[47] M. Mooijman, J. Hoover, Y. Lin, H. Ji, and M. Dehghani, "Moralization in social networks and the emergence of violence during protests," Nature Human Behaviour, vol. 2, no. 6, pp. 389-396, 2018.
[48] J. Hoover, G. Portillo-Wightman, L. Yeh, S. Havaldar, A. M. Davani, Y. Lin, B. Kennedy, M. Atari, Z. Kamel, M. Mendlen et al., "Moral Foundations Twitter Corpus: A collection of 35k tweets annotated for moral sentiment," Social Psychological and Personality Science, vol. 11, no. 8, pp. 1057-1071, 2020.
[49] A. Gollwitzer, I. Olcaysoy Okten, A. O. Pizarro, and G. Oettingen, "Discordant knowing: A social cognitive structure underlying fanaticism," Journal of Experimental Psychology: General, 2022.
[50] J. Fu, S. Li, H. M. Yuan, Z. Li, Z. Gan, Y. Chen, K. Liu, J. Zhao, and S. Liu, "CASIA@SMM4H'22: A uniform health information mining system for multilingual social media texts," in Proceedings of The Seventh Workshop on Social Media Mining for Health Applications, Workshop & Shared Task, 2022, pp. 143-147.
[51] N. Rezaii, P. Wolff, and B. H. Price, "Natural language processing in psychiatry: the promises and perils of a transformative approach," The British Journal of Psychiatry, vol. 220, no. 5, pp. 251-253, 2022.
[52] T. Saha, V. Gakhreja, A. S. Das, S. Chakraborty, and S. Saha, "Towards motivational and empathetic response generation in online mental health support," in Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2022, pp. 2650-2656.
[53] D. Omitaomu, S. Tafreshi, T. Liu, S. Buechel, C. Callison-Burch, J. Eichstaedt, L. Ungar, and J. Sedoc, "Empathic conversations: A multi-level dataset of contextualized conversations," arXiv preprint arXiv:2205.12698, 2022.
[54] S. Ghosh, D. K. Maurya, A. Ekbal, and P. Bhattacharyya, "EM-PERSONA: Emotion-assisted deep neural framework for personality subtyping from suicide notes," in Proceedings of the 29th International Conference on Computational Linguistics, 2022, pp. 1098-1105.
[55] H. Vu, S. Giorgi, J. D. Clifton, N. Balasubramanian, and H. A. Schwartz, "Modeling latent dimensions of human beliefs," in Proceedings of the International AAAI Conference on Web and Social Media, vol. 16, 2022, pp. 1064-1074.
[56] S. Ghosh, G. V. Singh, A. Ekbal, and P. Bhattacharyya, "COMMA-DEER: Common-sense aware multimodal multitask approach for detection of emotion and emotional reasoning in conversations," in Proceedings of the 29th International Conference on Computational Linguistics, 2022, pp. 6978-6990.
[57] M. Garg, C. Saxena, V. Krishnan, R. Joshi, S. Saha, V. Mago, and B. J. Dorr, "CAMS: An annotated corpus for causal analysis of mental health issues in social media posts," arXiv preprint arXiv:2207.04674, 2022.
[58] C. Saxena, M. Garg, and G. Ansari, "Explainable causal analysis of mental health on social media data," in Proceedings of ICONIP, 2022.
[59] U. Naseem, M. Khushi, J. Kim, and A. G. Dunn, "RHMD: A real-world dataset for health mention classification on Reddit," IEEE Transactions on Computational Social Systems, 2022.
[60] S. Giorgi, K. L. Nguyen, J. C. Eichstaedt, M. L. Kern, D. B. Yaden, M. Kosinski, M. E. Seligman, L. H. Ungar, H. A. Schwartz, and G. Park, "Regional personality assessment through social media language," Journal of Personality, vol. 90, no. 3, pp. 405-425, 2022.
[61] S. Ghosh, S. Roy, A. Ekbal, and P. Bhattacharyya, "CARES: Cause recognition for emotion in suicide notes," in European Conference on Information Retrieval. Springer, 2022, pp. 128-136.
[62] H. Chen, X. Yang, and C. Li, "Learning a general clause-to-clause relationships for enhancing emotion-cause pair extraction," arXiv preprint arXiv:2208.13549, 2022.
[63] C. Welch, J. Plepi, B. Neuendorf, and L. Flek, "Understanding interpersonal conflict types and their impact on perception classification," arXiv preprint arXiv:2208.08758, 2022.
[64] S. Laacke, R. Mueller, G. Schomerus, and S. Salloch, "Artificial intelligence, social media and depression. A new concept of health-related digital autonomy," The American Journal of Bioethics, vol. 21, no. 7, pp. 4-20, 2021.
[65] M. Conway et al., "Ethical issues in using Twitter for public health surveillance and research: developing a taxonomy of ethical concepts from the research literature," Journal of Medical Internet Research, vol. 16, no. 12, p. e3617, 2014.
[66] J. L. Bender, A. B. Cyr, L. Arbuckle, and L. E. Ferris, "Ethics and privacy implications of using the internet and social media to recruit participants for health research: A privacy-by-design framework for online recruitment," Journal of Medical Internet Research, vol. 19, no. 4, p. e7029, 2017.
[67] S. Reddy, S. Allan, S. Coghlan, and P. Cooper, "A governance model for the application of AI in health care," Journal of the American Medical Informatics Association, vol. 27, no. 3, pp. 491-497, 2020.
[68] S. Chancellor, E. P. Baumer, and M. De Choudhury, "Who is the 'human' in human-centered machine learning: The case of predicting mental health from social media," Proceedings of the ACM on Human-Computer Interaction, vol. 3, no. CSCW, pp. 1-32, 2019.
[69] A. Ismail and N. Kumar, "AI in global health: the view from the front lines," in Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 2021, pp. 1-21.
[70] H. Song, J. You, J.-W. Chung, and J. C. Park, "Feature attention network: Interpretable depression detection from social media," in PACLIC, 2018.
[71] J. Aguilera, D. I. H. Farías, M. Montes-y Gómez, and L. C. González, "Detecting traces of self-harm in social media: A simple and interpretable approach," in Mexican International Conference on Artificial Intelligence. Springer, 2021, pp. 196-207.
[72] I. P. Jha, R. Awasthi, A. Kumar, V. Kumar, and T. Sethi, "Explainable-machine-learning to discover drivers and to predict mental illness during COVID-19," 2020.
[73] A. S. Uban, B. Chulvi, and P. Rosso, "On the explainability of automatic predictions of mental disorders from social media data," in International Conference on Applications of Natural Language to Information Systems. Springer, 2021, pp. 301-314.
[74] S. Beckers, "Causal explanations and XAI," arXiv preprint arXiv:2201.13169, 2022.
Multimodal Event Graphs: Towards Event Centric Understanding of Multimodal World
Hammad A Ayyubi
Columbia University
Christopher Thomas
Columbia University
Lovish Chum
Columbia University
Rahul Lokesh
Columbia University
Yulei Niu
Columbia University
Xudong Lin
Columbia University
Long Chen
Columbia University
Jaywon Koo
Columbia University
Sounak Ray
Columbia University
Shih-Fu Chang
Columbia University
Multimodal Event Graphs: Towards Event Centric Understanding of Multimodal World
Abstract
Understanding how events described or shown in multimedia content relate to one another is a critical component of developing robust artificially intelligent systems which can reason about real-world media. While much research has been devoted to event understanding in the text, image, and video domains, none has explored the complex relations that events experience across domains. For example, a news article may describe a 'protest' event while a video shows an 'arrest' event. Recognizing that the visual 'arrest' event is a subevent of the broader 'protest' event is a challenging, yet important problem that prior work has not explored. In this paper, we propose the novel task of MultiModal Event-Event Relations (M²E²R) to recognize such cross-modal event relations. We contribute a large-scale dataset consisting of 100K video-news article pairs, as well as a benchmark of densely annotated data. We also propose a weakly supervised multimodal method which integrates commonsense knowledge from an external knowledge base (KB) to predict rich multimodal event hierarchies. Experiments show that our model outperforms a number of competitive baselines on our proposed benchmark. We also perform a detailed analysis of our model's performance and suggest directions for future research.
Introduction
Human life is eventful. We use events to describe what is happening (e.g., protest, flood, etc.), to explain their causes (e.g., police brutality led to protests), to depict our understanding of the world (e.g., during the protest someone burned property), and so on. Understanding the nature of events and their relations is thus a crucial step towards the goal of understanding our world in a nuanced and human-like way.
One way to represent such an understanding of events and their relations is through a graph structure (see Figure 1), where nodes represent events (both textual and visual) and the edges represent event-event relations. A structured graphical representation offers many advantages. For example, it can be used to summarize text [10], perform question answering [60] and commonsense reasoning [38], predict the next event in a sequence [7], and serve as an input to aid other downstream reasoning tasks [12].
Such an event-based understanding of the world and its representation in a structured graphical format has been widely explored in the natural language processing (NLP) community [55,56,61]. The task is typically formulated as two distinct steps: 1) event detection [48] and 2) event-event relation prediction [55]. Recently, a similar line of research has appeared in the vision community, tackling visual event detection [49] and visual event-event relation prediction [44]. Unfortunately, both lines of work remain disjoint: no existing method tackles cross-modal event relation recognition. However, our world is inherently multimodal, and unimodal representations fail to capture the rich cross-modal relationships which exist in this plentiful source of data. The proposed task, Multimodal Event-Event Relation Prediction (M²E²R), is an effort towards bridging this gap.

Figure 1: Given a text article, existing methods output an event graph, consisting of text events as nodes and event relations as edges. For the proposed multimodal task, M²E²R, given a text article and a video, the expected output is a multimodal event graph, consisting of both text events and video events as nodes and their relations. The new multimodal edge is shown with a dashed line.
Formally, we propose the novel task of Multimodal Event-Event Hierarchical Relationship (M²E²R) prediction. Given a text, annotated with textual events, and a video, annotated with video events, the task requires detection of all multimodal 'Hierarchical' and 'Identical' relations among all text-event and video-event pairs (see Figure 1). To support research on this task, we release a large-scale dataset, M²E²R. The dataset consists of around 100.5K pairs of news articles and accompanying videos. We annotate a small subset (500 pairs) with all possible multimodal Hierarchical and Identical relations between text events and video events. This forms the test set for benchmarking and evaluation.
To overcome large-scale annotation constraints, we propose a weakly supervised method: 1) first, generating pseudo labels using existing NLP and Vision techniques; 2) then, training our proposed MERP (Multimodal Event Relations Predictor) on these pseudo labels while also leveraging commonsense knowledge from an external Knowledge Base. We evaluate our method against several baselines on our human-annotated benchmark of multimodal event relations. Our results demonstrate that our proposed approach significantly improves over a number of competitive baselines on this challenging task. We include detailed experiments ablating design choices of our model.
The major contributions of this work are:
• We introduce the Multimodal Event-Event Relationship (M²E²R) recognition task.
• We contribute a large-scale dataset of 100.5K news articles and associated videos to facilitate research on this task. We provide a large number of densely annotated samples, labeled by trained annotators, to benchmark performance on this task.
• We propose a weakly supervised method for this task which significantly outperforms competitive baselines.
Related Work
Hierarchical Event Relations in Text. Detecting hierarchical event relations (or subevent relations) is a long-standing problem in the text domain [35,16]. Early works mainly rely on heuristic phrasal patterns. For example, Badgett et al. [2] found that some characteristic phrases (e.g., "media reports" in news articles) always contain subevents with hierarchical relations. To further enrich hierarchical event relation instances, recent works [62] rely on generative language models to generate subevent knowledge among different kinds of commonsense knowledge [5,45], and then incorporate this knowledge into event ontologies. In this paper, we focus on hierarchical event relations between multimodal events.
Relation Understanding in the Vision Domain. Scene graphs [24,58,21] have been densely studied for images/videos to parse the visual scene into a graph. However, the relationships studied in scene graphs are not between two events. To the best of our knowledge, the only pioneering work that has discussed event-event relationships in the video domain is VidSitu [44]. Unfortunately, to simplify the research problem, they make several assumptions: 1) all events are manually cut into fixed 2-second intervals; 2) all event types come from a predefined event ontology. Instead, in this work, we focus on open-vocabulary event types and variable-length video events. Besides, we focus on multimodal relations, in contrast to their vision-only event relations.
Multimodal Event Understanding. Since single-modality event extraction is well studied [34,47,28,29,63,27,31,36], understanding events from multiple modalities [25,8,39,26,66,52,57] has attracted extensive research interest because different modalities usually provide complementary information for comprehensively understanding real-world complex events. Two important benchmarks [25,8] have been established for the image + text and video + text settings. Li et al. first introduced the task of jointly extracting events and labeling argument roles from both text articles and images; an unannotated image-article dataset and a manually labeled evaluation image-article dataset are collected to train and evaluate models, respectively. Chen et al. further defined the task of joint multimedia event extraction from video and text to exploit the rich dynamics of videos. However, both works focus on event detection, in contrast to the event-event relation task explored in this work.
Task
Representing our world in terms of events and their relations (event graphs) provides a powerful abstraction towards the overarching goal of understanding our world, especially from multimodal (text + visual) data. A multimodal event graph consists of textual and visual events as nodes and their relations as edges. Inducing such an event graph is composed of two steps: (1) detecting events from text and visual data; (2) predicting relationships between these events. Event detection in both modalities has been explored before [48,49], and we use established SOTA techniques to detect these events. The proposed task, M²E²R, focuses on multimodal event-event relationship prediction. While many event relation types have been explored in the NLP and Vision literature [19,61], we focus on two types of relations in this work: Hierarchical and Identical.
Events have many facets and are referred to at multiple levels of granularity in text and visual data. For example, in Figure 1, we find that during the event '$e_1$: Protest', the subevents '$v_2$: Burning of public property' and '$e_2$: Flooding' of streets occurred. Hierarchical event relationship prediction is aimed at resolving such event granularities. An event graph containing Hierarchical and Identical relationships has many downstream applications: summarization, question answering, commonsense reasoning, etc. [10,60,38,12].
Formal Task Definition. Given a text article $T$, annotated with text events $\{e_i\}_{i=1}^{m}$, and a video $V$, annotated with video events $\{v_j\}_{j=1}^{n}$, M²E²R requires the model to predict all possible Hierarchical and Identical event-event relations $\{r_k\}_{k=1}^{K}$ from a text event $e_i$ to a video event $v_j$, among all possible $m \times n$ pairs, where $r_k \in$ {'Hierarchical', 'Identical'} and $K \leq m \times n$. We will now discuss the definitions of the different components of the task and justifications for the task design choices.
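As a minimal illustration of the task's input/output structure (the event names below are made up for the example and are not from the dataset):

```python
# Detected events (inputs): m text events from article T, n video events from V.
text_events = ["e1: protest", "e2: flooding"]
video_events = ["v1: people marching", "v2: burning of public property"]

# Predicted relations (output): directed edges from text events to video events.
relations = [
    ("e1: protest", "v2: burning of public property", "Hierarchical"),
    ("e2: flooding", "v1: people marching", "Identical"),
]

# K <= m * n: at most one relation per (text event, video event) pair.
assert len(relations) <= len(text_events) * len(video_events)
```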
Text Event Definition. Text events have been defined quite thoroughly in different NLP works on information extraction [20,41,18]. As such, we closely follow the ACE Corpus's [20] definition of events. Very broadly, an event is defined as 'a change of state or the changed state of a person, thing or entity.' However, the ACE corpus has a very detailed definition addressing many linguistic nuances, which is not required for our application. Secondly, the ACE corpus is restricted to a fixed ontology of event types, whereas we annotate events in the open domain. As a result of the aforementioned points, we came up with a lightweight and modified event definition and annotation instructions (detailed in Appendix A.1).
Video Event Definition. Like text events, we do not restrict video events to a fixed ontology; rather, we deal with them in an open-domain setting. Precisely defining events in such a setting is quite challenging due to the multiple granularities at which events are depicted in videos. For example, during a 'clash' event, we can see a 'pulling out baton' event and a 'throwing a punch' event. This makes it difficult to pick salient event boundaries in video clips. Sadhu et al. [44] circumvent this ambiguity by defining temporally fixed event boundaries of 2-second duration. However, pre-defining the boundary duration is difficult and application specific. Additionally, it can often divide salient events into multiple segments. We address these issues by defining event boundaries to be shot changes, partly following Shou et al. [49]. Based on qualitative analysis and annotator feedback, this gives us a good trade-off between ease, clarity, consistency, and non-segmentation of events.
Relation Types
We define two types of event-event relations in this work: Hierarchical and Identical. These relation types are well defined in NLP [16], and we follow them to define the relations for our task as below:
Hierarchical: 'A parent event A is said to be hierarchically related to a subevent B if event B is spatio-temporally contained within event A.'
Identical: 'An event A is said to be identical to another event B if both events denote exactly the same real-world event.'
Relation direction from text to video. The relations in our multimodal event graphs are directed from text events to video events. The logic behind this design choice is that text events are often more abstract, while video events are often atomic. For example, we are likely to observe abstract events such as war and election in text, while their atomic subevents, fighting and voting, are more likely to be visible in the video.
Difference from the Video Grounding Task. One question that may be raised is: the video grounding task also localizes text (events) to video segments (events), so how is M²E²R different from the grounding task? The answer is that the video grounding task can only provide us with 'Identical' relations; it is difficult to expect grounding to extract 'Hierarchical' relations. For example, a parent event 'storm' can be hierarchically related to the subevent 'cancelled flight' [55]. However, a 'storm' looks nothing like a 'cancelled flight'. As such, it would be difficult for a video grounding model to predict these 'Hierarchical' relations.
Dataset
To support research on the proposed task, we introduce the large-scale M²E²R dataset. We collected the dataset from the news domain, as the content is rich in event mentions and their relations [16,19]. In order to mitigate data bias, we chose news sources rated 'Center' by the media rating website allsides.com, resulting in a total of 9 news media sources (Appendix B.1). We scraped YouTube for news videos from the channel IDs of these 9 sources, collecting a total of 100.5K videos.
For each video, we also collected the associated description and closed captions. We filtered out videos whose 1) duration was greater than 14 minutes, or 2) descriptions had fewer than 10 words (sketched below). This was done to prune videos which may be too computationally expensive to process or whose descriptions may be too short to contain meaningful events. We split the data two ways: 1) a 100K unannotated train split for self-supervised/weakly supervised training, and 2) a 526-pair annotated test split (249-pair validation set and 277-pair test set) for benchmarking and evaluation. We annotate only a small set because annotation for this task is extremely challenging and resource consuming: two popular NLP hierarchical event-event relation datasets [19,56] contain 100 articles each (including the train split).
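The two filtering rules amount to a simple predicate over each scraped record; in this sketch the field names (duration_s, description) are assumptions about how a record might be stored:

```python
MAX_DURATION_S = 14 * 60  # rule 1: drop videos longer than 14 minutes
MIN_DESC_WORDS = 10       # rule 2: drop videos with fewer than 10 description words

def keep(record: dict) -> bool:
    return (record["duration_s"] <= MAX_DURATION_S
            and len(record["description"].split()) >= MIN_DESC_WORDS)

records = [
    {"duration_s": 300, "description": "Protesters flood the streets after "
                                       "a court verdict is announced in the capital."},
    {"duration_s": 3600, "description": "Live stream."},
]
print([r["duration_s"] for r in records if keep(r)])  # -> [300]
```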
Train Split
The large-scale train split, containing 100K videos with a total duration of more than 4K hours and descriptions (which form our text data) totalling 28M words, allows for training self-supervised/weakly supervised models on the task. We analyse and compare our training split against 4 popular video-language datasets in Table 1. Our dataset is orders of magnitude larger than any other dataset, especially in terms of video duration and text article length (20x), allowing for rich event representations. Additionally, our dataset is the only one that includes Automatic Speech Recognition (ASR) transcripts along with article descriptions of videos. We provide additional details in Appendix B.2.
Test Split
Annotation Procedure As the first step, following the definition of video events from Section 3, we extract video events using an off-the-shelf video segmentation model, PySceneDetect. To make text event annotation easier, we provide automatically extracted text events (using [48]) to the annotators, along with the instruction to add/omit events according to the definition in Section 3. Next, we task the annotators to mark all possible relations ∈ {'Hierarchical', 'Identical'} from the annotated text events to the provided video events in a video-article pair. We provide screenshots of the annotation tool and additional details in Appendix B.3.
We train 5 annotators for this task through a series of short seminars, multiple rounds of feedback and consultation with all the annotators to improve consensus.
Inter Annotator Agreement (IAA) We measure the quality of the annotations using IAA. Inspired by Glavaš et al. [16] and Glavaš and Šnajder [17], we formulate

    IAA_j = ( Σ_{r ∈ ∪_{i=1}^{5} S_i} 1(x_{rj} ≥ 2) ) / |∪_{i=1}^{5} S_i|,

where j ∈ {'Hierarchical', 'Identical'}, S_i is the set of all relations annotated by annotator i as type j, and x_{rj} is the number of annotators who marked relation r as j. The intuition behind this formulation is to calculate the percentage of relations which have been annotated by at least 2 annotators. Using this formulation, we get IAA_Hierarchical = 47.5 and IAA_Identical = 48.9. While these IAA scores may seem low, the IAA_Hierarchical for the NLP datasets HiEve and IC is 69 and 62, respectively. This indicates that the task in itself is quite challenging, and adding the multimodal aspect makes it even more demanding, lowering the IAA score further.
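To make the formula concrete, here is a minimal sketch of the IAA computation, assuming each annotator's annotations are given as a set of (text_event, video_event, type) tuples; the representation and function name are illustrative.

```python
from itertools import chain

def iaa(annotator_sets, rel_type):
    """Percentage of relations of `rel_type` marked by at least 2 annotators.

    annotator_sets: one set per annotator, each holding
    (text_event_id, video_event_id, relation_type) tuples.
    """
    # S_i: annotator i's relations restricted to the type of interest
    typed = [{r for r in s if r[2] == rel_type} for s in annotator_sets]
    union = set(chain.from_iterable(typed))  # union of S_1..S_5
    if not union:
        return 0.0
    # x_{rj} >= 2: relation r was marked by at least two annotators
    agreed = sum(1 for r in union if sum(r in s for s in typed) >= 2)
    return 100.0 * agreed / len(union)

# Toy usage with two annotators:
a1 = {("e1", "v2", "Hierarchical"), ("e1", "v3", "Identical")}
a2 = {("e1", "v2", "Hierarchical")}
print(iaa([a1, a2], "Hierarchical"))  # 100.0
```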
Dataset Analysis We compare our dataset against two popular NLP event-event relations datasets in Table 2, as there is no existing dataset on multimodal event-event relations. M2E2R has a comparable number of 'Hierarchical' and 'Identical' relations, with the added novelty of being the first-of-its-kind multimodal (text + vision) event-event relations dataset.
Multimodal Event-Event Relations Detection
Acquiring a large-scale labelled dataset for the M2E2R task is time- and resource-intensive, not to mention the time and effort required to train multiple annotators (Section 4.2). As such, we propose a weakly supervised method (Figure 2): first, we generate pseudo labels using existing NLP and vision techniques; second, we use these pseudo labels for training. We describe the method in detail below.
Pseudo Label Generation
Event Detection in Text and Video The first step is to detect events in text and video separately. We use the same automatic methods as on the test data: Open-Domain IE [48] for text event detection and the open-source library PySceneDetect¹ for video event detection.
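For illustration, a minimal sketch of shot-based video event detection with PySceneDetect's high-level API; the file name is a placeholder and the content-based detector is an assumption, as the paper does not specify its configuration.

```python
from scenedetect import detect, ContentDetector

# Each detected shot (start, end) is treated as one video event.
scene_list = detect("news_clip.mp4", ContentDetector())
for j, (start, end) in enumerate(scene_list):
    print(f"video event v_{j}: {start.get_seconds():.2f}s - {end.get_seconds():.2f}s")
```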
Text Event-Text Event Hierarchical Relation Detection Let's assume we detected m text events, {e_i}_{i=1}^m, in an article T and n video events, {v_j}_{j=1}^n, in the accompanying video V. The next step is to detect Hierarchical relations from a text event e_u to a text event e_us using Wang et al. [56], over all possible m × m pairs. Let's say we detect {e_u → e_us}_{u=1,s=1}^{u=p,s=q} hierarchical pairs, where the parent event is e_u, the subevent is e_us, and p, q ≤ m.
Video Event Retrieval
The final step is to retrieve the video events {v^{us}_l}_{l=1}^r, r ≤ n, from video V that depict the same real-world event as the text subevent e_us. This step essentially reduces to a video retrieval task. As the CLIP [37] model has demonstrated state-of-the-art performance on multimodal retrieval tasks [30,11], we use it for this step. We discuss more details in Appendix C.1.
As a result of applying CLIP, we get all video events which are identical to the text subevent: {e_us → v^{us}_l}_{l=1}^r. Since e_u is the parent event of e_us and e_us depicts the same event as {v^{us}_l}_{l=1}^r, we can conclude by transitivity that e_u is the parent event of {v^{us}_l}_{l=1}^r. As a result, we get a total of {e_u → v^{us}_l}_{u=1,s=1,l=1}^{u=p',s=q',l=r'} Hierarchical pairs and {e_us → v^{us}_l}_{u=1,s=1,l=1}^{u=p',s=q',l=r'} Identical pairs, where p' ≤ p and q' ≤ q.
We collect additional Identical pairs by directly comparing all text events {e_i}_{i=1}^m in the article T to all video events {v_j}_{j=1}^n in the paired video V using CLIP. This gives an aggregate of {e_us → v^{us}_l}_{u=1,s=1,l=1}^{u=p',s=q',l=r'} ∪ {e_i → v_j}_{i=1,j=1}^{i=m',j=n'} Identical pairs, where m' ≤ m and n' ≤ n. In total, we collect 57,910 multimodal Hierarchical event pairs and 390,149 multimodal Identical event pairs from the 100K video-article pairs of the training set. These form our pseudo labels.
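The overall pseudo-label construction can be sketched as below; clip_match stands in for the thresholded CLIP test of Appendix C.1, and all names are illustrative.

```python
def make_pseudo_labels(m, n, hier_text_pairs, clip_match):
    """Sketch of pseudo-label generation for one article-video pair.

    m, n: number of text events and video events
    hier_text_pairs: (parent_idx, subevent_idx) text-text Hierarchical pairs
    clip_match(i, j) -> bool: text event i and video event j depict the
    same real-world event (thresholded CLIP score, Appendix C.1)
    """
    hierarchical, identical = set(), set()
    # Propagate text-text hierarchy to video subevents by transitivity.
    for u, us in hier_text_pairs:
        for j in range(n):
            if clip_match(us, j):
                identical.add((us, j))    # e_us is depicted by v_j
                hierarchical.add((u, j))  # hence parent e_u -> v_j
    # Additional Identical pairs from direct text-video matching.
    for i in range(m):
        for j in range(n):
            if clip_match(i, j):
                identical.add((i, j))
    return hierarchical, identical
```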
Training
Once we obtain the pseudo labels, we proceed to training (Figure 3) with our model, MERP (Multimodal Event-Event Relations Predictor). Given a text event e_i and a video event v_j with a label r_ij from the pseudo-label set, we use the training method and loss objective described below.
Input Representation & Feature Extraction
The text event is represented as a word e_i in a sentence, se_i = [w_1, w_2, ..., e_i, ..., w_n]. The video event v_j is essentially a video clip in the video consisting of n video events, {v_j}_{j=1}^n. It comprises a stack of frames sampled uniformly at f_s frames per second, v_j = {F^j_y}_{y=1}^Z. We use CLIP to extract text event features, f^t_i = f'_t(se_i), as well as video event features, f^v_j = (1/Z) Σ_{y=1}^Z f_i(F^j_y), where f_i is CLIP's image encoder and f'_t is a minor modification of CLIP's text encoder f_t (Appendix C.1).
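A sketch of this feature extraction with the publicly released CLIP package; for simplicity the stock text encoder is used here, whereas the paper's f'_t additionally modifies the [EOS] attention mask (Appendix C.1).

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

@torch.no_grad()
def text_event_feature(sentence):
    # f^t_i: embedding of the sentence containing the event word.
    tokens = clip.tokenize([sentence]).to(device)
    return model.encode_text(tokens)[0]

@torch.no_grad()
def video_event_feature(frame_paths):
    # f^v_j = (1/Z) * sum_y f_i(F_y): mean-pooled frame embeddings.
    frames = torch.stack([preprocess(Image.open(p)) for p in frame_paths]).to(device)
    return model.encode_image(frames).mean(dim=0)
```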
Contextualizing Video Event Features So far, we have extracted video event features independently of other events in the video. This is limiting, because a video event such as 'building destruction' could have happened because of a 'storm' event or an 'earthquake' event. We therefore use a Contextual Transformer (CT) to contextualize each event's features with respect to the other events in the video. CT is essentially a stack of multi-headed attention layers [54]. All the video event features from video V, {f^v_j}_{j=1}^n, form the input tokens to CT, and we obtain the contextualized features of video event v_j as cf^v_j = CT({f^v_j}_{j=1}^n).
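A minimal sketch of CT using PyTorch's transformer encoder; the hidden size and head count are assumptions, while the single layer, learned positional embeddings and the cap of 77 events follow Appendix C.3.

```python
import torch
import torch.nn as nn

class ContextualTransformer(nn.Module):
    """Self-attention over all video event features from one video."""
    def __init__(self, dim=512, n_heads=8, n_layers=1, max_events=77):
        super().__init__()
        self.pos = nn.Parameter(torch.zeros(max_events, dim))  # positional embeddings
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, f_v):                 # f_v: (batch, n_events, dim)
        x = f_v + self.pos[: f_v.size(1)]
        return self.encoder(x)              # contextualized cf_v, same shape

# cf_v = ContextualTransformer()(torch.randn(2, 10, 512))
```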
Commonsense Features To aid learning the relationship between open-domain text and video events, we incorporate commonsense knowledge from an external knowledge base, ConceptNet [51]. Inspired by Wang et al. [55], we extract events related by the relations 'HasSubevent', 'HasFirstSubevent' and 'HasLastSubevent' from ConceptNet as positive pairs and random events as negative pairs. We embed the event pairs using CLIP and then leverage the embeddings to train a feature extractor CS(., .), an MLP (Multi-Layer Perceptron), using a contrastive loss (Appendix C.2). Once trained, we freeze it and use it as a black-box commonsense feature extractor while training MERP: cs_ij = CS(f^t_i, cf^v_j). Although one of the events is from the visual modality while training MERP, we can still use CS unchanged, because we extract the video event embeddings with CLIP as well, and CLIP's image and text embeddings lie in the same embedding space.
Embeddings Interactions (EI) Following Wang et al. [55], we also add text event and video event feature interactions for better representation. We add two types of feature interactions: (1) subtraction of text and video event features (SF), sf_ij = f^t_i − cf^v_j, and (2) the Hadamard product of text and video event features (MF), mf_ij = f^t_i * cf^v_j.
Multi Layer Perceptron (MLP) & Loss
We concatenate the text event feature f^t_i, the contextualized video event feature cf^v_j, the commonsense feature cs_ij, and the embedding interactions sf_ij and mf_ij to form the input to a 2-layer MLP. The MLP is a 3-way classifier, outputting probabilities for e_i and v_j being classified as 'Hierarchical', 'Identical' or 'NoRel' (Not Related). The output is represented as p_ij = MLP([f^t_i; cf^v_j; cs_ij; sf_ij; mf_ij]), where p_ij ∈ R^{1×3}. We train the model using a cross-entropy loss between p_ij and the label r_ij.
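A sketch of this classifier head; the hidden width is an assumption, and the embedding interactions are computed inside the module for compactness.

```python
import torch
import torch.nn as nn

class MERPHead(nn.Module):
    """Concatenates the five feature groups and predicts
    {Hierarchical, Identical, NoRel}."""
    def __init__(self, dim=512, cs_dim=512, hidden=512):
        super().__init__()
        in_dim = 4 * dim + cs_dim  # f_t, cf_v, sf, mf and cs
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 3))

    def forward(self, f_t, cf_v, cs):
        sf = f_t - cf_v   # subtraction interaction (SF)
        mf = f_t * cf_v   # Hadamard interaction (MF)
        return self.mlp(torch.cat([f_t, cf_v, cs, sf, mf], dim=-1))

# logits = MERPHead()(f_t, cf_v, cs)
# loss = nn.CrossEntropyLoss(weight=class_weights)(logits, r_ij)
```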
Inference
During inference, we use CLIP together with MERP, as an ensemble, to prune false positives for Identical relations. The rationale is to leverage CLIP's strong multimodal feature matching ability to prune video and text event pairs falsely predicted as 'Identical' which CLIP confidently predicts to be not 'Identical'.
Implementation Details
Table 4: Ablation studies on components and features. "MERP Basic" represents the basic MERP model. "CT" denotes the contextual transformer. "CS" denotes the commonsense knowledge features. "EI" denotes the embedding interactions, including the subtraction and element-wise product of textual and visual features.

Our best model uses 1 layer of multi-headed attention in CT. We note that the majority of text event and video event pairs are not related (94.52% in the train set and 93.23% in the validation set). To address this label bias, we weight the labels in the cross-entropy loss by the inverse ratio of their counts in the train set, as done by Wang et al. [56].
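A sketch of the inverse-frequency weighting; the exact normalization is an assumption, as the paper only states that labels are weighted by the inverse ratio of their counts.

```python
import torch

def inverse_frequency_weights(labels, num_classes=3):
    """Class weights inversely proportional to label frequency, countering
    the dominant 'NoRel' class (~94% of training pairs)."""
    counts = torch.bincount(labels, minlength=num_classes).float()
    return counts.sum() / (num_classes * counts.clamp(min=1))

# labels: 0 = Hierarchical, 1 = Identical, 2 = NoRel
# criterion = torch.nn.CrossEntropyLoss(weight=inverse_frequency_weights(labels))
```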
We train our model for 15 epochs using a batch size of 1024 and a learning rate of 1e-5 on 4 NVIDIA Tesla V100 GPUs, for a total training time of around 34 hours. We provide an ablation study of our model architecture in Section 6.3 and additional details on training hyperparameters in Appendix E.
Experiments
Evaluation Metric
Following event-event relation work in NLP [19,16], we evaluate each relation type (Hierarchical and Identical) using Precision (P), Recall (R) and F1 score (more details in Appendix D.1). We also report the macro average of the F1 scores (Avg. F1) of the Hierarchical and Identical relation types.
Baselines
Prior Baseline (Prior Base.) We use a random weighted classifier which randomly predicts a relation type (from the set {'Hierarchical', 'Identical', 'NoRel'}) for any text and video event based on the prior distribution of the relation type in the annotated labels.
Text Only Baseline (Text Base.) To evaluate the contribution of visual data in generating these multimodal event graphs, we compare our model against a text-only baseline. We construct this baseline by using the ASR provided with a video as a proxy for its video events (Appendix D.2). The proxy for a video event v_j is the ASR found within the timestamps of v_j, say X_j. We extract events from X_j, {e_jz}_{z=1}^w, as discussed in Section 5.1. Next, we use the NLP model [56] to predict the relationship type r_ijz between a text event in the article, e_i, and the proxy video events {e_jz}_{z=1}^w found in video event v_j. If any predicted relationship r_ijz ∈ {'Hierarchical', 'Identical'}, we propagate the relation from (e_i, e_jz) to (e_i, v_j), since e_jz is a proxy for v_j.
Multimodal Baseline (MM Base.) We discussed a method to predict multimodal event-event relations in Section 5.1, using SOTA NLP and vision methods, in the form of pseudo labels. This is currently the best performance that NLP and vision methods can combine to give, so we compare our model against this multimodal baseline. We apply the aforementioned method to the validation and test sets to predict relations and evaluate them against the ground-truth labels (Table 3).

Results
Comparison against baselines
The comparisons between our MERP and the above-mentioned baselines on the validation and test sets are reported in Table 3. From the table, we make the following observations: 1) On the most comprehensive metric, Avg F1, MERP clearly outperforms all baselines with significant performance gains (e.g., 18.0 vs. 11.4 on the validation set). 2) For the Hierarchical relation type, our model achieves 4x the recall of the multimodal baseline (MM Base.). This is because our model can directly predict a multimodal 'Hierarchical' relationship from a text parent event to a video subevent, whereas MM Base. relies on finding the subevent in text first before retrieving the matching video subevent (Section 5.1); the absence of a subevent mention in text causes it to miss numerous multimodal event relations. Meanwhile, since the multimodal baseline produces relatively few predictions, it achieves higher precision scores. We also note the performance improvement of MM Base. over Text Base., demonstrating the criticality of visual data to this task.
Ablation Study
We conducted a set of ablations to verify the importance of the different features used in our model. All results are reported in Table 4. We make several observations: 1) contextualizing video event features with respect to other video events (using CT) helps; 2) the external knowledge base (via CS features) improves understanding of open-domain event-event relations; 3) all three network components - CT, CS and EI - combine to give the best performance.
Qualitative Analysis
We show some qualitative results from our model in Figure 4. The text-text event relations are derived using the method described in Section 5.1, and MERP predicts the multimodal relations. These results demonstrate the advantage of MERP: we are able to predict multimodal relations in which the video subevent has no corresponding mention in the text article, namely v_2 ('shooting water cannon') on the left and v_1 ('punching') on the right. This is extremely difficult even for the best baseline, MM Base. (cf. Section 6.3.1). We provide more examples and analysis in Appendix F.
Conclusion and Future Work
We proposed the novel task of M2E2R for generating multimodal event graphs, a powerful way to understand, represent and reason about our world. Along with the task, we introduced a large-scale unannotated video-language dataset for training and a smaller annotated set for benchmarking and evaluation. We proposed a weakly supervised method to predict these multimodal event relationships, achieving an improvement of around 3x on recall and 50% on F1 score (Hierarchical relations) over the strongest baseline, without requiring any annotations. Our method makes significant advances on the proposed task; still, the best performance of 12.8 Avg F1 indicates the immense difficulty of the task and, consequently, avenues for future work: 1) explicit contextual matching/contrast for text-video events; 2) dealing with the open-domain setting; and 3) better video event detection. Further, adding more relationship types, such as Temporal and Causal, will make event graphs more comprehensive.
As is the case with any large scale dataset, there are privacy, distribution and social bias concerns with M 2 E 2 R. We address these concerns in Appendix B.4.
References
[1] bert-restore-punctuation: Python library to restore punctuation. URL https://huggingface.co/felflare/bert-restore-punctuation.
[2] Allison Badgett and Ruihong Huang. Extracting subevents via an effective two-phase approach. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 906-911, 2016.
A Events and Event-Event Relations
Multimodal event graphs consist of events (text and video) as nodes and the relations between them as edges. For the purpose of clarity and consistency, we provide additional details on the definition of each type of event and their relations.
A.1 Text Events
Events in text denote a change of state or the changed state of a person, thing or entity. An event trigger is the word which most clearly expresses the occurrence of the event. For the purpose of annotating events in a text article, we task the annotators to annotate all possible event triggers. Event triggers can be (underlined text denotes event triggers):
• Verbs. (eg. They have been married for three years)
• Nouns. (eg. The attack killed 7 and injured 20)
• Nominals. Nominals are created from verb or adjective, eg. decide -> decision.
• To resolve the cases where both a verb and a noun are possible event triggers, we select the noun whenever it can be used by itself as a trigger word (eg. The leaders held a meeting in Beijing).
• To resolve the cases where both a verb and an adjective are possible event triggers, the adjective is selected as the trigger word whenever it can stand alone to express the resulting state brought about by the event (eg. The explosion left at least 30 dead and dozens injured).
• In cases where multiple verbs are used together to express an event, the main verb is chosen as the event trigger (eg. John tried to kill Mary).
• In case a verb and a particle occur contiguously in a sentence, we annotate both the verb and the particle as the event trigger (eg. Jane was laid off by XYZ Corp).
• Sometimes, multiple events are present within a single scope, i.e. the sentence. In such cases, the sentence should be annotated with trigger words corresponding to all the events (eg. The explosion left at least 30 dead).
Additionally, we ask the annotators not to annotate the following type of events (underlined text denote events not to be annotated):
• Negative Events. An event is negative when it is explicitly indicated that the event did not occur (eg. His wife was sitting on the backseat and was not hurt). • Hypothetical Events. This includes the events which are believed to have happened or are hypothetical in nature (eg. Chapman would be concerned for his safety if released). • Generic Events. A generic event is any event which is contained in a generic statement (Terrorists often flee to nation-states with crumbling governments to avoid interference).
[Figure 5 example text: 'There have been violent and chaotic scenes at Manchester United's Old Trafford stadium as fans (e₁: invaded) the pitch to (e₂: protest) against the club's American owners.']
A.2 Event Relation Types
There are primarily four types of relations that have been explored in the literature: identical [19,16], hierarchical [56], temporal [55] and causal [61,44]. While each of these relations is important for building a comprehensive event graph, exploring all of them together is extremely challenging due to the annotation time required and the difficulty annotators face in distinguishing the different relation types. Despite event relation prediction being a longstanding and established problem in NLP, no single large-scale dataset exists containing all four relation types. Because the annotation and task difficulty increase substantially in the multimodal setting, we focus on the two most frequently found multimodal relation types in our human-annotated data: Hierarchical and Identical relations.
Identical events are those events which denote exactly the same real-world event. In contrast, two events are said to be hierarchically related if one of the events (the parent event) spatio-temporally contains the other (the subevent). For this task, we constrain the parent event to be a text event only, to simplify the task and limit disagreements between the annotators. We provide additional illustration of the difference between these two relation types in Figure 5.
As a corollary, a video event is 'Identical' to a text event if and only if the text event begins and ends in the video event and there is no more aspect to it, otherwise the text event is 'Hierarchical' to the video event. For example, a text event, 'meeting', will have a 'Hierarchical' relation to a video event showing two people shaking hands because the event 'meeting' has aspects other than just shaking hands like 'discussion', 'interview' etc.
Although the direction of our Hierarchical relation is from text event to video event, we note that there could be event-event relations directed from video event to text event. However, we do not deal with them in this work, to make the task more tractable.
B Dataset Details

B.1 Dataset Curation
We consider the news domain for our dataset collection, as news stories often contain rich event information along with nuanced event-event relations. However, news articles are often opinionated and run the risk of being biased. One way of judging this bias is to rate the political leaning of a news outlet as left, center or right. To minimize such bias and collect only factual news, we employ the media rating website allsides.com to select news agencies rated 'Center'. Using this strategy, we end up with 9 news media sources: Associated Press, Axios, BBC News, Christian Science Monitor, Forbes, NPR, The Hill, Wall Street Journal and Reuters. Having finalized the news sources, we collect the YouTube channel IDs for all these sources. Next, we scrape videos, associated descriptions and closed captions from the collected channel IDs. Videos longer than 14 minutes or with a description shorter than 10 words were filtered out. We end up with a dataset of 100.5K samples: 100K for training and 526 for evaluation. We further divided the evaluation set into 249 samples for validation and 277 for testing.
B.2 Dataset Exploration
Data Statistics We provide additional statistics on the distribution of video duration and text article word and sentence counts in Figure 6. We observe that most of the videos in our dataset are at least 100 seconds long and most of the text articles have at least 10 sentences and 50 words. We compare these statistics against other video-language datasets below.
We compare our dataset against 10 related video-language datasets in Table 5. Our dataset is around 20x bigger in terms of total video duration and number of sentences. Datasets such as HowTo100M [33] and MERLOT [64], which only contain closed captions as textual data without accompanying text descriptions/articles, have not been compared, as the richness of events found in articles is necessary for our task.

Data Topic Exploration To discover the range of news topics covered by our dataset, we employed LDA topic modeling [4] implemented in Mallet [32], following Zellers et al. [64]. Precisely, we grouped our dataset into 100 topics using a vocabulary of size 22K consisting of words which occurred in at least 25 articles but in no more than 10% of articles. Further, we removed those topics which contained media source names (eg. Reuters, WSJ etc.).
We report the top 10 topics in Table 6. We see that a diverse range of topics is covered by our dataset: Election, Justice, War, International News, Entertainment etc. For a random sample of 1000 articles, we visualize the topic distribution via a t-SNE plot in Figure 7 and a wordle of the most prominent words in Figure 8.
B.3 Annotation Tool
For annotating the evaluation set, we used the Label Studio² annotation tool. We did not use the vanilla tool; rather, we customized it to our use case. Screenshots of the annotation interface are shown in Figure 9.
The top row shows an overview of our interface. The text article, pre-populated with automatically extracted text events (depicted by red boxes), is on the left. The video, along with a scroll bar and play/pause button, is in the center. It has also been pre-populated with automatically extracted video events, shown as segmented green bars in the center; the dark green segment shows the selected video event. The rightmost panel shows information on the selected event, the list of all annotated text and video events and, finally, the list of all event-event relation annotations.
The annotation task is divided into two subtasks: 1) adding/deleting text events; and 2) annotating all possible 'Hierarchical' and 'Identical' multimodal relations from text events to video events. The middle row shows the steps involved in adding a text event (in blue) and deleting a text event (in pink).
To add a text event, the annotator first needs to click on the box marked 'Text Event' at the top left corner. Then, they need to select the word(s) from the article to be labelled as a text event. To delete a text event, the annotator needs to first click on the red box containing the event to be deleted. Then, they need to either hit the backspace key or click on the delete icon on the rightmost panel.
The bottom row illustrates the process of labelling a multimodal relation from a text event to a video event. To annotate a relation, the annotator first clicks on the text event they want to label the relation from. Then, they click on the relation icon on the rightmost panel. Next, they click on the video event segment to which the relation is directed. Finally, the type of relation is selected from the drop-down menu displayed at the bottom of the rightmost panel.
Once the annotation is complete, we export the labels in json format.
B.4 Privacy, Distribution and Social Impact
Privacy As discussed, we download videos from the official YouTube channels of news media companies. As such, the videos are publicly available as well as originating from public entities. Further, we manually verified on ~50 videos that the YouTube descriptions do not contain any reference to individual authors; rather, they contain generic links to the news agency website and its social media pages. These steps minimize the risk of compromising the privacy/identity of any individual.
Distribution and Licensing Following the data release strategy of prior work [64,33], we will only be releasing the URLs of the YouTube videos. This enables us to publicly release the data quickly while also minimizing potential consent and privacy issues. The usage of the data is for non-commercial research and educational purposes, which we believe constitutes 'fair use'. However, our data release strategy is aimed at further reducing any potential licensing issues.
Social Impact Our dataset has been sourced from the news domain to ensure richness of events. However, as news media tend to be opinionated, and sometimes sensationalized, the dataset risks suffering from racial, gender, cultural and religious bias. As discussed in previous sections, we consciously took steps to reduce these biases by making an effort to collect factual news from 'Center'-rated news media sources.
C Model Components Details

C.1 CLIP For Video Event Retrieval
As discussed in the main paper, we employ a weakly supervised training method due to supervised training data constraints. As the first step in this strategy, we collect pseudo labels. Given a paired text article and video, we first extract all text events {e_i}_{i=1}^m and all video events {v_j}_{j=1}^n. Next, we detect all hierarchical pairs in text, {e_u → e_us}_{u=1,s=1}^{u=p,s=q}, using Wang et al. [56]; here, e_u is the parent event and e_us is the subevent. Finally, we employ CLIP to match the text subevent e_us with a video event so that we can propagate the hierarchical relation from the text subevent to the video subevent, giving us the required multimodal hierarchical relation (pseudo label) from the text parent event to the video subevent. We describe the precise approach for this matching step below.

Table 7: Ablation studies on components and features. "MERP Basic" represents the basic MERP model. "CT" denotes the contextual transformer. "CS" denotes the commonsense knowledge features. "EI" denotes the embedding interactions, including the subtraction and element-wise product of textual and visual features.
CLIP is a multimodal transformer model which encodes text into an embedding using a transformer architecture, f_t(.), and separately encodes an image into an embedding, in the same embedding space as the text embedding, using another transformer architecture, f_i(.). It then scores the alignment of text and image using a distance scoring function, typically cosine similarity, between the encoded text embedding and the encoded image embedding.
While encoding text, CLIP brackets the input sentence with [SOS] and [EOS] tokens and outputs the activation of the [EOS] token from the highest layer of the text transformer as the embedding of the whole sentence. However, we want the embedding of the event word contextualized by the words around it. To this end, we change the attention masking of the [EOS] token in the last layer of the text transformer from being uniform over all words in the sentence to focusing on the event word, with a polynomially decreasing attention mask depending on the distance of each word in the sentence from the event word. We denote this modified text encoder by f'_t(.).
We encode the text event e_us by inputting the sentence containing the event word, se_us = [w_1, w_2, ..., e_us, ..., w_n], to f'_t to get f'_t(se_us). To encode the video event v^{us}_l, we first represent it as a stack of frames sampled at f_s frames per second, v^{us}_l = {F^{us_l}_y}_{y=1}^Z. Next, we encode each frame using the image encoder f_i(.) and aggregate the contributions of the frames to get the video event embedding, (1/Z) Σ_{y=1}^Z f_i(F^{us_l}_y). We say that the text event e_us represents the same event as the video event v^{us}_l if:

    f'_t(se_us) · (1/Z) Σ_{y=1}^Z f_i(F^{us_l}_y) > λ.    (1)

λ is a threshold that we arrived at by fine-tuning on the multimodal event pairs with 'Identical' relations from 100 video-article sample pairs in the validation set. The threshold found, λ = 30.39, gave a precision of 75% on the validation set. Using eq. (1), we arrive at the required text event - video event matching pairs, {e_us → v^{us}_l}_{l=1}^r.
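A direct sketch of this test; the embeddings are assumed to be CLIP features extracted as above.

```python
import torch

def is_identical(text_emb, frame_embs, lam=30.39):
    """Eq. (1): dot product between the event-sentence embedding and the
    mean-pooled frame embeddings, thresholded at lambda (tuned on val set)."""
    video_emb = frame_embs.mean(dim=0)   # (1/Z) * sum_y f_i(F_y)
    return (text_emb @ video_emb).item() > lam
```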
C.2 Commonsense Features from Knowledge Base
We leverage commonsense knowledge of the world to help us predict multimodal relations between open-domain events. To this end, we use ConceptNet [51], a large-scale knowledge base containing concepts, entities and events as nodes and the relations between them as edges. We extract positive and negative event pairs from ConceptNet and use them to train the commonsense feature extractor CS(., .). CS(., .) is a single linear layer whose input is the concatenation of the embeddings of the event pair. It outputs a 512-dimensional embedding as commonsense features and is trained using a contrastive loss. These extracted features characterize how 'Hierarchical' event pairs relate to each other.
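A sketch of one plausible instantiation of CS(., .); the scalar scoring head and the hinge form of the contrastive objective are assumptions, since the exact loss is not specified.

```python
import torch
import torch.nn as nn

class CommonsenseExtractor(nn.Module):
    """Linear layer over concatenated event embeddings (512-d output)."""
    def __init__(self, dim=512, out_dim=512):
        super().__init__()
        self.fc = nn.Linear(2 * dim, out_dim)
        self.score = nn.Linear(out_dim, 1)  # training-time head (assumption)

    def features(self, e1, e2):
        return self.fc(torch.cat([e1, e2], dim=-1))

def contrastive_loss(model, pos_pair, neg_pair, margin=1.0):
    # ConceptNet 'HasSubevent'-style pairs should outscore random pairs.
    s_pos = model.score(model.features(*pos_pair)).squeeze(-1)
    s_neg = model.score(model.features(*neg_pair)).squeeze(-1)
    return torch.relu(margin - s_pos + s_neg).mean()
```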
C.3 Implementation Details
We provide additional implementation details in this section.
• To extract video event features, we sample frames uniformly from the video event clip at a rate of f_s; f_s in this work was fixed at 3 frames per second.
• The video event embeddings extracted using CLIP form the token embeddings fed to the Contextual Transformer (CT). These embeddings are concatenated with positional embeddings, as is common with transformer architectures [54]. Further, we cap the maximum number of video events that CT can process at 77 to limit the computational cost.
• As CLIP has demonstrated strong performance in correlating multimodal data, we use CLIP to remove false-positive 'Identical' relations from the model predictions. To implement this, we use eq. (1) with a threshold of 28.0; event pairs scoring below this threshold are pruned. This threshold was again arrived at using the validation set. A sketch of this pruning step is given below.
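A sketch of the pruning step referenced above; clip_score is assumed to return the left-hand side of eq. (1).

```python
def prune_identical(predictions, clip_score, threshold=28.0):
    """Drop predicted 'Identical' pairs whose CLIP alignment score falls
    below the threshold; other relation types pass through unchanged."""
    return [(e, v, rel) for (e, v, rel) in predictions
            if rel != "Identical" or clip_score(e, v) > threshold]
```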
D Experimental Details and Setup
D.1 Mathematical Formulation For Evaluation Metric
Given ground-truth event-event relations {r_i}_{i=1}^K and predicted relations {p_i}_{i=1}^K, where r_i, p_i ∈ {'Hierarchical', 'Identical', 'NoRel'}, we define Precision (P_t), Recall (R_t) and F1 (F1_t) for a relation type t ∈ {'Hierarchical', 'Identical'} as:

    P_t = |{i : p_i = t} ∩ {i : r_i = t}| / |{i : p_i = t}|,
    R_t = |{i : p_i = t} ∩ {i : r_i = t}| / |{i : r_i = t}|,
    F1_t = 2 · P_t · R_t / (P_t + R_t).

Then, Avg F1 = (Σ_t F1_t) / 2.
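A direct transcription of these metrics, assuming aligned lists of predicted and gold labels; names are illustrative.

```python
def prf(pred, gold, t):
    """Precision, Recall and F1 for relation type t."""
    tp = sum(p == t and g == t for p, g in zip(pred, gold))
    n_pred = sum(p == t for p in pred)
    n_gold = sum(g == t for g in gold)
    P = tp / n_pred if n_pred else 0.0
    R = tp / n_gold if n_gold else 0.0
    F1 = 2 * P * R / (P + R) if P + R else 0.0
    return P, R, F1

# avg_f1 = (prf(pred, gold, "Hierarchical")[2]
#           + prf(pred, gold, "Identical")[2]) / 2
```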
D.2 Text Only Baseline Details
We construct the text-only baseline by using the video ASR as a proxy for the video itself: each video event is represented by the ASR content within its timestamps. Events and event-event relations with the text article can then be extracted from the ASR using NLP models. However, we don't use the raw ASR directly; we found this difficult because it is unpunctuated, while the NLP models for detecting events and event-event relations were trained on punctuated data. As such, we punctuate the ASR using [1].
We observed the Text baseline to have low recall in Table 7. On manual analysis, we found three reasons for it: 1) a number of video events have no associated ASR content; 2) some video events are very short in duration, which limits the ASR content to very few words; and 3) text events detected in ASR sometimes don't provide useful information about the events in the video (eg. an event such as 'mentioned' detected in ASR). We also observe that the contribution of the contextual transformer is the most significant toward the full model performance. This makes sense because contextualizing a video event with respect to other video events can significantly change its interpretation. For example, a video event showing a crowd of people on the streets could be demonstrators agitating against the government or cheering on their newly elected leader, depending on the context.
Influence of Number of Layers in Contextual Transformer.
To further investigate the influence of the number of layers n in the Contextual Transformer, we conducted a set of ablations with n ∈ {1, 2, 4, 6}; results are reported in Table 8. As can be observed, all layer settings achieve a good Avg F1 score, which demonstrates the robustness of our model. Meanwhile, n = 1 achieves slightly better results; the reason could be that more layers cause some over-fitting. As such, we use n = 1 in our best model.
Pruning Strategy As shown in Table 9, the pruning strategy slightly improves the F1 score on identical event relations, while maintaining the performance on hierarchical relations.
Dealing with Noisy Labels As discussed in the main paper, our training labels are pseudo labels derived using other NLP and vision models, and are therefore naturally noisy. To deal with this noise, we employed a Multiple Instance Learning (MIL) [6] training objective (Table 10). However, we found that it did not improve performance. One possible reason is that our large-scale training data compensates for the inherent noise in the labels; similar outcomes have been observed previously by Jia et al. [22].
F Qualitative Analysis on predicted samples
We provide additional qualitative results and visualizations in Figure 10. As can be seen, our model is able to enrich the text event graph with new visual event nodes and different multimodal relations across diverse topics (explosion and protest). This is especially notable because our model was trained in an open-domain setting and was not specifically trained to detect events from these topics.
Further, we compute heatmap visualizations to illustrate the attention of the model while making predictions, using Grad-CAM [46,15]. Through these visualizations, we provide evidence that the model is indeed learning meaningful correlations between text and visual events instead of relying on some statistical prior. For example, in the top row we can see that the model focuses on smoke when predicting that the video event is hierarchically related to 'explosion'.
G End-to-End System Evaluation: Event Detection + Relationship Prediction
Multimodal event graphs consist of text and video events as nodes and event-event relations as edges. The proposed task, M2E2R, focuses on the prediction and evaluation of multimodal event-event relations, given a text and a video event.
The assumption is that we are provided with ground-truth text and video events. However, this assumption doesn't hold for an 'in the wild' text-video pair. To generate multimodal event graphs for such an input, we need to predict text and video events first and then predict the relations between them. As such, we also evaluate our model on this combined task: event detection plus event-event relation prediction.
In this setting, the model is first required to predict text events and then predict multimodal event-event relations between the predicted text events and the given video events. We do not task the model with predicting video events because our video events are camera shots by definition and are not annotated by the annotators. The evaluation is then done between the predicted and ground-truth multimodal event-event relations, with the same metrics: Precision, Recall and F1. We call this setting IETe2Ve (Information Extraction + Text event to Video event relation prediction) and the earlier setting of predicting only multimodal event-event relations Te2Ve (Text event to Video event relation prediction).
In our model, we use Shen et al. [48] to extract text events and then predict the multimodal relations as usual. For Text Base. and MM Base., we again use Shen et al. [48] as the first step to detect text events. The results are reported in Table 11. MERP understandably performs worse than in the Te2Ve setting (7.8 vs. 12.8 on the test set), but it still outperforms the next best baseline by 1%.
We note the extreme challenge posed by this setting: predicting open-domain text events and then predicting relations between them and open-domain video events. This is also evidenced by the extremely low performance of Prior Base. (0.1 Avg F1, down from 1.6 in the Te2Ve setting). This is because, in IETe2Ve, it must consider all words in the text article as possible events, which makes the prior of 'NoRel' very high, significantly reducing the number of relations predicted as 'Hierarchical' and 'Identical'. We also note the low performance of Text Base., due to the same reasons discussed in Appendix D.2.
H Discussion
As discussed in the main paper, one of the biggest advantages of our proposed model, MERP, over even the best baseline is the ability to predict multimodal relations where the video subevent is not mentioned in the text article. We provide quantitative results as further evidence: MERP has a recall of 15.92 for such multimodal Hierarchical relations, against 2.14 for the baseline. We arrive at this result by evaluating the models against those multimodal Hierarchical relations for which the video subevent was not annotated with a corresponding Identical text event.
Performance Bottleneck. One major reason for the modest absolute scores is that we are operating in an open-domain setting. While there are many advantages to having an open domain of video and text events, it makes understanding new events and then predicting their relationships very challenging.
I Datasheet for M 2 E 2 R
In this section, we present a Datasheet [13, 3] for M2E2R, synthesizing many of the analyses we performed in this paper.
Motivation For Datasheet Creation
• Why was the dataset created? In order to study cross-modal event event relations.
• Has the dataset been used already? No.
• What (other) tasks could the dataset be used for? Possibly visual grounding and multi-modal schema induction.
Data composition
• What are the instances? The instances that we consider in this work are pairs of news articles and associated videos.
• How many instances are there? We include 100.5K news articles and associated videos.
Figure 2: Proposed pseudo-label generation. HR: Hierarchical Relations.
Figure 3: Proposed MERP model. e_i denotes the i-th text event and v_j denotes the j-th video event.
Figure 4: Model output. H: Hierarchical; TP: True Positive; FP: False Positive; FN: False Negative. (Example article: 'Protests outside US embassy and Chinese consulate in Manila... Over a hundred protesters (e₂: took) to the streets of the Philippine capital... The (e₁: demonstrations) happened ahead of Duterte's third State of the Nation...')
Figure 5: Illustration of Hierarchical and Identical relation types. I: Identical; H: Hierarchical. While event e₁: invaded is 'Identical' to the video event showing people on the pitch (v₁), event e₂: protest is 'Hierarchical' to all video events: v₁ (showing the invasion), v₂ (showing a clash) and v₃ (showing a rally), representing different aspects of the protest.
Figure 6: Data statistics: the top row shows the distribution of video duration, the middle row the distribution of article sentence counts, and the bottom row the distribution of article word counts.
Figure 7: t-SNE topic distribution for 1K random articles.
Figure 8: Wordle of 1K random articles.
Figure 9: Top: overview of the annotation tool. Middle: Task 1, text event annotation; steps for adding a text event are shown in blue and steps for deleting a text event in pink. Bottom: Task 2, multimodal event-event relation annotation.
Figure 10: Model predictions: the left column shows the input, the middle column the model output, and the right column heatmap visualizations of the input based on the prediction.
Table 1: Comparison of the training data of related video datasets.

Dataset           Domain  #Videos  #Avg. len (sec)  #Total hours  ASR  #Sent.
MSVD [9]          Open    1970     10               5.3           No   70K
MSR-VTT [59]      Open    7180     20               41.2          No   200K
Charades [50]     Human   9848     30               82.01         No   28K
ActyNet Cap [23]  Open    20K      180              849           No   100K
M2E2R (Train)     News    100K     148.9            4138.86       Yes  1.9M
Table 2: Comparison of test sets of relation prediction datasets. "Hier." denotes "Hierarchical"; "Id." denotes "Identical".

Dataset       Domain  Modality       #Hier. Rels.  #Id. Rels.
HiEve [16]    News    Text           3648          758
IC [19]       News    Text           4586          2353
M2E2R (Test)  News    Text + Vision  3077          1524
Table 3: Comparison with baseline models on the validation/test set.

             Hierarchical                      Identical                       Avg F1
             P          R          F1          P        R          F1
Prior Base.  4.7/2.0    4.7/2.0    4.7/2.0     2.0/1.2  2.0/1.2    2.0/1.2     3.4/1.6
Text Base.   5.9/2.1    0.1/0.1    0.1/0.1     2.5/2.6  7.1/13.6   3.6/4.3     1.9/2.2
MM Base.     35.7/28.0  5.0/6.3    8.8/10.3    8.8/7.6  33.1/32.3  13.9/12.4   11.4/11.4
MERP         21.9/11.9  22.1/18.8  22.0/14.6   8.2/6.3  44.5/39.0  13.9/10.9   18.0/12.8
[3] Emily Bender and Batya Friedman. Data statements for NLP: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 2019.
[4] David M. Blei, Andrew Y. Ng, Michael I. Jordan, and John Lafferty. Latent dirichlet allocation. Journal of Machine Learning Research, 3, 2003.
[5] Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. COMET: Commonsense transformers for knowledge graph construction. In Association for Computational Linguistics (ACL), 2019.
[6] Marc-André Carbonneau, Veronika Cheplygina, Eric Granger, and Ghyslain Gagnon. Multiple instance learning: A survey of problem characteristics and applications. Pattern Recognition, 77:329-353, May 2018. doi: 10.1016/j.patcog.2017.10.009.
[7] Snigdha Chaturvedi, Haoruo Peng, and Dan Roth. Story comprehension for predicting what happens next. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1603-1614, Copenhagen, Denmark, September 2017. Association for Computational Linguistics. doi: 10.18653/v1/D17-1168. URL https://aclanthology.org/D17-1168.
[8] Brian Chen, Xudong Lin, Christopher Thomas, Manling Li, Shoya Yoshida, Lovish Chum, Heng Ji, and Shih-Fu Chang. Joint multimedia event extraction from video and article. arXiv preprint arXiv:2109.12776, 2021.
[9] David Chen and William B Dolan. Collecting highly parallel data for paraphrase evaluation. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 190-200, 2011.
[10] Hal Daumé III and Daniel Marcu. Bayesian query-focused summarization. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 305-312, Sydney, Australia, July 2006. Association for Computational Linguistics. doi: 10.3115/1220175.1220214. URL https://aclanthology.org/P06-1039.
[11] Han Fang, Pengfei Xiong, Luhui Xu, and Yu Chen. Clip2video: Mastering video-text retrieval via image clip, 2021.
[12] Yi Fung, Christopher Thomas, Revanth Gangi Reddy, Sandeep Polisetty, Heng Ji, Shih-Fu Chang, Kathleen McKeown, Mohit Bansal, and Avi Sil. InfoSurgeon: Cross-media fine-grained information consistency checking for fake news detection. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1683-1698, Online, August 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.133. URL https://aclanthology.org/2021.acl-long.133.
[13] Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. Datasheets for datasets. arXiv preprint arXiv:1803.09010, 2018.
Table 5: Comparison of the training data of related video datasets.

Dataset             Domain   #Videos  #Avg. len (sec)  #Total hours  ASR  #Sent.
MSVD [9]            Open     1970     10               5.3           No   70K
MPII Cooking [43]   Cooking  44       600              8             No   5609
TACoS [40]          Cooking  127      360              15.9          No   18K
TACos-MLevel [42]   Cooking  185      360              27.1          No   53K
M-VAD [53]          Movie    92       6.2              84.6          No   56K
MSR-VTT [59]        Open     7180     20               41.2          No   200K
Charades [50]       Human    9848     30               82.01         No   28K
VTW [65]            Open     18.1K    90               213.2         No   45K
YouCook II [67]     Cooking  2000     316              176           No   15K
ActyNet Cap [23]    Open     20K      180              849           No   100K
M2E2R (Train)       News     100K     148.9            4138.86       Yes  1.9M
Table 6: Top 10 topics derived from M2E2R, represented by their most common words.

White House:    trump obama donald white house barack biden presidential republican
International:  talks peace agreement leaders secretary deal conference
Attack:         injured hospital car attack bomb blast wounded scene damaged
Election:       election vote elections presidential voters voting candidate percent
Justice:        court trial judge case charges lawyer prison accused lawyers
War:            soldiers troops army forces fighting border rebels area fire
Entertainment:  film actor star movie actress director hollywood festival
Conference:     conference cutaway reporters general spokesman want need
Congress:       house senate congress committee trump senator republican democrats
Protest:        protesters protest demonstrators anti chanting rally march demonstration
Table 8: Ablation studies on the number of layers n in the Contextual Transformer.

       Hierarchical         Identical
       P     R     F1       P    R     F1     Avg F1
n = 1  21.9  22.1  22.0     8.2  44.5  13.9   18.0
n = 2  16.5  24.3  19.6     8.0  46.4  13.6   16.6
n = 4  20.0  19.4  19.7     7.9  49.3  13.6   16.7
n = 6  15.5  31.2  20.7     7.9  47.8  13.6   17.2
Table 9: Ablation studies on the pruning strategy for identical relation predictions.

                    Hierarchical         Identical
                    P     R     F1       P    R     F1     Avg F1
MERP (w/o pruning)  21.9  22.1  22.0     7.7  45.8  13.2   17.6
MERP                21.9  22.1  22.0     8.2  44.5  13.9   18.0
Table 10: Effects of weighted sampling (WS) and multiple instance learning (MIL) strategies.

             Hierarchical        Identical
             P     R     F1      P    R     F1     Avg F1
MERP         21.9  22.1  22.0    8.2  44.5  13.9   18.0
MERP w/ MIL  7.4   47.7  12.8    7.3  47.5  12.7   12.8

E Ablation and Hyperparameter Details

Network Architecture Ablation We report the ablation study on the components of our model, MERP, in Table 7. The results demonstrate the contribution of each component choice: the Contextual Transformer (CT) improves performance by 1.4%, the commonsense features improve performance by 0.8% and the embedding interactions improve performance by 1.4%.
Table 11: Comparison with baseline models on the validation/test set for the IETe2Ve setting.

             Hierarchical                  Identical                      Avg F1
             P          R        F1        P        R          F1
Prior Base.  0.9/0.2    0.2/0.2  0.2/0.2   0.1/0.1  0.1/0.1    0.1/0.1    0.1/0.1
Text Base.   3.9/0.0    0.1/0.0  0.1/0.0   0.5/1.6  3.6/10.0   0.8/2.8    0.5/1.4
MM Base.     21.8/11.2  2.2/1.7  3.9/2.9   5.0/6.7  23.9/26.1  8.2/10.7   6.1/6.8
MERP         5.5/5.6    6.4/7.6  5.9/6.5   4.2/5.3  30.1/30.0  7.3/9.0    6.6/7.8
1 http://scenedetect.com/en/latest/
2 https://labelstud.io/
3 https://pytube.io/en/latest/
4 https://pypi.org/project/youtube-search-python/
Appendix

We provide additional details in this section which further elucidate the claims and contributions of the paper. It is divided into the following sections:
• Further details on the definition of events (Appendix A)
• Details on dataset curation, statistics, annotation and broader impacts.
[14] Timnit Gebru, Jamie H. Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna M. Wallach, Hal Daumé, and Kate Crawford. Datasheets for datasets. Communications of the ACM, 64:86-92, 2021.
[15] Jacob Gildenblat and contributors. Pytorch library for cam methods. https://github.com/jacobgil/pytorch-grad-cam, 2021.
[16] Goran Glavaš, Jan Šnajder, Parisa Kordjamshidi, and Marie-Francine Moens. HiEve: A corpus for extracting event hierarchies from news stories. In Proceedings of the 9th Language Resources and Evaluation Conference, pages 3678-3683. ELRA, 2014.
[17] Goran Glavaš and Jan Šnajder. Construction and evaluation of event graphs. Natural Language Engineering, 21(4):607-652, 2015. doi: 10.1017/S1351324914000060.
[18] Goran Glavaš and Jan Šnajder. Construction and evaluation of event graphs. Natural Language Engineering, 21(4):607-652, 2015. doi: 10.1017/S1351324914000060.
[19] Eduard Hovy, Teruko Mitamura, Felisa Verdejo, Jun Araki, and Andrew Philpot. Events are not simple: Identity, non-identity, and quasi-identity. In Workshop on Events: Definition, Detection, Coreference, and Representation, pages 21-28, Atlanta, Georgia, June 2013. Association for Computational Linguistics. URL https://aclanthology.org/W13-1203.
[20] Shudong Huang, Stephanie Strassel, Alexis Mitchell, and Zhiyi Song. Automatic content extraction (ACE) program - task definitions and performance measures. In LREC 2004: 4th International Conference on Language Resources and Evaluation, 2004.
[21] Jingwei Ji, Ranjay Krishna, Li Fei-Fei, and Juan Carlos Niebles. Action genome: Actions as compositions of spatio-temporal scene graphs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10236-10247, 2020.
[22] Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In ICML, 2021.
[23] Ranjay Krishna, Kenji Hata, Frederic Ren, Li Fei-Fei, and Juan Carlos Niebles. Dense-captioning events in videos. In Proceedings of the IEEE International Conference on Computer Vision, pages 706-715, 2017.
[24] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123(1):32-73, 2017.
[25] Manling Li, Alireza Zareian, Qi Zeng, Spencer Whitehead, Di Lu, Heng Ji, and Shih-Fu Chang. Cross-media structured common space for multimedia event extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2557-2568, Online, 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.230. URL https://aclanthology.org/2020.acl-main.230.
[26] Manling Li, Ruochen Xu, Shuohang Wang, Luowei Zhou, Xudong Lin, Chenguang Zhu, Michael Zeng, Heng Ji, and Shih-Fu Chang. Clip-event: Connecting text and images with event structures. arXiv preprint arXiv:2201.05078, 2022.
[27] Ruiyu Li, Makarand Tapaswi, Renjie Liao, Jiaya Jia, Raquel Urtasun, and Sanja Fidler. Situation recognition with graph neural networks. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, pages 4183-4192. IEEE Computer Society, 2017. doi: 10.1109/ICCV.2017.448.
[28] Jian Liu, Yubo Chen, Kang Liu, and Jun Zhao. Neural cross-lingual event detection with minimal parallel resources. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 738-748, Hong Kong, China, 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1068. URL https://aclanthology.org/D19-1068.
[29] Jian Liu, Yubo Chen, Kang Liu, Wei Bi, and Xiaojiang Liu. Event extraction as machine reading comprehension. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1641-1651, Online, 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.128. URL https://aclanthology.org/2020.emnlp-main.128.
[30] Huaishao Luo, Lei Ji, Ming Zhong, Yang Chen, Wen Lei, Nan Duan, and Tianrui Li. Clip4clip: An empirical study of CLIP for end to end video clip retrieval. CoRR, abs/2104.08860, 2021. URL https://arxiv.org/abs/2104.08860.
[31] Arun Mallya and Svetlana Lazebnik. Recurrent models for situation recognition. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017, pages 455-463. IEEE Computer Society, 2017. doi: 10.1109/ICCV.2017.57. URL https://doi.org/10.1109/ICCV.2017.57.
[32] Andrew Kachites McCallum. Mallet: A machine learning for language toolkit. 2002. URL http://mallet.cs.umass.edu.
[33] Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, and Josef Sivic. Howto100m: Learning a text-video embedding by watching hundred million narrated video clips. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, pages 2630-2640. IEEE, 2019. doi: 10.1109/ICCV.2019.00272.
[34] Thien Huu Nguyen, Kyunghyun Cho, and Ralph Grishman. Joint event extraction via recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 300-309, San Diego, California, 2016. Association for Computational Linguistics. doi: 10.18653/v1/N16-1034. URL https://aclanthology.org/N16-1034.
[35] Tim O'Gorman, Kristin Wright-Bettner, and Martha Palmer. Richer event description: Integrating event coreference with temporal, causal and bridging annotation. In Proceedings of the 2nd Workshop on Computing News Storylines (CNS 2016), pages 47-56, 2016.
[36] Sarah Pratt, Mark Yatskar, Luca Weihs, Ali Farhadi, and Aniruddha Kembhavi. Grounded situation recognition. In European Conference on Computer Vision, pages 314-332. Springer, 2020.
[37] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision, 2021. URL https://arxiv.org/abs/2103.00020.
Event2Mind: Commonsense inference on events, intents, and reactions. Maarten Hannah Rashkin, Emily Sap, Noah A Allaway, Yejin Smith, Choi, 10.18653/v1/P18-1043Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsMelbourne, AustraliaAssociation for Computational Linguistics13Long Papers)Hannah Rashkin, Maarten Sap, Emily Allaway, Noah A. Smith, and Yejin Choi. Event2Mind: Commonsense inference on events, intents, and reactions. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 463-473, Melbourne, Australia, July 2018. Association for Computational Linguistics. doi: 10.18653/v1/P18-1043. URL https://aclanthology.org/P18-1043. 1, 3
Xilin Revanth Gangi Reddy, Manling Rui, Xudong Li, Haoyang Lin, Jaemin Wen, Lifu Cho, Mohit Huang, Avirup Bansal, Shih-Fu Sil, Chang, arXiv:2112.10728Multimedia multi-hop news question answering via cross-media knowledge extraction and grounding. arXiv preprintRevanth Gangi Reddy, Xilin Rui, Manling Li, Xudong Lin, Haoyang Wen, Jaemin Cho, Lifu Huang, Mohit Bansal, Avirup Sil, Shih-Fu Chang, et al. Mumuqa: Multimedia multi-hop news question answering via cross-media knowledge extraction and grounding. arXiv preprint arXiv:2112.10728, 2021. 3
Grounding action descriptions in videos. Michaela Regneri, Marcus Rohrbach, Dominikus Wetzel, Stefan Thater, Transactions of the Association for Computational Linguistics. 119Bernt Schiele, and Manfred PinkalMichaela Regneri, Marcus Rohrbach, Dominikus Wetzel, Stefan Thater, Bernt Schiele, and Manfred Pinkal. Grounding action descriptions in videos. Transactions of the Association for Computational Linguistics, 1:25-36, 2013. 19
Temporal anchoring of events for the timebank corpus. Nils Reimers, Nazanin Dehghani, Iryna Gurevych, 10.18653/v1/P16-1207.3012016Nils Reimers, Nazanin Dehghani, and Iryna Gurevych. Temporal anchoring of events for the timebank corpus. pages 2195-2204, 01 2016. doi: 10.18653/v1/P16-1207. 3
Coherent multi-sentence video description with variable level of detail. Anna Rohrbach, Marcus Rohrbach, Wei Qiu, Annemarie Friedrich, German conference on pattern recognition. Springer19Manfred Pinkal, and Bernt SchieleAnna Rohrbach, Marcus Rohrbach, Wei Qiu, Annemarie Friedrich, Manfred Pinkal, and Bernt Schiele. Coherent multi-sentence video description with variable level of detail. In German conference on pattern recognition, pages 184-195. Springer, 2014. 19
A database for fine grained activity detection of cooking activities. Marcus Rohrbach, Sikandar Amin, Mykhaylo Andriluka, Bernt Schiele, 2012 IEEE conference on computer vision and pattern recognition. IEEE19Marcus Rohrbach, Sikandar Amin, Mykhaylo Andriluka, and Bernt Schiele. A database for fine grained activity detection of cooking activities. In 2012 IEEE conference on computer vision and pattern recognition, pages 1194-1201. IEEE, 2012. 19
Visual semantic role labeling for video understanding. Arka Sadhu, Tanmay Gupta, Mark Yatskar, Ram Nevatia, Aniruddha Kembhavi, The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 17Arka Sadhu, Tanmay Gupta, Mark Yatskar, Ram Nevatia, and Aniruddha Kembhavi. Visual semantic role labeling for video understanding. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021. 2, 3, 4, 17
Atomic: An atlas of machine commonsense for if-then reasoning. Maarten Sap, Emily Ronan Le Bras, Chandra Allaway, Nicholas Bhagavatula, Hannah Lourie, Brendan Rashkin, Roof, A Noah, Yejin Smith, Choi, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence33Maarten Sap, Ronan Le Bras, Emily Allaway, Chandra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A Smith, and Yejin Choi. Atomic: An atlas of machine commonsense for if-then reasoning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3027-3035, 2019. 2
Grad-cam: Visual explanations from deep networks via gradientbased localization. R Ramprasaath, Michael Selvaraju, Abhishek Cogswell, Ramakrishna Das, Devi Vedantam, Dhruv Parikh, Batra, 10.1109/ICCV.2017.74.242017 IEEE International Conference on Computer Vision (ICCV). Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient- based localization. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 618-626, 2017. doi: 10.1109/ICCV.2017.74. 24
Jointly extracting event triggers and arguments by dependency-bridge RNN and tensor-based argument interaction. Lei Sha, Feng Qian, Baobao Chang, Zhifang Sui, Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18). Sheila A. McIlraith and Kilian Q. Weinbergerthe Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18)New Orleans, Louisiana, USAAAAI PressLei Sha, Feng Qian, Baobao Chang, and Zhifang Sui. Jointly extracting event triggers and arguments by dependency-bridge RNN and tensor-based argument interaction. In Sheila A. McIlraith and Kilian Q. Weinberger, editors, Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5916-5923. AAAI Press, 2018. URL https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/ 16222. 3
Corpus-based open-domain event type induction. Jiaming Shen, Yunyi Zhang, Ji Heng, Jiawei Han, 10.18653/v1/2021.emnlp-main.441Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. the 2021 Conference on Empirical Methods in Natural Language ProcessingOnline and Punta Cana, Dominican RepublicAssociation for Computational Linguistics626Jiaming Shen, Yunyi Zhang, Heng Ji, and Jiawei Han. Corpus-based open-domain event type induction. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5427-5440, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.441. URL https://aclanthology.org/2021.emnlp-main.441. 1, 3, 5, 6, 26
Generic event boundary detection: A benchmark for event segmentation. Mike Zheng Shou, Deepti Ghadiyaram, Weiyao Wang, Matt Feiszli, 34Mike Zheng Shou, Deepti Ghadiyaram, Weiyao Wang, and Matt Feiszli. Generic event boundary detection: A benchmark for event segmentation. ICCV, 2021. URL https://arxiv.org/ abs/2101.10511. 2, 3, 4
Hollywood in homes: Crowdsourcing data collection for activity understanding. Gül Gunnar A Sigurdsson, Xiaolong Varol, Ali Wang, Ivan Farhadi, Abhinav Laptev, Gupta, European Conference on Computer Vision. Springer419Gunnar A Sigurdsson, Gül Varol, Xiaolong Wang, Ali Farhadi, Ivan Laptev, and Abhinav Gupta. Hollywood in homes: Crowdsourcing data collection for activity understanding. In European Conference on Computer Vision, pages 510-526. Springer, 2016. 4, 19
Conceptnet 5.5: An open multilingual graph of general knowledge. Robyn Speer, Joshua Chin, Catherine Havasi, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence3122Robyn Speer, Joshua Chin, and Catherine Havasi. Conceptnet 5.5: An open multilingual graph of general knowledge. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1), Feb. 2017. URL https://ojs.aaai.org/index.php/AAAI/article/view/11164. 7, 22
Image enhanced event detection in news articles. Meihan Tong, Shuai Wang, Yixin Cao, Bin Xu, Juanzi Li, Lei Hou, Tat-Seng Chua, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence34Meihan Tong, Shuai Wang, Yixin Cao, Bin Xu, Juanzi Li, Lei Hou, and Tat-Seng Chua. Image enhanced event detection in news articles. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 9040-9047, 2020. 3
Using descriptive video services to create a large data source for video annotation research. Atousa Torabi, Christopher Pal, Hugo Larochelle, Aaron Courville, arXiv:1503.0107019arXiv preprintAtousa Torabi, Christopher Pal, Hugo Larochelle, and Aaron Courville. Using descriptive video services to create a large data source for video annotation research. arXiv preprint arXiv:1503.01070, 2015. 19
Attention is all you need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, Illia Polosukhin, Advances in neural information processing systems. 723Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008, 2017. 7, 23
Joint constrained learning for event-event relation extraction. Haoyu Wang, Muhao Chen, Hongming Zhang, Dan Roth, 10.18653/v1/2020.emnlp-main.51Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)OnlineAssociation for Computational Linguistics717Haoyu Wang, Muhao Chen, Hongming Zhang, and Dan Roth. Joint constrained learning for event-event relation extraction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 696-706, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.51. URL https://aclanthology.org/2020.emnlp-main.51. 1, 4, 7, 17
Learning constraints and descriptive segmentation for subevent detection. Haoyu Wang, Hongming Zhang, Muhao Chen, Dan Roth, doi: 10. 18653/v1/2021.emnlp-main.423Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. the 2021 Conference on Empirical Methods in Natural Language ProcessingDominican RepublicAssociation for Computational Linguistics1720Online and Punta Cana423. 1, 4, 6, 8Haoyu Wang, Hongming Zhang, Muhao Chen, and Dan Roth. Learning constraints and descriptive segmentation for subevent detection. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 5216-5226, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10. 18653/v1/2021.emnlp-main.423. URL https://aclanthology.org/2021.emnlp-main. 423. 1, 4, 6, 8, 17, 20
Resin: A dockerized schema-guided cross-document cross-lingual cross-media information extraction and event tracking system. Haoyang Wen, Ying Lin, Tuan Lai, Xiaoman Pan, Sha Li, Xudong Lin, Ben Zhou, Manling Li, Haoyu Wang, Hongming Zhang, Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Demonstrations. the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: DemonstrationsHaoyang Wen, Ying Lin, Tuan Lai, Xiaoman Pan, Sha Li, Xudong Lin, Ben Zhou, Manling Li, Haoyu Wang, Hongming Zhang, et al. Resin: A dockerized schema-guided cross-document cross-lingual cross-media information extraction and event tracking system. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Demonstrations, pages 133-143, 2021. 3
Scene graph generation by iterative message passing. Danfei Xu, Yuke Zhu, B Christopher, Li Choy, Fei-Fei, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionDanfei Xu, Yuke Zhu, Christopher B Choy, and Li Fei-Fei. Scene graph generation by iterative message passing. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5410-5419, 2017. 3
Msr-vtt: A large video description dataset for bridging video and language. Jun Xu, Tao Mei, Ting Yao, Yong Rui, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognition419Jun Xu, Tao Mei, Ting Yao, and Yong Rui. Msr-vtt: A large video description dataset for bridging video and language. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 5288-5296, 2016. 4, 19
Structured use of external knowledge for event-based open domain question answering. Hui Yang, Tat-Seng Chua, Shuguang Wang, Chun-Keat Koh, 10.1145/860435.86044413New York, NY, USAAssociation for Computing MachineryHui Yang, Tat-Seng Chua, Shuguang Wang, and Chun-Keat Koh. Structured use of external knowledge for event-based open domain question answering. New York, NY, USA, 2003. Association for Computing Machinery. ISBN 1581136463. doi: 10.1145/860435.860444. URL https://doi.org/10.1145/860435.860444. 1, 3
Weakly Supervised Subevent Knowledge Acquisition. Wenlin Yao, Zeyu Dai, Maitreyi Ramaswamy, Bonan Min, Ruihong Huang, 10.18653/v1/2020.emnlp-main.430Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)OnlineAssociation for Computational Linguistics317Wenlin Yao, Zeyu Dai, Maitreyi Ramaswamy, Bonan Min, and Ruihong Huang. Weakly Super- vised Subevent Knowledge Acquisition. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5345-5356, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.430. URL https://aclanthology.org/2020.emnlp-main.430. 1, 3, 17
Weakly supervised subevent knowledge acquisition. Wenlin Yao, Zeyu Dai, Maitreyi Ramaswamy, Bonan Min, Ruihong Huang, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)2020Wenlin Yao, Zeyu Dai, Maitreyi Ramaswamy, Bonan Min, and Ruihong Huang. Weakly supervised subevent knowledge acquisition. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020. 2
Situation recognition: Visual semantic role labeling for image understanding. Mark Yatskar, Luke S Zettlemoyer, Ali Farhadi, 10.1109/CVPR.2016.5972016 IEEE Conference on Computer Vision and Pattern Recognition. Las Vegas, NV, USAIEEE Computer SocietyMark Yatskar, Luke S. Zettlemoyer, and Ali Farhadi. Situation recognition: Visual semantic role labeling for image understanding. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 5534-5542. IEEE Computer Society, 2016. doi: 10.1109/CVPR.2016.597. URL https://doi.org/10. 1109/CVPR.2016.597. 3
Merlot: Multimodal neural script knowledge models. Rowan Zellers, Ximing Lu, Jack Hessel, Youngjae Yu, Jae Sung Park, Jize Cao, Ali Farhadi, Yejin Choi, Advances in Neural Information Processing Systems. M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman VaughanCurran Associates, Inc3420Rowan Zellers, Ximing Lu, Jack Hessel, Youngjae Yu, Jae Sung Park, Jize Cao, Ali Farhadi, and Yejin Choi. Merlot: Multimodal neural script knowledge models. In M. Ran- zato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan, editors, Ad- vances in Neural Information Processing Systems, volume 34, pages 23634-23651. Cur- ran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper/2021/file/ c6d4eb15f1e84a36eff58eca3627c82e-Paper.pdf. 18, 19, 20
Generation for user generated videos. Kuo-Hao Zeng, Tseng-Hung Chen, Juan Carlos Niebles, Min Sun, European conference on computer vision. Springer19Kuo-Hao Zeng, Tseng-Hung Chen, Juan Carlos Niebles, and Min Sun. Generation for user generated videos. In European conference on computer vision, pages 609-625. Springer, 2016. 19
Improving event extraction via multimodal integration. Tongtao Zhang, Spencer Whitehead, Hanwang Zhang, Hongzhi Li, Joseph G Ellis, Lifu Huang, Wei Liu, Ji Heng, Shih-Fu Chang, 10.1145/3123266.3123294Proceedings of the 2017 ACM on Multimedia Conference, MM 2017. the 2017 ACM on Multimedia Conference, MM 2017Mountain View, CA, USATongtao Zhang, Spencer Whitehead, Hanwang Zhang, Hongzhi Li, Joseph G. Ellis, Lifu Huang, Wei Liu, Heng Ji, and Shih-Fu Chang. Improving event extraction via multimodal integration. In Proceedings of the 2017 ACM on Multimedia Conference, MM 2017, Mountain View, CA, USA, October 23-27, 2017, pages 270-278, 2017. doi: 10.1145/3123266.3123294. URL https://doi.org/10.1145/3123266.3123294. 3
Towards automatic learning of procedures from web instructional videos. Luowei Zhou, Chenliang Xu, Jason J Corso, AAAI Conference on Artificial Intelligence. Luowei Zhou, Chenliang Xu, and Jason J Corso. Towards automatic learning of procedures from web instructional videos. In AAAI Conference on Artificial Intelligence, pages 7590-7598, 2018. URL https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/17344. 19
• What data does each instance consist of? The instances have 'raw' video frames and text.
• Is there a label or target associated with each instance? The train set doesn't contain any label, while the test set is annotated with text events and their multimodal relations ('Hierarchical' or 'Identical') with video events.
• Is any information missing from individual instances? No.
• Are relationships between individual instances made explicit? Not applicable; we do not study relations between different videos and articles.
• Does the dataset contain all possible instances or is it a sample? Just a sample.
• Are there recommended data splits (e.g., training, development/validation, testing)? We divide our data into train/val/test splits.
• Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description. Yes. The Inter-Annotator Agreement score for Hierarchical multimodal relations is 47.5, which denotes inconsistencies in the annotations of different annotators. However, we argue that this task is difficult; even in NLP, where this task is unimodal, the IAA scores of two established datasets [16, 19] are 69 and 62, respectively.
• Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? The dataset is self-contained. However, we only release video URLs for the sake of minimizing privacy and consent issues.
Collection Process
• What mechanisms or procedures were used to collect the data? We used the pytube and youtube-search-python libraries.
• How was the data associated with each instance acquired? Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/derived from other data? The data was directly observable from YouTube.
• If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)? We collected 100.5K random YouTube video and description pairs from 9 media sources rated Center.
• Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)? Data collection was primarily done by the first author of this paper.
• Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances (e.g., recent crawl of old news articles)? If not, please describe the timeframe in which the data associated with the instances was created. The data was collected from August 2021 to September 2021, although the data was created by the news media sources as early as 1963 and has been uploaded to the YouTube platform since it was established.

Data Preprocessing
• Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? We do not preprocess or label the train set. However, we do label the test set for multimodal event-event relations.
• Was the "raw" data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)? If so, please provide a link or other access point to the 'raw' data. The raw data was saved, but at this time we do not plan to release it directly due to copyright and privacy concerns.
• Is the software used to preprocess/clean/label the instances available? If so, please provide a link or other access point. We will release our code upon the acceptance of publication.
• Does this dataset collection/processing procedure achieve the motivation for creating the dataset stated in the first section of this datasheet? If not, what are the limitations? The qualitative results (Figure 10) and data exploration illustrate the rich diversity of events and their relations, implying our goal of creating a dataset for studying multimodal event relations was successful. However, there are limitations, which we discuss in Appendix B.4.
Dataset Distribution
• How will the dataset be distributed? We plan to distribute all the videos and articles.
• When will the dataset be released/first distributed? What license (if any) is it distributed under? We will release the dataset upon acceptance of publication.
• Are there any copyrights on the data? It should be "fair use".
• Are there any fees or access restrictions? No.

Dataset Maintenance
• Will the dataset be updated? If so, how often and by whom? No.
• Is there a repository to link to any/all papers/systems that use this dataset? No.
• If others want to extend/augment/build on this dataset, is there a mechanism for them to do so? Not at this time.

Legal and Ethical Considerations
• Were any ethical review processes conducted (e.g., by an institutional review board)? We haven't done official processes.
• Does the dataset contain data that might be considered confidential? No, we only use public videos.
• Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? If so, please describe why. Yes, we discuss this in Appendix B.4.
• Does the dataset relate to people? Yes.
• Does the dataset identify any subpopulations (e.g., by age, gender)? Not explicitly.
• Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset? Yes. All the videos we use are publicly available.
| [] |
[
"Compositional Generalization in Multilingual Semantic Parsing over Wikidata",
"Compositional Generalization in Multilingual Semantic Parsing over Wikidata"
] | [
"Ruixiang Cui \nDepartment of Computer Science\nUniversity of Copenhagen\n\n",
"Rahul Aralikatte \nDepartment of Computer Science\nUniversity of Copenhagen\n\n",
"Heather Lent \nDepartment of Computer Science\nUniversity of Copenhagen\n\n",
"Daniel Hershcovich \nDepartment of Computer Science\nUniversity of Copenhagen\n\n"
] | [
"Department of Computer Science\nUniversity of Copenhagen\n",
"Department of Computer Science\nUniversity of Copenhagen\n",
"Department of Computer Science\nUniversity of Copenhagen\n",
"Department of Computer Science\nUniversity of Copenhagen\n"
] | [] | Semantic parsing (SP) allows humans to leverage vast knowledge resources through natural interaction. However, parsers are mostly designed for and evaluated on English resources, such as CFQ(Keysers et al., 2020), the current standard benchmark based on English data generated from grammar rules and oriented towards Freebase, an outdated knowledge base. We propose a method for creating a multilingual, parallel dataset of question-query pairs, grounded in Wikidata. We introduce such a dataset, which we call Multilingual Compositional Wikidata Questions (MCWQ), and use it to analyze the compositional generalization of semantic parsers in Hebrew, Kannada, Chinese and English. While within-language generalization is comparable across languages, experiments on zero-shot cross-lingual transfer demonstrate that cross-lingual compositional generalization fails, even with state-ofthe-art pretrained multilingual encoders. Furthermore, our methodology, dataset and results will facilitate future research on SP in more realistic and diverse settings than has been possible with existing resources. | 10.1162/tacl_a_00499 | [
"https://arxiv.org/pdf/2108.03509v2.pdf"
] | 249,209,473 | 2108.03509 | ad331dce175b1d38d6516455013c1ec0e26e606b |
Compositional Generalization in Multilingual Semantic Parsing over Wikidata
Ruixiang Cui
Department of Computer Science
University of Copenhagen
Rahul Aralikatte
Department of Computer Science
University of Copenhagen
Heather Lent
Department of Computer Science
University of Copenhagen
Daniel Hershcovich
Department of Computer Science
University of Copenhagen
Compositional Generalization in Multilingual Semantic Parsing over Wikidata
Semantic parsing (SP) allows humans to leverage vast knowledge resources through natural interaction. However, parsers are mostly designed for and evaluated on English resources, such as CFQ(Keysers et al., 2020), the current standard benchmark based on English data generated from grammar rules and oriented towards Freebase, an outdated knowledge base. We propose a method for creating a multilingual, parallel dataset of question-query pairs, grounded in Wikidata. We introduce such a dataset, which we call Multilingual Compositional Wikidata Questions (MCWQ), and use it to analyze the compositional generalization of semantic parsers in Hebrew, Kannada, Chinese and English. While within-language generalization is comparable across languages, experiments on zero-shot cross-lingual transfer demonstrate that cross-lingual compositional generalization fails, even with state-ofthe-art pretrained multilingual encoders. Furthermore, our methodology, dataset and results will facilitate future research on SP in more realistic and diverse settings than has been possible with existing resources.
Introduction
Semantic parsers grounded in knowledge bases (KBs) enable knowledge base question answering (KBQA) for complex questions. Many semantic parsers are grounded in KBs such as Freebase (Bollacker et al., 2008), DBpedia (Lehmann et al., 2015) and Wikidata (Pellissier Tanon et al., 2016), and models can learn to answer questions about unseen entities and properties (Herzig and Berant, 2017; Cheng and Lapata, 2018; Shen et al., 2019; Sas et al., 2020). An important desired ability is compositional generalization: the ability to generalize to unseen combinations of known components (Oren et al., 2020; Kim and Linzen, 2020).

One of the most widely used datasets for measuring compositional generalization in KBQA is CFQ (Compositional Freebase Questions; Keysers et al., 2020), which was generated using grammar rules, and is based on Freebase, an outdated and unmaintained English-only KB. While the need to expand language technology to many languages is widely acknowledged (Joshi et al., 2020), the lack of a benchmark for compositional generalization in multilingual semantic parsing (SP) hinders KBQA in languages other than English. Furthermore, progress in both SP and KBs necessitates that benchmarks can be reused and adapted for future methods. Wikidata is a multilingual KB, with entity and property labels in a multitude of languages. It has grown continuously over the years and is an important complement to Wikipedia. Much effort has been made to migrate Freebase data to Wikidata (Pellissier Tanon et al., 2016; Diefenbach et al., 2017; Hogan et al., 2021), but only in English. Investigating compositional generalization in cross-lingual SP requires a multilingual dataset, a gap we address in this work.
Table 1: Selected fields in a CFQ entry. questionWithBrackets is the full English question with entities surrounded by brackets. questionPatternModEntities is the question with entities replaced by placeholders. In questionWithMids, the entity codes (Freebase machine IDs; MIDs) are given instead of their labels. sparql is the fully executable SPARQL query for the question, and in sparqlPatternModEntities the entity codes are replaced by placeholders.

Starting from the original English, an Indo-European language using the Latin script, we create parallel datasets of questions in Hebrew, Kannada and Chinese, which use different scripts and belong to different language families: Afroasiatic, Dravidian and Sino-Tibetan, respectively. Our dataset includes questions in the four languages and their associated SPARQL queries. Our contributions are:
• a method to automatically migrate a KBQA dataset to another KB and extend it to diverse languages and domains,
• a benchmark for measuring compositional generalization in SP for KBQA over Wikidata in four typologically diverse languages,
• monolingual experiments with different SP architectures in each of the four languages, demonstrating similar within-language generalization, and
• zero-shot cross-lingual experiments using pretrained multilingual encoders, showing that compositional generalization from English to the other languages fails.
Our code for generating the dataset and for the experiments, as well as the dataset itself and trained models, are publicly available at https://github.com/coastalcph/seq2sparql.

Limitations of CFQ

CFQ (Compositional Freebase Questions; Keysers et al., 2020) is a dataset for measuring compositional generalization in SP. It targets the task of parsing questions in English into SPARQL queries executable on the Freebase KB (Bollacker et al., 2008). CFQ contains questions such as the one shown in Table 1. Parsers trained on CFQ transform these questions into SPARQL queries, which can subsequently be executed against Freebase to answer the original questions (in this case, "Yes").
CFQ uses the Distribution-Based Compositionality Assessment (DBCA) method to generate multiple train-test splits with maximally divergent examples in terms of compounds, while maintaining a low divergence in terms of primitive elements (atoms). In these maximum compound divergence (MCD) splits, the test set is constrained to examples containing novel compounds, i.e., new ways of composing the atoms seen during training. For measuring compositional generalization, named entities in the questions are anonymized so that models cannot simply learn the relationship between entities and properties. CFQ contains 239,357 English question-answer pairs, which encompass 49,320 question patterns and 34,921 SPARQL query patterns. Table 1 shows selected fields of an example in CFQ. In their experiments, Keysers et al. (2020) trained semantic parsers using several architectures on various train-test splits. They demonstrated a strong negative correlation between models' accuracy (correctness of the full generated SPARQL query) and compound divergence across a variety of system architectures: all models generalized poorly in the high-divergence settings, highlighting the need to improve compositional generalization in SP.
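To make the divergence measure concrete, the following is a minimal sketch of a Chernoff-coefficient-based divergence between the compound frequency distributions of a train and a test split, following our understanding of the DBCA formulation in Keysers et al. (2020); the toy distributions and function names are illustrative only.

from collections import Counter

def chernoff_coefficient(p: Counter, q: Counter, alpha: float) -> float:
    # C_alpha(P||Q) = sum_k p_k^alpha * q_k^(1-alpha), over normalized distributions.
    p_sum, q_sum = sum(p.values()), sum(q.values())
    keys = set(p) | set(q)
    return sum((p[k] / p_sum) ** alpha * (q[k] / q_sum) ** (1 - alpha) for k in keys)

def divergence(train: Counter, test: Counter, alpha: float) -> float:
    return 1.0 - chernoff_coefficient(train, test, alpha)

# Toy frequency distributions over compounds in two splits.
train = Counter({"director(film)": 10, "spouse(person)": 8, "director(spouse(person))": 1})
test = Counter({"director(spouse(person))": 9, "spouse(person)": 2})
# DBCA uses alpha=0.5 for atom divergence and alpha=0.1 for compound divergence.
print(round(divergence(train, test, alpha=0.1), 3))

An MCD split is then one where this compound divergence between train and test is maximized while the corresponding atom divergence stays low.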
By the time CFQ was released, Freebase had already been shut down. Consequently, to our knowledge, there is no existing SP dataset targeting compositional generalization that is grounded in a currently usable KB containing up-to-date information. We therefore migrate the dataset to such a KB, namely Wikidata, in §3.
Moreover, only a few studies have evaluated semantic parsers' performance in a multilingual setting, due to the scarcity of multilingual KBQA datasets (Perevalov et al., 2022b). No comparable benchmark exists for languages other than English, and it is therefore not clear whether results generalize to other languages. Compositional generalization in typologically distant languages may pose completely different challenges, as these languages may have different ways of composing meaning (Evans and Levinson, 2009). We create such a multilingual dataset in §4, leveraging the multilinguality of Wikidata.
Migration to Wikidata
Wikidata is widely accepted as the replacement for Freebase. It is actively maintained, represents knowledge in a multitude of languages and domains, and also supports SPARQL. Migrating Freebase queries to Wikidata, however, is not trivial, as there is no established full mapping between the KBs' properties and entities. An obvious alternative to migration would be a replication of the original CFQ generation process but with Wikidata as the KB. Before delving into the details of the migration process, let us motivate the decision not to pursue that option: the grammar used to generate CFQ was not made available to others by Keysers et al. (2020) and is prohibitively complex to reverse-engineer. Our migration process, on the other hand, is general and can similarly be applied for migrating other datasets from Freebase to Wikidata. Finally, many competitive models with specialized architectures have been developed for CFQ (e.g., Gai et al., 2021). Our migrated dataset is formally similar and facilitates their evaluation and the development of new methods.
Property Mapping
As can be seen in Table 1, the WHERE clause in a SPARQL query consists of a list of triples, where the second element in each triple is the property, e.g., ns:people.person.gender. CFQ uses 51 unique properties in its SPARQL queries, mostly belonging to the cinematography domain. These Freebase properties cannot be applied directly to Wikidata, which uses different property codes known as P-codes, e.g., P21. We therefore need to map the Freebase properties to Wikidata properties. As a first step in the migration process, we check which Freebase properties used in CFQ have corresponding Wikidata properties. Using a publicly available repository providing a partial mapping between the KBs, we find that 22 of the 51 Freebase properties in CFQ can be directly mapped to Wikidata properties. The other 29 require further processing:
Fourteen properties are the reverse of other properties, which do not have Wikidata counterparts. For example, ns:film.director.film is the reverse of ns:film.film.directed_by, and only the latter has Wikidata mapping, P57. We resolve the problem by swapping the entities around the property.
The other 15 properties concern whether an entity has a certain quality. In CFQ, ?x1 a ns:film.director asks whether ?x1 is a director. Wikidata does not contain such unary properties, so we treat these CFQ properties as entities in Wikidata. For example, director is wd:Q2526255, so we paraphrase the query as ?x1 wdt:P106 wd:Q2526255, asking whether ?x1's occupation (P106) is director. In addition, we substitute the art director property from CFQ with the composer property, because the former has no equivalent in Wikidata. Finally, we filter out queries with reverse marks over properties, e.g., ?x0 ns:people.person.gender M0, due to incompatibility with the question generation process (§3.2).
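As an illustration of these three cases, here is a minimal sketch of a triple-rewriting step, assuming a hand-built mapping table; the dictionary contents are the examples given in the text, and the function name is ours.

# Direct Freebase -> Wikidata property mapping (example from the text).
DIRECT = {"ns:film.film.directed_by": "wdt:P57"}
# Reverse properties: rewrite to the mapped property with subject/object swapped.
REVERSE = {"ns:film.director.film": "wdt:P57"}
# Unary type properties: rewrite "?x a ns:film.director" to an occupation triple.
UNARY = {"ns:film.director": ("wdt:P106", "wd:Q2526255")}

def rewrite_triple(subj: str, prop: str, obj: str):
    """Rewrite one Freebase triple into a Wikidata triple, or None if unmappable."""
    if prop in DIRECT:
        return subj, DIRECT[prop], obj
    if prop in REVERSE:  # e.g., ns:film.director.film is the reverse of directed_by
        return obj, REVERSE[prop], subj
    if prop == "a" and obj in UNARY:  # unary "is-a" clauses
        wd_prop, wd_entity = UNARY[obj]
        return subj, wd_prop, wd_entity
    return None  # filtered out (e.g., reverse-marked properties)

print(rewrite_triple("?x1", "a", "ns:film.director"))
# -> ('?x1', 'wdt:P106', 'wd:Q2526255'), i.e., ?x1's occupation is director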
After filtering, we remain with 236,304 entries containing only fully-mappable properties (98.7% of all entries in CFQ). We additionally make the necessary SPARQL syntax modifications for Wikidata.
Entity Substitution
Table 2: The English question is generated from the CFQ entry in Table 1 by the migration process described in §3.3, and the questions in the other languages are automatically translated (§4.1). The questionWithBrackets, questionPatternModEntities, sparql and sparqlPatternModEntities fields are analogous to the CFQ ones. recursionDepth (which quantifies the question complexity) and expectedResponse (which is the answer returned upon execution of the query) are copied from the CFQ entry.

A large number of entities in Freebase are absent in Wikidata. For example, neither of the entities in Figure 1 can be found in Wikidata. Moreover, there is no comprehensive or even partial mapping of Freebase entity IDs (i.e., Freebase machine IDs, MIDs, such as ns:m.05zppz) to Wikidata entity IDs (i.e., Q-codes, such as wd:Q6581097). We replicate the grounding process carried out by Keysers et al. (2020), substituting entity placeholders with compatible entity codes by executing the queries against Wikidata (a code sketch of this procedure follows the list below):
1. Replacing entity placeholders with SPARQL variables (e.g., ?v0), we obtain queries that return sets of compatible candidate entity assignments instead of simply an answer for a given assignment of entities.
2. We add constraints for the entities to be distinct, to avoid nonsensical redundancies (e.g., due to conjunction of identical clauses).
3. Special entities, representing nationalities and genders, are regarded as part of the question patterns in CFQ (and are not replaced with placeholders). Before running the queries, we thus replace all such entities with corresponding Wikidata Q-codes (instead of variables).
4. We execute the queries against the Wikidata query service to get the satisfying assignments of entity combinations, with which we replace the placeholders in the sparqlPatternModEntities fields.
5. Finally, we insert the Q-codes into the English questions in the questionWithMids field and the corresponding entity labels into the questionWithBrackets to obtain the English questions for our dataset.
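A minimal sketch of steps 1-4, using the SPARQLWrapper library against the public Wikidata endpoint; the placeholder handling is simplified relative to the actual pipeline, and the example pattern in the comment is illustrative.

import re
from SPARQLWrapper import SPARQLWrapper, JSON

WDQS = SPARQLWrapper("https://query.wikidata.org/sparql",
                     agent="MCWQ migration sketch")  # WDQS expects a user agent

def ground_entities(where_body: str):
    """Replace placeholders M0, M1, ... with variables, require the entities to
    be distinct, and fetch one satisfying assignment from Wikidata."""
    placeholders = sorted(set(re.findall(r"\bM\d+\b", where_body)))
    for i, ph in enumerate(placeholders):
        where_body = where_body.replace(ph, f"?v{i}")
    variables = " ".join(f"?v{i}" for i in range(len(placeholders)))
    distinct = " ".join(f"FILTER(?v{i} != ?v{j})"
                        for i in range(len(placeholders))
                        for j in range(i + 1, len(placeholders)))
    WDQS.setQuery(f"SELECT {variables} WHERE {{ {where_body} {distinct} }} LIMIT 1")
    WDQS.setReturnFormat(JSON)
    bindings = WDQS.query().convert()["results"]["bindings"]
    return bindings[0] if bindings else None

# e.g., ground_entities("M0 wdt:P86 ?x0 . ?x0 wdt:P26 M1 .")
# returns one Wikidata Q-code per variable, or None if the pattern is unsatisfiable.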
Through this process, 52.5% of the queries have at least one satisfying assignment. The resulting question-query pairs constitute our English dataset. They maintain the SPARQL patterns of CFQ, but the queries are all executable on Wikidata.

We obtain 124,187 question-query pairs, of which 67,523 are yes/no questions and 56,664 are wh-questions. The expected responses of the yes/no questions in this set are all "yes" due to our entity assignment process. To make MCWQ comparable to CFQ, which has both positive and negative answers, we sample alternative queries by replacing entities with ones from other queries whose preceding predicates are the same (a sketch of this step follows below). Our negative sampling results in 30,418 questions with "no" answers.
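A minimal sketch of this negative sampling step; here a query is represented as a list of (subject, predicate, object) string triples, the grouping key and function name are ours, and in practice the altered query would be re-executed to verify that the answer is indeed "no".

import random
from collections import defaultdict

def negative_sample(queries):
    """Swap an entity with one from another query sharing the same predicate,
    producing a plausible query with a (likely) negative answer."""
    by_predicate = defaultdict(set)
    for triples in queries:
        for _, pred, obj in triples:
            if obj.startswith("wd:"):
                by_predicate[pred].add(obj)
    negatives = []
    for triples in queries:
        new_triples = []
        for subj, pred, obj in triples:
            candidates = by_predicate[pred] - {obj}
            if obj.startswith("wd:") and candidates:
                obj = random.choice(sorted(candidates))
            new_triples.append((subj, pred, obj))
        negatives.append(new_triples)
    return negatives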
Migration Example
Consider the SPARQL pattern from Table 1. We replace the placeholders (e.g., M0) with variables and add constraints for getting only one assignment (which is enough for our purposes) with distinct entities, obtaining a query of the form:

SELECT ?v0 ?v1 WHERE { ... }

We execute the query and get wd:Q50807639 (Lohengrin) and wd:Q1560129 (Margarete Joswig) as satisfying answers for ?v0 and ?v1, respectively. Note that these are different from the entities in the original question ('Murder' Legendre and Lillian Lugosi); in general, there is no guarantee that the same entities from CFQ will be preserved in our dataset. Then we put these answers back into the query and make the necessary SPARQL syntax modifications for Wikidata to obtain the final query for this entry.
Dataset Statistics
We compare the statistics of MCWQ with those of CFQ. We replicate the train-test splits from CFQ on the corresponding subset of instances in our dataset.
The complexity of questions in CFQ is measured by recursion depth and reflects the number of rule applications used to generate a question, which encompasses grammar, knowledge, inference and resolution rules. While each question's complexity in MCWQ is the same as the corresponding CFQ question's, some questions cannot be migrated (see §3.1 and §3.2). To verify that the compound divergence is not affected, we compare the question complexity distributions of the two datasets on one of the three compositional splits (MCD1).

Stemming from its entities and properties, CFQ questions are limited to the domain of movies. The entities in MCWQ, however, can in principle come from any domain, owing to our flexible entity replacement method. Though MCWQ's properties are still a subset of those used in CFQ and are primarily in the movies domain, we also observe a few questions from other domains (cf. the opera-related entities in the example in §3.3).
Generating Multilingual Questions
To create a typologically diverse dataset, starting from our English dataset (English being an Indo-European language using the Latin script), we use machine translation into three other languages from different families (Afroasiatic, Dravidian and Sino-Tibetan), which use different scripts: Hebrew, Kannada and Chinese (§4.1). For a comparison to machine translation and a more realistic evaluation with regard to compositional SP, we manually translate a subset of the test sets of the three MCD splits (§4.2) and evaluate the machine translation quality (§4.3).
Generating Translations
Both question patterns and bracketed questions are translated separately from English with Google Cloud Translation. SPARQL queries remain unchanged, as both property and entity IDs are language-independent in Wikidata, which contains labels in different languages for each. Table 2 shows an example of a question in our dataset (which is generated from the same question as the CFQ instance from Table 1), as well as the resulting translations.
As an additional technical necessity, we add a question mark to the end of each question before translation (as the original dataset does not include question marks) and remove trailing question marks from the translated questions before including them in our dataset. We find this step to be essential for translation quality.
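A minimal sketch of this translation step using the google-cloud-translate client; the language codes and the example question are illustrative, and credentials setup is omitted.

from google.cloud import translate_v2 as translate

client = translate.Client()  # assumes GOOGLE_APPLICATION_CREDENTIALS is configured
TARGETS = {"Hebrew": "he", "Kannada": "kn", "Chinese": "zh-CN"}

def translate_question(question: str, target: str) -> str:
    # Append a question mark before translation (essential for quality), ...
    result = client.translate(question + "?", source_language="en",
                              target_language=target, format_="text")
    # ... and strip trailing question marks from the output.
    return result["translatedText"].rstrip("?？")

for lang, code in TARGETS.items():
    print(lang, translate_question("Did M0 's art director marry M1", code))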
Gold Test Set
CFQ and other datasets for evaluating compositional generalization (Lake and Baroni, 2018; Kim and Linzen, 2020) are generated from grammars. However, it has not been investigated how well models trained on them generalize to questions posed by humans. As a step towards that goal, we evaluate whether models trained with automatically generated and translated questions can generalize to high-quality human-translated questions. For that purpose, we obtain the intersection of the test sets of the MCD splits (1,860 entries), and sample two yes/no questions and two wh-questions for each complexity level (if available). This sample, termed test-intersection-MT, has 155 entries in total. The authors (one native speaker for each language) manually translate the English questions into Hebrew, Kannada and Chinese. We term the resulting dataset test-intersection-gold.
Translation Quality
We compute the BLEU (Papineni et al., 2002) scores of test-intersection-MT against test-intersection-gold using SacreBLEU (Post, 2018), resulting in 87.4, 76.6 and 82.8 for Hebrew, Kannada and Chinese, respectively. This indicates high quality of the machine translation outputs.
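For reproducibility, this is roughly how such corpus-level scores can be computed with SacreBLEU's Python API; the file names are hypothetical.

import sacrebleu

def corpus_bleu_score(mt_path: str, gold_path: str) -> float:
    with open(mt_path, encoding="utf-8") as f:
        hypotheses = [line.strip() for line in f]
    with open(gold_path, encoding="utf-8") as f:
        references = [line.strip() for line in f]
    # SacreBLEU expects a list of reference streams (here, a single one).
    # For Chinese, passing tokenize="zh" applies SacreBLEU's Chinese tokenizer.
    return sacrebleu.corpus_bleu(hypotheses, [references]).score

print(corpus_bleu_score("test-intersection-MT.he", "test-intersection-gold.he"))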
Additionally, one author per language manually assesses translation quality for one sampled question from each complexity level in the full dataset (40 in total). We rate the translations on a scale of 1-5 for fluency and for meaning preservation, with 1 being poor and 5 being optimal. Despite occasional translation issues, mostly attributed to lexical choice or morphological agreement, we confirm that the translations are of high quality. Across languages, over 80% of examples score 3 or higher in fluency and meaning preservation. The average meaning preservation scores for Hebrew, Kannada and Chinese are 4.4, 3.9 and 4.0, respectively. For fluency, they are 3.6, 3.9 and 4.4.
As a control, one of the authors (a native English speaker) evaluates English fluency for the same sample of 40 questions. Only 62% of patterns were rated 3 or above. While all English questions are grammatical, many suffer from poor fluency, tracing back to their automatic generation using rules. Some translations are rated higher in terms of fluency, mainly due to annotator leniency (focusing on disfluencies that might result from translation) and paraphrasing of unnatural constructions by the MT system (especially for lower complexities).
Experiments
While specialized architectures have achieved state-of-the-art results on CFQ (Guo et al., 2020, 2021; Gai et al., 2021), these approaches are English- or Freebase-specific. We therefore experiment with sequence-to-sequence (seq2seq) models, among which T5 (Raffel et al., 2020) has been shown to perform best on CFQ. We evaluate these models for each language separately (§5.1), and subsequently evaluate their cross-lingual compositional generalization (§5.2).
Monolingual Experiments
We evaluate six models' monolingual parsing performance on the three MCD splits and a random split of MCWQ. As done by Keysers et al. (2020), entities are masked during training, except those that are part of the question patterns (genders and nationalities).
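As a concrete illustration of this masking, here is a minimal sketch that replaces Wikidata Q-codes in a query, and the corresponding entity labels in the question, with placeholders M0, M1, ...; the keep-list and the example question are our own illustrative assumptions (only the male gender code wd:Q6581097 appears in the text).

import re

# Entities that are part of the question patterns and therefore stay unmasked
# (illustrative: male and female gender codes).
KEEP = {"wd:Q6581097", "wd:Q6581072"}

def mask_entities(question: str, query: str, entity_labels: dict):
    """Replace entity mentions with M0, M1, ... in both question and query."""
    mapping = {}
    for qcode in re.findall(r"wd:Q\d+", query):
        if qcode not in KEEP and qcode not in mapping:
            mapping[qcode] = f"M{len(mapping)}"
    for qcode, placeholder in mapping.items():
        query = query.replace(qcode, placeholder)
        question = question.replace(entity_labels[qcode], placeholder)
    return question, query

q, s = mask_entities(
    "Did Margarete Joswig marry the composer of Lohengrin",
    "ASK WHERE { wd:Q50807639 wdt:P86 ?x0 . ?x0 wdt:P26 wd:Q1560129 }",
    {"wd:Q50807639": "Lohengrin", "wd:Q1560129": "Margarete Joswig"},
)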
We experiment with two seq2seq architectures on MCWQ for each language, with the same hyperparameters tuned by Keysers et al. (2020) on the CFQ random split: LSTM (Hochreiter and Schmidhuber, 1997) with attention mechanism (Bahdanau et al., 2015) and Evolved Transformer (So et al., 2019), both implemented using Tensor2Tensor (Vaswani et al., 2018). Separate models are trained and evaluated per language, with randomly initialized (not pretrained) encoders. We train a model for each of the three MCD splits plus a random split for each language.
We also experiment with pretrained language models (PLMs), to assess whether multilingual PLMs, mBERT (Devlin et al., 2019) and mT5 (Xue et al., 2020), are as effective for monolingual compositional generalization as an English-only PLM. All models are implemented using the Transformers library (Wolf et al., 2020).
For mBERT, we fine-tune a multi_cased_L-12_H-768_A-12 encoder and a randomly initialized decoder of the same architecture. We train for 100 epochs with a patience of 25, a batch size of 128, and a learning rate of 5e-5 with linear decay.
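One way to realize this encoder-decoder setup with the Transformers library is sketched below; the exact wiring in our codebase may differ, and the checkpoint name corresponds to the multi_cased_L-12_H-768_A-12 model.

from transformers import BertConfig, BertLMHeadModel, BertModel, EncoderDecoderModel

# Pretrained mBERT encoder.
encoder = BertModel.from_pretrained("bert-base-multilingual-cased")

# Randomly initialized decoder of the same architecture, with cross-attention.
decoder_config = BertConfig.from_pretrained("bert-base-multilingual-cased")
decoder_config.is_decoder = True
decoder_config.add_cross_attention = True
decoder = BertLMHeadModel(decoder_config)  # no from_pretrained: random init

model = EncoderDecoderModel(encoder=encoder, decoder=decoder)
# Training then runs for up to 100 epochs (patience 25), batch size 128,
# learning rate 5e-5 with linear decay, as described above.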
For T5, we fine-tune T5-base on MCWQ English, and mT5-small and mT5-base on each language separately. We use the default hyperparameter settings, except for trying two learning rates, 5e-4 and 3e-5 (see results below). SPARQL queries are preprocessed using reversible intermediate representations (RIR), previously shown to facilitate compositional generalization for T5. We fine-tune all models for 50K steps.
We use six Titan RTX GPUs for training, with batch size of 36 for T5-base, 24 for mT5-small and 12 for mT5-base. We use two random seeds for T5-base. It takes 384 hours to finish a round of mT5-small experiments, 120 hours for T5-base and 592 hours for mT5-base.
In addition to exact-match accuracy, we report the BLEU scores of the predictions computed with SacreBLEU, as a large portion of the generated queries is partially (but not fully) correct.
Results
The results are shown in Table 4. While models generalize almost perfectly in the random split for all four languages, the MCD splits are much harder, with the highest mean accuracies of 38.3%, 33.2%, 32.1% and 36.3% for English, Hebrew, Kannada and Chinese, respectively. For comparison, on CFQ, T5-base+RIR has an accuracy of 60.8% on MCD-mean. One reason for this decrease in performance is the smaller training data: MCWQ is 52.5% of the size of CFQ. Furthermore, MCWQ has less redundancy than CFQ in terms of duplicate questions and SPARQL patterns, rendering models' potential strategy of simply memorizing patterns less effective.
Contrary to expectation, mT5-base does not outperform mT5-small. During training, we found that mT5-base reached a minimum loss early (after 1K steps). By changing the learning rate from the default 3e-5 to 5e-4, we seem to have overcome the local minimum. Training mT5-small with learning rate 5e-4 also renders better performance. Furthermore, the batch size we use for mT5-base may not be optimal, but we could not experiment with larger batch sizes due to resource limitations.

Table 4: Monolingual evaluation: exact match accuracies on MCWQ. MCD-mean is the mean accuracy over all three MCD splits. Random represents a random split of MCWQ; it is an upper bound on the performance, shown only for comparison. As SPARQL BLEU scores are highly correlated with accuracies in this experiment, we only show the latter here.
Comparing the performance across languages, mT5-base performs best on Hebrew and Kannada on average, while mT5-small has the best performance on English and Chinese. Due to resource limitations, we were not able to look deeper into the effect of hyperparameters or evaluate larger models. However, our experiments show that while multilingual compositional generalization is challenging for seq2seq semantic parsers, within-language generalization is comparable between languages. Nonetheless, English is always the easiest (at least marginally). A potential cause is that most semantic query languages were initially designed to represent and retrieve data stored in English databases, and thus have a bias towards English. Consequently, SPARQL syntax is closer to English than Hebrew, Kannada and Chinese. While translation errors might have an effect as well, we have seen in §4.3 that translation quality is high.
To investigate further, we plot the complexity distribution of correct predictions (exactly matching the gold SPARQL) per language for the two best systems in Figure 4. We witness a near-linear performance decay from complexity level 19. We find that mT5-base is better than mT5-small at lower complexity, despite the latter's superior overall performance. Interestingly, translated questions seem to make the parsers generalize better at higher complexity, as shown in the figure. For mT5-small, the three non-English models successfully parse more questions within the complexity range 46-50 than English; for mT5-base, within 44-50. As discussed in §4.3, machine-translated questions tend to have higher fluency than the English questions; we conjecture that such a smoothing effect helps the parser understand and learn from higher-complexity questions.

Figure 4: Two mT5 models' number of correct predictions, summed over the three MCD splits in monolingual experiments, plotted by complexity level. Each line represents a language. While mT5-small generalizes better overall, mT5-base is better at lower complexities (which require less compositional generalization).

Table 5: Mean BLEU scores and exact match accuracies on the three MCD splits and on a random split in zero-shot cross-lingual transfer experiments on MCWQ. The grey texts represent the models' monolingual performance on English, given for reference (the exact match accuracies are copied from Table 4). The black texts indicate the zero-shot cross-lingual transfer performance on Hebrew, Kannada and Chinese of a model trained on English. While the scores for individual MCD splits are omitted for brevity, in all three MCD splits the accuracies are below 1% (except on MCD 2 Chinese, at 4%).
Zero-shot Cross-lingual Parsing
Zero-shot cross-lingual SP has witnessed new advances with the development of PLMs (Shao et al., 2020; Sherborne and Lapata, 2022). Since translating datasets and training KBQA systems is expensive, it is beneficial to leverage multilingual PLMs, fine-tuned on English data, for generating SPARQL queries over Wikidata given natural language questions in different languages. While compositional generalization is difficult even in a monolingual setting, it is interesting to investigate whether multilingual PLMs can transfer in cross-lingual SP over Wikidata. Simple seq2seq T5/mT5 models perform reasonably well (>30% accuracy) on monolingual SP on some splits (see §5.1). We investigate whether the learned multilingual representations of such models enable compositional generalization even without target-language training. We use mT5-small+RIR and mT5-base+RIR, the two best models trained and evaluated on English in the previous experiments, to predict on the other languages.
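In practice, zero-shot prediction amounts to decoding a non-English question with the English-fine-tuned checkpoint; a sketch, where the local checkpoint path is hypothetical and the generation settings are our assumptions:

```python
# Sketch: decoding a Hebrew MCWQ question with an English-fine-tuned mT5.
# "./mt5-small-rir-en" is a hypothetical local checkpoint; beam size and
# length limit are our choices.
from transformers import AutoTokenizer, MT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("./mt5-small-rir-en")

question_he = "האם השחקן הגברי של M0 התחתן עם M2"  # Hebrew pattern from Table 2
inputs = tokenizer(question_he, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```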
Results
The results are shown in Table 5. Both BLEU and exact match accuracy of the predicted SPARQL queries drop drastically when the model is evaluated on Hebrew, Kannada and Chinese. mT5-small+RIR achieves 38.3% accuracy on MCD mean English, but less than 0.3% in zero-shot parsing on the three non-English languages. Even putting aside compositionality evaluation, as seen in the random split, the exact match accuracy in the zero-shot cross-lingual setting is still low. The relatively high BLEU scores can be attributed to the small overall vocabulary used in SPARQL queries. Interestingly, while mT5-base+RIR does not outperform mT5-small+RIR on MCD mean English, it yields better performance in the zero-shot setting: for Hebrew, Kannada and Chinese, the accuracies are 0.2%, 0.4% and 1.3% higher, respectively. For mT5-base, Chinese is slightly easier than Kannada and Hebrew to parse in the zero-shot setting, outperforming them by 1.1% and 0.8%, respectively.
To conclude, zero-shot cross-lingual transfer from English to Hebrew, Kannada and Chinese fails to generate valid queries in MCWQ. A potential cause of such unsuccessful transfer is that the four languages in MCWQ belong to different language families and have low linguistic similarity. It remains to be investigated whether such cross-lingual transfer would be more effective between related languages, such as from English to German (Lin et al., 2019).

Table 6: Mean BLEU scores and accuracies of monolingual models (§5.1) on test-intersection-MT and test-intersection-gold. The numbers are averaged over the accuracies of the predictions from the monolingual models trained on the three MCD splits. Overall, there is no substantial difference between the performances on the two intersection sets, demonstrating the reliability of evaluating on machine-translated data in this case.
Analysis
Evaluation with Gold Translation
Most existing compositional generalization datasets focus on SP (Lake and Baroni, 2018; Kim and Linzen, 2020; Keysers et al., 2020). These datasets are composed either in artificial language or in English using grammar rules. With test-intersection-gold, proposed in §4.2, we investigate whether models can generalize from a synthetic, automatically translated dataset to a manually translated one. We use the monolingual models trained on the three MCD splits to parse test-intersection-gold. In Table 6, we present the mean BLEU scores and exact match accuracies of the predicted SPARQL queries. There is no substantial difference between the performances on the two intersection sets, except for Kannada, which has a 4% accuracy drop on average. These results testify that MCWQ has sufficiently high translation quality and that models trained on such synthetic data can generalize to high-quality, manually translated questions.
Categorizing Errors
In an empirical analysis, we categorize typical prediction errors on test-intersection-gold and test-intersection-MT into six types: missing property, extra property, wrong property (where the two property sets have the same number of properties, but the elements do not match), missing entity, extra entity and wrong entity (again, the same number of entities but different entity sets). We plot the mean number of errors per category, as well as the number of predictions with multiple errors, in Figure 5 for the monolingual mT5-small models. Overall, model predictions tend to have more missing properties and entities than extra ones. Different languages, however, vary in error types. For example, on Hebrew, models make more missing property/entity errors than on the other languages, while on Kannada they make more extra property/entity errors. About 70 of the 155 examples contain multiple errors for all languages, with Kannada having slightly more.
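A sketch of this categorization, under our reading that properties are wdt:P... tokens, entities are M0-M9 placeholders and wd:Q... IDs, and that the missing/extra/wrong distinction is decided by set sizes:

```python
import re

def props_and_ents(query: str):
    """Extract property and entity sets from a SPARQL pattern; treating them
    as sets (not multisets) is our reading of the categorization above."""
    props = set(re.findall(r"wdt:P\d+", query))
    ents = set(re.findall(r"\bM\d\b|wd:Q\d+", query))
    return props, ents

def categorize(gold: str, pred: str) -> list:
    """Return the error categories that pred exhibits with respect to gold."""
    g_props, g_ents = props_and_ents(gold)
    p_props, p_ents = props_and_ents(pred)
    errors = []
    for kind, g, p in (("property", g_props, p_props),
                       ("entity", g_ents, p_ents)):
        if len(p) < len(g):
            errors.append("missing " + kind)
        elif len(p) > len(g):
            errors.append("extra " + kind)
        elif p != g:  # same size, different members
            errors.append("wrong " + kind)
    return errors

# Figure 7's prediction keeps the same property and entity *sets* as gold, so
# set-level categories return nothing: the error there is structural (a
# shuffled placeholder), exactly the case discussed under Other Observations.
gold = ("ASK WHERE { M0 wdt:P57 M1 . M0 wdt:P57 M2 . M0 wdt:P57 M3 . "
        "M0 wdt:P58 M1 . M0 wdt:P58 M2 . M0 wdt:P58 M3 }")
pred = "ASK WHERE { M0 wdt:P57 M1 . M1 wdt:P57 M2 . M0 wdt:P58 M3 }"
print(categorize(gold, pred))  # []
```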
Comparing errors on test-intersection-gold and test-intersection-MT, we find that missing properties are more common on gold for all languages. For Hebrew and Kannada, extra properties and entities are also more common on gold. For Chinese, however, these and missing entities are less common on gold than on MT.
In Figure 6 we plot the error statistics for zero-shot cross-lingual transfer using mT5-small models. We can see drastically more error occurrences: for both missing and extra properties/entities, the numbers are about double those from the monolingual experiments. The number of wrong property/entity errors remains similar, due to the difficulty of even predicting a set of the correct size in this setting. For all three target languages, nearly all predictions contain multiple errors. These statistics indicate the variety and pervasiveness of the errors.
Other Observations
We also find that, comparatively, parsers perform well on short questions in all four languages. This is expected, as the compositionality of these questions is inherently low. For languages other than English, the models perform well when the translations are faithful. On occasions when translations are less faithful or fluent but the models still generate correct queries, we hypothesize that translation acts as a data regularizer, especially at higher complexities, as demonstrated in Figure 4. Among wrong entity errors, the most common cause across languages is the shuffling of entity placeholders. In the example shown in Figure 7, the model generates M1 wdt:P57 M2 instead of M0 wdt:P57 M2, which indicates an incorrect interpretation of the predicate-argument structure.
Related Work
Compositional Generalization Compositional generalization has witnessed great developments in recent years. SCAN (Lake and Baroni, 2018), a synthetic dataset consisting of natural language and command pairs, is an early dataset designed to systematically evaluate neural networks' generalization ability. CFQ (Keysers et al., 2020) and COGS (Kim and Linzen, 2020) are two more realistic benchmarks following SCAN. Various approaches have been developed to enhance compositional generalization, for example, using hierarchical poset decoding (Guo et al., 2020), combining relevant queries (Das et al., 2021), using span representations (Herzig and Berant, 2021) and graph encoding (Gai et al., 2021). Beyond pure language, the evaluation of compositional generalization has been expanded to image captioning and situated language understanding (Nikolaus et al., 2019; Ruis et al., 2020). Multilingual and cross-lingual compositional generalization is an important and challenging field to which our paper aims to bring researchers' attention.
Knowledge Base Question Answering
Compared to machine reading comprehension (Rajpurkar et al., 2016; Joshi et al., 2017; Shao et al., 2018; Dua et al., 2019; d'Hoffschmidt et al., 2020), KBQA is less diverse in terms of datasets. Datasets such as WebQuestions (Berant et al., 2013), SimpleQuestions (Bordes et al., 2015), ComplexWebQuestions (Talmor and Berant, 2018), FreebaseQA (Jiang et al., 2019), GrailQA (Gu et al., 2021), CFQ and *CFQ (Tsarkov et al., 2021) were proposed on Freebase, a now-discontinued KB. SimpleQuestions2Wikidata (Diefenbach et al., 2017) and ComplexSequentialQuestions (Saha et al., 2018) are based on Wikidata, but like most others, they are monolingual English datasets. Related to our work is RuBQ (Korablinov and Braslavski, 2020; Rybin et al., 2021), an English-Russian dataset for KBQA over Wikidata. While that dataset is bilingual, it uses crowdsourced questions and is not designed for compositionality analysis. Recently, Thorne et al. (2021) proposed WIKINLDB, a Wikidata-based English KBQA dataset focusing on scalability rather than compositionality. Other related datasets include QALM (Kaffee et al., 2019), a dataset for multilingual question answering over a set of different popular knowledge graphs, intended to help determine the multilinguality of those knowledge graphs. Similarly, QALD-9 (Ngomo, 2018) and QALD-9-plus (Perevalov et al., 2022a) support the development of multilingual question answering systems, tied to DBpedia and Wikidata, respectively. The goal of both datasets is to expand QA systems to more languages rather than to improve compositionality. KQA Pro (Cao et al., 2022), concurrent work to ours, is an English KBQA dataset over Wikidata with a focus on compositional reasoning.
Wikidata has been leveraged across many NLP tasks, such as coreference resolution (Aralikatte et al., 2019), frame-semantic parsing (Sas et al., 2020), entity linking (Kannan Ravi et al., 2021) and named entity recognition (Nie et al., 2021). As for KBQA, the full potential of Wikidata is yet to be explored.
Multilingual and Cross-lingual Modelling
Benchmarks such as XGLUE (Liang et al., 2020) and XTREME (Hu et al., 2020) focus on multilingual classification and generation tasks. Cross-lingual learning has been studied across multiple fields, such as sentiment analysis (Abdalla and Hirst, 2017), document classification (Dong and de Melo, 2019), POS tagging (Kim et al., 2017) and syntactic parsing (Rasooli and Collins, 2017). In recent years, multilingual PLMs have been a primary tool for extending NLP applications to low-resource languages, as these models ameliorate the need to train individual models for each language, for which less data may be available. Several works have attempted to explore the limitations of such models in terms of practical usability for low-resource languages (Wu and Dredze, 2020), and also the underlying elements that make cross-lingual transfer learning viable (Dufter and Schütze, 2020). Beyond these PLMs, other works focus on improving cross-lingual learning by making particular changes to the encoder-decoder architecture, such as adding adapters to attune to specific information (Artetxe et al., 2020b;Pfeiffer et al., 2020).
For cross-lingual SP, Sherborne and Lapata (2022) explored zero-shot SP by aligning latent representations. Zero-shot cross-lingual SP has also been studied in dialogue modelling (Nicosia et al., 2021). Yang et al. (2021) present augmentation methods for Discourse Representation Theory parsing (Liu et al., 2021b). Oepen et al. (2020) explore cross-framework and cross-lingual SP for meaning representations. To the best of our knowledge, our work is the first to study cross-lingual transfer learning in KBQA.
Limitations
MCWQ is based on CFQ, a dataset generated from rules, and hence inherits the unnaturalness of question-query pairs of high complexity. Secondly, we use machine translation to make MCWQ multilingual. Although this is the dominant approach for generating multilingual datasets (Ruder et al., 2021), and we have provided evidence through human evaluation and comparative experiments in §4.3 and §5.1 that MCWQ has reasonable translation accuracy and fluency, machine translation nevertheless creates substandard translation artifacts (Artetxe et al., 2020a). One alternative is to write rules for template translation. The amount of work could be reduced by referring to recent work (Goodwin et al., 2021) in which English rules are provided for syntactic dependency parsing of CFQ's question fields.
Furthermore, the assumption that an English KB is a "canonical" conceptualization is unjustified, as speakers of other languages may know and care about other entities and relationships (Liu et al., 2021a;Hershcovich et al., 2022a). Therefore, future work must create multilingual SP datasets by sourcing questions from native speakers rather than translating them.
Conclusion
The field of KBQA has been saturated with work on English, due to both the inherent challenges of translating datasets and the reliance on English-only DBs. In this work, we presented a method for migrating the existing CFQ dataset to Wikidata and created a challenging multilingual dataset, MCWQ, targeting compositional generalization in multilingual and cross-lingual SP. In our experiments, we observe that pretrained multilingual language models struggle to transfer and generalize compositionally across languages. Our dataset will facilitate building robust multilingual semantic parsers by serving as a benchmark for evaluation of cross-lingual compositional generalization.
Environmental Impact
Following the climate-aware practice proposed by Hershcovich et al. (2022b), we present a climate performance model card in Table 7. "Time to train final model" is the sum over splits and languages for mT5-base+RIR, while "Time for all experiments" also includes the experiments with the English-only T5-base+RIR across all splits. While the work does not have direct positive environmental impact, better understanding of compositional generalization, resulting from our work, will facilitate more efficient modeling and therefore reduce emissions in the long term.
Figure 1: An example from the MCWQ dataset. The question in every language corresponds to the same Wikidata SPARQL query, which, upon execution, returns the answer (which is positive in this case).

Figure 2: Complexity distribution of the MCD 1 split of CFQ (above) and MCWQ (below). The training, development and test sets of the split in CFQ and MCWQ follow a similar trend in general. The fluctuation in the complexity of questions in the MCWQ splits reflects the dataset's full distribution (see Figure 3).

Figure 5: Number of errors per category in different SPARQL predictions on test-intersection-MT and test-intersection-gold, averaged across monolingual mT5-small+RIR models trained on the three MCD splits. The total number of items in each test set is 155.

Figure 6: Number of errors per category in different zero-shot cross-lingual SPARQL predictions on test-intersection-MT, averaged across mT5-small+RIR models trained on the three MCD splits in English. Additionally, mean error counts on the English set are given for comparison. The total number of items in each test set is 155.

Figure 7: Example of an error reflecting incorrect predicate-argument structure, where wdt:P57 is director and wdt:P58 is screenwriter (incorrect triples are shown in red and missed triples in blue in the figure). Question: Was M0 written by and directed by M1, M2, and M3. Gold: ASK WHERE { M0 wdt:P57 M1 . M0 wdt:P57 M2 . M0 wdt:P57 M3 . M0 wdt:P58 M1 . M0 wdt:P58 M2 . M0 wdt:P58 M3 }. Inferred: ASK WHERE { M0 wdt:P57 M1 . M1 wdt:P57 M2 . M0 wdt:P58 M3 }.
We leverage Wikidata and CFQ to create Multilingual Compositional Wikidata Questions (MCWQ), a new multilingual dataset of compositional questions grounded in Wikidata.

Table 1: A CFQ example (fields and content).
questionWithBrackets: Did ['Murder' Legendre]'s male actor marry [Lillian Lugosi]
questionPatternModEntities: Did M0 's male actor marry M2
questionWithMids: Did m.0h4y854 's male actor marry m.0hpnx3b
sparql: SELECT count(*) WHERE { ?x0 ns:film.actor.film/ns:film.performance.character ns:m.0h4y854 . ?x0 ns:people.person.gender ns:m.05zppz . ?x0 ns:people.person.spouse_s/ns:fictional_universe.marriage_of_fictional_characters.spouses ns:m.0hpnx3b . FILTER ( ?x0 != ns:m.0hpnx3b )}
sparqlPatternModEntities: SELECT count(*) WHERE { ?x0 ns:film.actor.film/ns:film.performance.character M0 . ?x0 ns:people.person.gender ns:m.05zppz . ?x0 ns:people.person.spouse_s/ns:fictional_universe.marriage_of_fictional_characters.spouses M2 . FILTER ( ?x0 != M2 )}

… Table 1, as well as the following English question (with entities surrounded by brackets): "Was [United Artists] founded by [Mr. Fix-it]'s star, founded by [D. W. Griffith], founded by [Mary Pickford], and founded by [The Star Boarder]'s star?"
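The migration this table illustrates (cf. Table 2 below) boils down to substituting Freebase property paths and MIDs with their Wikidata counterparts; a sketch with the mapping pairs read off the running example, where migrate is our hypothetical helper and the real WikiProject Freebase mapping (footnote 1) is far larger:

```python
# Sketch of the Freebase-to-Wikidata substitution illustrated by Tables 1-2.
FREEBASE_TO_WIKIDATA = {
    "ns:film.actor.film/ns:film.performance.character": "wdt:P453",
    "ns:people.person.gender": "wdt:P21",
    "ns:people.person.spouse_s/ns:fictional_universe."
    "marriage_of_fictional_characters.spouses": "wdt:P26",
    "ns:m.05zppz": "wd:Q6581097",  # special entity: the gender "male"
}

def migrate(sparql: str) -> str:
    """Hypothetical helper: rewrite a CFQ query pattern into MCWQ form."""
    for freebase, wikidata in FREEBASE_TO_WIKIDATA.items():
        sparql = sparql.replace(freebase, wikidata)
    # CFQ's yes/no syntax is unsupported by the Wikidata endpoint (footnote 3).
    return sparql.replace("SELECT count(*) WHERE", "ASK WHERE")

cfq = ("SELECT count(*) WHERE { "
       "?x0 ns:film.actor.film/ns:film.performance.character M0 . "
       "?x0 ns:people.person.gender ns:m.05zppz . "
       "?x0 ns:people.person.spouse_s/ns:fictional_universe."
       "marriage_of_fictional_characters.spouses M2 . FILTER ( ?x0 != M2 )}")
print(migrate(cfq))
# ASK WHERE { ?x0 wdt:P453 M0 . ?x0 wdt:P21 wd:Q6581097 . ?x0 wdt:P26 M2 . FILTER ( ?x0 != M2 )}
```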
… in Table 1 exist in Wikidata. Furthermore, unlike the case of properties, to our knowledge, there is no com…

Table 2: The MCWQ example from Table 1.
En / questionWithBrackets: Did [Lohengrin] 's male actor marry [Margarete Joswig]
En / questionPatternModEntities: Did M0 's male actor marry M2
He / questionWithBrackets: האם השחקן הגברי של [לוהנגרין] התחתן עם [מרגרט יוסוויג]
He / questionPatternModEntities: האם השחקן הגברי של M0 התחתן עם M2
Kn / questionWithBrackets: (Kannada)
Kn / questionPatternModEntities: (Kannada)
Zh / questionWithBrackets: [Lohengrin]的男演员嫁给了[Margarete Joswig]吗
Zh / questionPatternModEntities: M0的男演员和M2结婚吗
sparql: ASK WHERE { ?x0 wdt:P453 wd:Q50807639 . ?x0 wdt:P21 wd:Q6581097 . ?x0 wdt:P26 wd:Q1560129 . FILTER ( ?x0 != wd:Q1560129 )}
sparqlPatternModEntities: ASK WHERE { ?x0 wdt:P453 M0 . ?x0 wdt:P21 wd:Q6581097 . ?x0 wdt:P26 M2 . FILTER ( ?x0 != M2 )}
recursionDepth: 20
expectedResponse: True
We replace the properties and special entities (here the gender male: ns:m.05zppz → wd:Q6581097):

ASK WHERE { ?x0 wdt:P453 wd:Q50807639 . ?x0 wdt:P21 wd:Q6581097 . ?x0 wdt:P26 wd:Q1560129 . FILTER ( ?x0 != wd:Q1560129 ) }

As for the English question, we map the Freebase entities in the questionWithMids field to the labels of the obtained Wikidata entities. Therefore, the English question resulting from this process is:

Did [Lohengrin] 's male actor marry [Margarete Joswig]?
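The migrated query can be checked against the live Wikidata endpoint (footnote 4); a sketch using SPARQLWrapper, which is our choice of client, with a user-agent string of our own:

```python
# Sketch: executing the migrated ASK query against the public Wikidata
# endpoint (Wikimedia asks clients to identify themselves via a user agent).
from SPARQLWrapper import SPARQLWrapper, JSON

query = """
ASK WHERE {
  ?x0 wdt:P453 wd:Q50807639 .
  ?x0 wdt:P21 wd:Q6581097 .
  ?x0 wdt:P26 wd:Q1560129 .
  FILTER ( ?x0 != wd:Q1560129 )
}
"""

endpoint = SPARQLWrapper("https://query.wikidata.org/sparql",
                         agent="mcwq-example/0.1")
endpoint.setQuery(query)
endpoint.setReturnFormat(JSON)
print(endpoint.query().convert()["boolean"])  # True, the expectedResponse
```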
… in Table 3. MCWQ has 29,312 unique question patterns (mod entities, verbs, etc.), i.e., 23.6% of questions cover all question patterns, compared to 20.6% in CFQ. Furthermore, MCWQ has 86,353 unique query patterns (mod entities), resulting in 69.5% of instances covering all SPARQL patterns, 18% higher than in CFQ. Our dataset thus poses a greater challenge for compositional SP and exhibits less redundancy in terms of duplicate query patterns. It is worth noting that the lower percentage of unique queries in MCWQ compared to CFQ results from losses incurred when swapping entities in §3.1. To be compositionally challenging, Keysers et al. (2020) generated the MCD splits to have high compound divergence while maintaining low atom divergence. As atoms in MCWQ are mapped from CFQ while leaving the compositional structure intact, we derive train-test splits of our dataset by inducing…
Figure 3: Complexity distribution of MCWQ, measured by recursion depth, compared to CFQ.
Table 3: Dataset statistics comparison for MCWQ and CFQ. Percentages are relative to all unique questions. Question patterns refer to mod entities, verbs, etc., while query patterns refer to mod entities only.

                     CFQ               MCWQ
Unique questions     239,357           124,187
Question patterns    49,320 (20.6%)    29,312 (23.6%)
Unique queries       228,149 (95.3%)   101,856 (82%)
Query patterns       123,262 (51.5%)   86,353 (69.5%)
Yes/no questions     130,571 (54.6%)   67,523 (54.4%)
Wh-questions         108,786 (45.5%)   56,664 (45.6%)

… from literature, politics, and history in MCWQ.
Table 7: Climate performance model card for mT5-base+RIR fine-tuned on all splits and languages.
1. Model publicly available? Yes
2. Time to train final model: 592 hours
3. Time for all experiments: 1315 hours
4. Energy consumption: 2209.2 kWh
5. Location for computations: Denmark
6. Energy mix at location: 191 gCO2eq/kWh
7. CO2eq for final model: 189.96 kg
8. CO2eq for all experiments: 421.96 kg
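The CO2eq rows follow from multiplying energy consumption by the location's energy mix; a quick check (the final-model energy figure is our inference, not stated in the card):

```python
# Check of the climate card arithmetic: CO2eq = energy (kWh) * mix (gCO2eq/kWh).
total_energy_kwh = 2209.2
mix_g_per_kwh = 191

total_co2_kg = total_energy_kwh * mix_g_per_kwh / 1000
print(round(total_co2_kg, 2))  # 421.96 kg, matching row 8

# Row 7 then implies the final model's energy use (our inference, not stated):
final_co2_kg = 189.96
print(round(final_co2_kg * 1000 / mix_g_per_kwh, 1))  # about 994.6 kWh
```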
1 https://www.wikidata.org/wiki/Wikidata:WikiProject_Freebase/Mapping
2 While some Freebase properties have multiple corresponding Wikidata properties, we consider a property mappable as long as it has at least one mapping.
3 CFQ uses SELECT count(*) WHERE to query yes/no questions, but this syntax is not supported by Wikidata. We replace it with ASK WHERE, intended for boolean queries.
4 https://query.wikidata.org/
5 https://cloud.google.com/translate
6 We attempted to translate bracketed questions and subsequently replace the bracketed entities with placeholders as question patterns. In preliminary experiments, we found that separate translation of question patterns is of higher translation quality.
AcknowledgmentsThe authors thank Anders Søgaard and Miryam de Lhoneux for their comments and suggestions, as well as the TACL editors and several rounds of reviewers for their constructive evaluation. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 801199 (Heather Lent).
References

Mohamed Abdalla and Graeme Hirst. 2017. Cross-lingual sentiment analysis without (good) translation. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 506-515, Taipei, Taiwan. Asian Federation of Natural Language Processing.

Rahul Aralikatte, Heather Lent, Ana Valeria Gonzalez, Daniel Herschcovich, Chen Qiu, Anders Sandholm, Michael Ringaard, and Anders Søgaard. 2019. Rewarding coreference resolvers for being consistent with world knowledge. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1229-1235, Hong Kong, China. Association for Computational Linguistics.

Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2020a. Translation artifacts in cross-lingual transfer learning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7674-7684, Online. Association for Computational Linguistics.

Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020b. On the cross-lingual transferability of monolingual representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4623-4637, Online. Association for Computational Linguistics.

Dzmitry Bahdanau, Kyung Hyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015.

Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533-1544, Seattle, Washington, USA. Association for Computational Linguistics.

Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, SIGMOD '08, pages 1247-1250, New York, NY, USA. Association for Computing Machinery.

Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. 2015. Large-scale simple question answering with memory networks. arXiv preprint arXiv:1506.02075.

Shulin Cao, Jiaxin Shi, Liangming Pan, Lunyiu Nie, Yutong Xiang, Lei Hou, Juanzi Li, Bin He, and Hanwang Zhang. 2022. KQA Pro: A dataset with explicit compositional programs for complex question answering over knowledge base. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6101-6119, Dublin, Ireland. Association for Computational Linguistics.

Jianpeng Cheng and Mirella Lapata. 2018. Weakly-supervised neural semantic parsing with a generative ranker. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 356-367, Brussels, Belgium. Association for Computational Linguistics.

Rajarshi Das, Manzil Zaheer, Dung Thai, Ameya Godbole, Ethan Perez, Jay Yoon Lee, Lizhen Tan, Lazaros Polymenakos, and Andrew McCallum. 2021. Case-based reasoning for natural language queries over knowledge bases. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 9594-9611, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Martin d'Hoffschmidt, Wacim Belblidia, Quentin Heinrich, Tom Brendlé, and Maxime Vidal. 2020. FQuAD: French question answering dataset. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1193-1208, Online. Association for Computational Linguistics.

Dennis Diefenbach, Thomas Pellissier Tanon, K. Singh, and P. Maret. 2017. Question answering benchmarks for Wikidata. In International Semantic Web Conference.

Xin Dong and Gerard de Melo. 2019. A robust self-learning framework for cross-lingual text classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6306-6310, Hong Kong, China. Association for Computational Linguistics.

Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2368-2378, Minneapolis, Minnesota. Association for Computational Linguistics.

Philipp Dufter and Hinrich Schütze. 2020. Identifying elements essential for BERT's multilinguality. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4423-4437, Online. Association for Computational Linguistics.

Nicholas Evans and Stephen C. Levinson. 2009. The myth of language universals: Language diversity and its importance for cognitive science. Behavioral and Brain Sciences, 32(5):429-448.

Yu Gai, Paras Jain, Wendi Zhang, Joseph Gonzalez, Dawn Song, and Ion Stoica. 2021. Grounded graph decoding improves compositional generalization in question answering. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 1829-1838, Punta Cana, Dominican Republic. Association for Computational Linguistics.

Emily Goodwin, Siva Reddy, Timothy J. O'Donnell, and Dzmitry Bahdanau. 2021. Compositional generalization in dependency parsing. arXiv preprint arXiv:2110.06843.

Yu Gu, Sue Kase, Michelle Vanni, Brian Sadler, Percy Liang, Xifeng Yan, and Yu Su. 2021. Beyond IID: three levels of generalization for question answering on knowledge bases. In Proceedings of the Web Conference 2021, pages 3477-3488.

Yinuo Guo, Zeqi Lin, Jian-Guang Lou, and Dongmei Zhang. 2020. Hierarchical poset decoding for compositional generalization in language. Advances in Neural Information Processing Systems, 33:6913-6924.

Yinuo Guo, Hualei Zhu, Zeqi Lin, Bei Chen, Jian-Guang Lou, and Dongmei Zhang. 2021. Revisiting iterative back-translation from the perspective of compositional generalization. In AAAI'21.

Daniel Hershcovich, Stella Frank, Heather Lent, Miryam de Lhoneux, Mostafa Abdou, Stephanie Brandl, Emanuele Bugliarello, Laura Cabello Piqueras, Ilias Chalkidis, Ruixiang Cui, Constanza Fierro, Katerina Margatina, Phillip Rust, and Anders Søgaard. 2022a. Challenges and strategies in cross-cultural NLP. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6997-7013, Dublin, Ireland. Association for Computational Linguistics.
Daniel Hershcovich, Nicolas Webersinke, Mathias Kraus, Julia Anna Bingler, and Markus Leippold. 2022b. Towards climate awareness in NLP research. arXiv preprint arXiv:2205.05071.

Jonathan Herzig and Jonathan Berant. 2017. Neural semantic parsing over multiple knowledge-bases. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 623-628, Vancouver, Canada. Association for Computational Linguistics.

Jonathan Herzig and Jonathan Berant. 2021. Span-based semantic parsing for compositional generalization. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 908-921, Online. Association for Computational Linguistics.

Jonathan Herzig, Peter Shaw, Ming-Wei Chang, Kelvin Guu, Panupong Pasupat, and Yuan Zhang. 2021. Unlocking compositional generalization in pre-trained models using intermediate representations. arXiv preprint arXiv:2104.07478.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.

Aidan Hogan, Eva Blomqvist, Michael Cochez, Claudia d'Amato, Gerard de Melo, Claudio Gutierrez, José Emilio Labra Gayo, S. Kirrane, Sebastian Neumaier, Axel Polleres, R. Navigli, Axel-Cyrille Ngonga Ngomo, Sabbir M. Rashid, Anisa Rula, Lukas Schmelzeisen, Juan Sequeda, Steffen Staab, and Antoine Zimmermann. 2021. Knowledge graphs. Communications of the ACM, 64:96-104.

Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation. In International Conference on Machine Learning, pages 4411-4421. PMLR.

Kelvin Jiang, Dekun Wu, and Hui Jiang. 2019. FreebaseQA: A new factoid QA data set matching trivia-style question-answer pairs with Freebase. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 318-323, Minneapolis, Minnesota. Association for Computational Linguistics.

Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601-1611, Vancouver, Canada. Association for Computational Linguistics.

Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP world. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6282-6293, Online. Association for Computational Linguistics.

Lucie-Aimée Kaffee, Kemele M. Endris, Elena Simperl, and Maria-Esther Vidal. 2019. Ranking knowledge graphs by capturing knowledge about languages and labels. In Proceedings of the 10th International Conference on Knowledge Capture, K-CAP 2019, Marina Del Rey, CA, USA, November 19-21, 2019. ACM.

Manoj Prabhakar Kannan Ravi, Kuldeep Singh, Isaiah Onando Mulang', Saeedeh Shekarpour, Johannes Hoffart, and Jens Lehmann. 2021. CHOLAN: A modular approach for neural entity linking on Wikipedia and Wikidata. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 504-514, Online. Association for Computational Linguistics.

Daniel Keysers, Nathanael Schärli, Nathan Scales, Hylke Buisman, Daniel Furrer, Sergii Kashubin, Nikola Momchev, Danila Sinopalnikov, Lukasz Stafiniak, Tibor Tihon, Dmitry Tsarkov, Xiao Wang, Marc van Zee, and Olivier Bousquet. 2020. Measuring compositional generalization: A comprehensive method on realistic data. In International Conference on Learning Representations.

Joo-Kyung Kim, Young-Bum Kim, Ruhi Sarikaya, and Eric Fosler-Lussier. 2017. Cross-lingual transfer learning for POS tagging without cross-lingual resources. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2832-2838, Copenhagen, Denmark. Association for Computational Linguistics.

Najoung Kim and Tal Linzen. 2020. COGS: A compositional generalization challenge based on semantic interpretation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9087-9105, Online. Association for Computational Linguistics.
Vladislav Korablinov and Pavel Braslavski. 2020. RuBQ: A Russian dataset for question answering over Wikidata. In International Semantic Web Conference.

Brenden M. Lake and Marco Baroni. 2018. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, volume 80 of Proceedings of Machine Learning Research, pages 2879-2888. PMLR.

Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick van Kleef, Sören Auer, and Christian Bizer. 2015. DBpedia: a large-scale, multilingual knowledge base extracted from Wikipedia. Semantic Web, 6(2):167-195.

Yaobo Liang, Nan Duan, Yeyun Gong, Ning Wu, Fenfei Guo, Weizhen Qi, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, Xiaodong Fan, Ruofei Zhang, Rahul Agrawal, Edward Cui, Sining Wei, Taroon Bharti, Ying Qiao, Jiun-Hung Chen, Winnie Wu, Shuguang Liu, Fan Yang, Daniel Campos, Rangan Majumder, and Ming Zhou. 2020. XGLUE: A new benchmark dataset for cross-lingual pre-training, understanding and generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6008-6018, Online. Association for Computational Linguistics.

Yu-Hsiang Lin, Chian-Yu Chen, Jean Lee, Zirui Li, Yuyan Zhang, Mengzhou Xia, Shruti Rijhwani, Junxian He, Zhisong Zhang, Xuezhe Ma, Antonios Anastasopoulos, Patrick Littell, and Graham Neubig. 2019. Choosing transfer languages for cross-lingual learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3125-3135, Florence, Italy. Association for Computational Linguistics.

Fangyu Liu, Emanuele Bugliarello, Edoardo Maria Ponti, Siva Reddy, Nigel Collier, and Desmond Elliott. 2021a. Visually grounded reasoning across languages and cultures. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10467-10485, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Jiangming Liu, Shay B. Cohen, Mirella Lapata, and Johan Bos. 2021b. Universal Discourse Representation Structure parsing. Computational Linguistics, 47(2):445-476.

Axel-Cyrille Ngonga Ngomo. 2018. 9th challenge on question answering over linked data (QALD-9). Language, 7(1):58-64.

Massimo Nicosia, Zhongdi Qu, and Yasemin Altun. 2021. Translate & Fill: Improving zero-shot multilingual semantic parsing with synthetic data. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3272-3284, Punta Cana, Dominican Republic. Association for Computational Linguistics.

Binling Nie, Ruixue Ding, Pengjun Xie, Fei Huang, Chen Qian, and Luo Si. 2021. Knowledge-aware named entity recognition with alleviating heterogeneity. In Proceedings of the AAAI Conference on Artificial Intelligence.

Mitja Nikolaus, Mostafa Abdou, Matthew Lamm, Rahul Aralikatte, and Desmond Elliott. 2019. Compositional generalization in image captioning. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 87-98, Hong Kong, China. Association for Computational Linguistics.
Stephan Oepen, Omri Abend, Lasha Abzianidze, Johan Bos, Jan Hajic, Daniel Hershcovich, Bin Li, Tim O'Gorman, Nianwen Xue, and Daniel Zeman. 2020. MRP 2020: The second shared task on cross-framework and cross-lingual meaning representation parsing. In Proceedings of the CoNLL 2020 Shared Task: Cross-Framework Meaning Representation Parsing, pages 1-22, Online. Association for Computational Linguistics.

Inbar Oren, Jonathan Herzig, Nitish Gupta, Matt Gardner, and Jonathan Berant. 2020. Improving compositional generalization in semantic parsing. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2482-2495, Online. Association for Computational Linguistics.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.

Thomas Pellissier Tanon, Denny Vrandečić, Sebastian Schaffert, Thomas Steiner, and Lydia Pintscher. 2016. From Freebase to Wikidata: The great migration. In Proceedings of the 25th International Conference on World Wide Web, WWW '16, pages 1419-1428.

Aleksandr Perevalov, Dennis Diefenbach, Ricardo Usbeck, and Andreas Both. 2022a. QALD-9-plus: A multilingual dataset for question answering over DBpedia and Wikidata translated by native speakers. In 2022 IEEE 16th International Conference on Semantic Computing (ICSC). IEEE.

Aleksandr Perevalov, Axel-Cyrille Ngonga Ngomo, and Andreas Both. 2022b. Enhancing the accessibility of knowledge graph question answering systems through multilingualization. In 2022 IEEE 16th International Conference on Semantic Computing (ICSC), pages 251-256.

Jonas Pfeiffer, Ivan Vulić, Iryna Gurevych, and Sebastian Ruder. 2020. MAD-X: An adapter-based framework for multi-task cross-lingual transfer. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7654-7673, Online. Association for Computational Linguistics.

Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Brussels, Belgium. Association for Computational Linguistics.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.
SQuAD: 100,000+ questions for machine comprehension of text. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, Percy Liang, 10.18653/v1/D16-1264Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. the 2016 Conference on Empirical Methods in Natural Language ProcessingAustin, TexasAssociation for Computational LinguisticsPranav Rajpurkar, Jian Zhang, Konstantin Lopy- rev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas. Association for Com- putational Linguistics.
Cross-lingual syntactic transfer with limited resources. Mohammad Sadegh, Rasooli , Michael Collins, 10.1162/tacl_a_00061Transactions of the Association for Computational Linguistics. 5Mohammad Sadegh Rasooli and Michael Collins. 2017. Cross-lingual syntactic transfer with lim- ited resources. Transactions of the Association for Computational Linguistics, 5:279-293.
XTREME-R: Towards more challenging and nuanced multilingual evaluation. Sebastian Ruder, Noah Constant, Jan Botha, Aditya Siddhant, Orhan Firat, Jinlan Fu, Pengfei Liu, Junjie Hu, Dan Garrette, Graham Neubig, Melvin Johnson, 10.18653/v1/2021.emnlp-main.802Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. the 2021 Conference on Empirical Methods in Natural Language ProcessingOnline and Punta Cana, Dominican Republic. Association for Computational LinguisticsSebastian Ruder, Noah Constant, Jan Botha, Aditya Siddhant, Orhan Firat, Jinlan Fu, Pengfei Liu, Junjie Hu, Dan Garrette, Graham Neubig, and Melvin Johnson. 2021. XTREME-R: Towards more challenging and nuanced multilingual evalu- ation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Process- ing, pages 10215-10245, Online and Punta Cana, Dominican Republic. Association for Computa- tional Linguistics.
A benchmark for systematic generalization in grounded language understanding. Advances in neural information processing systems. Laura Ruis, Jacob Andreas, Marco Baroni, Diane Bouchacourt, M Brenden, Lake, 33Laura Ruis, Jacob Andreas, Marco Baroni, Di- ane Bouchacourt, and Brenden M Lake. 2020. A benchmark for systematic generalization in grounded language understanding. Ad- vances in neural information processing systems, 33:19861-19872.
RuBQ 2.0: An innovated Russian question answering dataset. Vladislav Ivan Rybin, Pavel Korablinov, Pavel Efimov, Braslavski, Eighteenth Extended Semantic Web Conference -Resources Track. Ivan Rybin, Vladislav Korablinov, Pavel Efimov, and Pavel Braslavski. 2021. RuBQ 2.0: An innovated Russian question answering dataset. In Eighteenth Extended Semantic Web Conference -Resources Track.
Complex sequential question answering: Towards learning to converse over linked question answer pairs with a knowledge graph. Amrita Saha, Vardaan Pahuja, M Mitesh, Karthik Khapra, A P S Sankaranarayanan, Chandar, AAAI. Amrita Saha, Vardaan Pahuja, Mitesh M. Khapra, Karthik Sankaranarayanan, and A. P. S. Chandar. 2018. Complex sequential question answering: Towards learning to converse over linked question answer pairs with a knowledge graph. In AAAI.
WikiBank: Using Wikidata to improve multilingual frame-semantic parsing. Cezar Sas, Meriem Beloucif, Anders Søgaard, Proceedings of the 12th Language Resources and Evaluation Conference. the 12th Language Resources and Evaluation ConferenceMarseille, FranceEuropean Language Resources AssociationCezar Sas, Meriem Beloucif, and Anders Søgaard. 2020. WikiBank: Using Wikidata to improve multilingual frame-semantic parsing. In Proceed- ings of the 12th Language Resources and Eval- uation Conference, pages 4183-4189, Marseille, France. European Language Resources Associ- ation.
Multi-level alignment pretraining for multi-lingual semantic parsing. Bo Shao, Yeyun Gong, Weizhen Qi, Nan Duan, Xiaola Lin, 10.18653/v1/2020.coling-main.289Proceedings of the 28th International Conference on Computational Linguistics. the 28th International Conference on Computational LinguisticsBarcelona, SpainOnline). International Committee on Computational LinguisticsBo Shao, Yeyun Gong, Weizhen Qi, Nan Duan, and Xiaola Lin. 2020. Multi-level alignment pre- training for multi-lingual semantic parsing. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3246-3256, Barcelona, Spain (Online). International Commit- tee on Computational Linguistics.
Drcd: a chinese machine reading comprehension dataset. Trois Chih Chieh Shao, Yuting Liu, Yiying Lai, Sam Tseng, Tsai, 10.48550/ARXIV.1806.00920Chih Chieh Shao, Trois Liu, Yuting Lai, Yiying Tseng, and Sam Tsai. 2018. Drcd: a chinese machine reading comprehension dataset.
Multi-task learning for conversational question answering over a large-scale knowledge base. Tao Shen, Xiubo Geng, Tao Qin, Daya Guo, Duyu Tang, Nan Duan, Guodong Long, Daxin Jiang, 10.18653/v1/D19-1248Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsTao Shen, Xiubo Geng, Tao Qin, Daya Guo, Duyu Tang, Nan Duan, Guodong Long, and Daxin Jiang. 2019. Multi-task learning for conversational ques- tion answering over a large-scale knowledge base. In Proceedings of the 2019 Conference on Empiri- cal Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 2442-2451, Hong Kong, China. Association for Computational Linguistics.
Zeroshot cross-lingual semantic parsing. Tom Sherborne, Mirella Lapata, Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. the 60th Annual Meeting of the Association for Computational LinguisticsDublin, IrelandAssociation for Computational Linguistics1Tom Sherborne and Mirella Lapata. 2022. Zero- shot cross-lingual semantic parsing. In Proceed- ings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4134-4153, Dublin, Ireland. Asso- ciation for Computational Linguistics.
The evolved transformer. David So, Quoc Le, Chen Liang, PMLRProceedings of the 36th International Conference on Machine Learning. the 36th International Conference on Machine Learning97David So, Quoc Le, and Chen Liang. 2019. The evolved transformer. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 5877-5886. PMLR.
The web as a knowledge-base for answering complex questions. Alon Talmor, Jonathan Berant, 10.18653/v1/N18-1059Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesNew Orleans, LouisianaAssociation for Computational Linguistics1Alon Talmor and Jonathan Berant. 2018. The web as a knowledge-base for answering complex ques- tions. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 641-651, New Orleans, Louisiana. Association for Computational Linguistics.
Database reasoning over text. James Thorne, Majid Yazdani, Marzieh Saeidi, Fabrizio Silvestri, Sebastian Riedel, Alon Halevy, 10.18653/v1/2021.acl-long.241Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language ProcessingLong Papers1James Thorne, Majid Yazdani, Marzieh Saeidi, Fab- rizio Silvestri, Sebastian Riedel, and Alon Halevy. 2021. Database reasoning over text. In Proceed- ings of the 59th Annual Meeting of the Associa- tion for Computational Linguistics and the 11th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), pages 3091-3104, Online. Association for Computa- tional Linguistics.
Nikola Momchev, Danila Sinopalnikov, and Nathanael Schärli. 2021. *-CFQ: Analyzing the scalability of machine learning on a compositional task. Dmitry Tsarkov, Tibor Tihon, Nathan Scales, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial IntelligenceDmitry Tsarkov, Tibor Tihon, Nathan Scales, Nikola Momchev, Danila Sinopalnikov, and Nathanael Schärli. 2021. *-CFQ: Analyzing the scalabil- ity of machine learning on a compositional task. Proceedings of the AAAI Conference on Artificial Intelligence.
Ten-sor2tensor for neural machine translation. Ashish Vaswani, Samy Bengio, Eugene Brevdo, Francois Chollet, Aidan N Gomez, Stephan Gouws, Llion Jones, Łukasz Kaiser, Nal Kalchbrenner, Niki Parmar, Ryan Sepassi, Noam Shazeer, Jakob Uszkoreit, abs/1803.07416CoRRAshish Vaswani, Samy Bengio, Eugene Brevdo, Francois Chollet, Aidan N. Gomez, Stephan Gouws, Llion Jones, Łukasz Kaiser, Nal Kalch- brenner, Niki Parmar, Ryan Sepassi, Noam Shazeer, and Jakob Uszkoreit. 2018. Ten- sor2tensor for neural machine translation. CoRR, abs/1803.07416.
Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Clara Patrick Von Platen, Yacine Ma, Julien Jernite, Canwen Plu, Teven Le Xu, Sylvain Scao, Gugger, 10.18653/v1/2020.emnlp-demos.6Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. the 2020 Conference on Empirical Methods in Natural Language Processing: System DemonstrationsOnline. Association for Computational LinguisticsThomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexan- der Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natu- ral Language Processing: System Demonstrations, pages 38-45, Online. Association for Computa- tional Linguistics.
Are all languages created equal in multilingual BERT?. Shijie Wu, Mark Dredze, 10.18653/v1/2020.repl4nlp-1.16Proceedings of the 5th Workshop on Representation Learning for NLP. the 5th Workshop on Representation Learning for NLPOnline. Association for Computational LinguisticsShijie Wu and Mark Dredze. 2020. Are all lan- guages created equal in multilingual BERT? In Proceedings of the 5th Workshop on Representa- tion Learning for NLP, pages 120-130, Online. Association for Computational Linguistics.
Aditya Barua, and Colin Raffel. 2020. mT5: A massively multilingual pre-trained text-to-text transformer. Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, arXiv:2010.11934arXiv preprintLinting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2020. mT5: A massively multilingual pre-trained text-to-text transformer. arXiv preprint arXiv:2010.11934.
Frustratingly simple but surprisingly strong: Using language-independent features for zero-shot cross-lingual semantic parsing. Jingfeng Yang, Federico Fancellu, Bonnie Webber, Diyi Yang, 10.18653/v1/2021.emnlp-main.472Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. the 2021 Conference on Empirical Methods in Natural Language ProcessingOnline and Punta Cana, Dominican RepublicAssociation for Computational LinguisticsJingfeng Yang, Federico Fancellu, Bonnie Webber, and Diyi Yang. 2021. Frustratingly simple but surprisingly strong: Using language-independent features for zero-shot cross-lingual semantic pars- ing. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Process- ing, pages 5848-5856, Online and Punta Cana, Dominican Republic. Association for Computa- tional Linguistics.
| [] |
[
"EventGraph at CASE 2021 Task 1: A General Graph-based Approach to Protest Event Extraction",
"EventGraph at CASE 2021 Task 1: A General Graph-based Approach to Protest Event Extraction"
] | [
"Huiling You \nUniversity of Oslo\n\n",
"David Samuel \nUniversity of Oslo\n\n",
"Samia Touileb \nUniversity of Bergen\n\n",
"Lilja Øvrelid liljao@ifi.uio.nosamia.touileb@uib.no \nUniversity of Oslo\n\n"
] | [
"University of Oslo\n",
"University of Oslo\n",
"University of Bergen\n",
"University of Oslo\n"
] | [
"Proceedings of the 5th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE)"
] | This paper presents our submission to the 2022 edition of the CASE 2021 shared task 1, subtask 4. The EventGraph system adapts an endto-end, graph-based semantic parser to the task of Protest Event Extraction and more specifically subtask 4 on event trigger and argument extraction. We experiment with various graphs, encoding the events as either "labeled-edge" or "node-centric" graphs. We show that the "nodecentric" approach yields best results overall, performing well across the three languages of the task, namely English, Spanish, and Portuguese. EventGraph is ranked 3rd for English and Portuguese, and 4th for Spanish. Our code is available at: https://github.com/ huiling-y/eventgraph_at_case | 10.48550/arxiv.2210.09770 | [
"https://www.aclanthology.org/2022.case-1.22.pdf"
] | 252,967,953 | 2210.09770 | e10a53741dd298060f547f46da92583520f38b3a |
EventGraph at CASE 2021 Task 1: A General Graph-based Approach to Protest Event Extraction
December 7-8, 2022
Huiling You
University of Oslo
David Samuel
University of Oslo
Samia Touileb
University of Bergen
Lilja Øvrelid liljao@ifi.uio.no samia.touileb@uib.no
University of Oslo
EventGraph at CASE 2021 Task 1: A General Graph-based Approach to Protest Event Extraction
Proceedings of the 5th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE)
the 5th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE), December 7-8, 2022
This paper presents our submission to the 2022 edition of the CASE 2021 shared task 1, subtask 4. The EventGraph system adapts an end-to-end, graph-based semantic parser to the task of Protest Event Extraction and more specifically subtask 4 on event trigger and argument extraction. We experiment with various graphs, encoding the events as either "labeled-edge" or "node-centric" graphs. We show that the "node-centric" approach yields the best results overall, performing well across the three languages of the task, namely English, Spanish, and Portuguese. EventGraph is ranked 3rd for English and Portuguese, and 4th for Spanish. Our code is available at: https://github.com/huiling-y/eventgraph_at_case
Introduction
The automated extraction of socio-political event information from text constitutes an important NLP task, with a number of application areas for social scientists, policy makers, etc. The task involves analysis at different levels of granularity: document-level, sentence-level, and the fine-grained extraction of event triggers and arguments within a sentence. The CASE 2022 Shared Task 1 on Multilingual Protest Event Detection extends the 2021 shared task (Hürriyetoglu et al., 2021a) with additional data in the evaluation phase and features four subtasks: (i) document classification, (ii) sentence classification, (iii) event sentence coreference, and (iv) event extraction.
The task of event extraction involves the detection of explicit event triggers and corresponding arguments in text. Current classification-based approaches typically model the task as a pipeline of classifiers (Ji and Grishman, 2008; Li et al., 2013; Liu et al., 2020; Du and Cardie, 2020) or use joint modeling approaches (Yang and Mitchell, 2016; Nguyen et al., 2016; Liu et al., 2018; Wadden et al., 2019; Lin et al., 2020).
In this paper, we present the EventGraph system and its application to Task 1 Subtask 4 in the 2022 edition of the CASE 2021 shared task. EventGraph is a joint framework for event extraction, which encodes events as graphs and solves event extraction as semantic graph parsing. We show that it is beneficial to model the relation between event triggers and arguments, and to approach event extraction via structured prediction instead of sequence labelling. Our system performs well on the three languages, achieving competitive results and consistently ranking among the top four systems.
In the following, we briefly describe the data supplied by the shared task organizers and present Subtask 4 in some more detail. We then go on to present an overview of the EventGraph system focusing on the encoding of the data to semantic graphs and the model architecture. We experiment with several different graph encodings and provide a more detailed analysis of the results.
Data and task
Our contribution is to subtask 4, which falls under shared task 1 - the detection and extraction of socio-political and crisis events. While most subtasks of shared task 1 have sentence-level annotations, subtask 4 has been annotated at the token level, while providing the annotators with the document-level contexts. Subtask 4 focuses on the extraction of event triggers and event arguments related to contentious politics and riots (Hürriyetoglu et al., 2021a). This subtask has been previously approached as a sequence labeling problem combining various methods of fine-tuning pre-trained language models (Hürriyetoglu et al., 2021a).
The data supplied for Subtask 4 is identical to that of the 2021 edition of the task, as presented in Hürriyetoglu et al. (2021a). The data is part of the multilingual extension of the GLOCON dataset (Hürriyetoglu et al., 2021b) with data from English, Portuguese, and Spanish. The source of the data is protest event coverage in news articles from specific countries: China and South Africa (English), Brazil (Portuguese), and Argentina (Spanish). The data has been doubly annotated by graduate students in political science with token-level information regarding event triggers and arguments. Hürriyetoglu et al. (2021a) report the token-level inter-annotator agreement to be between 0.35 and 0.60. Disagreements between annotators were subsequently resolved by an annotation supervisor. Table 1 shows the number of news articles for each of the languages in the task, distributed over the training and test sets. This clearly shows that the majority of the data is in English, with only a fraction of articles in Portuguese and Spanish. Relevant statistics for the different event component annotations for Subtask 4 are presented in Table 1, detailing the number of triggers, participants, and various other types of argument components, such as place, target, organizer, etc. Once again, the table also illustrates the comparative imbalance in data across the three languages.
System overview
We use our system, EventGraph, which adapts an end-to-end graph-based semantic parser to the task of extracting socio-political events. In what follows, we give more details about the graph representation and the model architecture of our system.
Graph representations
We represent each sentence as an event graph, which contains event trigger(s) and arguments as nodes. In an event graph, edges are constrained between the trigger(s) and the corresponding arguments. However, since our system can take graphs in a general sense as input, the precise graph representation that works best for this task must be determined empirically. We here explore two different graph encoding methods, where the labels for triggers and arguments are represented either as edge labels or as node labels, namely "labeled-edge" and "node-centric". Since sentences in the data may contain information about several events with arguments shared across these, we also experiment with a version of the "node-centric" approach where multiple triggers give rise to separate nodes in the graph. The intuition behind this is that it is easier for the model to predict a node anchoring to a single span than to several disjoint spans. A small code sketch of the three encodings follows the list below.
• Labeled-edge: labels for event trigger(s) and arguments are represented as edge labels; multiple triggers are merged into one node, as shown by the first graph of Figure 1.
• Node-centric: labels for event trigger(s) and arguments are represented as node labels; there is always a single node for trigger(s), as shown by the second graph of Figure 1.
• Node-centric-split: node labels denote trigger(s) and argument roles; multiple triggers are represented in different nodes, as shown by the third graph of Figure 1.
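Concretely, the three encodings differ only in where the role labels live and in whether trigger spans are merged. The following minimal Python sketch (our own illustration, not the released EventGraph code; the container formats, function names, and the trigger-to-argument edge policy are assumptions) builds them for the Figure 1 example.

```python
# Toy builders for the three event-graph encodings; node "spans" are plain
# strings here, whereas the real system anchors nodes to token spans.

def labeled_edge_graph(triggers, arguments):
    # Roles are EDGE labels; all trigger spans merge into one node.
    nodes = ["<root>", tuple(triggers)] + [span for span, _ in arguments]
    edges = [(0, 1, "trigger")]                       # <root> -> trigger node
    edges += [(1, i + 2, role) for i, (_, role) in enumerate(arguments)]
    return nodes, edges

def node_centric_graph(triggers, arguments, split=False):
    # Roles are NODE labels; edges carry no labels. With split=True every
    # trigger span becomes its own node ("node-centric-split").
    trig = [(t, "trigger") for t in triggers] if split \
        else [(tuple(triggers), "trigger")]
    nodes = [("<root>", "root")] + trig + list(arguments)
    edges = [(0, i + 1) for i in range(len(trig))]    # <root> -> trigger(s)
    first_arg = 1 + len(trig)
    edges += [(t + 1, first_arg + a)                  # trigger(s) -> arguments
              for t in range(len(trig)) for a in range(len(arguments))]
    return nodes, edges

triggers = ["chased", "hacked to death"]
arguments = [("group", "participant"), ("Chale", "target"),
             ("people", "participant")]
print(node_centric_graph(triggers, arguments, split=True))
```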
Model architecture
Our model is built upon a winning framework (Samuel and Straka, 2020) from a previous meaning representation parsing shared task (Oepen et al., 2020). The model contains customizable components for predicting nodes and edges, thus generating event graphs for different graph representations. We introduce each component of the model as follows (Figure 2):
Sentence encoding Each token of an input sentence obtains a contextualized embedding from a pretrained language model, the large version of XLM-R (Conneau et al., 2020) in our implementation. These embeddings are mapped onto latent queries by a linear transformation layer, and processed by a stack of Transformer layers (Vaswani et al., 2017) to model the dependencies between queries.
Node prediction A node-presence classifier processes the queries and predicts nodes by classifying each query. An anchor biaffine classifier (Dozat and Manning, 2017) creates anchors from the nodes to surface strings via deep biaffine attention between the queries and the contextual embeddings.
Edge prediction With predicted nodes, two biaffine classifiers are used to construct the edges between nodes: one classifier predicts the presence of an edge between a pair of nodes, and the other predicts the corresponding edge label. The graph generated for each input sentence contains the extracted event components. We then convert the labels to BIO format.
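A rough PyTorch sketch of such a deep biaffine scorer for edge presence and edge labels follows; this is our own simplification, not the released PERIN/EventGraph implementation, and the dimensions, names, and initialization are assumptions.

```python
import torch
import torch.nn as nn

class Biaffine(nn.Module):
    """Deep biaffine scorer: a bilinear term plus a linear term over the pair."""
    def __init__(self, dim, n_labels):
        super().__init__()
        self.U = nn.Parameter(torch.empty(n_labels, dim, dim))
        self.W = nn.Linear(2 * dim, n_labels)
        nn.init.xavier_uniform_(self.U)

    def forward(self, head, dep):
        # head, dep: (batch, nodes, dim) -> (batch, nodes, nodes, n_labels);
        # assumes the same node set on both sides, as in edge scoring.
        bilinear = torch.einsum("bid,ldk,bjk->bijl", head, self.U, dep)
        n = dep.size(1)
        pair = torch.cat([head.unsqueeze(2).expand(-1, -1, n, -1),
                          dep.unsqueeze(1).expand(-1, n, -1, -1)], dim=-1)
        return bilinear + self.W(pair)

queries = torch.randn(2, 5, 64)     # node queries from the Transformer stack
edge_presence = Biaffine(64, 1)     # is there an edge between nodes i and j?
edge_label = Biaffine(64, 8)        # which label does an edge carry?
print(edge_presence(queries, queries).shape)  # torch.Size([2, 5, 5, 1])
```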
Experimental setup
Data We use all the official training data to train our final model, without using any additional data. During development time, we set aside about 10 percent of the training data for development. A breakdown of the number of articles and sentences in train and dev is provided in Table 1.
Joint training We train our model on the training data of all three languages and test on the official test data. As shown in Table 1, the training data for Portuguese and Spanish makes up only a small portion of all training data, which leads to few-shot learning for these two languages.
Implementation details We use the large version of XLM-R via HuggingFace transformers library (Wolf et al., 2020). All models were trained with a single Nvidia RTX3090 GPU.
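Loading the encoder with the mentioned library amounts to the standard HuggingFace calls; the snippet below shows only the encoding step, without any of the task-specific modules.

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
encoder = AutoModel.from_pretrained("xlm-roberta-large")

batch = tokenizer(["Chale was allegedly chased by a group of about 30 people."],
                  return_tensors="pt")
hidden = encoder(**batch).last_hidden_state   # shape: (1, seq_len, 1024)
```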
Evaluation metrics
The evaluation metric is a macro F1 score for individual languages. The predicted event-annotated texts are in BIO format, and the scores are calculated with a Python implementation 1 of the conlleval evaluation script used in the CoNLL-2000 shared task (Tjong Kim Sang and Buchholz, 2000), where precision, recall and F1 scores are calculated for predicted spans against the gold spans, and there is no dependency between event arguments and triggers.
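For illustration, the core of this span-level scoring fits in a few lines; the toy below is our own, not the shared-task script, and it ignores some conlleval edge cases (e.g., spans opened by a bare "I-" tag).

```python
def bio_spans(tags):
    # Collect (start, end, label) spans from a BIO-tagged sequence.
    spans, start = set(), None
    for i, tag in enumerate(tags + ["O"]):          # sentinel flushes last span
        if start is not None and not tag.startswith("I-"):
            spans.add((start, i, tags[start][2:]))
            start = None
        if tag.startswith("B-"):
            start = i
    return spans

def span_f1(gold_tags, pred_tags):
    gold, pred = bio_spans(gold_tags), bio_spans(pred_tags)
    tp = len(gold & pred)                            # exact-match spans only
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

gold = ["B-trigger", "O", "B-target", "I-target"]
pred = ["B-trigger", "O", "B-target", "O"]
print(span_f1(gold, pred))   # 0.5: the shortened target span counts as a miss
```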
Submitted systems We submitted three models as listed in Table 3.
Results and discussion
We summarize the results of our systems on the official test data in Table 3. All scores are obtained by submitting our test predictions to the shared task. 2 Results show that "node-centric" systems generate better results than "labeled-edge" systems, and that it is more beneficial to keep multiple event triggers as separate nodes. In terms of languages, all models perform best on English, which is unsurprising, since the training data consists mostly of English. However, the results on Portuguese are consistently better than those on Spanish, signaling that English might be a better transfer language for Portuguese than for Spanish. Compared with other participating systems, in particular the winning systems, 2 as shown in Table 3, our results are still competitive. We rank 3rd for English and Portuguese, and 4th for Spanish; our best results are achieved by a single system. For English and Portuguese, our results are very close to the winning results, which are achieved by different participating systems.
Error analysis on development data
Since the gold data for the test set is not available to task participants, we are not able to perform a more detailed error analysis. Hence, to gain more insight into our models' performance, we provide some error analysis on the development data (as described in Table 1). As previously mentioned, during our model development phase, we did not use all the official training data for training, but set aside a small set for validation (about 10%).
As shown in Table 2, over all event components, target and fname arguments are more difficult to extract than others, with scores substantially lower across different languages and models. In general, our models perform best in trigger extraction, partly because the number of triggers is much larger than that of event arguments for all datasets.
We further look at target and fname prediction scores of the English development set. As shown in Table 4, for fname, our systems tend to over-predict, with consistently lower precision scores; by manually going through our systems' predictions, we find many labeled chunks of fname are actually non-event components. For target, our systems tend to under-predict, with consistently higher precision scores; we also find that our systems would predict a longer span, for instance "former diplomat" as opposed to "diplomat", which is the gold span, and sometimes our systems confuse organizer and participant with target, by wrongly labelling the corresponding span as target.
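The over- and under-prediction pattern discussed above can be quantified with per-label precision and recall over spans (e.g., spans extracted as in the evaluation sketch earlier); the numbers below are invented purely for illustration.

```python
from collections import Counter

def per_label_pr(gold_spans, pred_spans):
    # gold_spans / pred_spans: sets of (start, end, label) triples
    pred_n = Counter(s[2] for s in pred_spans)
    gold_n = Counter(s[2] for s in gold_spans)
    tp = Counter(s[2] for s in gold_spans & pred_spans)
    return {lab: (tp[lab] / pred_n[lab] if pred_n[lab] else 0.0,   # precision
                  tp[lab] / gold_n[lab] if gold_n[lab] else 0.0)   # recall
            for lab in set(pred_n) | set(gold_n)}

gold = {(0, 2, "target"), (5, 6, "fname")}
pred = {(0, 1, "target"), (5, 6, "fname"), (8, 9, "fname")}
print(per_label_pr(gold, pred))
# fname: an extra prediction lowers precision; target: a wrong span boundary
# counts as both a false positive and a false negative.
```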
Conclusion
In this paper we have presented the EventGraph system for event extraction and its application to the CASE 2022 shared task on Multilingual Protest Event Detection. EventGraph solves the task as a graph parsing problem; hence, we experiment with different ways of encoding the event data as general graphs, contrasting so-called "labeled-edge" and "node-centric" approaches. Our results indicate that the "node-centric" approach is beneficial for this task, and furthermore that separating nodes belonging to different events in the same sentence proves useful. A more detailed analysis of the development results indicates that our system performs well in trigger identification, but struggles in the identification of target and fname arguments.
Figure 2: EventGraph architecture. 1) The input gets a contextualized representation from the sentence encoding module, 2) graph nodes are decoded by the node prediction module, and 3) connected by the edge prediction module. The given example is for "labeled-edge" event graph parsing.
Figure 1: Graph representations of the sentence "Chale was allegedly chased by a group of about 30 people and was hacked to death with pangas, axes and spears." The three panels show the labeled-edge, node-centric, and node-centric-split representations; in each, an artificial <root> node connects to the trigger(s) "chased" and "hacked to death", which in turn connect to the arguments "group" (participant), "Chale" (target), and "people" (participant).
Table 1: Top: Number of articles (sentences) for the different languages in Subtask 4 (Hürriyetoglu et al., 2021a). About 10 percent (in terms of sentences) of the official training data is used as the development split. Bottom: Counts for the different event components in Subtask 4 training data for English, Portuguese, and Spanish (Hürriyetoglu et al., 2021a).

              English       Portuguese   Spanish
train         732 (2,925)   29 (78)      29 (91)
dev           76 (323)      4 (9)        1 (15)
test          179 (311)     50 (190)     50 (192)
trigger       4,595         122          157
participant   2,663         73           88
place         1,570         61           15
target        1,470         32           64
organizer     1,261         19           25
etime         1,209         41           40
fname         1,201         48           49
Table 2: Detailed F1 scores of our systems on the development data with different graph representations. We also add the number of each event component to better compare the distribution of components against the scores.

Language  System              trigger  target  Place   Participant  Organizer  fname   etime   all
En        (counts)            457      134     118     293          131        129     121
          Labeled-edge        82.48    56.29   75.44   74.62        74.52      50.42   77.06   73.46
          Node-centric        84.21    62.09   74.89   76.42        75.46      54.31   81.22   75.85
          Node-centric-split  84.62    52.88   75.11   73.75        74.91      52.28   78.97   73.92
Es        (counts)            28       5       5       7            4          7       5
          Labeled-edge        66.67    60.00   100.00  100.00       66.67      71.43   80.00   73.85
          Node-centric        65.62    72.73   100.00  100.00       80.00      76.92   80.00   75.76
          Node-centric-split  71.19    54.55   100.00  100.00       66.67      85.71   60.00   75.59
Pr        (counts)            11       7       3       5            2          2       5
          Labeled-edge        83.33    71.43   75.00   90.91        66.67      100.00  66.67   78.87
          Node-centric        88.00    61.54   66.67   90.91        100.00     100.00  66.67   79.45
          Node-centric-split  91.67    71.43   50.00   90.91        100.00     66.67   100.00  83.78
Table 3: Results of our systems on the official test data with different graph representations. We also include the winning system results from the shared task leaderboard. Numbers in parentheses give the ranking on the leaderboard; we only add the ranking for our best-performing system.

System              Language    Macro F1
Labeled-edge        English     73.12
                    Spanish     64.02
                    Portuguese  69.62
Node-centric        English     74.02
                    Spanish     64.16
                    Portuguese  70.73
Node-centric-split  English     74.76 (3)
                    Spanish     64.49 (4)
                    Portuguese  71.72 (3)
Winning systems     English     77.46 (1)
                    Spanish     69.87 (1)
                    Portuguese  74.57 (1)
2 https://codalab.lisn.upsaclay.fr/competitions/7126, accessed on September 29, 2022.

Table 4: Detailed Precision, Recall, and F1 scores of fname and target arguments for the English development set.

Argument  System              P      R      F1
fname     Labeled-edge        47.62  53.57  50.42
          Node-centric        52.50  56.25  54.31
          Node-centric-split  48.84  56.25  52.28
target    Labeled-edge        60.28  52.80  56.29
          Node-centric        65.52  59.01  62.09
          Node-centric-split  58.21  48.45  52.88
1 https://github.com/sighsmile/conlleval
Acknowledgments
This research was supported by industry partners and the Research Council of Norway with funding to MediaFutures: Research Centre for Responsible Media Technology and Innovation, through the Centres for Research-based Innovation scheme, project number 309339.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics.
Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In International Conference on Learning Representations.
Xinya Du and Claire Cardie. 2020. Event extraction by answering (almost) natural questions. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 671-683, Online. Association for Computational Linguistics.
Ali Hürriyetoglu, Osman Mutlu, Erdem Yörük, Farhana Ferdousi Liza, Ritesh Kumar, and Shyam Ratan. 2021a. Multilingual protest news detection - shared task 1, CASE 2021. In Proceedings of the 4th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2021), pages 79-91, Online. Association for Computational Linguistics.
Ali Hürriyetoglu, Erdem Yörük, Osman Mutlu, Fırat Duruşan, Çagrı Yoltar, Deniz Yüret, and Burak Gürel. 2021b. Cross-context news corpus for protest event-related knowledge base construction. Data Intelligence, 3(2):308-335.
Heng Ji and Ralph Grishman. 2008. Refining event extraction through cross-document inference. In Proceedings of ACL-08: HLT, pages 254-262, Columbus, Ohio. Association for Computational Linguistics.
Fayuan Li, Weihua Peng, Yuguang Chen, Quan Wang, Lu Pan, Yajuan Lyu, and Yong Zhu. 2020. Event extraction as multi-turn question answering. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 829-838, Online. Association for Computational Linguistics.
Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 73-82, Sofia, Bulgaria. Association for Computational Linguistics.
Ying Lin, Heng Ji, Fei Huang, and Lingfei Wu. 2020. A joint neural model for information extraction with global features. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7999-8009, Online. Association for Computational Linguistics.
Jian Liu, Yubo Chen, Kang Liu, Wei Bi, and Xiaojiang Liu. 2020. Event extraction as machine reading comprehension. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1641-1651, Online. Association for Computational Linguistics.
Xiao Liu, Zhunchen Luo, and Heyan Huang. 2018. Jointly multiple events extraction via attention-based graph information aggregation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1247-1256, Brussels, Belgium. Association for Computational Linguistics.
Thien Huu Nguyen, Kyunghyun Cho, and Ralph Grishman. 2016. Joint event extraction via recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 300-309, San Diego, California. Association for Computational Linguistics.
Stephan Oepen, Omri Abend, Lasha Abzianidze, Johan Bos, Jan Hajic, Daniel Hershcovich, Bin Li, Tim O'Gorman, Nianwen Xue, and Daniel Zeman. 2020. MRP 2020: The second shared task on cross-framework and cross-lingual meaning representation parsing. In Proceedings of the CoNLL 2020 Shared Task: Cross-Framework Meaning Representation Parsing, pages 1-22, Online. Association for Computational Linguistics.
David Samuel and Milan Straka. 2020. ÚFAL at MRP 2020: Permutation-invariant semantic parsing in PERIN. In Proceedings of the CoNLL 2020 Shared Task: Cross-Framework Meaning Representation Parsing, pages 53-64, Online. Association for Computational Linguistics.
Erik F. Tjong Kim Sang and Sabine Buchholz. 2000. Introduction to the CoNLL-2000 shared task: Chunking. In Fourth Conference on Computational Natural Language Learning and the Second Learning Language in Logic Workshop.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.
David Wadden, Ulme Wennberg, Yi Luan, and Hannaneh Hajishirzi. 2019. Entity, relation, and event extraction with contextualized span representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5784-5789, Hong Kong, China. Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
Bishan Yang and Tom M. Mitchell. 2016. Joint extraction of events and entities within a document context. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 289-299, San Diego, California. Association for Computational Linguistics.
| [
"https://github.com/sighsmile/"
] |
[
"N -ary Relation Extraction using Graph State LSTM",
"N -ary Relation Extraction using Graph State LSTM"
] | [
"Linfeng Song \nDepartment of Computer Science\nUniversity of Rochester\n14627RochesterNY\n",
"Yue Zhang \nSchool of Engineering\nWestlake University\nChina\n",
"Zhiguo Wang \nIBM T.J. Watson Research Center\n10598Yorktown HeightsNY\n",
"Daniel Gildea \nDepartment of Computer Science\nUniversity of Rochester\n14627RochesterNY\n"
] | [
"Department of Computer Science\nUniversity of Rochester\n14627RochesterNY",
"School of Engineering\nWestlake University\nChina",
"IBM T.J. Watson Research Center\n10598Yorktown HeightsNY",
"Department of Computer Science\nUniversity of Rochester\n14627RochesterNY"
] | [
"Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing"
] | Cross-sentence n-ary relation extraction detects relations among n entities across multiple sentences. Typical methods formulate an input as a document graph, integrating various intra-sentential and inter-sentential dependencies. The current state-of-the-art method splits the input graph into two DAGs, adopting a DAG-structured LSTM for each. Though being able to model rich linguistic knowledge by leveraging graph edges, important information can be lost in the splitting procedure. We propose a graph-state LSTM model, which uses a parallel state to model each word, recurrently enriching state values via message passing. Compared with DAG LSTMs, our graph LSTM keeps the original graph structure, and speeds up computation by allowing more parallelization. On a standard benchmark, our model shows the best result in the literature. | 10.18653/v1/d18-1246 | [
"https://www.aclweb.org/anthology/D18-1246.pdf"
] | 52,115,592 | 1808.09101 | 2d0af3015df1806c332d674fd0b9f883d4f9bdc9 |
N-ary Relation Extraction using Graph State LSTM
Association for Computational Linguistics. Copyright Association for Computational Linguistics. October 31 - November 4, 2018.
Linfeng Song
Department of Computer Science
University of Rochester
14627RochesterNY
Yue Zhang
School of Engineering
Westlake University
China
Zhiguo Wang
IBM T.J. Watson Research Center
10598Yorktown HeightsNY
Daniel Gildea
Department of Computer Science
University of Rochester
14627RochesterNY
N-ary Relation Extraction using Graph State LSTM
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium. Association for Computational Linguistics. October 31 - November 4, 2018.
Cross-sentence n-ary relation extraction detects relations among n entities across multiple sentences. Typical methods formulate an input as a document graph, integrating various intra-sentential and inter-sentential dependencies. The current state-of-the-art method splits the input graph into two DAGs, adopting a DAG-structured LSTM for each. Though being able to model rich linguistic knowledge by leveraging graph edges, important information can be lost in the splitting procedure. We propose a graph-state LSTM model, which uses a parallel state to model each word, recurrently enriching state values via message passing. Compared with DAG LSTMs, our graph LSTM keeps the original graph structure, and speeds up computation by allowing more parallelization. On a standard benchmark, our model shows the best result in the literature.
Introduction
As a central task in natural language processing, relation extraction has been investigated on news, web text and biomedical domains. It has been shown to be useful for detecting explicit facts, such as cause-effect (Hendrickx et al., 2009), and predicting the effectiveness of a medicine on a cancer caused by mutation of a certain gene in the biomedical domain (Peng et al., 2017). While most existing work extracts relations within a sentence (Zelenko et al., 2003; Palmer et al., 2005; Zhao and Grishman, 2005; Jiang and Zhai, 2007; Plank and Moschitti, 2013; Li and Ji, 2014; Gormley et al., 2015; Miwa and Bansal, 2016; Zhang et al., 2017), the task of cross-sentence relation extraction has received increasing attention (Gerber and Chai, 2010; Yoshikawa et al., 2011). Recently, Peng et al. (2017) extend cross-sentence relation extraction by further detecting relations among several entity mentions (n-ary relation).

* Equal contribution

Table 1: An example for cross-sentence n-ary relation extraction: "The deletion mutation on exon-19 of EGFR gene was present in 16 patients, while the 858E point mutation on exon-21 was noted in 10. All patients were treated with gefitinib and showed a partial response."

Table 1 shows an example, which conveys the fact that cancers caused by the 858E mutation on EGFR gene can respond to the gefitinib medicine. The three entity mentions form a ternary relation yet appear in distinct sentences. Peng et al. (2017) proposed a graph-structured LSTM for n-ary relation extraction. As shown in Figure 1 (a), graphs are constructed from input sentences with dependency edges, links between adjacent words, and inter-sentence relations, so that syntactic and discourse information can be used for relation extraction. To calculate a hidden state encoding for each word, Peng et al. (2017) first split the input graph into two directed acyclic graphs (DAGs) by separating left-to-right edges from right-to-left edges (Figure 1 (b)). Then, two separate gated recurrent neural networks, which extend tree LSTM (Tai et al., 2015), were adopted for each single-directional DAG, respectively. Finally, for each word, the hidden states of both directions are concatenated as the final state. The bi-directional DAG LSTM model showed superior performance over several strong baselines, such as tree-structured LSTM (Miwa and Bansal, 2016), on a biomedical-domain benchmark.
However, the bidirectional DAG LSTM model suffers from several limitations. First, important information can be lost when converting a graph into two separate DAGs.

A potential solution to the problems above is to model a graph as a whole, learning its representation without breaking it into two DAGs. Due to the existence of cycles, a naive extension of tree LSTMs cannot serve this goal. Recently, graph convolutional networks (GCN) (Kipf and Welling, 2017; Bastings et al., 2017) and graph recurrent networks (GRN) have been proposed for representing graph structures for NLP tasks. Such methods encode a given graph by hierarchically learning representations of neighboring nodes via their connecting edges. While GCNs use CNNs for information exchange, GRNs take gated recurrent steps to this end. For fair comparison with DAG LSTMs, we build a graph LSTM by extending a graph recurrent network, strictly following the configurations of Peng et al. (2017), such as the source of features and hyper-parameter settings. In particular, the full input graph is modeled as a single state, with words in the graph being its substates. State transitions are performed on the graph recurrently, allowing word-level states to exchange information through dependency and discourse edges. At each recurrent step, each word advances its current state by receiving information from the current states of its adjacent words. Thus, with an increasing number of recurrent steps, each word receives information from a larger context. Figure 2 shows the recurrent transition steps, where each node works simultaneously within each transition step.
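The recurrent information exchange can be illustrated with a small toy (our sketch; plain averaging stands in for the gated operations the model actually uses, and the mixing weight is arbitrary), showing how each node's context grows with the number of steps.

```python
import numpy as np

def message_passing(states, edges, steps):
    # states: (num_nodes, dim); edges: directed (src, tgt) pairs
    states = states.copy()
    for _ in range(steps):
        incoming = np.zeros_like(states)
        counts = np.zeros(len(states))
        for src, tgt in edges:
            incoming[tgt] += states[src]
            counts[tgt] += 1
        mixed = incoming / np.maximum(counts, 1)[:, None]
        mixed[counts == 0] = states[counts == 0]   # isolated nodes keep state
        states = 0.5 * states + 0.5 * mixed        # every node updates at once
    return states

states = np.eye(4)                                  # one-hot initial states
chain = [(0, 1), (1, 0), (1, 2), (2, 1), (2, 3), (3, 2)]
print(message_passing(states, chain, 2))            # node 0 now "sees" node 2
```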
Compared with bidirectional DAG LSTM, our method has several advantages. First, it keeps the original graph structure, and therefore no information is lost. Second, sibling information can be easily incorporated by passing information up and then down from a parent. Third, information exchange allows more parallelization, and thus can be very efficient in computation.
Results show that our model outperforms a bidirectional DAG LSTM baseline by 5.9% in accuracy, overtaking the state-of-the-art system of Peng et al. (2017) by 1.2%. Our code is available at https://github.com/freesunshine0316/nary-grn.
Our contributions are summarized as follows.
• We empirically compared graph LSTM with DAG LSTM for n-ary relation extraction tasks, showing that the former is better by more effective use of structural information;
• To our knowledge, we are the first to investigate a graph recurrent network for modeling dependency and discourse relations.
Task Definition
Formally, the input for cross-sentence n-ary relation extraction can be represented as a pair $(E, T)$, where $E = (\epsilon_1, \ldots, \epsilon_N)$ is the set of entity mentions, and $T = [S_1; \ldots; S_M]$ is a text consisting of multiple sentences. Each entity mention $\epsilon_i$ belongs to one sentence in $T$. There is a predefined relation set $R = (r_1, \ldots, r_L, \text{None})$, where None represents that no relation holds for the entities.
This task can be formulated as a binary classification problem of determining whether $\epsilon_1, \ldots, \epsilon_N$ together form a relation (Peng et al., 2017), or a multi-class classification problem of detecting which relation holds for the entity mentions. Take Table 1 as an example. The binary classification task is to determine whether gefitinib would have an effect on this type of cancer, given a cancer patient with 858E mutation on gene EGFR. The multi-class classification task is to detect the exact drug effect: response, resistance, sensitivity, etc.
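One possible representation of the input pair (E, T) and of the two problem formulations is sketched below; the class and relation names are our own placeholders, not from the released code.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class NaryInstance:
    entities: List[str]    # entity mentions epsilon_1 ... epsilon_N
    sentences: List[str]   # text T = [S_1; ...; S_M]

# Hypothetical relation inventory, following the drug effects named above.
RELATIONS = ["resistance", "response", "sensitivity", "None"]

def binary_label(relation: str) -> int:
    # Binary task: do the entity mentions form any relation at all?
    return int(relation != "None")

def multiclass_label(relation: str) -> int:
    # Multi-class task: which exact drug effect holds?
    return RELATIONS.index(relation)
```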
3 Baseline: Bi-directional DAG LSTM

Peng et al. (2017) formulate the task as a graph-structured problem in order to adopt rich dependency and discourse features. In particular, the Stanford parser (Manning et al., 2014) is used to assign syntactic structure to input sentences, and heads of two consecutive sentences are connected to represent discourse information, resulting in a graph structure. For each input graph G = (V, E), the nodes V are words within input sentences, and each edge e ∈ E connects two words that either have a relation or are adjacent to each other. Each edge is denoted as a triple (i, j, l), where i and j are the indices of the source and target words, respectively, and the edge label l indicates either a dependency or discourse relation (such as "nsubj") or a relative position (such as "next_tok" or "prev_tok"). Throughout this paper, we use E_in(j) and E_out(j) to denote the sets of incoming and outgoing edges for word j. For the bi-directional DAG LSTM baseline, we follow Peng et al. (2017), splitting each input graph into two separate DAGs by separating left-to-right edges from right-to-left edges (Figure 1). Each DAG is encoded by using a DAG LSTM (Section 3.2), which takes both source words and edge labels as inputs (Section 3.1). Finally, the hidden states of entity mentions from both LSTMs are taken as inputs to a logistic regression classifier to make a prediction:
y = softmax(W_0 [h_{ε_1}; …; h_{ε_N}] + b_0),    (1)
where h_{ε_j} is the hidden state of entity ε_j, and W_0 and b_0 are model parameters.
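For concreteness, this prediction layer can be sketched in a few lines of PyTorch. This is an illustrative re-implementation under our own naming (the authors' released code is linked in Section 1), with hidden sizes as placeholders:

```python
import torch
import torch.nn as nn

class RelationClassifier(nn.Module):
    """Prediction layer of Eq. (1): softmax over W0 [h_e1; ...; h_eN] + b0."""
    def __init__(self, hidden_dim, n_entities, n_classes):
        super().__init__()
        self.linear = nn.Linear(hidden_dim * n_entities, n_classes)  # W0 and b0

    def forward(self, entity_states):
        # entity_states: list of N tensors, each of shape (batch, hidden_dim)
        concat = torch.cat(entity_states, dim=-1)
        return torch.log_softmax(self.linear(concat), dim=-1)
```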
Input Representation
Both nodes and edge labels are useful for modeling a syntactic graph. As the input to our DAG LSTM, we first calculate the representation for each edge (i, j, l) by:
x_{i,j}^l = W_1([e_l; e_i]) + b_1,    (2)
where W_1 and b_1 are model parameters, e_i is the embedding of the source word indexed by i, and e_l is the embedding of the edge label l.
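A minimal PyTorch sketch of Equation (2); the 100-dimensional word embeddings follow the experimental settings, while the label-embedding size and class names are our own illustrative choices:

```python
import torch
import torch.nn as nn

class EdgeRepresentation(nn.Module):
    """Eq. (2): x^l_{i,j} = W1 [e_l; e_i] + b1, concatenating the edge-label
    embedding with the source-word embedding."""
    def __init__(self, n_labels, word_dim=100, label_dim=50, edge_dim=150):
        super().__init__()
        self.label_emb = nn.Embedding(n_labels, label_dim)
        self.proj = nn.Linear(label_dim + word_dim, edge_dim)  # W1 and b1

    def forward(self, src_word_emb, label_id):
        e_l = self.label_emb(label_id)
        return self.proj(torch.cat([e_l, src_word_emb], dim=-1))
```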
State transition
The baseline LSTM model learns DAG representations sequentially, following word order. Taking the edge representations (such as x_{i,j}^l) as input, gated state transition operations are executed on both the forward and backward DAGs. For each word j, the representations of its incoming edges E_in(j) are summed up as one vector:
x_j^{in} = Σ_{(i,j,l)∈E_in(j)} x_{i,j}^l    (3)
Similarly, for each word j, the states of all incoming nodes are summed to a single vector before being passed to the gated operations:
h_j^{in} = Σ_{(i,j,l)∈E_in(j)} h_i    (4)
Finally, the gated state transition operation for the hidden state h_j of the j-th word can be defined as:
i_j = σ(W_i x_j^{in} + U_i h_j^{in} + b_i)
o_j = σ(W_o x_j^{in} + U_o h_j^{in} + b_o)
f_{i,j} = σ(W_f x_{i,j}^l + U_f h_i + b_f)
u_j = σ(W_u x_j^{in} + U_u h_j^{in} + b_u)
c_j = i_j ⊙ u_j + Σ_{(i,j,l)∈E_in(j)} f_{i,j} ⊙ c_i
h_j = o_j ⊙ tanh(c_j),    (5)
where i_j, o_j and f_{i,j} are the input, output and forget gates, respectively, and W_x, U_x and b_x (x ∈ {i, o, f, u}) are model parameters.
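The whole gated update of Equations (3)-(5) for a single word can be sketched as follows. This is an unbatched illustration under our own naming, not the authors' implementation; it assumes the word has at least one incoming edge:

```python
import torch
import torch.nn as nn

class DAGLSTMCell(nn.Module):
    """One gated update of Eq. (3)-(5) for word j, given its incoming edges."""
    def __init__(self, edge_dim, hidden_dim):
        super().__init__()
        # W_i, W_o, W_f, W_u stacked into one matrix, likewise for the U's
        self.W = nn.Linear(edge_dim, 4 * hidden_dim)
        self.U = nn.Linear(hidden_dim, 4 * hidden_dim, bias=False)

    def forward(self, in_edge_reprs, in_states, in_cells):
        # in_edge_reprs: (n_in, edge_dim); in_states, in_cells: (n_in, hidden_dim)
        x_in = in_edge_reprs.sum(dim=0)  # Eq. (3)
        h_in = in_states.sum(dim=0)      # Eq. (4)
        gates = self.W(x_in) + self.U(h_in)
        i, o, _, u = [torch.sigmoid(g) for g in gates.chunk(4, dim=-1)]
        # Forget gates are computed per incoming edge (third slice = f), Eq. (5)
        f = torch.sigmoid(self.W(in_edge_reprs).chunk(4, dim=-1)[2]
                          + self.U(in_states).chunk(4, dim=-1)[2])
        c = i * u + (f * in_cells).sum(dim=0)
        h = o * torch.tanh(c)
        return h, c
```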
Comparison with Peng et al. (2017)
Our baseline is computationally similar to Peng et al. (2017), but differs in how edge labels are utilized in the gated network. In particular, Peng et al. (2017) make model parameters specific to edge labels. They consider two model variations, namely Full Parametrization (FULL) and Edge-Type Embedding (EMBED). FULL assigns distinct U s (in Equation 5) to different edge types, so that each edge label is associated with a 2D weight matrix to be tuned in training. On the other hand, EMBED assigns each edge label to an embedding vector, but complicates the gated operations by changing the U s to be 3D tensors. 1 In contrast, we take edge labels as part of the input to the gated network. In general, the edge labels are first represented as embeddings, before being concatenated with the node representation vectors (Equation 2). We choose this setting for both the baseline and our graph state LSTM model in Section 4, since it requires fewer parameters compared with FULL and EMBED, thus being less exposed to overfitting on small-scale data.
Graph State LSTM
Our input graph formulation strictly follows Section 3. In particular, our model adopts the same methods for calculating input representations (as in Section 3.1) and performing classification as the baseline model. However, different from the baseline bidirectional DAG LSTM model, we leverage a graph-structured LSTM to directly model the input graph, without splitting it into two DAGs. Figure 2 shows an overview of our model. Formally, given an input graph G = (V, E), we define a state vector h^j for each word v_j ∈ V. The state of the graph consists of all word states, and thus can be represented as:
g = {h^j}|_{v_j∈V}    (6)
1 For more information, please refer to Section 3.3 of Peng et al. (2017).
In order to capture non-local information, our model performs information exchange between words through a recurrent state transition process, resulting in a sequence of graph states g_0, g_1, …, g_t, where g_t = {h_t^j}|_{v_j∈V}. The initial graph state g_0 consists of a set of initial word states h_0^j = h_0, where h_0 is a zero vector.
State transition
Following the approaches of Song et al. (2018) and Zhang et al. (2018), a recurrent neural network is utilized to model the state transition process. In particular, the transition from g_{t−1} to g_t consists of a hidden state transition for each word, as shown in Figure 2. At each step t, we allow information exchange between a word and all words that are directly connected to it. To avoid gradient diminishing or bursting, gated LSTM cells are adopted, where a cell c_t^j is taken to record memory for h_t^j. We use an input gate i_t^j, an output gate o_t^j and a forget gate f_t^j to control information flow from the inputs to h_t^j. The inputs to a word v_j include representations of edges that are connected to v_j, where v_j can be either the source or the target of the edge. Similar to Section 3.1, we define each edge as a triple (i, j, l), where i and j are indices of the source and target words, respectively, and l is the edge label.
x_{i,j}^l is the representation of edge (i, j, l). The inputs for v_j are distinguished by incoming and outgoing directions, where:
x_j^i = Σ_{(i,j,l)∈E_in(j)} x_{i,j}^l
x_j^o = Σ_{(j,k,l)∈E_out(j)} x_{j,k}^l    (7)
Here, E_in(j) and E_out(j) denote the sets of incoming and outgoing edges of v_j, respectively. In addition to edge inputs, a cell also takes the hidden states of its incoming and outgoing words during a state transition. In particular, the states of all incoming words and outgoing words are summed up, respectively:
h_j^i = Σ_{(i,j,l)∈E_in(j)} h_{t−1}^i
h_j^o = Σ_{(j,k,l)∈E_out(j)} h_{t−1}^k    (8)
Based on the above definitions of x_j^i, x_j^o, h_j^i and h_j^o, the recurrent state transition from g_{t−1} to g_t, as represented by h_t^j, is defined as:
i_t^j = σ(W_i x_j^i + Ŵ_i x_j^o + U_i h_j^i + Û_i h_j^o + b_i)
o_t^j = σ(W_o x_j^i + Ŵ_o x_j^o + U_o h_j^i + Û_o h_j^o + b_o)
f_t^j = σ(W_f x_j^i + Ŵ_f x_j^o + U_f h_j^i + Û_f h_j^o + b_f)
u_t^j = σ(W_u x_j^i + Ŵ_u x_j^o + U_u h_j^i + Û_u h_j^o + b_u)
c_t^j = f_t^j ⊙ c_{t−1}^j + i_t^j ⊙ u_t^j
h_t^j = o_t^j ⊙ tanh(c_t^j),
where i_t^j, o_t^j and f_t^j are the input, output and forget gates, respectively, and W_x, Ŵ_x, U_x, Û_x and b_x (x ∈ {i, o, f, u}) are model parameters.
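One transition step over all nodes can be sketched as follows, with the neighbour sums of Equations (7)-(8) precomputed outside the cell. Again, this is an illustrative PyTorch rendering rather than the released code:

```python
import torch
import torch.nn as nn

class GraphStateTransition(nn.Module):
    """One recurrent step g_{t-1} -> g_t of the graph state LSTM (Section 4.1).
    All node states are updated in parallel from their neighbours."""
    def __init__(self, edge_dim, hidden_dim):
        super().__init__()
        # W, W-hat, U, U-hat for the four gates i, o, f, u, stacked for efficiency
        self.W_in = nn.Linear(edge_dim, 4 * hidden_dim)
        self.W_out = nn.Linear(edge_dim, 4 * hidden_dim, bias=False)
        self.U_in = nn.Linear(hidden_dim, 4 * hidden_dim, bias=False)
        self.U_out = nn.Linear(hidden_dim, 4 * hidden_dim, bias=False)

    def forward(self, x_in, x_out, h_in, h_out, c_prev):
        # x_in, x_out: summed incoming/outgoing edge inputs, (n_nodes, edge_dim)
        # h_in, h_out: summed neighbour states from step t-1, (n_nodes, hidden_dim)
        gates = (self.W_in(x_in) + self.W_out(x_out)
                 + self.U_in(h_in) + self.U_out(h_out))
        i, o, f, u = [torch.sigmoid(g) for g in gates.chunk(4, dim=-1)]
        c = f * c_prev + i * u
        h = o * torch.tanh(c)
        return h, c
```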
Graph state LSTM vs. bidirectional DAG LSTM A contrast between the baseline DAG LSTM and our graph LSTM can be made from the perspective of information flow. For the baseline, information flow follows the natural word order in the input sentence, with the two DAG components propagating information from left to right and from right to left, respectively. In contrast, information flow in our graph state LSTM is relatively more concentrated at individual words, with each word exchanging information with all its graph neighbors simultaneously at each state transition. As a result, holistic contextual information can be leveraged for extracting features for each word, as compared to the separated handling of bi-directional information flow in the DAG LSTM. In addition, arbitrary structures, including arbitrary cyclic graphs, can be handled.
From an initial state with isolated words, information of each word propagates to its graph neighbors after each step. Information exchange between non-neighboring words can be achieved through multiple transition steps. We experiment with different transition step numbers to study the effectiveness of global encoding. Unlike the baseline DAG LSTM encoder, our model allows parallelization in node-state updates, and thus can be highly efficient using a GPU.
Training
We train our models with a cross-entropy loss over a set of gold standard data:
l = −log p(y_i | X_i; θ),    (9)
where X_i is an input graph, y_i is the gold class label of X_i, and θ denotes the model parameters. Adam (Kingma and Ba, 2014) with a learning rate of 0.001 is used as the optimizer, and the model that yields the best devset performance is selected to evaluate on the test set. Dropout with rate 0.3 is used during training. Both training and evaluation are conducted using a Tesla K20X GPU.
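Putting Equation (9) and the optimizer together, a minimal training loop looks like the following sketch (function and variable names are ours; the model is assumed to output log-probabilities as in Equation (1)):

```python
import torch

def train(model, optimizer, batches, epochs=10):
    """Minimal training loop for Eq. (9): negative log-likelihood with Adam."""
    loss_fn = torch.nn.NLLLoss()
    for _ in range(epochs):
        for graphs, labels in batches:
            optimizer.zero_grad()
            log_probs = model(graphs)   # whatever batch structure the encoder takes
            loss = loss_fn(log_probs, labels)
            loss.backward()
            optimizer.step()

# e.g. optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
```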
Experiments
We conduct experiments for the binary relation detection task and the multi-class relation extraction task discussed in Section 2.
Data
We use the dataset of Peng et al. (2017), which is a biomedical-domain dataset focusing on drug-gene-mutation ternary relations, 2 extracted from PubMed. It contains 6,987 ternary instances about drug-gene-mutation relations, and 6,087 binary instances about drug-mutation sub-relations. Table 2 shows statistics of the dataset. Most instances of the ternary data contain multiple sentences, and the average number of sentences is around 2. There are five classification labels: "resistance or non-response", "sensitivity", "response", "resistance" and "None". We follow Peng et al. (2017) and binarize the multi-class labels by grouping all relation classes as "Yes" and treating "None" as "No".
Settings
Following Peng et al. (2017), we use the same data split for five-fold cross-validation, and word embeddings are initialized with pretrained 100-dimensional GloVe (Pennington et al., 2014) vectors. Pretrained word embeddings are not updated during training. The dimension of hidden vectors in LSTM units is set to 150.
Development Experiments
We first analyze our model on the drug-gene-mutation ternary relation dataset, taking the first of the five cross-validation folds for our development setting. Figure 3 shows the devset accuracies of different state transition numbers, where forward and backward execute our graph state model only on the forward or backward DAG, respectively. Concat concatenates the hidden states of forward and backward. All executes our graph state model on the original graphs. The performances of forward and backward lag behind concat, which is consistent with the intuition that both forward and backward relations are useful (Peng et al., 2017). In addition, all gives better accuracies compared with concat, demonstrating the advantage of simultaneously considering forward and backward relations during representation learning. For all the models, more state transition steps result in better accuracies, as larger contexts can be integrated in the representations of graphs. The performance of all starts to converge after 4 and 5 state transitions, so we set the number of state transitions to 5 in the remaining experiments.
Final results
Table 3 compares our model with the bidirectional DAG baseline and the state-of-the-art results on this dataset, where EMBED and FULL have been briefly introduced in Section 3.3. +multitask applies joint training of both ternary (drug-gene-mutation) relations and their binary (drug-mutation) sub-relations. Quirk and Poon (2017) use a statistical method with a logistic regression classifier and features derived from shortest paths between all entity pairs.
Using all instances (column Cross in Table 3), our graph state LSTM model shows the highest test accuracy among all methods, which is 5.9% higher than our baseline. 4 The accuracy of our baseline is lower than EMBED and FULL of Peng et al. (2017), which is likely due to the differences mentioned in Section 3.3. Our final results are better than Peng et al. (2017), despite the fact that we do not use multi-task learning.
We also report accuracies only on instances within single sentences (column Single in Table 3), which exhibit similar contrasts. Note that all systems show performance drops when evaluated only on single-sentence relations, which are actually more challenging. One reason may be that some single sentences cannot provide sufficient context for disambiguation, making it necessary to study cross-sentence context. Another reason may be overfitting caused by relatively fewer training instances in this setting, as only 30% of instances are within a single sentence. One interesting observation is that our baseline shows the smallest performance drop of 1.7 points, in contrast to up to 4.1 points for the other neural systems. This can be taken as supporting evidence for overfitting, as our baseline has fewer parameters than at least FULL and EMBED.
Analysis
Efficiency. Table 4 shows the training and decoding times of both the baseline and our model. Our model is 8 to 10 times faster than the baseline in training and decoding speeds, respectively. By revisiting Table 2, we can see that the average number of tokens for the ternary-relation data is 74, which means that the baseline model has to execute 74 recurrent transition steps for calculating a hidden state for each input word. On the other hand, our model only performs 5 state transitions, and calculations between each pair of nodes for one transition are parallelizable. This accounts for the better efficiency of our model.
Accuracy against sentence length Figure 5 (a) shows the test accuracies for different sentence lengths. We can see that both GS GLSTM and Bidir DAG LSTM show performance increases along with increasing input sentence lengths. This is likely because longer contexts provide richer information for relation disambiguation. GS GLSTM is consistently better than Bidir DAG LSTM, and the gap is larger on shorter instances. This demonstrates that GS GLSTM is more effective in utilizing a smaller context for disambiguation.
Accuracy against the maximal number of neighbors Figure 5 (b) shows the test accuracies against the maximal number of neighbors. Intuitively, it is easier to model graphs containing nodes with more neighbors, because these nodes can serve as "supernodes" that allow more efficient information exchange. The performances of both GS GLSTM and Bidir DAG LSTM increase with an increasing maximal number of neighbors, which coincides with this intuition. In addition, GS GLSTM shows more of an advantage than Bidir DAG LSTM on inputs with a lower maximal number of neighbors, which further demonstrates the superiority of GS GLSTM over Bidir DAG LSTM in utilizing context information.
Case study Figure 4 visualizes the merits of GS GLSTM over Bidir DAG LSTM using two examples. GS GLSTM makes the correct predictions for both cases, while Bidir DAG LSTM fails on both.
The first case generally mentions that Gefitinib does not have an effect on the T790M mutation on the EGFR gene. Note that both "However" and "was not" serve as indicators; thus incorporating them into the contextual vectors of these entity mentions is important for making a correct prediction. However, both indicators are leaves of the dependency tree, making it impossible for Bidir DAG LSTM to incorporate them into the contextual vectors of entity mentions up the tree through dependency edges. 5 On the other hand, it is easier for GS GLSTM. For instance, "was not" can be incorporated into "Gefitinib" through "suppressed agent→ treatment nn→ Gefitinib". The second case is to detect the relation among "cetuximab" (drug), "EGFR" (gene) and "S492R" (mutation), which does not exist. However, the context introduces further ambiguity by mentioning another drug, "Panitumumab", which does have a relation with "EGFR" and "S492R". Being sibling nodes in the dependency tree, "can not" is an indicator for the relation of "cetuximab". GS GLSTM is correct, because "can not" can be easily included into the contextual vector of "cetuximab" in two steps via "bind nsubj→ cetuximab".
Results on Binary Sub-relations
Following previous work, we also evaluate our model on drug-mutation binary relations. Table 5 shows the results, where Miwa and Bansal (2016) is a state-of-the-art model using sequential and tree-structured LSTMs to jointly capture linear and dependency contexts for relation extraction. Other models have been introduced in Section 6.4.
Similar to the ternary relation extraction experiments, GS GLSTM outperforms all the other systems by a large margin, which shows that the message passing graph LSTM is better at encoding rich linguistic knowledge within the input graphs. Binary relations being easier, both GS GLSTM and Bidir DAG LSTM show increased or similar performances compared with the ternary relation experiments. On this set, our bidirectional DAG LSTM model is comparable to FULL using all instances ("Cross") and slightly better than FULL using only single-sentence instances ("Single").
Fine-grained Classification
Our dataset contains five classes as mentioned in Section 6.1. However, previous work only investigates binary relation detection. Here we also study the multi-class classification task, which can be more informative for applications. Table 6 shows accuracies on multi-class relation extraction, which is more ambiguous than binary relation extraction. The results show similar comparisons with the binary relation extraction results. However, the performance gaps between GS GLSTM and Bidir DAG LSTM dramatically increase, showing the superiority of GS GLSTM over Bidir DAG LSTM in utilizing context information.

Related Work

N-ary relation extraction N-ary relation extraction can be traced back to MUC-7 (Chinchor, 1998), which focuses on entity-attribution relations. It has also been studied in the biomedical domain (McDonald et al., 2005), but only considering instances within a single sentence. Previous work on cross-sentence relation extraction relies on either explicit co-reference annotation (Gerber and Chai, 2010; Yoshikawa et al., 2011), or the assumption that the whole document refers to a single coherent event (Wick et al., 2006; Swampillai and Stevenson, 2011). Both simplify the problem and reduce the need for learning better contextual representations of entity mentions. A notable exception is Quirk and Poon (2017), who adopt distant supervision and integrate contextual evidence of diverse types without relying on these assumptions. However, they only study binary relations. We follow Peng et al. (2017) by studying ternary cross-sentence relations.

Graph encoder Liang et al. (2016) build a graph LSTM model for semantic object parsing, which aims to segment objects within an image into more fine-grained, semantically meaningful parts. The nodes of an input graph come from image superpixels, and the edges are created by connecting spatially neighboring nodes. Their model is similar to Peng et al. (2017) in calculating node states sequentially: for each input graph, a start node and a node sequence are chosen, which determines the order of recurrent state updates. In contrast, our graph LSTM does not need an ordering of graph nodes, and is highly parallelizable.
Graph convolutional networks (GCNs) and, very recently, graph recurrent networks (GRNs) have been used to model graph structures in NLP tasks, such as semantic role labeling (Marcheggiani and Titov, 2017), machine translation (Bastings et al., 2017), text generation (Song et al., 2018), text representation (Zhang et al., 2018) and semantic parsing (Xu et al., 2018b,a). In particular, Zhang et al. (2018) use a GRN to represent raw sentences by building a graph structure of neighboring words and a sentence-level node, showing that the encoder outperforms BiLSTMs and Transformer (Vaswani et al., 2017) on classification and sequence labeling tasks; Song et al. (2018) build a GRN for encoding AMR graphs, showing that the representation is superior compared to BiLSTM on serialized AMR. Our work is in line with theirs in the investigation of GRN on NLP. To our knowledge, we are the first to use GRN for representing dependency and discourse structures. Under the same recurrent framework, we show that modeling the original graphs with one GRN model is more useful than two DAG LSTMs for our relation extraction task. We choose GRN as our main method because it gives a fairer comparison with DAG LSTM. We leave the comparison of GCN and GRN for our task to future work.
Conclusion
We explored a graph-state LSTM model for cross-sentence n-ary relation extraction, which uses a recurrent state transition process to incrementally refine a neural graph state representation capturing graph structure contexts. Compared with a bidirectional DAG LSTM baseline, our model has several advantages. First, it does not change the input graph structure, so that no information is lost. For example, it can easily incorporate sibling information when calculating the contextual vector of a node. Second, it is better parallelizable. Experiments show significant improvements over the previously reported numbers, including that of the bidirectional graph LSTM model.
For future work, we consider adding coreference information, as an entity mention can have coreferences, which help with information collection. Another possible direction is including word sense information. Confusion caused by word senses can be a severe problem. Not only content words but also prepositions can introduce word sense problems (Gong et al., 2018).
Figure 2: Graph state transitions via message passing, where each w_i is a word.
Figure 3: Dev accuracies against transition steps for the graph state LSTM model.
Figure 5: Test set performances on (a) different sentence lengths, and (b) different maximal numbers of neighbors.
"The deletion mutation on exon-19 of EGFR gene was present in 16 patients, while the 858E point mutation on exon-21 was noted in 10. All patients were treated with gefitinib and showed a partial response."

Table 1: An example showing that tumors with L858E mutation in EGFR gene respond to gefitinib treatment.
Figure 1: (a) A fraction of the dependency graph of the example in Table 1. For simplicity, we omit edges of discourse relations. (b) Results after splitting the graph into two DAGs.
Table 2: Dataset statistics. Avg. Tok. and Avg. Sent. are the average numbers of tokens and sentences, respectively. Cross is the percentage of instances that contain multiple sentences.
Model             Train    Decode
Bidir DAG LSTM    281s     27.3s
GS GLSTM          36.7s    2.7s

Table 4: The average times for training one epoch and decoding (seconds) over five folds on the drug-gene-mutation TERNARY cross-sentence setting.
Figure 4: Example cases. (a) "However, the phosphorylation level of EGFR in EGFR_2 T790M_3 mutant cells (H1975TM/LR) was not suppressed by Gefitinib_1 treatment." (b) "Panitumumab can still bind to an EGFR_2 mutant S492R_3 to which cetuximab_1 can not bind to." Words with subindices 1, 2 and 3 represent drugs, genes and mutations, respectively. References for both cases are "No". For both cases, GS GLSTM makes the correct predictions, while Bidir DAG LSTM does not.
Model             TERNARY   BINARY
Bidir DAG LSTM    51.7      50.7
GS GLSTM          71.1*     71.7*

Table 6: Average test accuracies for multi-class relation extraction with all instances ("Cross").
2 The dataset is available at http://hanover.azurewebsites.net.
3 The released data has been separated into 5 portions, and we follow the exact split.
4 p < 0.01 using t-test. For the remainder of this paper, we use the same measure for statistical significance.
5 As shown in Figure 1, a directional DAG LSTM propagates information according to the edge directions.
Acknowledgements We thank the anonymous reviewers for their insightful comments, and the Center for Integrated Research Computing (CIRC) of the University of Rochester for making special reservations for computation resources.
Joost Bastings, Ivan Titov, Wilker Aziz, Diego Marcheggiani, and Khalil Simaan. 2017. Graph convolutional encoders for syntax-aware neural machine translation. In Proceedings of EMNLP 2017.

Nancy A. Chinchor. 1998. Overview of MUC-7/MET-2. Technical report, Science Applications International Corp., San Diego, CA.

Matthew Gerber and Joyce Chai. 2010. Beyond NomBank: A study of implicit arguments for nominal predicates. In Proceedings of ACL 2010.

Hongyu Gong, Suma Bhat, and Pramod Viswanath. 2018. Embedding syntax and semantics of prepositions via tensor decomposition. In Proceedings of NAACL 2018.

Matthew R. Gormley, Mo Yu, and Mark Dredze. 2015. Improved relation extraction with feature-rich compositional embedding models. In Proceedings of EMNLP 2015.

Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid Ó Séaghdha, Sebastian Padó, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2009. SemEval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals. In Proceedings of the Workshop on Semantic Evaluations: Recent Achievements and Future Directions.

Jing Jiang and ChengXiang Zhai. 2007. A systematic exploration of the feature space for relation extraction. In Proceedings of NAACL 2007.

Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Thomas N. Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. In Proceedings of ICLR 2017.

Qi Li and Heng Ji. 2014. Incremental joint extraction of entity mentions and relations. In Proceedings of ACL 2014.

Xiaodan Liang, Xiaohui Shen, Jiashi Feng, Liang Lin, and Shuicheng Yan. 2016. Semantic object parsing with graph LSTM. In Proceedings of ECCV 2016.

Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of ACL 2014: System Demonstrations.

Diego Marcheggiani and Ivan Titov. 2017. Encoding sentences with graph convolutional networks for semantic role labeling. In Proceedings of EMNLP 2017.

Ryan McDonald, Fernando Pereira, Seth Kulick, Scott Winters, Yang Jin, and Pete White. 2005. Simple algorithms for complex relation extraction with applications to biomedical IE. In Proceedings of ACL 2005.

Makoto Miwa and Mohit Bansal. 2016. End-to-end relation extraction using LSTMs on sequences and tree structures. In Proceedings of ACL 2016.

Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71-106.

Nanyun Peng, Hoifung Poon, Chris Quirk, Kristina Toutanova, and Wen-tau Yih. 2017. Cross-sentence n-ary relation extraction with graph LSTMs. Transactions of the Association for Computational Linguistics, 5:101-115.

Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of EMNLP 2014.

Barbara Plank and Alessandro Moschitti. 2013. Embedding semantic similarity in tree kernels for domain adaptation of relation extraction. In Proceedings of ACL 2013.

Chris Quirk and Hoifung Poon. 2017. Distant supervision for relation extraction beyond the sentence boundary. In Proceedings of EACL 2017.

Linfeng Song, Yue Zhang, Zhiguo Wang, and Daniel Gildea. 2018. A graph-to-sequence model for AMR-to-text generation. In Proceedings of ACL 2018.

Kumutha Swampillai and Mark Stevenson. 2011. Extracting relations within and across sentences. In Proceedings of RANLP 2011.

Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of ACL 2015.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998-6008.

Michael Wick, Aron Culotta, and Andrew McCallum. 2006. Learning field compatibilities to extract database records from unstructured text. In Proceedings of EMNLP 2006.

Kun Xu, Lingfei Wu, Zhiguo Wang, and Vadim Sheinin. 2018a. Graph2Seq: Graph to sequence learning with attention-based neural networks. arXiv preprint arXiv:1804.00823.

Kun Xu, Lingfei Wu, Zhiguo Wang, Mo Yu, Liwei Chen, and Vadim Sheinin. 2018b. Exploiting rich syntactic information for semantic parsing with graph-to-sequence model. In Proceedings of EMNLP 2018.

Katsumasa Yoshikawa, Sebastian Riedel, Tsutomu Hirao, Masayuki Asahara, and Yuji Matsumoto. 2011. Coreference based event-argument relation extraction on biomedical text. Journal of Biomedical Semantics, 2(5):S6.

Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation extraction. Journal of Machine Learning Research, 3(Feb):1083-1106.

Meishan Zhang, Yue Zhang, and Guohong Fu. 2017. End-to-end neural relation extraction with global optimization. In Proceedings of EMNLP 2017.

Yue Zhang, Qi Liu, and Linfeng Song. 2018. Sentence-state LSTM for text representation. In Proceedings of ACL 2018.

Shubin Zhao and Ralph Grishman. 2005. Extracting relations with integrated information using kernel methods. In Proceedings of ACL 2005.
Semantic Frame Induction using Masked Word Embeddings and Two-Step Clustering

Kosuke Yamada (Graduate School of Informatics, Nagoya University, Japan) yamada.kosuke@c.mbox.nagoya-u.ac.jp
Ryohei Sasano (Graduate School of Informatics, Nagoya University, Japan; RIKEN Center for Advanced Intelligence Project, Japan)
Koichi Takeda (Graduate School of Informatics, Nagoya University, Japan) takedasu@i.nagoya-u.ac.jp

Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, August 1-6, 2021.
DOI: 10.18653/v1/2021.acl-short.102. PDF: https://www.aclanthology.org/2021.acl-short.102.pdf. arXiv: 2105.13466.
Recent studies on semantic frame induction show that relatively high performance has been achieved by using clustering-based methods with contextualized word embeddings. However, there are two potential drawbacks to these methods: one is that they focus too much on the superficial information of the frame-evoking verb, and the other is that they tend to divide the instances of the same verb into too many different frame clusters. To overcome these drawbacks, we propose a semantic frame induction method using masked word embeddings and two-step clustering. Through experiments on the English FrameNet data, we demonstrate that using the masked word embeddings is effective for avoiding too much reliance on the surface information of frame-evoking verbs and that two-step clustering can improve the number of resulting frame clusters for the instances of the same verb.
Introduction
Semantic frame induction is a task of mapping frame-evoking words, typically verbs, into the semantic frames they evoke (and the collection of instances of words to be mapped into the same semantic frame forms a cluster). For example, in the case of example sentences from FrameNet (Baker et al., 1998) shown in (1) to (4) in Table 1, the goal is to group the examples into three clusters according to the frame that each verb evokes; namely, {(1)}, {(2)}, and {(3), (4)}. Unsupervised semantic frame induction methods help to automatically build high-coverage frame-semantic resources.

Recent studies have shown the usefulness of contextualized word embeddings such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) for semantic frame induction. For example, the top three methods (Arefyev et al., 2019; Anwar et al., 2019; Ribeiro et al., 2019) in Subtask-A of SemEval-2019 Task 2 (QasemiZadeh et al., 2019) perform clustering of contextualized word embeddings of frame-evoking verbs. However, these methods have two potential drawbacks. First, the contextualized word embeddings of the frame-evoking verbs strongly reflect the superficial information of the verbs. The left side of Figure 1 shows a 2D projection of contextualized embeddings of instances of the verbs "get" and "acquire" extracted from example sentences in FrameNet. Specifically, we extracted instances of "get" and "acquire" from FrameNet, obtained their embeddings by using a pre-trained BERT, and projected them into two dimensions by using t-distributed stochastic neighbor embedding (t-SNE) (Maaten and Hinton, 2008). As shown in the figure, among instances of "get", those that evoke the GETTING frame tend to be located close to instances of "acquire" that evoke the same GETTING frame. However, we can see that the difference between verbs is larger than the difference between the frames that each verb evokes.

(1) We'll not get there before the rain comes. (ARRIVING)
(2) The problem continued to get worse. (TRANSITION_TO_STATE)
(3) You may get more money from the basic pension. (GETTING)
(4) We have acquired more than 100 works. (GETTING)

Table 1: Example sentences of verbs "get" and "acquire" and frames that each verb evokes in FrameNet.

Figure 1: 2D projections of BERT embeddings of verbs (left) and masked verbs (right). Numbers in the figure correspond to numbers in Table 1, the two marker shapes correspond to the verbs "get" and "acquire", respectively, and each color indicates the ARRIVING, TRANSITION_TO_STATE, or GETTING frame.
To remedy this drawback, we propose a method that uses a masked word embedding, a contextualized embedding of a masked word. The right side of Figure 1 shows a 2D projection of masked word embeddings for instances of the verbs "get" and "acquire". The use of masks can hide the superficial information of the verbs, and consequently we can confirm that instances of verbs that evoke the same frame are located close to each other.
The second drawback is that these methods perform clustering of instances across all verbs simultaneously. Such clustering may divide instances of the same verb into too many different frame clusters. For example, if there are outlier vectors that are not typical of a particular verb, they tend to form individual clusters with instances of other frames in most cases. To solve this problem, we propose a two-step clustering, which first clusters instances of the same verb according to their meaning and then performs further clustering across all verbs.
Proposed Method
The proposed semantic frame induction method uses masked word embeddings and two-step clustering. We explain these details below.
Masked Word Embedding
A masked word embedding is a contextualized embedding of a word in a text where the word is replaced with a special token indicating that it has been masked, i.e., "[MASK]" in BERT. Our method leverages masked word embeddings of frame-evoking verbs in addition to standard contextualized word embeddings of frame-evoking verbs. In this paper, we consider the following three types of contextualized word embeddings.
v_WORD: Standard contextualized embedding of a frame-evoking verb.
v_MASK: Contextualized embedding of a frame-evoking verb that is masked.
v_W+M: The weighted average of the above two, which is defined as:
v_W+M = (1 − α) · v_WORD + α · v_MASK.    (1)
Here, v_W+M is the weighted average of contextualized word embeddings with and without masking the frame-evoking verb. By properly setting the weight α using a development set, we expect to obtain embeddings that properly balance the superficial information of the target verb and the information obtained from its context. v_W+M is identical to v_WORD when α is set to 0 and identical to v_MASK when α is set to 1.

Figure 2: Flow of the two-step clustering (first step and second step). The two marker shapes denote the embeddings of "get" and "acquire", respectively.
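As an illustration, the three embedding types can be computed with the Hugging Face transformers library roughly as follows. This sketch takes the last hidden layer and the first subword of the verb; which layers are actually used is tuned on the development set, and all names other than the transformers API are our own:

```python
import torch
from transformers import BertTokenizerFast, BertModel

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
bert.eval()

def verb_embedding(words, verb_idx, mask_verb):
    """BERT embedding of the frame-evoking verb; optionally replace it with [MASK]."""
    words = list(words)
    if mask_verb:
        words[verb_idx] = tokenizer.mask_token
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**enc).last_hidden_state[0]
    sub_idx = enc.word_ids(0).index(verb_idx)  # first subword of the verb
    return hidden[sub_idx]

def v_w_plus_m(words, verb_idx, alpha):
    v_word = verb_embedding(words, verb_idx, mask_verb=False)
    v_mask = verb_embedding(words, verb_idx, mask_verb=True)
    return (1 - alpha) * v_word + alpha * v_mask  # Eq. (1)
```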
Two-Step Clustering
In the two-step clustering, we first perform clustering of instances of the same verb according to their semantic meaning and then perform further clustering across verbs. Finally, each generated cluster is regarded as an induced frame. Figure 2 shows the flow of the two-step clustering using the instances of "get" and "acquire" from FrameNet. As a result of the clustering in the first step, the instances of "get" are grouped into three clusters and the instances of "acquire" into one cluster. In the second step, one of the clusters of "get" and the cluster of "acquire" are merged. Consequently, three clusters are generated as the final clustering result. The details of each clustering step are as follows.
Clustering Instances of the Same Verb The clustering in the first step aims to cluster instances of the same verb according to their semantic meaning. Since all the targets of the clustering are instances of the same verb, there should be no difference in the results between using v_WORD and v_MASK as embeddings. Therefore, we use only v_MASK for this process. We adopt X-means (Pelleg and Moore, 2000) or group average clustering based on Euclidean distance as the clustering algorithm. While X-means automatically determines the number of clusters, group average clustering requires a clustering termination threshold. In group average clustering, the distance between two clusters is defined as the average distance over all instance pairs between the clusters, and the cluster pairs with the smallest distance are merged in order. The clustering is terminated when there are no more cluster pairs for which the distance between the two clusters is less than or equal to a threshold θ. In this study, θ is shared across verbs, not determined for each verb. Note that when θ is set to a sufficiently large value, the number of clusters is one for all verbs. To set θ to an appropriate value, we gradually decrease θ from a sufficiently large value and fix it to a value where the number of generated frame clusters is equal to the actual number of frames in the development set.
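A minimal sketch of the group-average variant of this step, using scikit-learn's agglomerative clustering with a global distance threshold θ (names are illustrative; the X-means variant would replace this call):

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def cluster_one_verb(v_mask_embeddings, theta):
    """First step: average-linkage (group average) clustering of the v_MASK
    embeddings of one verb; merging stops at Euclidean distance threshold theta.
    Each resulting cluster of this verb is one pseudo-LU (pLU)."""
    X = np.asarray(v_mask_embeddings)
    if len(X) < 2:
        return np.zeros(len(X), dtype=int)
    model = AgglomerativeClustering(
        n_clusters=None,
        distance_threshold=theta,
        linkage="average",  # distances default to Euclidean
    )
    return model.fit_predict(X)
```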
In the theory of Frame Semantics (Fillmore, 2006) on which FrameNet is based, the association between a word and a semantic frame is called a lexical unit (LU). Since each cluster generated as the result of clustering in the first step is a set of instances of the same verb used in the same meaning, it can be considered to correspond to an LU. Therefore, we refer to it as a pseudo-LU (pLU).
Clustering across Verbs
The clustering in the second step aims to cluster the pLUs generated as the result of the first-step clustering across verbs according to their meaning. This step calculates the average contextualized embedding of each pLU and then clusters the pLUs across verbs by using the calculated embeddings. We adopt Ward clustering or group average clustering based on Euclidean distance as the clustering algorithm.
We need a termination criterion for both clustering algorithms. A straightforward approach is to use the ratio of the number of frames to the number of verbs. However, this approach does not work well in this case, since there is an upper limit on the number of frame types and the number of frames to be generated does not increase linearly with the number of verbs. Therefore, in this study, we use the ratio of pLU pairs belonging to the same cluster as the termination criterion. Specifically, the clustering is terminated when the ratio of pLU pairs belonging to the same cluster, p_{F1=F2}, is greater than or equal to the ratio of LU pairs belonging to the same frame in the development set, p_{C1=C2}. Here, p_{F1=F2} is calculated as:

p_{F1=F2} = (# of pLU pairs in the same cluster) / (# of all pLU pairs).    (2)
While the number of all pLU pairs is constant regardless of the clustering process, the number of pLU pairs belonging to the same cluster monotonically increases as the clustering progresses. p_{C1=C2} can be calculated in the same way as p_{F1=F2}, and p_{F1=F2} reaches 1 when all pLUs are merged into a single cluster. Therefore, p_{F1=F2} is guaranteed to become greater than or equal to p_{C1=C2} at some point during the clustering process. Since the probability that randomly selected LU pairs belong to the same frame is not affected by the data size, the criterion is considered valid regardless of the data size.
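The second step can be sketched with SciPy's hierarchical clustering, cutting the dendrogram at the largest number of clusters that satisfies the pair-ratio criterion; this quadratic-time rendering is illustrative rather than the authors' code:

```python
import numpy as np
from itertools import combinations
from scipy.cluster.hierarchy import linkage, fcluster

def pair_ratio(labels):
    """Eq. (2): ratio of pLU pairs that fall into the same cluster."""
    same = sum(a == b for a, b in combinations(labels, 2))
    n = len(labels)
    return same / (n * (n - 1) / 2)

def cluster_across_verbs(plu_embeddings, p_dev):
    """Second step: each pLU is represented by its average embedding; pLUs are
    merged bottom-up (group average linkage; Ward is analogous) until the
    same-cluster pair ratio reaches p_dev, estimated from the development set."""
    Z = linkage(np.asarray(plu_embeddings), method="average")
    labels = None
    for k in range(len(plu_embeddings), 0, -1):  # from many clusters to one
        labels = fcluster(Z, t=k, criterion="maxclust")
        if pair_ratio(labels) >= p_dev:
            break
    return labels
```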
Experiment
We conducted an experiment on semantic frame induction to confirm the efficacy of our method. In this experiment, the objective is to group the given frame-evoking verbs with their context according to the frames they evoke.
Setting
Dataset From Berkeley FrameNet data release 1.7 1 in English, we extracted verbal LUs with at least 20 example sentences and used their example sentences. That is, all target verbs in the dataset have at least 20 example sentences for each frame they evoke. We limited the maximum number of example sentences for each LU to 100; if there were more examples, we randomly selected 100. Note that we did not use the SemEval-2019 Task 2 dataset, because the dataset is no longer available, as described on the official web page. 2 The extracted dataset contained 1,272 different verbs as frame-evoking words. We used the examples for 255 verbs (20%) as the development set and those for the remaining 1,017 verbs (80%) as the test set. Thus, there are no overlapping frame-evoking verbs or LUs between the development and test sets, but there is an overlap in the frames evoked. We divided the development and test sets so that the proportion of verbs that evoke more than one frame would be the same. The development set was used to determine the α of v_W+M, the termination criterion for the clustering in each step, and the layers to be used as contextualized word embeddings. Table 2 lists the statistics of the dataset.
Models We compared four models, covering all combinations of group average clustering or X-means in the first step and Ward clustering or group average clustering in the second step. We also compared a model that treats all instances of one verb as one cluster (1-cluster-per-verb; 1cpv) and models that treat all instances of one verb as one cluster (1cpv') in the first step and then perform the clustering in the second step.
In addition, we compared our models with the top three models in Subtask-A of SemEval-2019 Task 2. Arefyev et al. (2019) first perform group average clustering using BERT embeddings of frame-evoking verbs. Then, they perform clustering to split each cluster into two by using TF-IDF features with paraphrased words. Anwar et al. (2019) use the concatenation of the embedding of a frame-evoking verb and the average word embedding of all words in a sentence obtained by skip-gram (Mikolov et al., 2013). They perform group average clustering based on Manhattan distance by using this embedding. Ribeiro et al. (2019) perform graph clustering based on Chinese whispers (Biemann, 2006) by using ELMo embeddings of frame-evoking verbs.
To confirm the usefulness of the two-step clustering, we also compared our models with models that perform one-step clustering. For these models, we used Ward clustering or group average clustering as the clustering method and v_W+M as the contextualized word embedding. We gave the oracle number of clusters to these models, i.e., we stopped clustering when the number of clusters matched the number of human-annotated frames.
Metrics and Embeddings
We used six evaluation metrics: B-CUBED PRECISION (BCP), B-CUBED RECALL (BCR), and their harmonic mean, F-SCORE (BCF) (Bagga and Baldwin, 1998), and PURITY (PU), INVERSE PURITY (IPU), and their harmonic mean, F-SCORE (PIF) (Karypis et al., 2000). We used BERT (bert-base-uncased) in Hugging Face 3 as the contextualized word embedding.

Results

Table 3 shows the experimental results. 4 When focusing on BCF, which was used to rank the systems in Subtask-A of SemEval-2019 Task 2, our model using X-means as the first step and group average clustering as the second step achieved the highest score of 64.4. It also obtained the highest PIF score of 73.0. The number of human-annotated frames was 393, while the number of generated clusters was 410. These results demonstrate that the termination criterion of the two-step clustering works effectively.
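For reference, B-CUBED precision and recall can be computed directly from the instance-level assignments; a straightforward (quadratic-time) sketch with our own function names:

```python
from collections import Counter

def b_cubed(gold, pred):
    """B-CUBED precision, recall and F-score over instance-level assignments;
    gold and pred map each instance id to a cluster label."""
    gold_sizes = Counter(gold.values())
    pred_sizes = Counter(pred.values())
    prec, rec = [], []
    for i in gold:
        both = sum(1 for j in gold if gold[j] == gold[i] and pred[j] == pred[i])
        prec.append(both / pred_sizes[pred[i]])
        rec.append(both / gold_sizes[gold[i]])
    p = sum(prec) / len(prec)
    r = sum(rec) / len(rec)
    return p, r, 2 * p * r / (p + r)
```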
In all two-step clustering methods, α was tuned to a value between 0.0 and 1.0, which shows that both v_WORD and v_MASK should be considered. In addition, α was close to 1.0 for these methods, which indicates that v_MASK is more useful for clustering instances across verbs. In contrast, v_W+M in the one-step clustering methods was equivalent to v_WORD with α = 0.0. This indicates that there is no benefit from using v_MASK in the one-step clustering-based methods.
The two-step clustering-based models that use group average clustering as the second clustering algorithm tended to achieve high scores. This indicates that the two-step clustering-based approach, which first clusters instances of the same verb and then clusters across verbs, is effective. As for the first clustering step, the 1cpv' strategy, which treats all instances of the same verb as one cluster, achieved higher accuracy than group average clustering and accuracy close to that of X-means; thus, the 1cpv' strategy is effective enough for this dataset. We think this is due to the fact that the dataset used in this study is quite biased towards verbs that evoke only one frame, and we believe that the effectiveness of 1cpv' may be limited in a more practical setting. Further investigation of this is left for future work.
Conclusion
We proposed a method that uses masked word embeddings and two-step clustering for semantic frame induction. The results of experiments using FrameNet data showed that masked word embeddings and two-step clustering are quite effective for this frame induction task. We will conduct experiments in a setting where nouns and adjectives are also accounted for as frame-evoking words. The future goal of this research is to build a frame-semantic resource, which requires not only the induction of semantic frames but also the determination of the arguments required by each frame and the induction of the semantic roles of the arguments. A possible extension of our approach is to utilize contextualized word embeddings of the arguments of verbs to see whether it is possible to generalize our approach for achieving this goal.
# of pLU pairs in the same cluster / # of all pLU pairs.
Table 1: Example sentences of the verbs "get" and "acquire" and the frames that each verb evokes in FrameNet.

Table 2: Statistics of the dataset from FrameNet.

Table 3: Experimental results. #pLU denotes the number of pLUs and #C denotes the number of frame clusters. Note that the actual numbers of LUs and frames are 1,188 and 393, respectively. GA means group average clustering.
1 https://framenet.icsi.berkeley.edu/
2 https://competitions.codalab.org/competitions/19159#learn_the_details-datasets
3 https://huggingface.co/transformers/
4 The performance of the top three models in Subtask-A of SemEval-2019 Task 2 is lower than reported in the task because the dataset used in this study has a high proportion of verbs that evoke multiple frames and is, therefore, a challenging dataset.
Acknowledgements

This work was supported by JSPS KAKENHI Grant Numbers 18H03286 and 21K12012.
References

Saba Anwar, Dmitry Ustalov, Nikolay Arefyev, Simone Paolo Ponzetto, Chris Biemann, and Alexander Panchenko. 2019. HHMM at SemEval-2019 task 2: Unsupervised frame induction using contextualized word embeddings. In Proceedings of the 13th International Workshop on Semantic Evaluation (SemEval 2019), pages 125-129.

Nikolay Arefyev, Boris Sheludko, Adis Davletov, Dmitry Kharchev, Alex Nevidomsky, and Alexander Panchenko. 2019. Neural GRANNy at SemEval-2019 task 2: A combined approach for better modeling of semantic relationships in semantic frame induction. In Proceedings of the 13th International Workshop on Semantic Evaluation (SemEval 2019), pages 31-38.

Amit Bagga and Breck Baldwin. 1998. Entity-based cross-document coreferencing using the vector space model. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics (COLING 1998), pages 79-85.

Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics (ACL-COLING 1998), pages 86-90.

Chris Biemann. 2006. Chinese whispers: an efficient graph clustering algorithm and its application to natural language processing problems. In Proceedings of TextGraphs: the First Workshop on Graph Based Methods for Natural Language Processing (TextGraphs 2006), pages 73-80.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019), pages 4171-4186.

Charles J. Fillmore. 2006. Frame semantics. Cognitive Linguistics: Basic Readings, 34:373-400.

Michael Steinbach, George Karypis, and Vipin Kumar. 2000. A comparison of document clustering techniques. In Proceedings of the Sixth International Conference on Knowledge Discovery and Data Mining Workshop on Text Mining.

Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579-2605.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems (NIPS 2013), pages 3111-3119.

Dan Pelleg and Andrew Moore. 2000. X-means: Extending k-means with efficient estimation of the number of clusters. In Proceedings of the 17th International Conference on Machine Learning (ICML 2000), pages 727-734.

Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2018), pages 2227-2237.

Behrang QasemiZadeh, Miriam R. L. Petruck, Regina Stodden, Laura Kallmeyer, and Marie Candito. 2019. SemEval-2019 task 2: Unsupervised lexical frame induction. In Proceedings of the 13th International Workshop on Semantic Evaluation (SemEval 2019), pages 16-30.

Eugénio Ribeiro, Vânia Mendonça, Ricardo Ribeiro, David Martins de Matos, Alberto Sardinha, Ana Lúcia Santos, and Luísa Coheur. 2019. L2F/INESC-ID at SemEval-2019 task 2: Unsupervised lexical semantic frame induction using contextualized word representations. In Proceedings of the 13th International Workshop on Semantic Evaluation (SemEval 2019), pages 130-136.
Title: Positional Encoding to Control Output Sequence Length
Authors: Sho Takase (sho.takase@nlp.c, Tokyo Institute of Technology); Naoaki Okazaki (okazaki@c.titech.ac.jp, Tokyo Institute of Technology)
Venue: Proceedings of NAACL-HLT 2019
Abstract: Neural encoder-decoder models have been successful in natural language generation tasks. However, real applications of abstractive summarization must consider an additional constraint: a generated summary should not exceed a desired length. In this paper, we propose a simple but effective extension of a sinusoidal positional encoding (Vaswani et al., 2017) that enables a neural encoder-decoder model to preserve the length constraint. Unlike previous studies that learn embeddings representing each length, the proposed method can generate a text of any length even if the target length is not present in the training data. The experimental results show that the proposed method can not only control the generation length but also improve the ROUGE scores.
DOI: 10.18653/v1/n19-1401 | PDF: https://www.aclweb.org/anthology/N19-1401.pdf
Corpus ID: 119306596 | arXiv: 1904.07418 | SHA: ff482c357716e884f64e8e54d8f1307df6e061b5
Positional Encoding to Control Output Sequence Length

Sho Takase (sho.takase@nlp.c) and Naoaki Okazaki (okazaki@c.titech.ac.jp)
Tokyo Institute of Technology

Proceedings of NAACL-HLT 2019, Minneapolis, Minnesota, June 2 - June 7, 2019

Neural encoder-decoder models have been successful in natural language generation tasks. However, real applications of abstractive summarization must consider an additional constraint: a generated summary should not exceed a desired length. In this paper, we propose a simple but effective extension of a sinusoidal positional encoding (Vaswani et al., 2017) that enables a neural encoder-decoder model to preserve the length constraint. Unlike previous studies that learn embeddings representing each length, the proposed method can generate a text of any length even if the target length is not present in the training data. The experimental results show that the proposed method can not only control the generation length but also improve the ROUGE scores.
Introduction
Neural encoder-decoder models have been successfully applied to various natural language generation tasks including machine translation (Sutskever et al., 2014), summarization (Rush et al., 2015), and caption generation (Vinyals et al., 2015). Still, it is necessary to control the output length for abstractive summarization, which generates a summary for a given text while satisfying a space constraint. In fact, Figure 1 shows a large variance in output sequences produced by a widely used encoder-decoder model (Luong et al., 2015), which has no mechanism for controlling the length of the output sequences.

Figure 1: Difference in number of characters between correct headlines and outputs of a widely used LSTM encoder-decoder (Luong et al., 2015) trained on sentence-headline pairs created by Rush et al. (2015) from the annotated English Gigaword corpus. The difference was investigated for 3,000 sentence-headline pairs randomly sampled from the test splits.

Fan et al. (2018) trained embeddings that correspond to each output length to control the output sequence length. Since the embeddings for different lengths are independent, it is hard to generate a sequence of a length that is infrequent in the training data. Thus, a method that can model any length continuously is required. Kikuchi et al. (2016) proposed two learning-based methods for an LSTM encoder-decoder: LenEmb and LenInit. LenEmb inputs an embedding representing the remaining length at each decoding step. Since this approach also prepares embeddings for each length independently, it suffers from the same problem as that in Fan et al. (2018).
On the other hand, LenInit can handle arbitrary lengths because it combines the scalar value of a desired length with a trainable embedding. LenInit initializes the LSTM cell of the decoder with an embedding that depends on the scalar value of the desired length. Liu et al. (2018) incorporated such scalar values into the initial state of the decoder in a CNN encoder-decoder. These approaches can deal with any length, but it is reasonable to incorporate the distance to the desired terminal position into each decoding step, as LenEmb does.
In this study, we focus on Transformer (Vaswani et al., 2017), which recently achieved the state-of-the-art score on the machine translation task. We extend the sinusoidal positional encoding, which represents the position of each token in Transformer (Vaswani et al., 2017), to represent the distance from a terminal position on the decoder side. In this way, the proposed method considers the remaining length explicitly at each decoding step. Moreover, the proposed method can handle any desired length regardless of its appearance in a training corpus because it uses the same continuous space for any length.
We conduct experiments on the headline generation task. The experimental results show that our proposed method is able to not only control the output length but also improve the ROUGE scores over the baselines. Our code and constructed test data are publicly available at: https://github.com/takase/control-length.

Transformer (Vaswani et al., 2017) uses a sinusoidal positional encoding to represent the position of an input. Transformer feeds the sum of the positional encoding and token embedding to the input layer of its encoder and decoder. Let pos be the position and d be the embedding size. Then, the i-th dimension of the sinusoidal positional encoding $PE_{(pos,i)}$ is as follows:

$PE_{(pos,2i)} = \sin(pos / 10000^{2i/d})$,  (1)
$PE_{(pos,2i+1)} = \cos(pos / 10000^{2i/d})$.  (2)

In short, each dimension of the positional encoding corresponds to a sinusoid whose period is $10000^{2i/d} \times 2\pi$. Since this function returns an identical value at the same position pos, the above positional encoding can be interpreted as representing the absolute position of each input token.
In this paper, we extend Equations (1) and (2) to depend on the given output length and the distance from the terminal position. We propose two extensions: length-difference positional encoding (LDPE) and length-ratio positional encoding (LRPE). Then we replace Equations (1) and (2) with (3) and (4) (or (5) and (6)) on the decoder side to control the output sequence length. We define LDPE and LRPE as follows:
$LDPE_{(pos,len,2i)} = \sin((len - pos) / 10000^{2i/d})$,  (3)
$LDPE_{(pos,len,2i+1)} = \cos((len - pos) / 10000^{2i/d})$,  (4)
$LRPE_{(pos,len,2i)} = \sin(pos / len^{2i/d})$,  (5)
$LRPE_{(pos,len,2i+1)} = \cos(pos / len^{2i/d})$,  (6)
where len denotes the given length constraint. LDPE returns an identical value at positions where the remaining length to the terminal position is the same. LRPE returns a similar value at positions where the ratio of the remaining length to the terminal position is similar. Let us consider the d-th dimension as the simplest example.
Since we obtain sin(pos/len) (or cos(pos/len)) at this dimension, the equations yield the same value when the remaining length ratio is the same, e.g., pos = 5, len = 10 and pos = 10, len = 20. We add LDPE (or LRPE) to the input layer of Transformer in the same manner as in Vaswani et al. (2017). In the training step, we assign the length of the correct output to len. In the test phase, we control the output length by assigning the desired length to len.
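As a concrete illustration of Equations (3)-(6), here is a minimal NumPy sketch of the two encodings. The function names and the even-dimension assumption are ours; this is not the authors' released implementation.

```python
import numpy as np

def ldpe(pos, length, d):
    """Length-difference positional encoding (Eqs. (3)-(4)):
    a sinusoid over the remaining length, length - pos. d assumed even."""
    assert d % 2 == 0
    i = np.arange(d // 2)
    enc = np.empty(d)
    enc[0::2] = np.sin((length - pos) / 10000 ** (2 * i / d))
    enc[1::2] = np.cos((length - pos) / 10000 ** (2 * i / d))
    return enc

def lrpe(pos, length, d):
    """Length-ratio positional encoding (Eqs. (5)-(6)): the 10000 base of
    the vanilla encoding is replaced by the desired length, so the code
    reflects the ratio of pos to length."""
    assert d % 2 == 0
    i = np.arange(d // 2)
    enc = np.empty(d)
    enc[0::2] = np.sin(pos / length ** (2 * i / d))
    enc[1::2] = np.cos(pos / length ** (2 * i / d))
    return enc

# LDPE is identical whenever the remaining length is the same,
# e.g. 7 tokens left in a 10-token and in a 20-token headline:
assert np.allclose(ldpe(3, 10, 64), ldpe(13, 20, 64))
```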
Experiments
Datasets
We conduct experiments on the headline generation task on Japanese and English datasets. The purpose of the experiments is to evaluate the ability of the proposed method to generate a summary of good quality within a specified length. We used the JAMUL corpus as the Japanese test set (Hitomi et al., 2019). This test set contains three kinds of headlines for 1,181 news articles 1 written by professional editors under different upper bounds on headline lengths. The upper bounds are 10, 13, and 26 characters (len = 10, 13, 26). This test set is suitable for simulating the real process of news production because it was constructed by a Japanese media company.
In contrast, we have no English test sets that contain headlines of multiple lengths. Thus, we randomly extracted 3,000 sentence-headline pairs that satisfy a length constraint from the test set constructed from the annotated English Gigaword (Napoles et al., 2012) with the pre-processing scripts of Rush et al. (2015) 2 . We set three configurations for the number of characters as the length constraint: 0 to 30 characters (len = 30), 30 to 50 characters (len = 50), and 50 to 75 characters (len = 75). Moreover, we also evaluate the proposed method on the DUC-2004 task 1 (Over et al., 2007) for comparison with published scores in previous studies.
Unfortunately, we have no large supervised data with multiple headlines of different lengths associated with each news article in either language. Thus, we trained the proposed method on pairs with a one-to-one correspondence between source articles and headlines. In the training step, we regarded the length of the target headline as the desired length len. For Japanese, we used the JNC corpus, which contains pairs of the lead three sentences of a news article and its headline (Hitomi et al., 2019). The training set contains about 1.6M pairs 3 . For English, we used sentence-headline pairs extracted from the annotated English Gigaword with the same pre-processing script used in the construction of the test set. The training set contains about 3.8M pairs.
In this paper, we used a character-level decoder to control the number of characters. On the encoder side, we used subword units to construct the vocabulary (Sennrich et al., 2016; Kudo, 2018). We set the hyper-parameter to fit the vocabulary size to about 8k for Japanese and 16k for English.
Baselines
We implemented two methods proposed by previous studies to control the output length and handle arbitrary lengths. We employed them and Transformer as baselines.
LenInit Kikuchi et al. (2016) proposed LenInit, which controls the output length by initializing the LSTM cell m of the decoder as follows:
$\mathbf{m} = len \times \mathbf{b}$,  (7)
where b is a trainable vector. We incorporated this method into a widely used LSTM encoder-decoder model (Luong et al., 2015) 4 . For a fair comparison, we set the same hyper-parameters as in Takase et al. (2018) because they indicated that an LSTM encoder-decoder model trained with these hyper-parameters achieved performance similar to the state of the art on headline generation.
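For illustration, a minimal PyTorch sketch of how the LenInit initialization in Equation (7) could look inside a decoder. The class and argument names are hypothetical, and the encoder that would supply the initial hidden state h0 is omitted.

```python
import torch
import torch.nn as nn

class LenInitDecoder(nn.Module):
    """Sketch of LenInit (Kikuchi et al., 2016): the decoder LSTM memory
    cell is initialized with len * b, where b is a trainable vector."""

    def __init__(self, emb_size, hidden_size, vocab_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_size)
        self.lstm = nn.LSTM(emb_size, hidden_size, batch_first=True)
        self.b = nn.Parameter(torch.zeros(hidden_size))  # Eq. (7)

    def forward(self, tokens, length, h0):
        # m = len * b; length: (batch,) desired lengths, h0: (1, batch, hidden).
        m0 = (length.view(-1, 1) * self.b).unsqueeze(0)  # (1, batch, hidden)
        out, _ = self.lstm(self.embed(tokens), (h0, m0))
        return out
```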
Length Control (LC) Liu et al. (2018) proposed a length control method that multiplies the desired length by input token embeddings. We trained the model with their hyper-parameters.
Transformer Our proposed method is based on Transformer (Vaswani et al., 2017) 5 . We trained Transformer with the same hyper-parameters as the base model in Vaswani et al. (2017).

Results

Table 1 shows the recall-oriented ROUGE-1 (R-1), 2 (R-2), and L (R-L) scores of each method on the Japanese test set 6 . This table indicates that Transformer with the proposed method (Transformer+LDPE and Transformer+LRPE) outperformed the baselines for all given constraints (len = 10, 13, 26). Transformer+LRPE performed slightly better than Transformer+LDPE. Moreover, we improved the performance by incorporating the standard sinusoidal positional encoding (+PE) on len = 10 and 26. The results imply that the absolute position also helps to generate better headlines while controlling the output length.

Table 2 shows the recall-oriented ROUGE scores on the English Gigaword test set. This table indicates that LDPE and LRPE significantly improved the performance on len = 75. Moreover, the absolute position (PE) also improved the performance on this test set. In particular, PE was very effective in the setting of very short headlines (len = 30). However, the proposed method slightly lowered the ROUGE-2 scores from the bare Transformer on len = 30, 50. We infer that the bare Transformer can generate headlines whose lengths are close to 30 and 50 because the majority of the training set consists of headlines whose lengths are less than or equal to 50. However, most of its generated headlines breached the length constraints, as explained in Section 3.4.

To investigate whether the proposed method can generate good headlines for unseen lengths, we excluded headlines whose lengths are equal to the desired length (len) from the training data. The lower parts of Tables 1 and 2 show ROUGE scores of the proposed method trained on the modified training data. These parts show that the proposed method achieved scores comparable to the ones obtained when trained on the whole training set. These results indicate that the proposed method can generate high-quality headlines even if the desired length does not appear in the training data.

Table 3 shows the recall-oriented ROUGE scores on the DUC-2004 test set. Following the evaluation protocol (Over et al., 2007), we truncated characters over 75 bytes. The table indicates that LDPE and LRPE significantly improved the performance compared to the bare Transformer, and achieved better performance than the baselines except for the R-2 score of LenInit. This table also shows the scores reported in previous studies. The proposed method outperformed the previous methods that control the output length and achieved scores competitive with the state of the art.

5 We used an implementation at https://github.com/pytorch/fairseq.
6 To calculate ROUGE scores on the Japanese dataset, we used https://github.com/asahi-research/Gingo.
Since the proposed method uses a character-based decoder, it sometimes generated words unrelated to the source sentence. Thus, we applied a simple re-ranking to the n-best headlines generated by the proposed method (n = 20 in this experiment) based on the words they contain. Our re-ranking strategy selects the headline that contains the most source-side words. Table 3 shows that Transformer+LRPE+PE with this re-ranking (+Re-ranking) achieved better scores than the state of the art (Suzuki and Nagata, 2017).
Analysis of Output Length
Following Liu et al. (2018), we used the variance of the generated summary lengths against the desired lengths as an indicator of the preciseness of the output lengths. We calculated variance (var) for n generated summaries as follows 7 :
$var = \frac{1}{n} \sum_{i=1}^{n} |l_i - len|^2$,  (8)
where len is the desired length and $l_i$ is the length of the i-th generated summary. Table 4 shows the values of Equation (8) computed for each method and each desired length. This table indicates that LDPE could control the length of headlines precisely. In particular, LDPE generated headlines whose lengths were identical to the desired one more consistently than LenInit and LC. LRPE also generated headlines with precise lengths, but its variance is larger than those of the previous methods for very short lengths, i.e., len = 10 and 13 in Japanese. However, we consider LRPE sufficient for real applications because the averaged difference between its output length and the desired length is small, e.g., 0.1 for len = 10.

The lower part of Table 4 shows the variances of the proposed method trained on the modified training data that does not contain headlines whose lengths are equal to the desired length, similar to the lower parts of Tables 1 and 2. The variances for this part are comparable to the ones obtained when we trained the proposed method on the whole training dataset. This fact indicates that the proposed method can generate an output that satisfies the constraint of the desired length even if the training data does not contain instances of such a length.
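Equation (8) is straightforward to compute from the generated outputs; a minimal sketch (the function name is ours):

```python
def length_variance(output_lengths, desired_length):
    """Equation (8): mean squared deviation of output lengths from len."""
    n = len(output_lengths)
    return sum(abs(l - desired_length) ** 2 for l in output_lengths) / n

print(length_variance([10, 9, 12], 10))  # -> (0 + 1 + 4) / 3 = 1.666...
```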
Conclusion
In this paper, we proposed length-dependent positional encodings, LDPE and LRPE, that can control the output sequence length in Transformer. The experimental results demonstrate that the proposed method can generate a headline of the desired length even if the desired length is not present in the training data. Moreover, the proposed method significantly improved the quality of headlines on the Japanese headline generation task while preserving the given length constraint. For English, the proposed method also generated headlines of precisely the desired length and achieved the top ROUGE scores on the DUC-2004 test set.
7 Liu et al. (2018) multiply Equation (8) by 0.001.
Table 2: Recall-oriented ROUGE scores for each length on test data extracted from annotated English Gigaword.

Table 3: Recall-oriented ROUGE scores in DUC-2004.

Model                                           R-1    R-2    R-L
Baselines
  LenInit                                       29.78  11.05  26.49
  LC                                            28.68  10.79  25.72
  Transformer                                   26.15   9.14  23.19
Proposed method
  Transformer+LDPE                              30.95  10.53  26.79
    +PE                                         31.00  10.78  27.02
    +Re-ranking                                 31.65  11.25  27.46
  Transformer+LRPE                              30.74  10.83  26.69
    +PE                                         31.10  11.05  27.25
    +Re-ranking                                 32.29  11.49  28.03
Previous studies for controlling output length
  Kikuchi et al. (2016)                         26.73   8.39  23.88
  Fan et al. (2018)                             30.00  10.27  26.43
Other previous studies
  Rush et al. (2015)                            28.18   8.49  23.81
  Suzuki and Nagata (2017)                      32.28  10.54  27.80
  Zhou et al. (2017)                            29.21   9.56  25.51
  Li et al. (2017)                              31.79  10.75  27.48
  Li et al. (2018)                              29.33  10.24  25.24

Table 4: Variances of generated headlines.
1 We obtained this test set by applying the pre-processing script at https://github.com/asahi-research/Gingo to the original JAMUL corpus.
2 https://github.com/facebookarchive/NAMAS
3 We obtained this training set by applying the pre-processing script at https://github.com/asahi-research/Gingo.
4 We used an implementation at https://github.com/mlpnlp/mlpnlp-nmt.
Acknowledgments

The research results have been achieved by "Research and Development of Deep Learning Technology for Advanced Multilingual Speech Translation", the Commissioned Research of National Institute of Information and Communications Technology (NICT), Japan.
References

Angela Fan, David Grangier, and Michael Auli. 2018. Controllable abstractive summarization. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation (WMT 2018), pages 45-54.

Yuta Hitomi, Yuya Taguchi, Hideaki Tamori, Ko Kikuta, Jiro Nishitoba, Naoaki Okazaki, Kentaro Inui, and Manabu Okumura. 2019. A large-scale multi-length headline corpus for improving length-constrained headline generation model evaluation. CoRR.

Yuta Kikuchi, Graham Neubig, Ryohei Sasano, Hiroya Takamura, and Manabu Okumura. 2016. Controlling output length in neural encoder-decoders. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 2016), pages 1328-1338.

Taku Kudo. 2018. Subword regularization: Improving neural network translation models with multiple subword candidates. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018), pages 66-75.

Haoran Li, Junnan Zhu, Jiajun Zhang, and Chengqing Zong. 2018. Ensure the correctness of the summary: Incorporate entailment knowledge into abstractive sentence summarization. In Proceedings of the 27th International Conference on Computational Linguistics (COLING 2018), pages 1430-1441.

Piji Li, Wai Lam, Lidong Bing, and Zihao Wang. 2017. Deep recurrent generative decoder for abstractive text summarization. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017), pages 2091-2100.

Yizhu Liu, Zhiyi Luo, and Kenny Zhu. 2018. Controlling length in abstractive summarization using a convolutional neural network. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018), pages 4110-4119.

Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015), pages 1412-1421.

Courtney Napoles, Matthew Gormley, and Benjamin Van Durme. 2012. Annotated Gigaword. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction (AKBC-WEKEX 2012), pages 95-100.

Paul Over, Hoa Dang, and Donna Harman. 2007. DUC in context. Information Processing & Management, 43(6):1506-1520.

Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015), pages 379-389.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016), pages 1715-1725.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27 (NIPS 2014), pages 3104-3112.

Jun Suzuki and Masaaki Nagata. 2017. Cutting-off redundant repeating generations for neural abstractive summarization. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2017), pages 291-297.

Sho Takase, Jun Suzuki, and Masaaki Nagata. 2018. Direct output connection for a high-rank language model. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018), pages 4599-4609.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30 (NIPS 2017), pages 5998-6008.

Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2015), pages 3156-3164.

Qingyu Zhou, Nan Yang, Furu Wei, and Ming Zhou. 2017. Selective encoding for abstractive sentence summarization. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017), pages 1095-1104.
GitHub URLs: https://github.com/takase/control-length, https://github.com/pytorch/fairseq, https://github.com/asahi-research/Gingo, https://github.com/facebookarchive/NAMAS, https://github.com/mlpnlp/mlpnlp-nmt
Title: Hierarchical Transformers for Multi-Document Summarization
Authors: Yang Liu (yang.liu2@ed.ac.uk); Mirella Lapata
Affiliation: Institute for Language, Cognition and Computation, School of Informatics, University of Edinburgh
Abstract: In this paper, we develop a neural summarization model which can effectively process multiple input documents and distill abstractive summaries. Our model augments a previously proposed Transformer architecture with the ability to encode documents in a hierarchical manner. We represent cross-document relationships via an attention mechanism which allows information to be shared, as opposed to simply concatenating text spans and processing them as a flat sequence. Our model learns latent dependencies among textual units, but can also take advantage of explicit graph representations focusing on similarity or discourse relations. Empirical results on the WikiSum dataset demonstrate that the proposed architecture brings substantial improvements over several strong baselines.
DOI: 10.18653/v1/p19-1500 | PDF: https://arxiv.org/pdf/1905.13164v1.pdf
Corpus ID: 170079112 | arXiv: 1905.13164 | SHA: 8f8fc8e4a8629bad5a4974eec0fa5d7dd0a1f612
Hierarchical Transformers for Multi-Document Summarization

Yang Liu (yang.liu2@ed.ac.uk) and Mirella Lapata
Institute for Language, Cognition and Computation, School of Informatics, University of Edinburgh
In this paper, we develop a neural summarization model which can effectively process multiple input documents and distill abstractive summaries. Our model augments a previously proposed Transformer architecture with the ability to encode documents in a hierarchical manner. We represent cross-document relationships via an attention mechanism which allows information to be shared, as opposed to simply concatenating text spans and processing them as a flat sequence. Our model learns latent dependencies among textual units, but can also take advantage of explicit graph representations focusing on similarity or discourse relations. Empirical results on the WikiSum dataset demonstrate that the proposed architecture brings substantial improvements over several strong baselines. 1
Introduction
Automatic summarization has enjoyed renewed interest in recent years, thanks to the popularity of neural network models and their ability to learn continuous representations without recourse to preprocessing tools or linguistic annotations. The availability of large-scale datasets (Sandhaus, 2008; Hermann et al., 2015; Grusky et al., 2018) containing hundreds of thousands of document-summary pairs has driven the development of neural architectures for summarizing single documents. Several approaches have shown promising results with sequence-to-sequence models that encode a source document and then decode it into an abstractive summary (See et al., 2017; Celikyilmaz et al., 2018; Paulus et al., 2018; Gehrmann et al., 2018).
Multi-document summarization - the task of producing summaries from clusters of thematically related documents - has received significantly less attention, partly due to the paucity of suitable data for the application of learning methods. High-quality multi-document summarization datasets (i.e., document clusters paired with multiple reference summaries written by humans) have been produced for the Document Understanding and Text Analysis Conferences (DUC and TAC), but are relatively small (in the range of a few hundred examples) for training neural models. In an attempt to drive research further, Liu et al. (2018) tap into the potential of Wikipedia and propose a methodology for creating a large-scale dataset (WikiSum) for multi-document summarization with hundreds of thousands of instances. Wikipedia articles, specifically lead sections, are viewed as summaries of various topics indicated by their title, e.g., "Florence" or "Natural Language Processing". Documents cited in the Wikipedia articles or web pages returned by Google (using the section titles as queries) are seen as the source cluster which the lead section purports to summarize.
Aside from the difficulties in obtaining training data, a major obstacle to the application of end-to-end models to multi-document summarization is the sheer size and number of source documents, which can be very large. As a result, it is practically infeasible (given the memory limitations of current hardware) to train a model which encodes all of them into vectors and subsequently generates a summary from them. Liu et al. (2018) propose a two-stage architecture, where an extractive model first selects a subset of salient passages, and subsequently an abstractive model generates the summary while conditioning on the extracted subset. The selected passages are concatenated into a flat sequence and the Transformer (Vaswani et al., 2017), an architecture well-suited to language modeling over long sequences, is used to decode the summary.
Although the model of Liu et al. (2018) takes an important first step towards abstractive multi-document summarization, it still considers the multiple input documents as a concatenated flat sequence, being agnostic of the hierarchical structures and the relations that might exist among documents. For example, different web pages might repeat the same content, include additional content, present contradictory information, or discuss the same fact in a different light (Radev, 2000). The realization that cross-document links are important in isolating salient information, eliminating redundancy, and creating overall coherent summaries has led to the widespread adoption of graph-based models for multi-document summarization (Erkan and Radev, 2004; Christensen et al., 2013; Wan, 2008; Parveen and Strube, 2014). Graphs conveniently capture the relationships between textual units within a document collection and can be easily constructed under the assumption that text spans represent graph nodes and edges are semantic links between them.
In this paper, we develop a neural summarization model which can effectively process multiple input documents and distill abstractive summaries. Our model augments the previously proposed Transformer architecture with the ability to encode multiple documents in a hierarchical manner. We represent cross-document relationships via an attention mechanism which allows information to be shared across multiple documents, as opposed to simply concatenating text spans and feeding them as a flat sequence to the model. In this way, the model automatically learns richer structural dependencies among textual units, thus incorporating well-established insights from earlier work. Advantageously, the proposed architecture can easily benefit from information external to the model, i.e., by replacing inter-document attention with a graph matrix computed on the basis of lexical similarity (Erkan and Radev, 2004) or discourse relations (Christensen et al., 2013). We evaluate our model on the WikiSum dataset and show experimentally that the proposed architecture brings substantial improvements over several strong baselines. We also find that the addition of a simple ranking module which scores documents based on their usefulness for the target summary can greatly boost the performance of a multi-document summarization system.
Related Work
Most previous multi-document summarization methods are extractive, operating over graph-based representations of sentences or passages. Approaches vary depending on how edge weights are computed, e.g., based on cosine similarity with tf-idf weights for words (Erkan and Radev, 2004) or on discourse relations (Christensen et al., 2013), and on the specific algorithm adopted for ranking text units for inclusion in the final summary. Several variants of the PageRank algorithm have been adopted in the literature (Erkan and Radev, 2004) in order to compute the importance or salience of a passage recursively based on the entire graph. More recently, Yasunaga et al. (2017) propose a neural version of this framework, where salience is estimated using features extracted from sentence embeddings and graph convolutional networks (Kipf and Welling, 2017) applied over the relation graph representing cross-document links.
Abstractive approaches have met with limited success. A few systems generate summaries based on sentence fusion, a technique which identifies fragments conveying common information across documents and combines these into sentences (Barzilay and McKeown, 2005; Filippova and Strube, 2008; Bing et al., 2015). Although neural abstractive models have achieved promising results on single-document summarization (See et al., 2017; Paulus et al., 2018; Gehrmann et al., 2018; Celikyilmaz et al., 2018), the extension of sequence-to-sequence architectures to multi-document summarization is less straightforward. Apart from the lack of sufficient training data, neural models also face the computational challenge of processing multiple source documents. Previous solutions include model transfer (Zhang et al., 2018; Lebanoff and Liu, 2018), where a sequence-to-sequence model is pretrained on single-document summarization data and fine-tuned on DUC (multi-document) benchmarks, or unsupervised models relying on reconstruction objectives (Ma et al., 2016; Chu and Liu, 2018). Liu et al. (2018) propose a methodology for constructing large-scale summarization datasets and a two-stage model which first extracts salient information from source documents and then uses a decoder-only architecture (that can attend to very long sequences) to generate the summary. We follow their setup in viewing multi-document summarization as a supervised machine learning problem and for this purpose assume access to large, labeled datasets (i.e., source documents-summary pairs). In contrast to their approach, we use a learning-based ranker and our abstractive model can hierarchically encode the input documents, with the ability to learn latent relations across documents and additionally incorporate information encoded in well-known graph representations.

Figure 1: Pipeline of our multi-document summarization system. L source paragraphs are first ranked and the L′-best ones serve as input to an encoder-decoder model which generates the target summary.
Model Description
We follow Liu et al. (2018) in treating the generation of lead Wikipedia sections as a multi-document summarization task. The input to a hypothetical system is the title of a Wikipedia article and a collection of source documents, while the output is the Wikipedia article's first section. Source documents are webpages cited in the References section of the Wikipedia article and the top 10 search results returned by Google (with the title of the article as the query). Since source documents can be relatively long, they are split into multiple paragraphs by line-breaks. More formally, given title T and L input paragraphs $\{P_1, \cdots, P_L\}$ (retrieved from Wikipedia citations and a search engine), the task is to generate the lead section D of the Wikipedia article.
Our summarization system is illustrated in Figure 1. Since the input paragraphs are numerous and possibly lengthy, instead of directly applying an abstractive system, we first rank them and summarize the L′-best ones. Our summarizer follows the very successful encoder-decoder architecture (Bahdanau et al., 2015), where the encoder encodes the input text into hidden representations and the decoder generates target summaries based on these representations. In this paper, we focus exclusively on the encoder part of the model; our decoder follows the Transformer architecture introduced in Vaswani et al. (2017): it generates a summary token by token while attending to the source input. We also use beam search and a length penalty (Wu et al., 2016) in the decoding process to generate more fluent and longer summaries.
Paragraph Ranking
Unlike Liu et al. (2018), who rank paragraphs based on their similarity with the title (using tf-idf-based cosine similarity), we adopt a learning-based approach. A logistic regression model is applied to each paragraph to calculate a score indicating whether it should be selected for summarization. We use two recurrent neural networks with Long Short-Term Memory units (LSTM; Hochreiter and Schmidhuber 1997) to represent title T and source paragraph P:
$\{u_{t_1}, \cdots, u_{t_m}\} = \mathrm{lstm}_t(\{w_{t_1}, \cdots, w_{t_m}\})$  (1)
$\{u_{p_1}, \cdots, u_{p_n}\} = \mathrm{lstm}_p(\{w_{p_1}, \cdots, w_{p_n}\})$  (2)
where $w_{t_i}$, $w_{p_j}$ are word embeddings for the tokens in T and P, and $u_{t_i}$, $u_{p_j}$ are the updated vectors for each token after applying the LSTMs.
A max-pooling operation is then used over the title vectors to obtain a fixed-length representation $\hat{u}_t$:

$\hat{u}_t = \mathrm{maxpool}(\{u_{t_1}, \cdots, u_{t_m}\})$  (3)
We concatenate $\hat{u}_t$ with the vector $u_{p_i}$ of each token in the paragraph and apply a non-linear transformation to extract features for matching the title and the paragraph. A second max-pooling operation yields the final paragraph vector $\hat{p}$:

$p_i = \tanh(W_1([u_{p_i}; \hat{u}_t]))$  (4)
$\hat{p} = \mathrm{maxpool}(\{p_1, \cdots, p_n\})$  (5)
Finally, to estimate whether a paragraph should be selected, we use a linear transformation and a sigmoid function:
$s = \mathrm{sigmoid}(W_2 \hat{p})$  (6)
where s is the score indicating whether paragraph P should be used for summarization. All input paragraphs $\{P_1, \cdots, P_L\}$ receive scores $\{s_1, \cdots, s_L\}$. The model is trained by minimizing the cross-entropy loss between $s_i$ and ground-truth scores $y_i$ denoting the relatedness of a paragraph to the gold-standard summary. We adopt the ROUGE-2 recall (of paragraph $P_i$ against the gold target text D) as $y_i$. At test time, input paragraphs are ranked based on the model-predicted scores and an ordering $\{R_1, \cdots, R_L\}$ is generated. The first L′ paragraphs $\{R_1, \cdots, R_{L'}\}$ are selected as input to the second, abstractive stage.
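A minimal PyTorch sketch of the ranking model of Equations (1)-(6). Names and sizes are illustrative (the paper reports 256-dimensional LSTMs); in training, the returned score would be fit to the ROUGE-2 recall targets with a cross-entropy loss.

```python
import torch
import torch.nn as nn

class ParagraphRanker(nn.Module):
    """Sketch of the title/paragraph matching ranker, Eqs. (1)-(6)."""

    def __init__(self, vocab_size, emb_size=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_size)
        self.lstm_t = nn.LSTM(emb_size, hidden, batch_first=True)  # title
        self.lstm_p = nn.LSTM(emb_size, hidden, batch_first=True)  # paragraph
        self.W1 = nn.Linear(2 * hidden, hidden)
        self.W2 = nn.Linear(hidden, 1)

    def forward(self, title, para):
        u_t, _ = self.lstm_t(self.embed(title))    # (B, m, h), Eq. (1)
        u_p, _ = self.lstm_p(self.embed(para))     # (B, n, h), Eq. (2)
        u_hat = u_t.max(dim=1).values              # max-pooling, Eq. (3)
        u_hat = u_hat.unsqueeze(1).expand(-1, u_p.size(1), -1)
        p = torch.tanh(self.W1(torch.cat([u_p, u_hat], dim=-1)))  # Eq. (4)
        p_hat = p.max(dim=1).values                # max-pooling, Eq. (5)
        return torch.sigmoid(self.W2(p_hat)).squeeze(-1)  # score s, Eq. (6)
```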
Paragraph Encoding
Instead of treating the selected paragraphs as one very long sequence, we develop a hierarchical model based on the Transformer architecture (Vaswani et al., 2017) to capture inter-paragraph relations. The model is composed of several local and global transformer layers which can be stacked freely. Let $t_{ij}$ denote the j-th token in the i-th ranked paragraph $R_i$; the model takes vectors $x^0_{ij}$ (for all tokens) as input. For the l-th transformer layer, the input is $x^{l-1}_{ij}$, and the output is written as $x^l_{ij}$.
Embeddings
Input tokens are first represented by word embeddings. Let $w_{ij} \in \mathbb{R}^d$ denote the embedding assigned to $t_{ij}$. Since the Transformer is a non-recurrent model, we also assign a special positional embedding $pe_{ij}$ to $t_{ij}$, to indicate the position of the token within the input.
To calculate positional embeddings, we follow Vaswani et al. (2017) and use sine and cosine functions of different frequencies. The embedding $e_p$ for the p-th element in a sequence is:
$e_p[2i] = \sin(p / 10000^{2i/d})$  (7)
$e_p[2i+1] = \cos(p / 10000^{2i/d})$  (8)
where $e_p[i]$ indicates the i-th dimension of the embedding vector. Because each dimension of the positional encoding corresponds to a sinusoid, for any fixed offset o, $e_{p+o}$ can be represented as a linear function of $e_p$, which enables the model to distinguish the relative positions of input elements. In multi-document summarization, token $t_{ij}$ has two positions that need to be considered, namely i (the rank of the paragraph) and j (the position of the token within the paragraph). Positional embedding $pe_{ij} \in \mathbb{R}^d$ represents both positions (via concatenation) and is added to word embedding $w_{ij}$ to obtain the final input vector $x^0_{ij}$:
$pe_{ij} = [e_i; e_j]$  (9)
$x^0_{ij} = w_{ij} + pe_{ij}$  (10)
Local Transformer Layer
A local transformer layer is used to encode contextual information for tokens within each paragraph. The local transformer layer is the same as the vanilla transformer layer (Vaswani et al., 2017) and is composed of two sub-layers:
$h = \mathrm{LayerNorm}(x^{l-1} + \mathrm{MHAtt}(x^{l-1}))$  (11)
$x^l = \mathrm{LayerNorm}(h + \mathrm{FFN}(h))$  (12)
where LayerNorm is the layer normalization proposed in Ba et al. (2016); MHAtt is the multi-head attention mechanism introduced in Vaswani et al. (2017), which allows each token to attend to other tokens with different attention distributions; and FFN is a two-layer feed-forward network with ReLU as its hidden activation function.
Global Transformer Layer
A global transformer layer is used to exchange information across multiple paragraphs. As shown in Figure 2, we first apply a multi-head pooling operation to each paragraph. Different heads will encode paragraphs with different attention weights. Then, for each head, an inter-paragraph attention mechanism is applied, where each paragraph can collect information from other paragraphs by self-attention, generating a context vector to capture contextual information from the whole input. Finally, context vectors are concatenated, linearly transformed, added to the vector of each token, and fed to a feed-forward layer, updating the representation of each token with global information.
Multi-head Pooling To obtain fixed-length paragraph representations, we apply a weighted-pooling operation; instead of using only one representation for each paragraph, we introduce a multi-head pooling mechanism, where for each paragraph, weight distributions over tokens are calculated, allowing the model to flexibly encode paragraphs in different representation subspaces by attending to different words. Let $x^{l-1}_{ij} \in \mathbb{R}^d$ denote the output vector of the last transformer layer for token $t_{ij}$, which is used as input for the current layer. For each paragraph $R_i$ and for each head $z \in \{1, \cdots, n_{head}\}$, we first transform the input vectors into attention scores $a^z_{ij}$ and value vectors $b^z_{ij}$. Then, for each head, we calculate a probability distribution $\hat{a}^z_{ij}$ over tokens within the paragraph based on the attention scores:
$a^z_{ij} = W^z_a x^{l-1}_{ij}$  (13)
$b^z_{ij} = W^z_b x^{l-1}_{ij}$  (14)
$\hat{a}^z_{ij} = \exp(a^z_{ij}) / \sum_{j'=1}^{n} \exp(a^z_{ij'})$  (15)

where $W^z_a \in \mathbb{R}^{1 \times d}$ and $W^z_b \in \mathbb{R}^{d_{head} \times d}$ are weights, $d_{head} = d / n_{head}$ is the dimension of each head, and n is the number of tokens in $R_i$. We next apply a weighted summation with another linear transformation and layer normalization to obtain vector $head^z_i$ for the paragraph:
$head^z_i = \mathrm{LayerNorm}(W^z_c \sum_{j=1}^{n} \hat{a}^z_{ij} b^z_{ij})$  (16)

where $W^z_c \in \mathbb{R}^{d_{head} \times d_{head}}$ is the weight. The model can flexibly incorporate multiple heads, with each paragraph having multiple attention distributions, thereby focusing on different views of the input.
Inter-paragraph Attention We model the dependencies across multiple paragraphs with an inter-paragraph attention mechanism. Similar to self-attention, inter-paragraph attention allows each paragraph to attend to other paragraphs by calculating an attention distribution:
$q^z_i = W^z_q\, head^z_i$  (17)
$k^z_i = W^z_k\, head^z_i$  (18)
$v^z_i = W^z_v\, head^z_i$  (19)
$context^z_i = \sum_{i'=1}^{m} \frac{\exp({q^z_i}^\top k^z_{i'})}{\sum_{o=1}^{m} \exp({q^z_i}^\top k^z_o)}\, v^z_{i'}$  (20)
where $W^z_q, W^z_k, W^z_v \in \mathbb{R}^{d_{head} \times d_{head}}$ are weights, and $q^z_i$, $k^z_i$, $v^z_i$ are the query, key, and value vectors linearly transformed from $head^z_i$ as in Vaswani et al. (2017); $context^z_i \in \mathbb{R}^{d_{head}}$ represents the context vector generated by a self-attention operation over all paragraphs. m is the number of input paragraphs. Figure 2 provides a schematic view of inter-paragraph attention.
Feed-forward Networks We next update token representations with contextual information. We first fuse information from all heads by concatenating all context vectors and applying a linear transformation with weight $W_c \in \mathbb{R}^{d \times d}$:

$c_i = W_c[context^1_i; \cdots; context^{n_{head}}_i]$  (21)

We then add $c_i$ to each input token vector $x^{l-1}_{ij}$, and feed the result to a two-layer feed-forward network with ReLU as the activation function and a highway layer normalization on top:

$g_{ij} = W_{o2}\,\mathrm{ReLU}(W_{o1}(x^{l-1}_{ij} + c_i))$  (22)
$x^l_{ij} = \mathrm{LayerNorm}(g_{ij} + x^{l-1}_{ij})$  (23)
where $W_{o1} \in \mathbb{R}^{d_{ff} \times d}$ and $W_{o2} \in \mathbb{R}^{d \times d_{ff}}$ are the weights, and $d_{ff}$ is the hidden size of the feed-forward layer. In this way, each token within paragraph $R_i$ can collect information from other paragraphs in a hierarchical and efficient manner.
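Continuing the sketch, the remaining pieces of the global layer, namely inter-paragraph attention (Equations (17)-(20)), context fusion (Equation (21)), and the token update (Equations (22)-(23)), could look as follows for a single document; the class and layer names are ours.

```python
import torch
import torch.nn as nn

class GlobalExchange(nn.Module):
    """Sketch of Eqs. (17)-(23): self-attention over the pooled paragraph
    heads, fusion of the per-head contexts into c_i, and the token update."""

    def __init__(self, d, n_head, d_ff):
        super().__init__()
        self.n_head, self.d_head = n_head, d // n_head
        h = self.d_head
        self.W_q, self.W_k, self.W_v = nn.Linear(h, h), nn.Linear(h, h), nn.Linear(h, h)
        self.W_c = nn.Linear(d, d)                       # fuse contexts, Eq. (21)
        self.ffn = nn.Sequential(nn.Linear(d, d_ff), nn.ReLU(),
                                 nn.Linear(d_ff, d))     # Eq. (22)
        self.norm = nn.LayerNorm(d)

    def forward(self, heads, x):
        # heads: (m, n_head, d_head) pooled vectors of the m paragraphs;
        # x: (m, n, d) token vectors of the same paragraphs.
        q, k, v = self.W_q(heads), self.W_k(heads), self.W_v(heads)
        scores = torch.einsum('izh,jzh->zij', q, k)      # per-head attention
        attn = torch.softmax(scores, dim=-1)             # Eq. (20)
        context = torch.einsum('zij,jzh->izh', attn, v)  # (m, n_head, d_head)
        c = self.W_c(context.reshape(context.size(0), -1))   # (m, d), Eq. (21)
        g = self.ffn(x + c.unsqueeze(1))                 # Eq. (22)
        return self.norm(g + x)                          # Eq. (23)
```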
Graph-informed Attention
The inter-paragraph attention mechanism can be viewed as learning a latent graph representation (self-attention weights) of the input paragraphs.
Although previous work has shown that similar latent representations are beneficial for downstream NLP tasks (Liu and Lapata, 2018; Kim et al., 2017; Williams et al., 2018; Niculae et al., 2018; Fernandes et al., 2019), much work in multi-document summarization has taken advantage of explicit graph representations, each focusing on different facets of the summarization task (e.g., capturing redundant information or representing passages referring to the same event or entity). One advantage of the hierarchical transformer is that we can easily incorporate graphs external to the model to generate better summaries. We experimented with two well-established graph representations, which we discuss briefly below. However, there is nothing inherent in our model that restricts us to these; any graph modeling relationships across paragraphs could have been used instead.

Our first graph aims to capture lexical relations; graph nodes correspond to paragraphs and edge weights are cosine similarities based on tf-idf representations of the paragraphs. Our second graph aims to capture discourse relations (Christensen et al., 2013); it builds an Approximate Discourse Graph (ADG) (Yasunaga et al., 2017) over paragraphs; edges between paragraphs are drawn by counting (a) co-occurring entities and (b) discourse markers (e.g., however, nevertheless) connecting two adjacent paragraphs (see the Appendix for details on how ADGs are constructed).
We represent such graphs with a matrix G, where G_{ii'} is the weight of the edge connecting paragraphs i and i'. We can then inject this graph into our hierarchical transformer by simply substituting one of its (learned) heads z with G. Equation (20) for calculating the context vector for this head is modified as:
context_i^z = \sum_{i'=1}^{m} \frac{G_{ii'}}{\sum_{o=1}^{m} G_{io}} v_{i'}^z    (24)
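A minimal sketch of Equation (24): for the designated head, the learned attention distribution is replaced by a row-normalized external graph G. The epsilon guard against empty rows is our assumption.

import torch

def graph_head_context(G, v, eps=1e-9):
    # G: [m, m] external edge weights; v: [m, d_head] value vectors for this head
    weights = G / (G.sum(dim=-1, keepdim=True) + eps)    # normalize each row, Eq. (24)
    return weights @ v                                   # [m, d_head] context vectors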
Experimental Setup
WikiSum Dataset We used the scripts and urls provided in Liu et al. (2018) to crawl Wikipedia articles and source reference documents. We successfully crawled 78.9% of the original documents (some urls have become invalid and corresponding documents could not be retrieved). We further removed clone paragraphs (which are exact copies of some parts of the Wikipedia articles); these were paragraphs in the source documents whose bigram recall against the target summary was higher than 0.8.

Table 1: ROUGE-L recall against target summary for L-best paragraphs obtained with tf-idf cosine similarity and our ranking model.
For both ranking and summarization stages, we encode source paragraphs and target summaries using subword tokenization with SentencePiece (Kudo and Richardson, 2018). Our vocabulary consists of 32,000 subwords and is shared for both source and target.
Paragraph Ranking
To train the regression model, we calculated the ROUGE-2 recall (Lin, 2004) of each paragraph against the target summary and used this as the ground-truth score. The hidden size of the two LSTMs was set to 256, and dropout (with dropout probability of 0.2) was used before all linear layers. Adagrad (Duchi et al., 2011) with learning rate 0.15 was used for optimization. We compare our ranking model against the method proposed in Liu et al. (2018), who use the tf-idf cosine similarity between each paragraph and the article title to rank the input paragraphs. We take the first L paragraphs from the ordered paragraph set produced by our ranker and the similarity-based method, respectively. We concatenate these paragraphs and calculate their ROUGE-L recall against the gold target text. The results are shown in Table 1. We can see that our ranker effectively extracts related paragraphs and produces more informative input for the downstream summarization task.
Training Configuration In all abstractive models, we apply dropout (with probability of 0.1) before all linear layers; label smoothing (Szegedy et al., 2016) with smoothing factor 0.1 is also used. Training follows the traditional sequence-to-sequence paradigm with maximum likelihood estimation. The optimizer was Adam (Kingma and Ba, 2014) with learning rate of 2, β1 = 0.9, and β2 = 0.998; we also applied learning rate warmup over the first 8,000 steps, and decay as in Vaswani et al. (2017). All transformer-based models had 256 hidden units; the feed-forward hidden size was 1,024 for all layers. All models were trained on 4 GPUs (NVIDIA TITAN Xp) for 500,000 steps. We used gradient accumulation to keep training time for all models approximately consistent. We selected the 5 best checkpoints based on performance on the validation set and report averaged results on the test set. During decoding we use beam search with beam size 5 and length penalty with α = 0.4 (Wu et al., 2016); we decode until an end-of-sequence token is reached.
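The learning-rate schedule above (linear warmup, then decay as in Vaswani et al., 2017) can be written as a small function. How the factor of 2 and the hidden size enter the scaling is our assumption about the exact implementation.

def transformer_lr(step, factor=2.0, warmup=8000, d_model=256):
    # Inverse-square-root decay with linear warmup over the first `warmup` steps.
    step = max(step, 1)
    return factor * d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)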
Comparison Systems
We compared the proposed hierarchical transformer against several strong baselines:
Lead is a simple baseline that concatenates the title and ranked paragraphs, and extracts the first k tokens; we set k to the length of the ground-truth target.
LexRank (Erkan and Radev, 2004) is a widely-used graph-based extractive summarizer; we build a graph with paragraphs as nodes and edges weighted by tf-idf cosine similarity; we run a PageRank-like algorithm on this graph to rank and select paragraphs until the length of the ground-truth summary is reached.
Flat Transformer (FT) is a baseline that applies a Transformer-based encoder-decoder model to a flat token sequence. We used a 6-layer transformer. The title and ranked paragraphs were concatenated and truncated to 600, 800, and 1,200 tokens.
T-DMCA is the best performing model of Liu et al. (2018) and a shorthand for Transformer Decoder with Memory Compressed Attention; they only used a Transformer decoder and compressed the key and value in self-attention with a convolutional layer. The model has 5 layers as in Liu et al. (2018). Its hidden size is 512 and its feed-forward hidden size is 2,048. The title and ranked paragraphs were concatenated and truncated to 3,000 tokens.
Hierarchical Transformer (HT) is the model proposed in this paper. The model architecture is a 7-layer network (with 5 local-attention layers at the bottom and 2 global-attention layers at the top). The model takes the title and L = 24 paragraphs as input to produce a target summary, which leads to approximately 1,600 input tokens per instance.
Results
Automatic Evaluation We evaluated summarization quality using ROUGE F1 (Lin, 2004). We report unigram and bigram overlap (ROUGE-1 and ROUGE-2) as a means of assessing informativeness and the longest common subsequence (ROUGE-L) as a means of assessing fluency. Table 2 summarizes our results. The first block in the table includes extractive systems (Lead, LexRank), the second block includes several variants of Flat Transformer-based models (FT, T-DMCA), while the rest of the table presents the results of our Hierarchical Transformer (HT). As can be seen, abstractive models generally outperform extractive ones. The Flat Transformer achieves best results when the input length is set to 800 tokens, while longer input (i.e., 1,200 tokens) actually hurts performance. The Hierarchical Transformer with 1,600 input tokens outperforms FT, and even T-DMCA when the latter is presented with 3,000 tokens. Adding an external graph also seems to help the summarization process. The similarity graph does not have an obvious influence on the results, while the discourse graph boosts ROUGE-L by 0.16.
We also found that the performance of the Hierarchical Transformer further improves when the model is presented with longer input at test time.2 As shown in the last row of Table 2, when testing on 3,000 input tokens, summarization quality improves across the board. This suggests that the model can potentially generate better summaries without increasing training time.

2 This was not the case with the other Transformer models.

Table 3 summarizes ablation studies aiming to assess the contribution of individual components. Our experiments confirmed that encoding paragraph position in addition to token position within each paragraph is beneficial (see row w/o PP), as well as multi-head pooling (w/o MP is a model where the number of heads is set to 1), and the global transformer layer (w/o GT is a model with only 5 local transformer layers in the encoder).
Human Evaluation In addition to automatic evaluation, we also assessed system performance by eliciting human judgments on 20 randomly selected test instances. Our first evaluation study quantified the degree to which summarization models retain key information from the documents following a question-answering (QA) paradigm (Clarke and Lapata, 2010; Narayan et al., 2018). We created a set of questions based on the gold summary under the assumption that it contains the most important information from the input paragraphs. We then examined whether participants were able to answer these questions by reading system summaries alone without access to the gold summary. The more questions a system can answer, the better it is at summarization. We created 57 questions in total varying from two to four questions per gold summary. Examples of questions and their answers are given in Table 5. We adopted the same scoring mechanism used in Clarke and Lapata (2010), i.e., correct answers are marked with 1, partially correct ones with 0.5, and 0 otherwise. A system's score is the average of all question scores.
Our second evaluation study assessed the overall quality of the summaries by asking participants to rank them taking into account the following criteria: Informativeness (does the summary convey important facts about the topic in question?), Fluency (is the summary fluent and grammatical?), and Succinctness (does the summary avoid repetition?). We used Best-Worst Scaling (Louviere et al., 2015), a less labor-intensive alternative to paired comparisons that has been shown to produce more reliable results than rating scales (Kiritchenko and Mohammad, 2017). Participants were presented with the gold summary and summaries generated from 3 out of 4 systems and were asked to decide which summary was the best and which one was the worst in relation to the gold standard, taking into account the criteria mentioned above. The rating of each system was computed as the percentage of times it was chosen as best minus the times it was selected as worst. Ratings range from −1 (worst) to 1 (best).
Both evaluations were conducted on the Amazon Mechanical Turk platform with 5 responses per hit. Participants evaluated summaries produced by the Lead baseline, the Flat Transformer, T-DMCA, and our Hierarchical Transformer. All evaluated systems were variants that achieved the best performance in automatic evaluations. As shown in Table 4, on both evaluations, participants overwhelmingly prefer our model (HT). All pairwise comparisons among systems are statistically significant (using a one-way ANOVA with post-hoc Tukey HSD tests; p < 0.01). Examples of system output are provided in Table 5.

Table 5: GOLD human authored summaries, questions based on them (answers shown in square brackets) and automatic summaries produced by the LEAD baseline, the Flat Transformer (FT), T-DMCA and our Hierarchical Transformer (HT).

Pentagoet Archeological District
GOLD: The Pentagoet Archeological District is a National Historic Landmark District located at the southern edge of the Bagaduce Peninsula in Castine, Maine. It is the site of Fort Pentagoet, a 17th-century fortified trading post established by fur traders of French Acadia. From 1635 to 1654 this site was a center of trade with the local Abenaki, and marked the effective western border of Acadia with New England. From 1654 to 1670 the site was under English control, after which it was returned to France by the Treaty of Breda. The fort was destroyed in 1674 by Dutch raiders. The site was designated a National Historic Landmark in 1993. It is now a public park.
QA: What is the Pentagoet Archeological District? [a National Historic Landmark District] Where is it located? [Castine, Maine] What did the Abenaki Indians use the site for? [trading center]
LEAD: The Pentagoet Archeological District is a National Historic Landmark District located in Castine, Maine. This district forms part of the traditional homeland of the Abenaki Indians, in particular the Penobscot tribe. In the colonial period, Abenakis frequented the fortified trading post at this site, bartering moosehides, sealskins, beaver and other furs in exchange for European commodities.
FT: "Pentagoet Archeological district" is a National Historic Landmark District located at the southern edge of the Bagaduce Peninsula in Treaty Of Breda. It was listed on the national register of historic places in 1983.
T-DMCA: The Pentagoet Archeological District is a national historic landmark district located in castine, maine. This district forms part of the traditional homeland of the abenaki indians, in particular the Penobscot tribe. The district was listed on the national register of historic places in 1982.
HT: The Pentagoet Archeological district is a National Historic Landmark District located in Castine, Maine. This district forms part of the traditional homeland of the Abenaki Indians, in particular the Penobscot tribe. In the colonial period, Abenaki frequented the fortified trading post at this site, bartering moosehides, sealskins, beaver and other furs in exchange for European commodities.

Melanesian Whistler
GOLD: The Melanesian whistler or Vanuatu whistler (Pachycephala chlorura) is a species of passerine bird in the whistler family Pachycephalidae. It is found on the Loyalty Islands, Vanuatu, and Vanikoro in the far southeastern Solomons.
QA: What is the Melanesian Whistler? [a species of passerine bird in the whistler family Pachycephalidae] Where is it found? [Loyalty Islands, Vanuatu, and Vanikoro in the far south-eastern Solomons]
LEAD: The Australian golden whistler (Pachycephala pectoralis) is a species of bird found in forest, woodland, mallee, mangrove and scrub in Australia (except the interior and most of the north). Most populations are resident, but some in south-eastern Australia migrate north during the winter.
FT: The Melanesian whistler (P. Caledonica) is a species of bird in the family Muscicapidae. It is endemic to Melanesia.
T-DMCA: The Australian golden whistler (Pachycephala chlorura) is a species of bird in the family Pachycephalidae, which is endemic to Fiji.
HT: The Melanesian whistler (Pachycephala chlorura) is a species of bird in the family Pachycephalidae, which is endemic to Fiji.
Conclusions
In this paper we conceptualized abstractive multidocument summarization as a machine learning problem. We proposed a new model which is able to encode multiple input documents hierarchically, learn latent relations across them, and additionally incorporate structural information from well-known graph representations. We have also demonstrated the importance of a learning-based approach for selecting which documents to summarize. Experimental results show that our model produces summaries which are both fluent and in-formative outperforming competitive systems by a wide margin. In the future we would like to apply our hierarchical transformer to question answering and related textual inference tasks.
Figure 2: A global transformer layer. Different colors indicate different heads in multi-head pooling and inter-paragraph attention.

Table 3: Hierarchical Transformer and versions thereof without (w/o) paragraph position (PP), multi-head pooling (MP), and global transformer layer (GT).
Table 2: Test set results on the WikiSum dataset using ROUGE F1.

Table 4: System scores based on questions answered by AMT participants and summary quality rating.
Our code and data are available at https://github.com/nlpyang/hiersumm.
Acknowledgments

We would like to thank Laura Perez-Beltrachini for her help with preprocessing the dataset. This research is supported by a Google PhD Fellowship to the first author. The authors gratefully acknowledge the financial support of the European Research Council (award number 681760).

A Appendix

We describe here how the similarity and discourse graphs discussed in Section 3.2.4 were created. These graphs were added to the hierarchical transformer model as a means to enhance summary quality (see Section 5 for details).

A.1 Similarity Graph

The similarity graph S is based on tf-idf cosine similarity. The nodes of the graph are paragraphs. We first represent each paragraph p_i as a bag of words. Then, we calculate the tf-idf value v_{ik} for each token t_{ik} in a paragraph:
v_{ik} = N_w(t_{ik}) \log(N_d / N_{dw}(t_{ik}))    (25)
where N_w(t) is the count of word t in the paragraph, N_d is the total number of paragraphs, and N_{dw}(t) is the total number of paragraphs containing the word. We thus obtain a tf-idf vector for each paragraph. Then, for all paragraph pairs ⟨p_i, p_{i'}⟩, we calculate the cosine similarity of their tf-idf vectors and use this as the weight S_{ii'} for the edge connecting the pair in the graph. We remove edges with weights lower than 0.2.

A.2 Discourse Graphs

To build the Approximate Discourse Graph (ADG) D, we follow Christensen et al. (2013) and Yasunaga et al. (2017). The original ADG makes use of several complex features. Here, we create a simplified version with only two features (nodes in this graph are again paragraphs).

Co-occurring Entities For each paragraph p_i, we extract a set of entities E_i in the paragraph using the Spacy NER recognizer.3 We only use entities with type {PERSON, NORP, FAC, ORG, GPE, LOC, EVENT, WORK OF ART, LAW}. For each paragraph pair ⟨p_i, p_j⟩, we count e_{ij}, the number of entities with exact match.

Discourse Markers We use the following 36 explicit discourse markers to identify edges between two adjacent paragraphs in a source webpage: again, also, another, comparatively, furthermore, at the same time, however, immediately, indeed, instead, to be sure, likewise, meanwhile, moreover, nevertheless, nonetheless, notably, otherwise, regardless, similarly, unlike, in addition, even, in turn, in exchange, in this case, in any event, finally, later, as well, especially, as a result, example, in fact, then, the day before.

If two paragraphs ⟨p_i, p_{i'}⟩ are adjacent in one source webpage and they are connected with one of the above 36 discourse markers, m_{ii'} will be 1; otherwise it will be 0. The final edge weight D_{ii'} is the weighted sum of e_{ii'} and m_{ii'}:
D_{ii'} = 0.2 * e_{ii'} + m_{ii'}    (26)

3 https://spacy.io/api/entityrecognizer
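The two graphs described in Appendices A.1 and A.2 are straightforward to assemble. The sketch below uses scikit-learn's tf-idf for brevity (its weighting differs slightly from Equation (25)) and assumes entity counts and marker detection are computed elsewhere; all function names are ours.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def similarity_graph(paragraphs, threshold=0.2):
    # Appendix A.1: tf-idf vectors per paragraph, cosine similarity as edge weights.
    tfidf = TfidfVectorizer().fit_transform(paragraphs)
    S = cosine_similarity(tfidf)
    S[S < threshold] = 0.0   # remove edges with weights lower than 0.2
    return S

def adg_weight(e_shared, has_marker):
    # Appendix A.2, Eq. (26): D_ii' = 0.2 * e_ii' + m_ii'.
    return 0.2 * e_shared + (1.0 if has_marker else 0.0)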
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of the 3rd International Conference on Learning Representations, San Diego, California.
Regina Barzilay and Kathleen R. McKeown. 2005. Sentence fusion for multidocument news summarization. Computational Linguistics, 31(3):297-327.
Lidong Bing, Piji Li, Yi Liao, Wai Lam, Weiwei Guo, and Rebecca Passonneau. 2015. Abstractive multi-document summarization via phrase selection and merging. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1587-1597, Beijing, China.
Asli Celikyilmaz, Antoine Bosselut, Xiaodong He, and Yejin Choi. 2018. Deep communicating agents for abstractive summarization. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1662-1675, New Orleans, Louisiana.
Janara Christensen, Mausam, Stephen Soderland, and Oren Etzioni. 2013. Towards coherent multi-document summarization. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1163-1173, Atlanta, Georgia.
Eric Chu and Peter J. Liu. 2018. Unsupervised neural multi-document abstractive summarization. arXiv preprint arXiv:1810.05739.
James Clarke and Mirella Lapata. 2010. Discourse constraints for document compression. Computational Linguistics, 36(3):411-441.
John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159.
Günes Erkan and Dragomir R. Radev. 2004. LexRank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research, 22:457-479.
Patrick Fernandes, Miltiadis Allamanis, and Marc Brockschmidt. 2019. Structured neural summarization. In Proceedings of the 7th International Conference on Learning Representations, New Orleans, Louisiana.
Katja Filippova and Michael Strube. 2008. Sentence fusion via dependency graph compression. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 177-185, Honolulu, Hawaii.
Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4098-4109, Brussels, Belgium.
Max Grusky, Mor Naaman, and Yoav Artzi. 2018. Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 708-719, New Orleans, Louisiana.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems 28, pages 1693-1701. Curran Associates, Inc.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.
Yoon Kim, Carl Denton, Luong Hoang, and Alexander M. Rush. 2017. Structured attention networks. In Proceedings of the 5th International Conference on Learning Representations, Toulon, France.
Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Thomas N. Kipf and Max Welling. 2017. Semi-supervised classification with graph convolutional networks. In Proceedings of the 4th International Conference on Learning Representations, San Juan, Puerto Rico.
Svetlana Kiritchenko and Saif Mohammad. 2017. Best-worst scaling more reliable than rating scales: A case study on sentiment intensity annotation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 465-470, Vancouver, Canada.
Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. arXiv preprint arXiv:1808.06226.
Logan Lebanoff and Fei Liu. 2018. Automatic detection of vague words and sentences in privacy policies. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3508-3517, Brussels, Belgium.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 74-81, Barcelona, Spain.
Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating Wikipedia by summarizing long sequences. In Proceedings of the 6th International Conference on Learning Representations, Vancouver, Canada.
Yang Liu and Mirella Lapata. 2018. Learning structured text representations. Transactions of the Association for Computational Linguistics, 6:63-75.
Jordan J. Louviere, Terry N. Flynn, and Anthony Alfred John Marley. 2015. Best-Worst Scaling: Theory, Methods and Applications. Cambridge University Press.
Shulei Ma, Zhi-Hong Deng, and Yunlun Yang. 2016. An unsupervised multi-document summarization framework based on neural document model. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1514-1523, Osaka, Japan.
Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Ranking sentences for extractive summarization with reinforcement learning. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1747-1759, New Orleans, Louisiana.
Vlad Niculae, André F. T. Martins, and Claire Cardie. 2018. Towards dynamic computation graphs via sparse latent structure. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 905-911, Brussels, Belgium.
Daraksha Parveen and Michael Strube. 2014. Multi-document summarization using bipartite graphs. In Proceedings of TextGraphs-9: the Workshop on Graph-based Methods for Natural Language Processing, pages 15-24, Doha, Qatar.
Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive summarization. In Proceedings of the 6th International Conference on Learning Representations, Vancouver, Canada.
Dragomir Radev. 2000. A common theory of information fusion from multiple text sources step one: Cross-document structure. In 1st SIGdial Workshop on Discourse and Dialogue, pages 74-83, Hong Kong, China.
Evan Sandhaus. 2008. The New York Times Annotated Corpus. Linguistic Data Consortium, Philadelphia, 6(12).
Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073-1083, Vancouver, Canada.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc.
Xiaojun Wan. 2008. An exploration of document impact on graph-based multi-document summarization. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 755-762, Honolulu, Hawaii.
Adina Williams, Andrew Drozdov, and Samuel R. Bowman. 2018. Do latent tree learning models identify meaningful structure in sentences? Transactions of the Association for Computational Linguistics, 6:253-267.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
Michihiro Yasunaga, Rui Zhang, Kshitijh Meelu, Ayush Pareek, Krishnan Srinivasan, and Dragomir Radev. 2017. Graph-based neural multi-document summarization. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 452-462, Vancouver, Canada.
Jianmin Zhang, Jiwei Tan, and Xiaojun Wan. 2018. Adapting neural single-document summarization model for abstractive multi-document summarization: A pilot study. In Proceedings of the International Conference on Natural Language Generation.
["Cold-start Active Learning through Self-supervised Language Modeling"] | ["Michelle Yuan myuan@cs.umd.edu", "Hsuan-Tien Lin htlin@csie.ntu.edu.tw", "Jordan Boyd-Graber"] | ["University of Maryland", "National Taiwan University"] | [] | Active learning strives to reduce annotation costs by choosing the most critical examples to label. Typically, the active learning strategy is contingent on the classification model. For instance, uncertainty sampling depends on poorly calibrated model confidence scores. In the cold-start setting, active learning is impractical because of model instability and data scarcity. Fortunately, modern NLP provides an additional source of information: pretrained language models. The pre-training loss can find examples that surprise the model and should be labeled for efficient fine-tuning. Therefore, we treat the language modeling loss as a proxy for classification uncertainty. With BERT, we develop a simple strategy based on the masked language modeling loss that minimizes labeling costs for text classification. Compared to other baselines, our approach reaches higher accuracy within less sampling iterations and computation time. | 10.18653/v1/2020.emnlp-main.637 | ["https://arxiv.org/pdf/2010.09535v2.pdf"] | 224,724,415 | 2010.09535 | 2e06b7f72270900544284e0898aac2bb564ff58b |
Cold-start Active Learning through Self-supervised Language Modeling

Michelle Yuan (University of Maryland) myuan@cs.umd.edu
Hsuan-Tien Lin (National Taiwan University) htlin@csie.ntu.edu.tw
Jordan Boyd-Graber (University of Maryland)

* Work done while visiting National Taiwan University.
Active learning strives to reduce annotation costs by choosing the most critical examples to label. Typically, the active learning strategy is contingent on the classification model. For instance, uncertainty sampling depends on poorly calibrated model confidence scores. In the cold-start setting, active learning is impractical because of model instability and data scarcity. Fortunately, modern NLP provides an additional source of information: pretrained language models. The pre-training loss can find examples that surprise the model and should be labeled for efficient fine-tuning. Therefore, we treat the language modeling loss as a proxy for classification uncertainty. With BERT, we develop a simple strategy based on the masked language modeling loss that minimizes labeling costs for text classification. Compared to other baselines, our approach reaches higher accuracy within less sampling iterations and computation time.
Introduction
Labeling data is a fundamental bottleneck in machine learning, especially for NLP, due to annotation cost and time. The goal of active learning (AL) is to recognize the most relevant examples and then query labels from an oracle. For instance, policymakers and physicians want to quickly fine-tune a text classifier to understand emerging medical conditions (Voorhees et al., 2020). Finding labeled data for medical text is challenging because of privacy issues or shortage in expertise (Dernoncourt and Lee, 2017). Using AL, they can query labels for a small subset of the most relevant documents and immediately train a robust model.
Modern transformer models dominate the leaderboards for several NLP tasks (Devlin et al., 2019; Yang et al., 2019). Yet the price of adopting
transformer-based models is to use more data. If these models are not fine-tuned on enough examples, their accuracy drastically varies across different hyperparameter configurations (Dodge et al., 2020). Moreover, computational resources are a major drawback as training one model can cost thousands of dollars in cloud computing and hundreds of pounds in carbon emissions (Strubell et al., 2019). These problems motivate further work in AL to conserve resources.
Another issue is that traditional AL algorithms, like uncertainty sampling (Lewis and Gale, 1994), falter on deep models. These strategies use model confidence scores, but neural networks are poorly calibrated (Guo et al., 2017). High confidence scores do not imply high correctness likelihood, so the sampled examples are not the most uncertain ones (Zhang et al., 2017). Plus, these strategies sample one document on each iteration. The single-document sampling requires training the model after each query and increases the overall expense.
These limitations of modern NLP models illustrate a twofold effect: they show a greater need for AL and make AL more difficult to deploy. Ideally, AL could be most useful during low-resource situations. In reality, it is impractical to use because the AL strategy depends on warm-starting the model with information about the task (Ash and Adams, 2019). Thus, a fitting solution to AL for deep classifiers is a cold-start approach, one that does not rely on classification loss or confidence scores.
To develop a cold-start AL strategy, we should extract knowledge from pre-trained models like BERT (Devlin et al., 2019). The model encodes syntactic properties (Tenney et al., 2019), acts as a database for general world knowledge (Petroni et al., 2019;Davison et al., 2019), and can detect out-of-distribution examples (Hendrycks et al., 2020). Given the knowledge already encoded in pre-trained models, the annotation for a new task should focus on the information missing from pretraining. If a sentence contains many words that perplex the language model, then it is possibly unusual or not well-represented in the pre-training data. Thus, the self-supervised objective serves as a surrogate for classification uncertainty.
We develop ALPS (Active Learning by Processing Surprisal), an AL strategy for BERT-based models.1 While many AL methods randomly choose an initial sample, ALPS selects the first batch of data using the masked language modeling loss. As the highest and most extensive peaks in Europe are found in the Alps, the ALPS algorithm finds examples in the data that are both surprising and substantial. To the best of our knowledge, ALPS is the first AL algorithm that only relies on a self-supervised loss function. We evaluate our approach on four text classification datasets spanning three different domains. ALPS outperforms AL baselines in accuracy and algorithmic efficiency. The success of ALPS highlights the importance of self-supervision for cold-start AL.

1 https://github.com/forest-snow/alps
Preliminaries
We formally introduce the setup, notation, and terminology that will be used throughout the paper.
Pre-trained Encoder Pre-training uses the language modeling loss to train encoder parameters for generalized representations. We call the model input x = (w_i)_{i=1}^{l} a "sentence", which is a sequence of tokens w from a vocabulary V with sequence length l. Given weights W, the encoder h maps x to a d-dimensional hidden representation h(x; W). We use BERT (Devlin et al., 2019) as our data encoder, so h is pre-trained with two tasks: masked language modeling (MLM) and next sentence prediction. The embedding h(x; W) is computed as the final hidden state of the [CLS] token in x. We also refer to h(x; W) as the BERT embedding.
Fine-tuned Model
We fine-tune BERT on the downstream task by training the pre-trained model and the attached sequence classification head. Suppose that f represents the model with the classification head, has parameters θ = (W, V ), and maps input x to a C-dimensional vector with confidence scores for each label. Specifically, f (x; θ) = σ(V · h(x; W )) where σ is a softmax function.
Let D be the labeled data for our classification task where the labels belong to set Y = {1, ..., C}. During fine-tuning, we take a base classifier f with weights W_0 from a pre-trained encoder h and fine-tune f on D for new parameters θ_t. Then, the predicted classification label is ŷ = arg max_{y∈Y} f(x; θ_t)_y.

Algorithm 1 AL for sentence classification
Require: Unlabeled data U, strategy A, base classifier f(x; θ_0), iterations T, query size k
1: D ← ∅
2: for iterations t = 1, ..., T do
3:   if A is cold-start for iteration t then
4:     M_t(x) = f(x; θ_0)
5:   else
6:     M_t(x) = f(x; θ_{t-1})
7:   Q_t ← Apply A on model M_t(x), data U
8:   D_t ← Label queries Q_t
9:   D = D ∪ D_t
10:  U = U \ D_t
11:  θ_t ← Fine-tune f(x; θ_0) on D
12: return f(x; θ_T)

AL for Sentence Classification Assume that there is a large unlabeled dataset U = {(x_i)}_{i=1}^{n}
of n sentences. The goal of AL is to sample a subset D ⊂ U efficiently so that fine-tuning the classifier f on subset D improves test accuracy. On each iteration t, the learner uses strategy A to acquire k sentences from dataset U and queries for their labels (Algorithm 1). Strategy A usually depends on an acquisition model M_t (Lowell et al., 2019). If the strategy depends on model warm-starting, then the acquisition model M_t is f with parameters θ_{t-1} from the previous iteration. Otherwise, we assume that M_t is the pre-trained model with parameters θ_0. After T rounds, we acquire labels for Tk sentences. We provide more concrete details about AL simulation in Section 5.
The Uncertainty-Diversity Dichotomy
This section provides background on prior work in AL. First, we discuss two general AL strategies: uncertainty sampling and diversity sampling. Then, we explain the dichotomy between the two concepts and introduce BADGE (Ash et al., 2020), a SOTA method that attempts to resolve this issue. Finally, we focus on the limitations of BADGE and other AL strategies to give motivation for our work.

Dasgupta (2011) describes uncertainty and diversity as the "two faces of AL". While uncertainty sampling efficiently searches the hypothesis space by finding difficult examples to label, diversity sampling exploits heterogeneity in the feature space (Xu et al., 2003; Hu et al., 2010; Bodó et al., 2011). Uncertainty sampling requires model warm-starting because it depends on model predictions, whereas diversity sampling can be a cold-start approach. A successful AL strategy should integrate both aspects, but its exact implementation is an open research question. For example, a naïve idea is to use a fixed combination of strategies to sample points. Nevertheless, Hsu and Lin (2015) experimentally show that this approach hampers accuracy.

BADGE optimizes for both uncertainty and diversity by using confidence scores and clustering. This strategy beats uncertainty-based algorithms (Wang and Shang, 2014), sampling through bandit learning (Hsu and Lin, 2015), and CORESET (Sener and Savarese, 2018), a diversity-based method for convolutional neural networks.
BADGE
The goal of BADGE is to sample a diverse and uncertain batch of points for training neural networks. The algorithm transforms data into representations that encode model confidence and then clusters these transformed points. First, an unlabeled point x passes through the trained model to obtain its predicted label ŷ. Next, a gradient embedding g_x is computed for x such that it embodies the gradient of the cross-entropy loss on (f(x; θ), ŷ) with respect to the parameters of the model's last layer. The gradient embedding is
(g_x)_i = (f(x; θ)_i - 1(ŷ = i)) h(x; W)    (1)
The i-th block of g_x is the hidden representation h(x; W) scaled by the difference between model confidence score f(x; θ)_i and an indicator function 1 that indicates whether the predictive label ŷ is label i. Finally, BADGE chooses a batch to sample by applying k-MEANS++ (Arthur and Vassilvitskii, 2006) on the gradient embeddings. These embeddings consist of model confidence scores and hidden representations, so they encode information about both uncertainty and the data distribution. By applying k-MEANS++ on the gradient embeddings, the chosen examples differ in feature representation and predictive uncertainty.
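For a single example, the gradient embedding of Equation (1) reduces to an outer product; probs and hidden below stand for the softmax output f(x; θ) and the hidden state h(x; W), and batching is omitted. This is a sketch, not the authors' implementation.

import torch

def gradient_embedding(probs, hidden):
    # probs: [C] confidence scores; hidden: [d] hidden representation.
    scale = probs.clone()
    scale[probs.argmax()] -= 1.0   # f(x; θ)_i - 1(ŷ = i)
    return (scale.unsqueeze(1) * hidden.unsqueeze(0)).reshape(-1)   # [C·d] embedding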
Limitations
BADGE combines uncertainty and diversity sampling to profit from advantages of both methods but also brings the downsides of both: reliance on warm-starting and computational inefficiency.

Model Uncertainty and Inference

Dodge et al. (2020) observe that training is highly unstable when fine-tuning pre-trained language models on small datasets. Accuracy significantly varies across different random initializations. The model has not fine-tuned on enough examples, so model confidence is an unreliable measure for uncertainty. While BADGE improves over uncertainty-based methods, it still relies on confidence scores f(x; θ)_i when computing the gradient embeddings (Equation 1). Also, it uses labels inferred by the model to compensate for lack of supervision in AL, but this inference is inaccurate for ill-trained models. Thus, warm-start methods may suffer from problems with model uncertainty or inference.

Algorithmic Efficiency
Many diversity-based methods involve distance comparison between embedding representations, but this computation can be expensive, especially in high-dimensional space. For instance, CORESET is a farthest-first traversal in the embedding space where it chooses the farthest point from the set of points already chosen on each iteration (Sener and Savarese, 2018). The embeddings may appropriately represent the data, but issues, like the "curse of dimensionality" (Beyer et al., 1999) and the "hubness problem" (Tomasev et al., 2013), persist. As the dimensionality increase, the distance between any two points converges to the same value. Moreover, the gradient embeddings in BADGE have dimensionality of Cd for a C-way classification task with data dimensionality of d (Equation 1). These issues make distance comparison between gradient embeddings less meaningful and raises costs to compute those distances.
A Self-supervised Active Learner
Figure 1: To compute the surprisal embedding s_x for a sentence x, we pass in unmasked x through the BERT MLM head and compute cross-entropy loss for a random 15% subsample of tokens against the target labels. The unsampled tokens have entries of zero in s_x. ALPS clusters these surprisal embeddings to sample sentences for AL.

Cold-start AL is challenging because of the shortage in labeled data. Prior work, like BADGE, often depends on model uncertainty or inference, but these measures can be unreliable if the model has not trained on enough data (Section 3.2.1). To overcome the lack of supervision, what if we apply self-supervision to AL? For NLP, the language modeling task is self-supervised because the label
for each token is the token itself. If the task has immensely improved transfer learning, then it may reduce generalization error in AL too.
For our approach, we adopt the uncertainty-diversity BADGE framework for clustering embeddings that encode information about uncertainty. However, rather than relying on the classification loss gradient, we use the MLM loss to bootstrap uncertainty estimates. Thus, we combine uncertainty and diversity sampling for cold-start AL.
Masked Language Modeling
To pre-train BERT with MLM, input tokens are randomly masked, and the model needs to predict the token labels of the masked tokens. BERT is bidirectional, so it uses context from the left and right of the masked token to make predictions. BERT also uses next sentence prediction for pre-training, but this task shows minimal effect for fine-tuning (Liu et al., 2019). So, we focus on applying MLM to AL. The MLM head can capture syntactic phenomena (Goldberg, 2019) and performs well on psycholinguistic tests (Ettinger, 2020).
Algorithm 2 Single iteration of ALPS
Require: Pre-trained encoder h(x; W_0), unlabeled data pool U, number of queries k
1: for sentences x ∈ U do
2:   Compute s_x with MLM head of h(x; W_0)
3: M = {s_x | x ∈ U}
4: C ← k-MEANS cluster centers of M
5: Q = {arg min_{x∈U} ‖c - s_x‖ | c ∈ C}
6: return Q

4.2 ALPS
Surprisal Embeddings Inspired by how BADGE forms gradient embeddings from the classification loss, we create surprisal embeddings from language modeling. For sentence x, we compute surprisal embedding s_x by evaluating x with the MLM objective. To evaluate MLM loss, BERT randomly masks 15% of the tokens in x and computes cross-entropy loss for the masked tokens against their true token labels. When computing surprisal embeddings, we make one crucial change: none of the tokens are masked when the input is passed into BERT. However, we still randomly choose 15% of the tokens in the input to evaluate with cross-entropy against their target token labels. The unchosen tokens are assigned a loss of zero as they are not evaluated (Figure 1).
These decisions for not masking input (Appendix A.1) and evaluating only 15% of tokens (Appendix A.2) are made because of experiments on the validation set. Proposition 1 provides insight on the information encoded in surprisal embeddings. Finally, the surprisal embedding is l 2 -normalized as normalization improves clustering (Aytekin et al., 2018). If the input sentences have a fixed length of l, then the surprisal embeddings have dimensionality of l. The length l is usually less than the hidden size of BERT embeddings.
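A sketch of this construction with the Hugging Face transformers library follows; a recent version with .logits outputs is assumed, as are the checkpoint name and the handling of padding (a fuller implementation would exclude special and pad tokens from sampling).

import torch
import torch.nn.functional as F
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

def surprisal_embedding(sentence, max_len=128, rate=0.15):
    enc = tokenizer(sentence, padding="max_length", truncation=True,
                    max_length=max_len, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits[0]   # [l, |V|]; the input is left unmasked
    targets = enc["input_ids"][0]
    loss = F.cross_entropy(logits, targets, reduction="none")   # token-level surprisal
    keep = torch.rand(max_len) < rate                           # evaluate a 15% subsample
    s = torch.where(keep, loss, torch.zeros_like(loss))         # unsampled entries stay zero
    return F.normalize(s, dim=0)                                # l2-normalize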
Proposition 1. For an unnormalized surprisal embedding s x , each nonzero entry (s x ) i estimates I(w i ), the surprisal of its corresponding token within the context of sentence x.
Proof. Extending notation from Section 2, assume that m is the MLM head, with parameters φ = (W, Z), which maps input x to an l × |V| matrix m(x; φ). The ith row m(x; φ)_i contains prediction scores for w_i, the ith token in x. Suppose that w_i is the jth token in vocabulary V. Then, m(x; φ)_{i,j} is the likelihood of predicting w_i correctly. Now, assume that context is the entire input x and define the language model probability p_m as,
p_m(w_i | x) = m(x; φ)_{i,j}    (2)
Salazar et al. (2020) have a similar definition as Equation 2 but instead have defined it in terms of the masked input. We argue that their definition can be extended to the unmasked input x. During BERT pre-training, the MLM objective is evaluated on the [MASK] token for 80% of the time, a random token for 10% of the time, and the original token for 10% of the time. This helps maintain consistency across pre-training and fine-tuning because [MASK] never appears in fine-tuning (Devlin et al., 2019). Thus, we assume that m estimates occurrence of tokens within a maskless context as well.
Next, the information-theoretic surprisal (Shannon, 1948) is defined as I(w) = − log p(w | c), the negative log likelihood of word w given context c. If w i is sampled and evaluated, then the ith entry of the unnormalized surprisal embedding is,
(s_x)_i = -log m(x; φ)_{i,j} = -log p_m(w_i | x) = I(w_i).
Proposition 1 shows that the surprisal embeddings comprise estimates for token-context surprisal. Intuitively, these values can help with AL because they highlight the information missing from the pre-trained model. For instance, consider the sentences: "this is my favorite television show" and "they feel ambivalent about catholic psychedelic synth folk music". Tokens from the latter have higher surprisal than those from the former. If this is a sentiment classification task, the second sentence is more confusing for the classifier to learn. The surprisal embeddings indicate sentences challenging for the pre-trained model to understand and difficult for the fine-tuned model to label.
The most surprising sentences contain many rare tokens. If we only train our model on the most surprising sentences, then it may not generalize well across different examples. Plus, we may sample several atypical sentences that are similar to each other, which is often an issue for uncertainty-based methods (Kirsch et al., 2019). Therefore, we incorporate clustering in ALPS to maintain diversity.
k-MEANS Clustering After computing surprisal embeddings for each sentence in the unlabeled pool, we use k-MEANS to cluster the surprisal embeddings. Then, for each cluster center, we select the sentence that has the nearest surprisal embedding to it. The final set of sentences are the queries to be labeled by an oracle (Algorithm 2). Although BADGE uses k-MEANS++ to cluster, experiments show that k-MEANS works better for surprisal embeddings (Appendix A.3).
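The clustering step in Algorithm 2 then reduces to a few lines; scikit-learn's k-means is an assumed stand-in, and the deduplication can return slightly fewer than k indices when two centers share a nearest neighbor.

import numpy as np
from sklearn.cluster import KMeans

def alps_queries(embeddings, k=100, seed=0):
    # embeddings: [n, l] array of surprisal embeddings for the unlabeled pool.
    centers = KMeans(n_clusters=k, random_state=seed).fit(embeddings).cluster_centers_
    dists = np.linalg.norm(embeddings[None, :, :] - centers[:, None, :], axis=-1)  # [k, n]
    return np.unique(dists.argmin(axis=1))   # index of the nearest sentence per center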
Active Sentence Classification
We evaluate ALPS on sentence classification for three different domains: sentiment reviews, news articles, and medical abstracts (Table 1). To simulate AL, we sample a batch of 100 sentences from the training dataset, query labels for this batch, and then move the batch from the unlabeled pool to the labeled dataset (Algorithm 1). The initial encoder h(x; θ_0) is an already pre-trained, BERT-based model (Section 5.2). In a given iteration, we fine-tune the base classifier f(x; θ_0) on the labeled dataset and evaluate the fine-tuned model with classification micro-F1 score on the test set. We do not fine-tune the model f(x; θ_{t-1}) from the previous iteration to avoid issues with warm-starting (Ash and Adams, 2019). We repeat for ten iterations, collecting a total of 1,000 sentences.
Baselines
We compare ALPS against warm-start methods (Entropy, BADGE, FT-BERT-KM) and cold-start methods (Random, BERT-KM). For FT-BERT-KM, we use BERT-KM to sample data in the first iteration. For other warm-start methods, data is randomly sampled in the first iteration.
Entropy Sample k sentences with the highest predictive entropy, measured by $\sum_{i=1}^{C} (f(x; \theta)_i) \ln (f(x; \theta)_i)^{-1}$ (Lewis and Gale, 1994; Wang and Shang, 2014); a short sketch of this scoring appears after this list.
BADGE Sample k sentences based on diversity in loss gradient (Section 3.1).
BERT-KM Cluster pre-trained, l 2 -normalized BERT embeddings with k-MEANS and sample the nearest neighbors of the k cluster centers. The algorithm is the same as ALPS except that BERT embeddings are used.
FT-BERT-KM This is the same algorithm as BERT-KM except the BERT embeddings h(x; W t−1 ) from the previously fine-tuned model are used.
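As referenced in the Entropy baseline above, here is a hedged sketch of entropy scoring over classifier logits; the function names are illustrative.

```python
# Sketch: the Entropy baseline, scoring sentences by predictive entropy.
import torch

def entropy_scores(logits):                 # logits: (n, C)
    p = torch.softmax(logits, dim=-1)
    return -(p * torch.log(p)).sum(dim=-1)  # sum_i p_i ln(1/p_i)

def entropy_sample(logits, k=100):
    return torch.topk(entropy_scores(logits), k).indices
```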
Setup
For each sampling algorithm and dataset, we run the AL simulation five times with different random seeds. We set the maximum sequence length to 128. We fine-tune on a batch size of thirty-two for three epochs. We use AdamW (Loshchilov and Hutter, 2019) with learning rate of 2e-5, β 1 = 0.9, β 2 = 0.999, and a linear decay of learning rate.
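A minimal sketch of this optimizer and schedule setup, assuming the transformers helper functions; the wrapper name is ours.

```python
# Sketch: AdamW with linear learning-rate decay, as described above.
from transformers import AdamW, get_linear_schedule_with_warmup

def make_optimizer(model, num_training_steps):
    opt = AdamW(model.parameters(), lr=2e-5, betas=(0.9, 0.999))
    sched = get_linear_schedule_with_warmup(
        opt, num_warmup_steps=0, num_training_steps=num_training_steps)
    return opt, sched
```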
For IMDB (Maas et al., 2011), SST-2 (Socher et al., 2013), and AG NEWS (Zhang et al., 2015), the data encoder is the uncased BERT-Base model with 110M parameters (https://huggingface.co/transformers/). For PUBMED (Dernoncourt and Lee, 2017), the data encoder is SCIBERT, a BERT model pre-trained on scientific texts (Beltagy et al., 2019). All experiments are run on a GeForce GTX 1080 GPU and a 2.6 GHz AMD Opteron 4180 CPU processor; runtimes are in Table 2.
Results
The model fine-tuned with data sampled by ALPS has higher test accuracy than the baselines (Figure 2). For AG NEWS, IMDB, and SST-2, this is true in earlier iterations. We often see the most gains in the beginning for crowdsourcing (Felt et al., 2015). Interestingly, clustering the fine-tuned BERT embeddings is not always better than clustering the pre-trained BERT embeddings for AL. The fine-tuned BERT embeddings may require training on more data for more informative representations.
For PUBMED, test accuracy varies greatly between the strategies. The dataset belongs to a specialized domain and is class-imbalanced, so naïve methods show poor accuracy. Entropy sampling has the lowest accuracy because the classification entropy is uninformative in early iterations. The models fine-tuned on data sampled by ALPS and BADGE have about the same accuracy. Both methods strive to optimize for uncertainty and diversity, which alleviates problems with class imbalance. Our experiments cover the first ten iterations because we focus on the cold-start setting. As sampling iterations increase, test accuracy across the different methods converges. Both ALPS and BADGE already approach the model trained on the full training dataset across all tasks (Figure 2). Once the cold-start issue subsides, uncertainty-based methods can be employed to further query the most confusing examples for the model to learn.
6 Analyzing ALPS

Sampling Efficiency Given that the gradient embeddings are computed, BADGE has a time complexity of O(Cknd) for a C-way classification task, k queries, n points in the unlabeled pool, and d-dimensional BERT embeddings. Given that the surprisal embeddings are computed, ALPS has a time complexity of O(tknl), where t is the fixed number of iterations for k-MEANS and l is the maximum sequence length. In our experiments, k = 100, d = 768, t = 10, and l = 128. In practice, t will not change much, but n and C could be much higher. For the large dataset PUBMED, the average runtime per iteration is 24 minutes for ALPS and 70 minutes for BADGE (Table 2). So, ALPS can match BADGE's accuracy more quickly.
Diversity and Uncertainty
We estimate diversity and uncertainty for data sampled across different strategies. For diversity, we look at the overlap between tokens in the sampled sentences and tokens from the rest of the data pool. A diverse batch of sentences should share many of the same tokens with the data pool. In other words, the sampled sentences can represent the data pool because of the substantial overlap between their tokens. In our simulations, the entire data pool is the training dataset (Section 5). So, we compute the Jaccard similarity between $V_D$, the set of tokens from the sampled sentences D, and $V_{\bar{D}}$, the set of tokens from the unsampled sentences $U \setminus D$,

$$G_d(D) = J(V_D, V_{\bar{D}}) = \frac{|V_D \cap V_{\bar{D}}|}{|V_D \cup V_{\bar{D}}|}. \tag{3}$$

If $G_d$ is high, this indicates high diversity because the sampled and unsampled sentences have many tokens in common. If $G_d$ is low, this indicates poor diversity and representation.

Figure 3: Plot of diversity against uncertainty estimates from AL simulations for AG NEWS and PUBMED. Each point represents a sampled batch of sentences from the AL experiments. The shape indicates the strategy used to sample the sentences. The color indicates the sample iteration. The lightest color corresponds to the first iteration and the darkest color represents the tenth iteration. While uncertainty estimates are similar across different batches, ALPS shows a consistent increase in diversity without drops in uncertainty.
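A small sketch of this diversity estimate, assuming sentences are pre-tokenized lists of tokens; the function name is ours.

```python
# Sketch: the diversity estimate G_d of Eq. (3), Jaccard similarity
# between token sets of sampled and unsampled sentences.
def diversity(sampled, unsampled):
    v_d = set(tok for sent in sampled for tok in sent)
    v_rest = set(tok for sent in unsampled for tok in sent)
    return len(v_d & v_rest) / len(v_d | v_rest)
```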
To measure uncertainty, we use f(x; θ*), the classifier trained on the full training dataset. In our experiments, classifier f(x; θ*) has high accuracy (Figure 2) and inference is stable after training on many examples. Thus, we can use the logits from the classifier to understand its uncertainty toward a particular sentence. First, we compute the predictive entropy of sentence x when evaluated by model f(x; θ*). Then, we take the average of predictive entropy over all sentences in a sampled batch D. We use the average predictive entropy to estimate the uncertainty of the sampled sentences,

$$G_u(D) = \frac{1}{|D|} \sum_{x \in D} \sum_{i=1}^{C} (f(x; \theta^*)_i) \ln (f(x; \theta^*)_i)^{-1}. \tag{4}$$

Figure 4: T-SNE plots of BERT embeddings and surprisal embeddings for each sequence in the IMDB training dataset, with panels (a) BERT embeddings with k-MEANS centers and (b) surprisal embeddings with k-MEANS centers. The enlarged points are the centers determined by k-MEANS (left) and k-MEANS++ (right). The points are colored according to their classification labels. In both sets of embeddings, we cannot clearly separate the points from their labels, but the distinction between clusters in surprisal embeddings seems more obvious.

We compute G_d and G_u for batches sampled in the AL experiments of AG NEWS and PUBMED. Diversity is plotted against uncertainty for batches sampled across different iterations and AL strategies (Figure 3). For AG NEWS, G_d and G_u are relatively low for ALPS in the first iteration. As iterations increase, samples from ALPS increase in diversity and decrease minimally in uncertainty. Samples from other methods have a larger drop in uncertainty as iterations increase. For PUBMED, ALPS again increases in sample diversity without drops in uncertainty. In the last iteration, ALPS has the highest diversity among all the algorithms.
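A companion sketch mirroring Eq. (4), computing average predictive entropy over a batch of classifier logits; the function name is illustrative.

```python
# Sketch: the uncertainty estimate G_u of Eq. (4), average predictive
# entropy of the fully trained classifier f(x; theta*) over a batch D.
import torch

def uncertainty(logits):                    # logits: (|D|, C)
    p = torch.softmax(logits, dim=-1)
    return (-(p * torch.log(p)).sum(dim=-1)).mean().item()
```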
Surprisal Clusters Prior work uses k-MEANS to cluster feature representations as a cold-start AL approach (Zhu et al., 2008; Bodó et al., 2011). Rather than clustering BERT embeddings, ALPS clusters surprisal embeddings. We compare the clusters between surprisal embeddings and BERT embeddings to understand the structure of the surprisal clusters. First, we use t-SNE (Maaten and Hinton, 2008) to plot the embeddings for each sentence in the IMDB training set (Figure 4). The labels are not well-separated for both embedding sets, but the surprisal embeddings seem easier to cluster. To quantitatively measure cluster quality, we use the Silhouette Coefficient, for which larger values indicate desirable clustering (Rousseeuw, 1987). The surprisal clusters have a coefficient of 0.38, whereas the BERT clusters have a coefficient of only 0.04.
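A minimal sketch of this cluster-quality comparison with scikit-learn; the function is ours and runs on either embedding set.

```python
# Sketch: Silhouette Coefficient of k-means assignments, used here to
# compare surprisal embeddings against BERT embeddings.
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def cluster_quality(embeddings, k=100, seed=0):
    labels = KMeans(n_clusters=k, random_state=seed).fit_predict(embeddings)
    return silhouette_score(embeddings, labels)
```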
These results, along with the classification experiments, show that naïvely clustering BERT embeddings is not suited for AL. Possibly, more complicated clustering algorithms can capture the intrinsic structure of the BERT embeddings. However, this would increase the algorithmic complexity and runtime. Alternatively, one can map the feature representations to a space where simple clustering algorithms work well. During this transformation, important information for AL must be preserved and extracted. Our approach uses the MLM head, which has already been trained on extensive corpora, to map the BERT embeddings into the surprisal embedding space. As a result, simple k-MEANS can efficiently choose representative sentences.
Single-iteration Sampling In Section 5, we sample data iteratively (Algorithm 1) to fairly compare the different AL algorithms. However, ALPS does not require updating the classifier because it only depends on the pre-trained encoder. Rather than sampling data in small batches and re-training the model, ALPS can sample a batch of k sentences in one iteration (Algorithm 2). Between using ALPS iteratively and deploying the algorithm for a single iteration, the difference is insignificant (Table 3). Plus, sampling 1,000 sentences only takes about 97 minutes for PUBMED and 7 minutes for IMDB. With this flexibility in sampling, ALPS can accommodate different budget constraints. For example, re-training the classifier may be costly, so users want a sampling algorithm that can query k sentences all at once. In other cases, annotators are not always available, so the number of obtainable annotations is unpredictable. Then, users would prefer an AL strategy that can query a variable number of sentences for any iteration. These cases illustrate practical needs for a cold-start algorithm like ALPS.
Related Work
Active learning has shown success in tasks such as named entity recognition (Shen et al., 2004), word sense disambiguation (Zhu and Hovy, 2007), and sentiment analysis (Li et al., 2012). Wang and Shang (2014) are the first to adapt prior AL work to deep learning. However, popular heuristics (Settles, 2009) for querying individual points do not work as well in a batch setting. Since then, more research has been conducted on batch AL for deep learning. Zhang et al. (2017) propose the first work on AL for neural text classification. They assume that the classifier is a convolutional neural network and use expected gradient length (Settles et al., 2008) to choose sentences that contain words with the most label-discriminative embeddings. Besides text classification, AL has been applied to neural models for semantic parsing (Duong et al., 2018), named entity recognition (Shen et al., 2018), and machine translation (Liu et al., 2018). ALPS makes use of BERT, a model that excels at transfer learning. Other works also combine AL and transfer learning to select training data that reduce generalization error. Rai et al. (2010) measure domain divergence from the source domain to select the most informative texts in the target domain. Wang et al. (2014) use AL to query points for a target task through matching conditional distributions. Additionally, combining word-level and document-level annotations can improve knowledge transfer (Settles, 2011; Yuan et al., 2020).
In addition to uncertainty and diversity sampling, other areas of deep AL focus on Bayesian approaches (Siddhant and Lipton, 2018;Kirsch et al., 2019) and reinforcement learning (Fang et al., 2017). An interesting research direction can integrate one of these approaches with ALPS.
Conclusion
Transformers are powerful models that have revolutionized NLP. Nevertheless, like other deep models, their accuracy and stability require fine-tuning on large amounts of data. AL should level the playing field by directing limited annotations most effectively so that labels complement, rather than duplicate, unsupervised data. Luckily, transformers have generalized knowledge about language that can help acquire data for fine-tuning. Like BADGE, we project data into an embedding space and then select the most representative points. Our method is unique because it only relies on self-supervision to conduct sampling. Using the pre-trained loss guides the AL process to sample diverse and uncertain examples in the cold-start setting. Future work may focus on finding representations that encode the most important information for AL.
A.1 Token Masking
In our preliminary experiments on the validation set, we notice improvement in accuracy after passing in the original input with no masks (Table 4).
The purpose of the [MASK] token during pre-training is to train the token embeddings to learn context so that the model can predict the token labels. Since we are not training the token embeddings to learn context, masking the tokens does not help much for AL. We use AL for fine-tuning, so the input should be in the same format for AL and fine-tuning. Otherwise, there is a mismatch between the two stages.
A.2 Token Sampling for Evaluation
When BERT evaluates MLM loss, it only focuses on the masked tokens, which are from a 15% random subsample of tokens in the sentence. We experiment with varying this subsample percentage on the validation set (Table 4). We try sampling 10%, 15%, 20%, and 100%. Overall, we notice that mean accuracy is roughly the same, but variance in accuracy across different runs is slightly higher for percentages other than 15%.
After the second AL iteration, we notice that accuracy mean and variance between the different token sampling percentages converge. So, the token sampling percentage makes more of a difference in early stages of AL. Devlin et al. (2019) show that the difference in accuracy between various mask strategies is minimal for fine-tuning BERT. We believe this can also be applied to what we have observed for ALPS.

Figure 6: T-SNE plots of surprisal embeddings for IMDB training data, with panels (a) surprisal embeddings with k-MEANS++ centers and (b) surprisal embeddings with k-MEANS centers. The centers are either picked by k-MEANS++ (right) or k-MEANS (left). There is less overlap between the centers with k-MEANS compared to k-MEANS++. So, using k-MEANS is better for exploiting diversity in the surprisal embedding space.
A.3 k-MEANS vs. k-MEANS++
The state-of-the-art baseline BADGE applies k-MEANS++ on gradient embeddings to select points to query. Initially, we also use k-MEANS++ on the surprisal embeddings, but validation accuracy is only slightly higher than random sampling. Since k-MEANS++ is originally an algorithm for robust initialization of k-MEANS, we instead apply k-MEANS on the surprisal embeddings. As a result, we see a more significant increase in accuracy over baselines, especially for PUBMED (Figure 5). Additionally, the t-SNE plots show that k-MEANS selects centers that are further apart compared to the ones chosen by k-MEANS++ (Figure 6). This shows that k-MEANS can help sample a more diverse batch of data.
Table 5: Sample sentences from AG News and PubMed while using ALPS and Random in the first iteration. For ALPS, highlighted tokens are the ones that have a nonzero entry in the surprisal embedding. Compared to random sampling, ALPS samples sentences with more diverse content.

ALPS, AG NEWS:
- Jason Thomas matches a career-high with 26 points and American wins its fifth straight by beating visiting Ohio, 64-55, Saturday at Bender Arena (Sports)
- Sainsbury says it will take a 550 million pound hit to profits this year as it invests to boost sales and reverse falling market share (Business)
- BLOOMFIELD TOWNSHIP, Mich. - When yesterday's Ryder Cup pairings were announced, Bernhard Langer knew his team had been given an opportunity. (Sports)

ALPS, PUBMED:
- The results showed that physical activity and exercise capacity in the intervention group was significantly higher than the control group after the intervention. (results)
- The study population consisted of 20 interns and medical students (methods)
- Flumazenil was administered after the completion of endoscopy under sedation to reduce recovery time and increase patient safety. (objective)

Random, AG NEWS:
- Bernhard Langer and Hal Sutton stressed the importance of playing this year's 135th Ryder Cup . . . (Sports)

Random, PUBMED:
- The subject, health care provider, and research staff were blinded to the treatment. (methods)
A.4 Sample Sentences
Section 6 quantitatively analyzes the diversity of ALPS. Here, we take a closer look at the kind of sentences that are sampled by ALPS. Table 5 compares sentences that are chosen by ALPS and random sampling in the first AL iteration. The tokens highlighted are the ones evaluated with surprisal loss. Random sampling can fall prey to data idiosyncrasies. For example, AG News has sixty-two articles about the German golfer Bernhard Langer, and random sampling picks multiple articles about him on one of five runs. For PubMed, many sentences labeled as "methods" are simple sentences with a short, independent clause. While random sampling chooses many sentences of this form, ALPS seems to avoid this problem. Since the surprisal embedding encodes the fluctuation in information content across the sentence, ALPS is less likely to repeatedly choose sentences with similar patterns in surprisal. This may possibly diversify syntactic structure in a sampled batch.
Figure 1: To form surprisal embedding s_x for sentence x.
Figure 2: Test accuracy of simulated AL over ten iterations with 100 sentences queried per iteration. The dashed line is the test accuracy when the model is fine-tuned on the entire dataset. Overall, models trained with data sampled from ALPS have the highest test accuracy, especially for the earlier iterations.
Figure 5: Comparing validation accuracy between using k-MEANS and k-MEANS++ to select centroids in the surprisal embeddings. Using k-MEANS reaches higher accuracy.
Algorithm 1 AL for Sentence Classification
Require: Initial model f(x; θ_0) with pre-trained encoder h(x; W_0), unlabeled data pool U, number of queries per iteration k, number of iterations T, sampling algorithm A
1: D = {}
2: for iterations t = 1, . . . , T do
3: …
Table 2: Average runtime (minutes) per sampling iteration during AL simulation for large datasets. BADGE, FT-BERT-KM, and BERT-KM take much longer to run.
Table 3: Test accuracy on IMDB and PubMed between different uses of ALPS for various k, the number of sentences to query. We compare using ALPS iteratively (Iterative) as done in Section 5 with using ALPS to query all k sentences in one iteration (Single). The test accuracy does not change much, showing that ALPS is flexible to apply in different settings.
Table 4: Comparison of validation accuracy between the variants of ALPS to sample data for IMDB and SST-2 in the first two iterations. ALPS-tokens-p varies the percentage p of tokens evaluated with MLM loss when computing surprisal embeddings. ALPS-masked passes in the input with masks as originally done in pre-training. Overall, we observe that ALPS has higher mean and smaller variance in accuracy.

                    IMDB                        SST-2
                    k = 100      k = 200        k = 100      k = 200
ALPS                0.60 ± 0.03  0.69 ± 0.04    0.57 ± 0.06  0.64 ± 0.04
ALPS-tokens-0.1     0.61 ± 0.05  0.63 ± 0.11    0.56 ± 0.07  0.63 ± 0.04
ALPS-tokens-0.2     0.55 ± 0.07  0.65 ± 0.05    0.57 ± 0.05  0.63 ± 0.05
ALPS-tokens-1.0     0.59 ± 0.05  0.65 ± 0.07    0.56 ± 0.05  0.62 ± 0.05
ALPS-masked         0.59 ± 0.03  0.63 ± 0.09    0.56 ± 0.03  0.60 ± 0.02
AcknowledgmentsWe thank Kuen-Han Tsai, Chien-Min Yu, Si-An Chen, Pedro Rodriguez, Eleftheria Briakou, and the anonymous reviewers for their feedback. Michelle Yuan is supported by JHU Human Language Technology Center of Excellence (HLTCOE). Jordan Boyd-Graber is supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via the BETTER Program contract #2019-19051600005. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
References

David Arthur and Sergei Vassilvitskii. 2006. k-means++: The advantages of careful seeding. Technical report, Stanford.
Jordan T. Ash and Ryan P. Adams. 2019. On warm-starting neural network training. arXiv preprint arXiv:1910.08475.
Jordan T. Ash, Chicheng Zhang, Akshay Krishnamurthy, John Langford, and Alekh Agarwal. 2020. Deep batch active learning by diverse, uncertain gradient lower bounds. In Proceedings of the International Conference on Learning Representations.
Caglar Aytekin, Xingyang Ni, Francesco Cricri, and Emre Aksu. 2018. Clustering and unsupervised anomaly detection with L2 normalized deep auto-encoder representations. In International Joint Conference on Neural Networks.
Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. In Proceedings of Empirical Methods in Natural Language Processing.
Kevin Beyer, Jonathan Goldstein, Raghu Ramakrishnan, and Uri Shaft. 1999. When is "nearest neighbor" meaningful? In International Conference on Database Theory.
Zalán Bodó, Zsolt Minier, and Lehel Csató. 2011. Active learning with clustering. In Active Learning and Experimental Design Workshop in Conjunction with AISTATS 2010.
Sanjoy Dasgupta. 2011. Two faces of active learning. Theoretical Computer Science, 412(19):1767–1781.
Joe Davison, Joshua Feldman, and Alexander M. Rush. 2019. Commonsense knowledge mining from pre-trained models. In Proceedings of Empirical Methods in Natural Language Processing.
Franck Dernoncourt and Ji Young Lee. 2017. PubMed 200k RCT: a dataset for sequential sentence classification in medical abstracts. In International Joint Conference on Natural Language Processing, volume 2, pages 308–313.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Conference of the North American Chapter of the Association for Computational Linguistics.
Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah Smith. 2020. Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping. arXiv preprint arXiv:2002.06305.
Long Duong, Hadi Afshar, Dominique Estival, Glen Pink, Philip R. Cohen, and Mark Johnson. 2018. Active learning for deep semantic parsing. In Proceedings of the Association for Computational Linguistics.
Allyson Ettinger. 2020. What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models. Transactions of the Association for Computational Linguistics, 8:34–48.
Meng Fang, Yuan Li, and Trevor Cohn. 2017. Learning how to active learn: A deep reinforcement learning approach. In Proceedings of Empirical Methods in Natural Language Processing.
Paul Felt, Eric Ringger, Kevin Seppi, Kevin Black, and Robbie Haertel. 2015. Early gains matter: A case for preferring generative over discriminative crowdsourcing models. In Proceedings of the Association for Computational Linguistics.
Yoav Goldberg. 2019. Assessing BERT's syntactic abilities. arXiv preprint arXiv:1901.05287.
Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. 2017. On calibration of modern neural networks. Journal of Machine Learning Research, 70:1321–1330.
Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan, and Dawn Song. 2020. Pretrained transformers improve out-of-distribution robustness. In Proceedings of the Association for Computational Linguistics.
Wei-Ning Hsu and Hsuan-Tien Lin. 2015. Active learning by learning. In Association for the Advancement of Artificial Intelligence.
Rong Hu, Brian Mac Namee, and Sarah Jane Delany. 2010. Off to a good start: Using clustering to select the initial training set in active learning. In Florida Artificial Intelligence Research Society Conference.
Andreas Kirsch, Joost van Amersfoort, and Yarin Gal. 2019. BatchBALD: Efficient and diverse batch acquisition for deep Bayesian active learning. In Proceedings of Advances in Neural Information Processing Systems.
David D. Lewis and William A. Gale. 1994. A sequential algorithm for training text classifiers. In Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval.
Shoushan Li, Shengfeng Ju, Guodong Zhou, and Xiaojun Li. 2012. Active learning for imbalanced sentiment classification. In Proceedings of Empirical Methods in Natural Language Processing.
Ming Liu, Wray Buntine, and Gholamreza Haffari. 2018. Learning to actively learn neural machine translation. In Conference on Computational Natural Language Learning.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In Proceedings of the International Conference on Learning Representations.
David Lowell, Zachary C. Lipton, and Byron C. Wallace. 2019. Practical obstacles to deploying active learning. In Proceedings of Empirical Methods in Natural Language Processing.
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the Association for Computational Linguistics.
Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579–2605.
Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In Proceedings of Empirical Methods in Natural Language Processing.
Piyush Rai, Avishek Saha, Hal Daumé III, and Suresh Venkatasubramanian. 2010. Domain adaptation meets active learning. In Conference of the North American Chapter of the Association for Computational Linguistics.
Peter J. Rousseeuw. 1987. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics, 20:53–65.
Julian Salazar, Davis Liang, Toan Q. Nguyen, and Katrin Kirchhoff. 2020. Masked language model scoring. In Proceedings of the Association for Computational Linguistics.
Ozan Sener and Silvio Savarese. 2018. Active learning for convolutional neural networks: A core-set approach. In Proceedings of the International Conference on Learning Representations.
Burr Settles. 2009. Active learning literature survey. Technical report, University of Wisconsin-Madison Department of Computer Sciences.
Burr Settles. 2011. Closing the loop: Fast, interactive semi-supervised annotation with queries on features and instances. In Proceedings of Empirical Methods in Natural Language Processing.
Burr Settles, Mark Craven, and Soumya Ray. 2008. Multiple-instance active learning. In Proceedings of Advances in Neural Information Processing Systems.
Claude Elwood Shannon. 1948. A mathematical theory of communication. Bell System Technical Journal, 27.
Dan Shen, Jie Zhang, Jian Su, Guodong Zhou, and Chew-Lim Tan. 2004. Multi-criteria-based active learning for named entity recognition. In Proceedings of the Association for Computational Linguistics.
Yanyao Shen, Hyokun Yun, Zachary C. Lipton, Yakov Kronrod, and Animashree Anandkumar. 2018. Deep active learning for named entity recognition. In Proceedings of the International Conference on Learning Representations.
Aditya Siddhant and Zachary C. Lipton. 2018. Deep Bayesian active learning for natural language processing: Results of a large-scale empirical study. In Proceedings of Empirical Methods in Natural Language Processing.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of Empirical Methods in Natural Language Processing.
Emma Strubell, Ananya Ganesh, and Andrew McCallum. 2019. Energy and policy considerations for deep learning in NLP. In Proceedings of the Association for Computational Linguistics.
Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R. Thomas McCoy, Najoung Kim, Benjamin Van Durme, Samuel R. Bowman, Dipanjan Das, et al. 2019. What do you learn from context? Probing for sentence structure in contextualized word representations. In Proceedings of the International Conference on Learning Representations.
Nenad Tomasev, Milos Radovanovic, Dunja Mladenic, and Mirjana Ivanovic. 2013. The role of hubness in clustering high-dimensional data. IEEE Transactions on Knowledge and Data Engineering, 26(3):739–751.
Ellen Voorhees, Tasmeer Alam, Steven Bedrick, Dina Demner-Fushman, William R. Hersh, Kyle Lo, Kirk Roberts, Ian Soboroff, and Lucy Lu Wang. 2020. TREC-COVID: Constructing a pandemic information retrieval test collection. arXiv preprint arXiv:2005.04474.
Dan Wang and Yi Shang. 2014. A new active labeling method for deep learning. In International Joint Conference on Neural Networks.
Xuezhi Wang, Tzu-Kuo Huang, and Jeff Schneider. 2014. Active transfer learning under model shift. In Proceedings of the International Conference on Learning Representations.
Zhao Xu, Kai Yu, Volker Tresp, Xiaowei Xu, and Jizhi Wang. 2003. Representative sampling for text classification using support vector machines. In Proceedings of the European Conference on Information Retrieval.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In Proceedings of Advances in Neural Information Processing Systems.
Michelle Yuan, Mozhi Zhang, Benjamin Van Durme, Leah Findlater, and Jordan Boyd-Graber. 2020. Interactive refinement of cross-lingual word embeddings. In Proceedings of Empirical Methods in Natural Language Processing.
Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Proceedings of Advances in Neural Information Processing Systems.
Ye Zhang, Matthew Lease, and Byron C. Wallace. 2017. Active discriminative text representation learning. In Association for the Advancement of Artificial Intelligence.
Jingbo Zhu and Eduard Hovy. 2007. Active learning for word sense disambiguation with methods for addressing the class imbalance problem. In Proceedings of Empirical Methods in Natural Language Processing.
Jingbo Zhu, Huizhen Wang, Tianshun Yao, and Benjamin K. Tsou. 2008. Active learning with sampling by uncertainty and density for word sense disambiguation and text classification. In Proceedings of International Conference on Computational Linguistics.
Code: https://github.com/forest-snow/alps
Title: Multi-Task Cross-Lingual Sequence Tagging from Scratch
Authors: Zhilin Yang (zhiliny@cs.cmu.edu), Ruslan Salakhutdinov, and William Cohen (wcohen@cs.cmu.edu), School of Computer Science, Carnegie Mellon University
arXiv: 1603.06270 (https://arxiv.org/pdf/1603.06270v1.pdf)

Abstract: We present a deep hierarchical recurrent neural network for sequence tagging. Given a sequence of words, our model employs deep gated recurrent units on both character and word levels to encode morphology and context information, and applies a conditional random field layer to predict the tags. Our model is task independent, language independent, and feature engineering free. We further extend our model to multi-task and cross-lingual joint training by sharing the architecture and parameters. Our model achieves state-of-the-art results in multiple languages on several benchmark tasks including POS tagging, chunking, and NER. We also demonstrate that multi-task and cross-lingual joint training can improve the performance in various cases.
Introduction
Sequence tagging is a fundamental problem in natural language processing that has many applications, including part-of-speech (POS) tagging, chunking, and named entity recognition (NER). Given a sequence of words, sequence tagging aims to predict a linguistic tag for each word, such as the POS tag. Recently, progress has been made on neural sequence-tagging models that make only minimal assumptions about the language, task, and feature set (Collobert et al., 2011).

This paper explores an important potential advantage of these task-independent, language-independent and feature-engineering-free models: their ability to be jointly trained on multiple tasks. In particular, we explore two types of joint training. In multi-task joint training, a model is jointly trained to perform multiple sequence-tagging tasks in the same language, e.g., POS tagging and NER for English. In cross-lingual joint training, a model is trained to perform the same task in multiple languages, e.g., NER in English and Spanish.
Multi-task joint training can exploit the fact that different sequence tagging tasks in one language share language-specific regularities. For example, models of English POS tagging and English NER might benefit from using similar underlying representations for words, and in past work, certain sequence-tagging tasks have benefitted by leveraging the underlying similarity of related tasks (Ando and Zhang, 2005). Currently, however, the best results on specific sequence-tagging tasks are usually achieved by approaches that target only one specific task, either POS tagging (Søgaard, 2011; Toutanova et al., 2003), chunking (Shen and Sarkar, 2005), or NER (Luo et al., 2015; Passos et al., 2014). Such approaches employ separate model development for each individual task, which makes joint training difficult. In other work, some recent neural approaches have been proposed to address multiple sequence tagging problems in a unified framework. Though gains have been shown using multi-task joint training, the prior models that benefit from multi-task joint training did not achieve state-of-the-art performance (Collobert et al., 2011); thus the question of whether joint training can improve over strong baseline methods is still unresolved.
Cross-lingual joint training typically uses word alignments or parallel corpora to improve the performance on different languages (Kiros et al., 2014; Gouws et al., 2014). However, many successful approaches in sequence tagging rely heavily on feature engineering to handcraft language-dependent features, such as character-level morphological features and word-level N-gram patterns (Toutanova et al., 2003; Sun et al., 2008), making it difficult to share latent representations between different languages. Some multilingual taggers that do not rely on feature engineering have also been presented (Lample et al., 2016; dos Santos et al., 2015), but while these methods are language-independent, they do not study the effect of cross-lingual joint training.
In this work, we focus on developing a general model that can be applied in both multi-task and cross-lingual settings by learning from scratch, i.e., without feature engineering or pipelines. Given a sequence of words, our model employs deep gated recurrent units on both character and word levels, and applies a conditional random field layer to make the structured prediction. On the character level, the gated recurrent units capture the morphological information; on the word level, the gated recurrent units learn N-gram patterns and word semantics.
Our model can handle both multi-task and cross-lingual joint training in a unified manner by simply sharing the network architecture and model parameters between tasks and languages. For multi-task joint training, we share both character- and word-level parameters between tasks to learn language-specific regularities. For cross-lingual joint training, we share the character-level parameters to capture the morphological similarity between languages without use of parallel corpora or word alignments. We evaluate our model on five datasets of different tasks and languages, including POS tagging, chunking and NER in English, and NER in Dutch and Spanish. We achieve state-of-the-art results on several standard benchmarks: CoNLL 2000 chunking (95.41%), CoNLL 2002 Dutch NER (85.19%), CoNLL 2002 Spanish NER (85.77%), and CoNLL 2003 English NER. We also achieve very competitive results on Penn Treebank POS tagging (97.55%, the second best result in the literature). Finally, we conduct experiments to systematically explore the effectiveness of multi-task and cross-lingual joint training on several tasks.
Related Work
Ando and Zhang (2005) proposed a multi-task joint training framework that shares structural parameters among multiple tasks, and improved the performance on various tasks including NER. Collobert et al. (2011) presented a task-independent convolutional network and employed multi-task joint training to improve the performance of chunking. However, there is still a gap between these multi-task approaches and the state-of-the-art results on individual tasks. Furthermore, it is unclear whether these approaches can be effective in a cross-lingual setting.
Multilingual resources were extensively used for cross-lingual sequence tagging in various ways, such as cross-lingual feature extraction (Darwish, 2013), text categorization (Virga and Khudanpur, 2003), and Bayesian parallel data prediction (Snyder et al., 2008). Parallel corpora and word alignments are also used for training cross-lingual distributed word representations (Kiros et al., 2014; Gouws et al., 2014; Zhou et al., 2015). Unlike these approaches, our method mainly focuses on using morphological similarity for cross-lingual joint training.
Several neural architectures based on recurrent networks were proposed for sequence tagging: one line of work used word-level Long Short-Term Memory (LSTM) units based on handcrafted features; dos Santos et al. (2015) employed convolutional layers on both character and word levels; Chiu and Nichols (2015) applied convolutional layers on the character level and LSTM units on the word level; Gillick et al. (2015) employed a sequence-to-sequence LSTM with a novel tagging scheme. We show that our architecture gives better performance experimentally than these approaches in Section 5.
Most similar to our work is the recent approach independently developed by Lample et al. (2016) (published two weeks before our submission), which employs LSTM on both character and word levels. However, there are several crucial differences. First, we study cross-lingual joint training and show improvement over their approach in various cases. Second, while they mainly focus on NER, we generalize our model to other sequence tagging tasks, and also demonstrate the effectiveness of multi-task joint training. There are also differences in the technical aspect, such as the cost-sensitive loss function and gated recurrent units used in our work.
Model
In this section, we present our model for sequence tagging based on deep hierarchical gated recurrent units and conditional random fields. Our recurrent networks are hierarchical since we have multiple layers on both word and character levels in a hierarchy.

Figure 1: The architecture of our hierarchical GRU network with CRF, when $L^c = L^w = 1$ (only one layer for word-level and character-level GRUs, respectively). We only display the character-level GRU for the word Mike and omit others.
Gated Recurrent Unit
A gated recurrent unit (GRU) network is a type of recurrent neural network first introduced for machine translation (Cho et al., 2014). A recurrent network can be represented as a sequence of units, corresponding to the input sequence $(x_1, x_2, \cdots, x_T)$, which can be either a word sequence in a sentence or a character sequence in a word. The unit at position t takes $x_t$ and the previous hidden state $h_{t-1}$ as input, and outputs the current hidden state $h_t$. The model parameters are shared between different units in the sequence.
A gated recurrent unit at position t has two gates, an update gate z t and a reset gate r t . More specifically, each gated recurrent unit can be expressed as follows
$$\begin{aligned}
r_t &= \sigma(W_{rx} x_t + W_{rh} h_{t-1}) \\
z_t &= \sigma(W_{zx} x_t + W_{zh} h_{t-1}) \\
\tilde{h}_t &= \tanh(W_{hx} x_t + W_{hh} (r_t \odot h_{t-1})) \\
h_t &= z_t \odot h_{t-1} + (1 - z_t) \odot \tilde{h}_t,
\end{aligned}$$
where the W's are model parameters of each unit, $\tilde{h}_t$ is a candidate hidden state that is used to compute $h_t$, $\sigma$ is an element-wise sigmoid logistic function defined as $\sigma(x) = 1/(1 + e^{-x})$, and $\odot$ denotes element-wise multiplication of two vectors. Intuitively, the update gate $z_t$ controls how much the unit updates its hidden state, and the reset gate $r_t$ determines how much information from the previous hidden state needs to be reset.
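For illustration, here is a from-scratch sketch of one GRU cell that mirrors the equations above; the weight shapes and initialization are assumptions, not the authors' implementation.

```python
# Sketch: a single gated recurrent unit implementing r_t, z_t, h~_t, h_t.
import torch

class GRUCell(torch.nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        def mat(out_d, in_d):
            return torch.nn.Parameter(0.1 * torch.randn(out_d, in_d))
        self.W_rx, self.W_rh = mat(hidden_size, input_size), mat(hidden_size, hidden_size)
        self.W_zx, self.W_zh = mat(hidden_size, input_size), mat(hidden_size, hidden_size)
        self.W_hx, self.W_hh = mat(hidden_size, input_size), mat(hidden_size, hidden_size)

    def forward(self, x_t, h_prev):
        r_t = torch.sigmoid(x_t @ self.W_rx.T + h_prev @ self.W_rh.T)   # reset gate
        z_t = torch.sigmoid(x_t @ self.W_zx.T + h_prev @ self.W_zh.T)   # update gate
        h_tilde = torch.tanh(x_t @ self.W_hx.T + (r_t * h_prev) @ self.W_hh.T)
        return z_t * h_prev + (1 - z_t) * h_tilde                       # new state
```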
Since a recurrent neural network only models the information flow in one direction, it is usually helpful to use an additional recurrent network that goes in the reverse direction. More specifically, we use bidirectional gated recurrent units: given a sequence of length T, we have one GRU going from 1 to T and the other from T to 1. Let $\overrightarrow{h}_t$ and $\overleftarrow{h}_t$ denote the hidden states at position t of the forward and backward GRUs, respectively. We concatenate the two hidden states to form the final hidden state $h_t = [\overrightarrow{h}_t, \overleftarrow{h}_t]$.

We stack multiple recurrent layers together to form a deep recurrent network (Sutskever et al., 2014). Each layer learns a more effective representation, taking the hidden states of the previous layer as input. Let $h_{l,t}$ denote the hidden state at position t in layer l. The forward GRU at position t in layer l computes $\overrightarrow{h}_{l,t}$ using $\overrightarrow{h}_{l,t-1}$ and $h_{l-1,t}$ as input, and the backward GRU performs similar operations but in a reverse direction.
Hierarchical GRU
Our model employs a hierarchical GRU that encodes both word-level and character-level sequential information.
The input of our model is a sequence of words $(x_1, x_2, \cdots, x_T)$ of length T, where $x_t$ is a one-of-K embedding of the t-th word. The word at each position t also has a character-level representation, denoted as a sequence of length $S_t$, $(c_{t,1}, c_{t,2}, \cdots, c_{t,S_t})$, where $c_{t,s}$ is the one-of-K embedding of the s-th character in the t-th word.
Character-Level GRU
Given a word, we first employ a deep bidirectional GRU to learn useful morphological representations from the character sequence of the word. Suppose the character-level GRU has $L_c$ layers; we then obtain forward and backward hidden states $\overrightarrow{h}_{L_c,s}$ and $\overleftarrow{h}_{L_c,s}$ at each position $s$ in the character sequence. Since recurrent networks usually tend to memorize more short-term patterns, we concatenate the first hidden state of the backward GRU and the last hidden state of the forward GRU, to encode character-level morphology in both prefixes and suffixes. We further concatenate the character-level representation with the one-of-K word embedding $x_t$ to form the final representation $h^w_t$ for the $t$-th word. More specifically, we have
$$h^w_t = [\overrightarrow{h}_{L_c,S_t}, \overleftarrow{h}_{L_c,1}, x_t],$$
where $h^w_t$ is a representation of the $t$-th word, which encodes both character-level morphology and word-level semantics, as shown in Figure 1.
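In code, assuming the character-level states come from `bi_gru_layer`-style passes, the word representation is just a concatenation (a sketch; the argument names are ours):

```python
import numpy as np

def word_representation(char_fwd_states, char_bwd_states, word_emb):
    """h^w_t = [->h_{Lc,St}, <-h_{Lc,1}, x_t]: last forward char state,
    first backward char state, and the one-of-K word embedding."""
    return np.concatenate([char_fwd_states[-1], char_bwd_states[0], word_emb])
```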
Word-Level GRU
The character-level GRU outputs a sequence of word representations $h^w = (h^w_1, h^w_2, \cdots, h^w_T)$. We employ a word-level deep bidirectional GRU with $L_w$ layers on top of these word representations. The word-level GRU takes the sequence $h^w$ as input, and computes a sequence of hidden states $h = (h_1, h_2, \cdots, h_T)$. Different from the character-level GRU, the word-level GRU aims to extract the context information in the word sequence, such as N-gram patterns and neighbor word dependencies. Such information is usually encoded using handcrafted features. However, as we show in our experimental results, the word-level GRU can learn the relevant information without being language-specific or task-specific. The hidden states $h$ output by the word-level GRU will be used as input features for the next layers.
Conditional Random Field
The goal of sequence tagging is to predict a sequence of tags $y = (y_1, y_2, \cdots, y_T)$. To model the dependencies between tags in a sequence, we apply a conditional random field (Lafferty et al., 2001) layer on top of the hidden states $h$ output by the word-level GRU. Let $\mathcal{Y}(h)$ denote the space of tag sequences for $h$. The conditional log probability of a tag sequence $y$, given the hidden state sequence $h$, can be written as
$$\log p(y|h) = f(h, y) - \log \sum_{y' \in \mathcal{Y}(h)} \exp f(h, y'), \tag{1}$$
where $f$ is a function that assigns a score for each pair of $h$ and $y$.
To define the function $f(h, y)$, for each position $t$, we multiply the hidden state $h_t$ with a parameter vector $w_{y_t}$ that is indexed by the tag $y_t$, to obtain the score for assigning $y_t$ at position $t$. Since we also need to consider the correlation between tags, we impose a first-order dependency by adding a score $A_{y_{t-1}, y_t}$ at position $t$, where $A$ is a parameter matrix defining the similarity scores between different tag pairs. Formally, the function $f$ can be written as
$$f(h, y) = \sum_{t=1}^{T} w_{y_t}^{\top} h_t + \sum_{t=1}^{T} A_{y_{t-1}, y_t},$$
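A direct transcription of this score function (a sketch with our own variable names; row `start` of the transition matrix holds the START transitions):

```python
import numpy as np

def crf_score(h, y, w, A, start):
    """f(h, y) = sum_t w[y_t]·h_t + sum_t A[y_{t-1}, y_t], with y_0 = START.
    w: (K, d) tag weight vectors; A: (K+1, K) transitions, row `start` = START."""
    emit = sum(w[y_t] @ h_t for y_t, h_t in zip(y, h))
    trans = A[start, y[0]] + sum(A[a, b] for a, b in zip(y, y[1:]))
    return emit + trans
```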
where we set $y_0$ to be a START token. It is possible to directly maximize the conditional log likelihood based on Eq. (1). However, this training objective is usually not optimal, since each possible $y'$ contributes equally to the objective function. Therefore, we add a cost function between $y$ and $y'$ based on the max-margin principle that high-cost tags $y'$ should be penalized more heavily (Gimpel and Smith, 2010). More specifically, the objective function to maximize for each training instance $y$ and $h$ is written as
$$f(h, y) - \log \sum_{y' \in \mathcal{Y}(h)} \exp\left(f(h, y') + \mathrm{cost}(y, y')\right). \tag{2}$$
In our work, the cost function is defined as the tag-wise Hamming loss between two tag sequences multiplied by a constant. The objective function on the training set is the sum of Eq. (2) over all the training instances. The full architecture of our model is illustrated in Figure 1.
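The normalizer in Eq. (2) can be computed with the forward algorithm in log space. Below is a sketch (our own function and argument names), where passing a gold sequence adds the constant-scaled Hamming cost to every non-gold tag, as in the softmax-margin objective.

```python
import numpy as np
from scipy.special import logsumexp

def log_partition(h, w, A, a0, gold=None, cost_c=0.0):
    """log sum over all tag sequences y' of exp(f(h, y') + cost_c*Hamming(gold, y')).
    h: (T, d) states; w: (K, d) tag vectors; A: (K, K) tag-to-tag transitions;
    a0: (K,) START-to-tag scores."""
    emit = np.asarray(h) @ w.T                      # (T, K) emission scores
    if gold is not None:                            # softmax-margin augmentation
        emit = emit + cost_c
        emit[np.arange(len(gold)), gold] -= cost_c  # no cost where y'_t == gold_t
    alpha = a0 + emit[0]
    for t in range(1, len(emit)):
        alpha = logsumexp(alpha[:, None] + A, axis=0) + emit[t]
    return logsumexp(alpha)
```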
Training
We employ mini-batch AdaGrad (Duchi et al., 2011) to train our neural network in an end-to-end manner with backpropagation. Both the character embeddings and word embeddings are fine-tuned during training. We use dynamic programming to compute the normalizer of the CRF layer in Eq. (2). When making predictions, we again use dynamic programming in the CRF layer to decode the most probable tag sequence.
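Decoding the most probable tag sequence is the standard Viterbi recursion; a minimal sketch with the same (assumed) parameterization as above:

```python
import numpy as np

def viterbi(h, w, A, a0):
    """Most probable tag sequence under the CRF, by dynamic programming."""
    emit = np.asarray(h) @ w.T
    score, back = a0 + emit[0], []
    for t in range(1, len(emit)):
        cand = score[:, None] + A        # rows: previous tag, cols: next tag
        back.append(cand.argmax(axis=0))
        score = cand.max(axis=0) + emit[t]
    y = [int(score.argmax())]
    for bp in reversed(back):
        y.append(int(bp[y[-1]]))
    return y[::-1]
```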
Multi-Task and Cross-Lingual Joint Training
In this section we study joint training of multiple tasks and multiple languages. On one hand, different sequence tagging tasks in the same language share language-specific regularities. For example, POS tagging and NER in English should learn similar underlying representation since they are in the same language. On the other hand, some languages share character-level morphologies, such as English and Spanish. Therefore, it is desirable to leverage multi-task and cross-lingual joint training to boost model performance. Since our model is generally applicable to different tasks in different languages, it can be naturally extended to multi-task and cross-lingual joint training. The basic idea is to share part of the architecture and parameters between tasks and languages, and to jointly train multiple objective functions with respect to different tasks and languages.
We now discuss the details of our joint training algorithm in the multi-task setting. Suppose we have D tasks, with the training instances of each task being (X 1 , X 2 , · · · , X D ). Each task d has a set of model parameters W d , which is divided into two sets, task specific parameters and shared parameters, i.e.,
$$W_d = W_{d,\mathrm{spec}} \cup W_{\mathrm{shared}},$$
where shared parameters W shared are a set of parameters that are shared among the D tasks, while task specific parameters W d,spec are the rest of the parameters that are trained for each task d separately.
During joint training, we are optimizing the average over all objective functions of D tasks. We iterate over each task d, sample a batch of training instances from X d , and perform a gradient descent step to update model parameters W d . Similarly, we can derive a cross-lingual joint training algorithm by replacing D tasks with D languages.
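The alternating optimization can be sketched as follows; all the callables and names here are hypothetical placeholders for the actual training machinery.

```python
def joint_train(task_data, shared, specific, n_steps, sample_batch, sgd_step):
    """Iterate over tasks; each step updates W_shared together with that
    task's own parameters, i.e. W_d = W_{d,spec} union W_shared."""
    D = len(task_data)
    for step in range(n_steps):
        d = step % D                          # iterate over each task d
        batch = sample_batch(task_data[d])    # sample a batch from X_d
        sgd_step(batch, shared, specific[d])  # gradient step on W_d
```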
The network architectures we employ for joint training are illustrated in Figure 2. For multi-task joint training, we share all the parameters below the CRF layer including word embeddings to learn language-specific regularities shared by the tasks. For cross-lingual joint training, we share the parameters of the character-level GRU to capture the morphological similarity between languages. Note that since we do not consider using parallel corpus in this work, we mainly focus on joint training between languages with similar morphology. We leave the study of cross-lingual joint training by sharing word semantics based on parallel corpora to future work.
Experiments
In this section, we use several benchmark datasets for multiple tasks in multiple languages to evaluate our model as well as the joint training algorithm.
Datasets and Settings
We use the following benchmark datasets in our experiments: Penn Treebank (PTB) POS tagging, CoNLL 2000 chunking, CoNLL 2003 English NER, CoNLL 2002 Dutch NER and CoNLL 2002 Spanish NER. The statistics of the datasets are described in Table 1.
We construct the POS tagging dataset with the instructions described in Toutanova et al. (2003). Note that as a standard practice, the POS tags are extracted from the parsed trees. For the task of CoNLL 2003 English NER, we follow previous works (Collobert et al., 2011; Chiu and Nichols, 2015) to append one-hot gazetteer features to the input of the CRF layer for fair comparison. 1 We set the hidden state dimensions to be 300 for the word-level GRU. We set the number of GRU layers to $L_c = L_w = 2$ (two layers for the word-level and character-level GRUs respectively). The learning rate is fixed at 0.01. We use the development set to tune the other hyperparameters of our model. Since the CoNLL 2000 chunking dataset does not have a development set, we hold out one fifth of the training set for parameter tuning.
We truncate all words whose character sequence length is longer than a threshold (17 for English, 35 for Dutch, and 20 for Spanish). We replace all numeric characters with "0". We also use the BIOES (Begin, Inside, Outside, End, Single) tagging scheme (Ratinov and Roth, 2009).
Pre-Trained Word Embeddings
Since the training corpus for a sequence tagging task is relatively small, it is difficult to train randomly initialized word embeddings to accurately capture the word semantics. Therefore, we leverage word embeddings pre-trained on large-scale corpora. All the pre-trained embeddings we use are publicly available.
On the English datasets, following previous works that are based on neural networks (Collobert et al., 2011; Chiu and Nichols, 2015), we use the 50-dimensional SENNA embeddings 2 trained on Wikipedia. For Spanish and Dutch, we use the 64-dimensional Polyglot embeddings 3 (Al-Rfou et al., 2013), which are trained on Wikipedia articles of the corresponding languages. We use pre-trained word embeddings as initialization, and fine-tune the embeddings during training.
Performance
In this section, we report the results of our model on the benchmark datasets and compare to the previously-reported state-of-the-art results. For English NER, there are two evaluation methods used in the literature. Some models are trained with both the training and development set, while others are trained with the training set only. We report our results in both cases. In the first case, we tune the hyperparameters by training on the training set and testing on the development set. Besides our standalone model, we experimented with multi-task and cross-lingual joint training as well, using the architecture described in Section 4. For multi-task joint training, we jointly train all tasks in English, including POS tagging, chunking and NER. For cross-lingual joint training, we jointly train NER in English, Dutch and Spanish. We also remove the word embeddings and the character-level GRU respectively to analyze the contribution of different components.
The results are shown in Tables 2, 3, 4, 5, 6 and 7. We achieve state-of-the-art results on English NER, Dutch NER, Spanish NER and English chunking. Our model outperforms the best previously-reported results on Dutch NER and English chunking by 2.35 points and 0.95 points respectively. We also achieve the second best result on English POS tagging, which is 0.23 points worse than the current state-of-the-art. Joint training improves the performance on Spanish NER, Dutch NER and English chunking by 1.08 points, 0.19 points and 0.75 points respectively, and has no significant improvement on English POS tagging and English NER.
On POS tagging, the best result is 97.78% reported by Ling et al. (2015). However, the embeddings they used are not publicly available. To demonstrate the effectiveness of our model, we slightly revise our model to reimplement their model with the same parameter settings described in their original paper. We use SENNA embeddings to initialize the reimplemented model for fair comparison, and obtain an accuracy of 97.41% that is 0.14 points worse than our result, which indicates that our model is more effective and the main difference lies in using different pre-trained embeddings.
By comparing the results without the character-level GRU and without word embeddings, we can observe that both components contribute to the final results. It is also clear that word embeddings have a significantly larger contribution than the character-level GRU, which indicates that our model largely depends on memorizing the word semantics. Character-level morphology, on the other hand, has a relatively smaller but still critical contribution.
Joint Training
In this section, we analyze the effectiveness of multi-task and cross-lingual joint training in more detail. In order to explore possible gains in performance of joint training for resource-poor languages or tasks, we consider joint training of various task pairs and language pairs where different-sized subsets of the actual labeled corpora are made available. Given a pair of tasks or languages, we jointly train one task with full labels and the other with partial labels. In particular, we introduce a labeling rate $r$, and sample a fraction $r$ of the sentences in the training set, discarding the rest. Evaluation is based on the partially-labeled task. The results are reported in Table 8.
We observe that the performance of a specific task with relatively lower labeling rates (0.1 and 0.3) can usually benefit from other tasks with full labels through multi-task or cross-lingual joint training. The performance gain can be up to 1.99 points when the labeling rate of the target task is 0.1. The improvement with 0.1 labeling rate is on average 0.37 points larger than with 0.3 labeling rate, which indicates that the improvement of joint training is more significant when the target task has less labeled data.

Table 8: Multi-task and cross-lingual joint training. We compare the results obtained by a standalone model and joint training with another task or language. The number following a task is the labeling rate (0.1 or 0.3). Eng and NER both refer to English NER, Span means Spanish. In the column titles, Task is the target task, J. Task is the jointly-trained task with full labels, Sep. is the F1/Accuracy of the target task trained separately, Joint is the F1/Accuracy of the target task with joint training, and Delta is the improvement.

We also use t-SNE (Van der Maaten and Hinton, 2008) to obtain a 2-dimensional visualization of the character-level GRU output for the country names in English and Spanish, shown in Figure 3. We can clearly see that our model captures the morphological similarity between two languages through joint training, since all corresponding pairs are nearest neighbors in the original embedding space.
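The visualization step is a standard t-SNE projection; a sketch using scikit-learn, with illustrative hyperparameter values of our own choosing:

```python
from sklearn.manifold import TSNE

def tsne_2d(vectors, perplexity=5.0, seed=0):
    """Project character-level GRU outputs to 2-D for visualization."""
    return TSNE(n_components=2, perplexity=perplexity,
                random_state=seed).fit_transform(vectors)
```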
Conclusion
We presented a new model for sequence tagging based on gated recurrent units and conditional random fields. We explored multi-task and cross-lingual joint training through sharing part of the network architecture and model parameters. We achieved state-of-the-art results on various tasks including POS tagging, chunking, and NER, in multiple languages. We also demonstrated that joint training can improve model performance in various cases.
In this work, we mainly focus on leveraging morphological similarities for cross-lingual joint training. In the future, an important problem will be joint training based on cross-lingual word semantics with the help of parallel data. Furthermore, it will be interesting to apply our joint training approach to low-resource tasks and languages.
Figure 2: Network architectures for multi-task and cross-lingual joint training. Red boxes indicate shared architecture and parameters. Blue boxes are task/language specific components trained separately. Eng, Span, Char, and Emb refer to English, Spanish, Character and Embeddings.

Figure 3: 2-dimensional t-SNE visualization of the character-level GRU output for country names in English and Spanish. Black words are English and red ones are Spanish. Note that all corresponding pairs are nearest neighbors in the original embedding space.
Table 1: Dataset statistics

Benchmark  | Task        | Language | # Training Tokens | # Dev Tokens | # Test Tokens
PTB (2003) | POS Tagging | English  | 912,344           | 131,768      | 129,654
CoNLL 2000 | Chunking    | English  | 211,727           | -            | 47,377
CoNLL 2003 | NER         | English  | 204,567           | 51,578       | 46,666
CoNLL 2002 | NER         | Dutch    | 202,931           | 37,761       | 68,994
CoNLL 2002 | NER         | Spanish  | 207,484           | 51,645       | 52,098
Table 2: Comparison with state-of-the-art results on CoNLL 2003 English NER when trained with the training set only. † means using handcrafted features. ‡ means being task-specific.

Model                    | F1 (%)
Chieu et al. (2002) †‡   | 88.31
Florian et al. (2003) †‡ | 88.76
Ando and Zhang (2005) †  | 89.31
Lin and Wu (2009) †‡     | 90.90
Collobert et al. (2011)  | 89.59
Huang et al. (2015) †    | 90.10
Ours                     | 90.94
Table 3: Comparison with state-of-the-art results on CoNLL 2003 English NER when trained with both training and dev sets. † means using handcrafted features. ‡ means being task-specific. * means not using gazetteer lists.

Model                      | F1 (%)
Ratinov and Roth (2009) †‡ | 90.80
Passos et al. (2014) †‡    | 90.90
Chiu and Nichols (2015)    | 90.77
Luo et al. (2015) †‡       | 91.2
Lample et al. (2016) *     | 90.94
Ours                       | 91.20
Ours − no gazetteer *      | 90.96
Ours − no char GRU         | 88.00
Ours − no word embeddings  | 77.20
Table 4: Comparison with state-of-the-art results on CoNLL 2002 Dutch NER. † means using handcrafted features. ‡ means being task-specific.

Model                     | F1 (%)
Carreras et al. (2002) †‡ | 77.05
Nothman et al. (2013) †‡  | 78.6
Gillick et al. (2015)     | 82.84
Lample et al. (2016)      | 81.74
Ours                      | 85.00
Ours + joint training     | 85.19
Ours − no char GRU        | 77.76
Ours − no word embeddings | 67.36
Table 5: Comparison with state-of-the-art results on CoNLL 2002 Spanish NER. † means using handcrafted features. ‡ means being task-specific.

Model                     | F1 (%)
Carreras et al. (2002) †‡ | 81.39
dos Santos et al. (2015)  | 82.21
Gillick et al. (2015)     | 82.95
Lample et al. (2016)      | 85.75
Ours                      | 84.69
Ours + joint training     | 85.77
Table 6: Comparison with state-of-the-art results on CoNLL 2000 English chunking. † means using handcrafted features. ‡ means being task-specific.

Model                        | F1 (%)
Kudo and Matsumoto (2001) †‡ | 93.91
Shen and Sarkar (2005) †‡    | 94.01 4
Sun et al. (2008) †‡         | 94.34
Collobert et al. (2011)      | 94.32
Huang et al. (2015) †        | 94.46
Ours                         | 94.66
Ours + joint training        | 95.41
Ours − no char GRU           | 94.44
Ours − no word embeddings    | 88.13
Table 7: Comparison with state-of-the-art results on PTB POS tagging. † means using handcrafted features. ‡ means being task-specific. * indicates our reimplementation (using SENNA embeddings).

Model                        | Accuracy (%)
Toutanova et al. (2003) †‡   | 97.24
Shen et al. (2007) †‡        | 97.33
Søgaard et al. (2011) †‡     | 97.50
Collobert et al. (2011)      | 97.29
Huang et al. (2015) †        | 97.55
Ling et al. (2015)           | 97.78
Ling et al. (2015) (SENNA) * | 97.41
Ours (SENNA)                 | 97.55
Ours − no char GRU           | 96.69
Ours − no word embeddings    | 95.43
1 Although gazetteers are arguably a type of feature engineering, we note that unlike most feature engineering techniques they are straightforward to include in a model. We use only the gazetteer file provided by the CoNLL 2003 shared task, and do not use gazetteers for any other tasks or languages described here.
2 http://ronan.collobert.com/senna/
3 https://sites.google.com/site/rmyeid/projects/polyglot
4 We note that this number is often mistakenly cited as 95.23, which is actually the score on base NP chunking rather than CoNLL 2000.
References

Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multilingual NLP. In ACL.
Rie Kubota Ando and Tong Zhang. 2005. A framework for learning predictive structures from multiple tasks and unlabeled data. JMLR, 6:1817-1853.
Xavier Carreras, Lluís Màrquez, and Lluís Padró. 2002. Named entity extraction using AdaBoost. In CoNLL, pages 1-4.
Hai Leong Chieu and Hwee Tou Ng. 2002. Named entity recognition: a maximum entropy approach using global information. In COLING, pages 1-7.
Jason P. C. Chiu and Eric Nichols. 2015. Named entity recognition with bidirectional LSTM-CNNs. arXiv preprint arXiv:1511.08308.
Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. In ACL.
Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. JMLR, 12:2493-2537.
Kareem Darwish. 2013. Named entity recognition using cross-lingual resources: Arabic as an example. In ACL, pages 1558-1567.
Cícero dos Santos and Victor Guimarães. 2015. Boosting named entity recognition with neural character embeddings. In Proceedings of NEWS 2015, The Fifth Named Entities Workshop, page 25.
John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. JMLR, 12:2121-2159.
Radu Florian, Abe Ittycheriah, Hongyan Jing, and Tong Zhang. 2003. Named entity recognition through classifier combination. In HLT-NAACL, pages 168-171.
Dan Gillick, Cliff Brunk, Oriol Vinyals, and Amarnag Subramanya. 2015. Multilingual language processing from bytes. arXiv preprint arXiv:1512.00103.
Kevin Gimpel and Noah A. Smith. 2010. Softmax-margin CRFs: Training log-linear models with cost functions. In NAACL, pages 733-736.
Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2014. BilBOWA: Fast bilingual distributed representations without word alignments. In ICML.
Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991.
Ryan Kiros, Richard Zemel, and Ruslan R. Salakhutdinov. 2014. A multiplicative model for learning distributed text-based attribute representations. In NIPS, pages 2348-2356.
Taku Kudo and Yuji Matsumoto. 2001. Chunking with support vector machines. In NAACL, pages 1-8.
John Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML.
Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. arXiv preprint arXiv:1603.01360.
Dekang Lin and Xiaoyun Wu. 2009. Phrase clustering for discriminative learning. In ACL, pages 1030-1038.
Wang Ling, Tiago Luís, Luís Marujo, Ramón Fernandez Astudillo, Silvio Amir, Chris Dyer, Alan W. Black, and Isabel Trancoso. 2015. Finding function in form: Compositional character models for open vocabulary word representation. In EMNLP.
Gang Luo, Xiaojiang Huang, Chin-Yew Lin, and Zaiqing Nie. 2015. Joint named entity recognition and disambiguation. In ACL.
Joel Nothman, Nicky Ringland, Will Radford, Tara Murphy, and James R. Curran. 2013. Learning multilingual named entity recognition from Wikipedia. Artificial Intelligence, 194:151-175.
Alexandre Passos, Vineet Kumar, and Andrew McCallum. 2014. Lexicon infused phrase embeddings for named entity resolution. In ACL.
Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In CoNLL, pages 147-155.
Hong Shen and Anoop Sarkar. 2005. Voting between multiple data representations for text chunking. Springer.
Libin Shen, Giorgio Satta, and Aravind Joshi. 2007. Guided learning for bidirectional sequence classification. In ACL, pages 760-767.
Benjamin Snyder, Tahira Naseem, Jacob Eisenstein, and Regina Barzilay. 2008. Unsupervised multilingual learning for POS tagging. In EMNLP, pages 1041-1050.
Anders Søgaard. 2011. Semi-supervised condensed nearest neighbor for part-of-speech tagging. In ACL, pages 48-52.
Xu Sun, Louis-Philippe Morency, Daisuke Okanohara, and Jun'ichi Tsujii. 2008. Modeling latent-dynamic in shallow parsing: a latent conditional model with improved inference. In COLING, pages 841-848.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In NIPS, pages 3104-3112.
Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In NAACL, pages 173-180.
Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. JMLR, 9:2579-2605.
Paola Virga and Sanjeev Khudanpur. 2003. Transliteration of proper names in cross-lingual information retrieval. In Proceedings of the ACL 2003 Workshop on Multilingual and Mixed-language Named Entity Recognition, pages 57-64.
Huiwei Zhou, Long Chen, Fulin Shi, and Degen Huang. 2015. Learning bilingual sentiment word embeddings for cross-language sentiment classification. In ACL.
End-to-End Compositional Models of Vector-Based Semantics (E2ECOMPVEC), EPTCS 366, 2022
M. Moortgat and G. Wijnholds (Eds.)
DOI: 10.4204/EPTCS.366.6, arXiv:2208.05721
Proponents of the Distributed Morphology framework have posited the existence of two levels of morphological word formation: a lower one, leading to loose input-output semantic relationships; and an upper one, leading to tight input-output semantic relationships. In this work, we propose to test the validity of this assumption in the context of Hebrew word embeddings. If the two-level hypothesis is borne out, we expect state-of-the-art Hebrew word embeddings to encode (1) a noun, (2) a denominal derived from it (via an upper-level operation), and (3) a verb related to the noun (via a lower-level operation on the noun's root), in such a way that the denominal (2) should be closer in the embedding space to the noun (1) than the related verb (3) is to the same noun (1). We report that this hypothesis is verified by four embedding models of Hebrew: fastText, GloVe, Word2Vec and AlephBERT. This suggests that word embedding models are able to capture complex and fine-grained semantic properties that are morphologically motivated.
1 Introduction: A few basic principles of word formation

1.1 Morphological processes sometimes appear "irregular"
A common assumption in generative morphology is that word formation differs essentially from the formation of larger phrasal structures [33,9,10]. Phrase formation on the one hand, is generally productive in the sense that it is seldom subject to arbitrary constraints; it also proves to be semantically compositional. Word formation on the other hand, is compositional at times and non-compositional at others, and is known to exhibit arbitrary paradigmatic gaps [3].
The lack of compositionality in word formation has been observed in certain English "berry-words" [23,3]. The berry-words in (1a) seem semantically compositional, as they involve concatenation of two independent morphemes, each of which contributes its meaning to the word as a whole. In contrast, the berry-words in (1b) cannot be said to be compositional, as one of their morphemes fails to convey any meaning when uttered independently from the other (i.e., the morphemes cran, boysen, and huckle are not meaningful units in English).
(1) a. blackberry, blueberry ("compositional" berries)
    b. cranberry, boysenberry, huckleberry ("non-compositional" berries)

For an example of the arbitrariness in the application of morphological processes, consider the English nominals in (2). (2a) illustrates a regular morphological process that merges the morpheme -ity to an adjective ending with the phonemes /-(i)ous/, resulting in a noun. As is illustrated in (2b), some arbitrary constraint prevents this process from applying to the adjective atrocious [3]. Instead, a different process seems to apply in the derivation of atrocity from atrocious, perhaps one which involves truncation of part of the adjective, followed by the merger of the nominalizing morpheme.

(2) a. curiosity, monstrosity, pomposity (regular -ity-nominalization)
    b. atrocity, *atrociousity (irregular -ity-nominalization)
1.2 The two-level model
To account for this ambivalent nature of morphological processes, linguists working in the tradition of the Distributed Morphology (DM) framework ( [21,13,6,22]) have posited the existence of two "levels" of morphological derivation; a "lower" level, where word formation may be irregular, arbitrary, and non-productive, and an "upper" level, where word formation is mostly regular and productive (cf. [26]).
[Figure: schematic of the two-level model, in which a root (√) merges with a functional head (n, a, v) to form a word; operations below the head belong to the "lower" level, and operations above it to the "upper" level.]

The DM framework assumes that while morphological processes apply as part of the syntactic derivation, the "upper" and "lower" levels of morphological derivation are distinguished by the merger of a so-called "functional head" (n, v, a, etc.). A functional head sets the semantic, syntactic and phonological features of the word it creates. It may be merged directly with a root, which is an atomic element devoid of functional material [2]; or it may be merged with some other non-atomic constituent already dominating a functional head. More specifically, it has been assumed that any operation that applies directly to a root (i.e., an operation that applies at the "lower", non-word level) should remain impenetrable to upper-level operations. Fig. 2 below is an illustration of how this model accounts for the difference between the compositional berry-names (a.) and the non-compositional ones (b.).
The derivation is in that case a "low" level process. The divide between pre-word "lower" level and post-word "upper" level processes leads to two related predictions:
Prediction (a): Elements derived from the same root via a lower-level morphological operation may arbitrarily differ semantically;
Prediction (b): Elements that derive from the same word via upper-level morphological operations should be closely related semantically.
1.3 The relevance of Semitic morphology
Semitic languages provide a useful testing ground for the predictions of the two-level model. This is because in many of these languages, words can be decomposed into consonantal roots and fixed morphophonemic patterns that introduce functional information, like part of speech (n, a, v etc.) and valence in the case of verbs ( [36]). As is illustrated in Tab. 1, patterns can be seen as recipes to "fill the gaps" between the consonants of the root. These roots and patterns, which only form independent words when merged together, have nevertheless been argued to be mentally represented as accessible independent morphological units (cf. [34]).
Table 1: Varieties of meaning for the same root

Verbal pattern  | √xSv                         | √ktv
(i) CaCaC       | xaSav 'thought'              | katav 'wrote'
(ii) niCCaC     | nexSav 'was well considered' | nixtav 'was written'
(iii) CiCCeC    | xiSev 'calculated'           | kitev 'CC-ed'
(iv) CuCCaC     | xuSav 'was calculated'       | kutav 'was CC-ed'
(v) hiCCiC      | hixSiv 'considered'          | hixtiv 'dictated'
(vi) huCCaC     | huxSav 'was considered'      | huxtav 'was dictated'
(vii) hitCaCCeC | hitxaSev 'was considerate'   | hitkatev 'corresponded'

While Semitic roots do seem to signal some broad semantic field, the meaning that results from combining a certain root with a certain pattern is highly unpredictable and non-systematic. This is also illustrated in Tab. 1, where two tri-consonantal roots (√xSv and √ktv) are combined with each of the language's seven verbal patterns. Crucially however, there is no transparent semantic operation that the patterns seem to denote. While niCCaC (ii) seems to constitute the passive version of CaCaC (i) when the two patterns are combined with the root √ktv, this is not the case when they are combined with the root √xSv. Some patterns seem to behave more systematically than others (for instance, huCCaC (vi) is usually just the passive of hiCCiC (v)), but this is not generally the case.
It is also worth noting that Hebrew has three verbal patterns containing four consonant slots: CiCCeC (iii), CuCCaC (iv), and hitCaCCeC (vii). As pointed out in [2], no gemination exists in these patterns in Modern Hebrew. However, we nevertheless have evidence that these patterns contain an extra consonant slot, as they productively combine with roots of four consonants. For instance, the root √SxKK can combine with these patterns to form SixKeK ('liberated'), SuxKaK ('was liberated'), and hiStaxKeK ('liberated oneself'). Other patterns (such as hiCCiC (v)) can combine with four non-templatic consonants, but this process is less productive and is generally restricted to loan-words (for instance, the Hebrew verb hiSpKiţ ('splashed') is derived from the German verb with the same meaning, spritzen). This will be relevant to our discussion of the behavior of denominal verbs in Hebrew in the next section.
2 The case of Hebrew denominal verbs
Given our two-level model of morphological processes, root-pattern combination is a lower-level operation by definition. Therefore, it is not surprising that the meanings obtained from combining the same root with different patterns differ in arbitrary ways. Furthermore, a given root can yield forms with potentially divergent syntactic categories, as the root can combine with functional heads of various labels (n, v, a, etc.). This is illustrated in Tab. 2, which combines the same roots used in Tab. 1 with nominal and adjectival patterns, rather than verbal ones.

Table 2: The roots √xSv and √ktv combined with nominal (n) and adjectival (a) patterns

Pattern            | √xSv                 | √ktv
(viii) miCCaCa (n) | maxSava 'thought'    | mixtava 'desk'
(ix) maCCeC (n)    | maxSev 'computer'    | NA
(x) miCCuC (n)     | mixSuv 'computing'   | NA
(xi) CCiCut (n)    | xaSivut 'importance' | NA
(xii) taCCiC (n)   | taxSiv 'calculation' | taxtiv 'decree'
(xiii) CeCCon (n)  | xeSbon 'bill'        | NA
(xiv) CCiCa (n)    | xaSiva 'thinking'    | ktiva 'writing'
(xv) CCaC (n)      | NA                   | ktav 'hand-writing'
(xvi) CaCaC (n)    | xaSav 'accountant'   | katav 'correspondent'
(xvii) miCCaC (n)  | NA                   | mixtav 'letter'
(xviii) CaCuC (a)  | xaSuv 'important'    | katuv 'written'
(xix) CaCCan (a)   | NA                   | katvan 'pulp writer'

Recall that the two-level model made two predictions: (a) elements derived from the same root via a lower-level morphological operation may arbitrarily differ semantically; and (b) elements that derive from the same word via upper-level morphological operations should be closely related semantically. The data in Tab. 1 and 2 seem to confirm the former prediction, as the meanings that result from combining the different roots with nominal and adjectival patterns seem quite unpredictable. Arad [2] claims that Hebrew provides us with the possibility to test the second prediction. More specifically, Hebrew comes with a morphophonemic diagnostic that can help identify elements derived via upper-level operations, such as denominal and de-adjectival verbs. 1 Denominal verbs are derived by merging a verbal pattern with a noun, rather than with a root. The base noun is itself derived from a root:
$$\surd\; \xrightarrow{\text{lower-level}}\; N\; \xrightarrow{\text{upper-level}}\; V_{denom}$$
In Hebrew, certain nominal patterns include templatic consonants (marked in blue in the Figures), in addition to the templatic vowels. For instance, templates (viii-x) and (xvii) in Tab. 2 have a templatic /m/, templates (xi-xii) have a templatic /t/, and templates (xiii) and (xix) have a templatic /n/. Arad notes that certain verbs (such as mixSev and hitxaSben in Tab. 3 below) seem to behave as if a consonant has been adjoined to their root. She argues that this consonant is a templatic consonant originating from the root-derived noun that served as a base for the derivation of the corresponding verb (denominal). In other words, it is argued that mixSev and hitxaSben do not derive directly from the root √xSv, but rather, from the root-derived nouns maxSev and xeSbon, respectively. This is illustrated in Fig. 3a. Other verbs, such as xiSev and hitxaSev, which are derived using the same pattern as mixSev and hitxaSben respectively (cf. Tab. 3), lack a templatic consonant and are therefore argued to derive directly from their respective roots. This is illustrated in Fig. 3b. Also note in Tab. 3 that verbal forms involving a templatic consonant (mixSev, hitxaSben) have a meaning which seems closely related to the meaning of the nouns derived from the same root (maxSev, xeSbon). If the presence of a templatic consonant is indeed the marker of an upper-level operation mapping a noun to a verb, as Arad argues, this semantic property is in line with Prediction (b) of the two-level model (cf. Section 1.2). In contrast, verbal forms derived using the same patterns, but devoid of any templatic consonant (xiSev, hitxaSev), seem not as close to the corresponding root-derived nouns in terms of meaning, this time in line with Prediction (a) of the two-level model.
Table 3: Root-derived nouns with the corresponding denominal and root-derived verbs

N pattern     | √-derived Noun    | V pattern | Possible Verbs
maCCeC (ix)   | maxSev 'computer' | CiCCeC    | mixSev 'computerized'; xiSev 'calculated'
CeCCon (xiii) | xeSbon 'bill'     | hitCaCCeC | hitxaSben 'settled up financially'; hitxaSev 'was considerate'

In brief, the two-level model, together with Arad's claim that verbs with an extra nominal-template consonant are denominal rather than root-derived, predicts that these denominal verbs exhibit a closer semantic similarity to the noun from which they were derived, than root-derived verbs do.
[Figure 3: (a) A denominal verb: mixSev is derived from the root-derived noun maxSev (√xSv + maCCeC) by merging the verbal pattern CiCCeC with the noun. (b) A root-derived verb: xiSev is derived by merging CiCCeC directly with √xSv.]
2.1 Previous empirical investigations into denominals in the two-level model
As far as we know, our paper is the first to investigate the semantic encoding of Hebrew denominal verbs within word embedding models. However, an experimental study on the processing of Hebrew denominal verbs has already been conducted by Brice on human participants [8]. A priming experiment was used to provide further evidence for the two-level model, based on the background assumption that morphological inclusion corresponds to a strong priming connection. This claim is supported by a series of papers ([16], [15], [17], [39], [40]) showing that roots have a strong priming effect for verbs that are derived from them. More generally, it can be assumed that if some language component A is contained in the morphological structure of a component B, then the string that orthographically represents A should prime the string that represents B. In [8], Brice tests this property on Hebrew nouns and denominals, by comparing the priming effect of a noun on the denominal verb derived from it, to that of a verb derived from the same root. An example of such triplets is given in Tab. 4. Crucially, the two stimuli were orthographically equally similar to the target, so any difference in priming could only be attributed to deeper connections between the words. Given these assumptions, the two-level model's prediction is that the noun stimulus should have a stronger priming effect on the denominal than the root-derived verb stimulus. That is because the noun is contained in the denominal verb's morphological structure, while the root-derived verb is not. Brice indeed found that the noun stimuli yield significantly shorter reaction times among the participants than the verb stimuli, i.e., demonstrate a stronger priming effect. We view our study as complementing this result. However, Brice argued that a given noun's priming its corresponding denominal verb could not be explained by a semantic connection between them, since priming has been shown to be unaffected by the meaning of the stimulus (see [32], [29], a.o.). Therefore, Brice took his results as evidence for the morphosyntactic aspect of the two-level model. Our study, on the other hand, aims to provide evidence in favor of the existence of semantic consequences of the two-level model; i.e., the idea that upper-level derivational operations entail a high degree of semantic similarity between their input and output.
3 Testing the predictions using Hebrew word-embedding models

3.1 Word embeddings capture meaningful semantic generalizations
Static word embedding models such as Word2Vec [27], GloVe [31] and fastText [7] map each word of a given lexicon to a dense, high-dimensional vector. This mapping is obtained by training a neural network on specific language-related tasks, such as predicting the context of any given word (Skip-gram), or predicting a word given a context (CBOW) [27]. By contrast, contextualized word embeddings like BERT [12], and its variant AlephBERT [35], pretrained on Hebrew data, can take whole sentences as input, and may map the same word to different representations, depending on the context. BERT, in particular, adopts a deep encoder-decoder architecture ("Transformer", [38]). A stack of encoders (12 for the basic model) uses attention mechanisms to forward a more complete picture of the whole sequence to the decoder.
Word embeddings in general have been argued to encode lexical meaning, in the sense that wordvectors that are close to each other in the embedding space are expected to be close in meaning [24]. The relevant metric is usually taken to be the cosine similarity (measure of the angle) between two vectors. In particular, word embeddings have been shown to encode specific morphological/semantic relationships (such as comparative/superlative formation, masculine/feminine nominalizations, some part-whole relationships) as stable linear transformations, a somewhat surprising result that has led to various explanations in the recent years [4,18,14,1]. Contextualized embeddings like BERT were shown to be especially good in capturing polysemy and homonymy in natural language [28]. Given that word embeddings provide a quantitative measure of semantic similarity, and that our morphological model makes specific semantic predictions, those language models appear as an interesting testing ground for the predictions of the two-level model.
3.2 Word embeddings and the two-level model
We propose here that the abstract linguistic notion of root be modeled as a subspace of the word embedding. This region should contain (at least) all the vectors corresponding to words derived from the root through a merger with a functional head (more specifically in our case, a pattern). 2 For instance, the root √ xSv designates a region that contains, among others, the vectors for xaSuv ('important'), maxSava ('thought'), and hitxaSev ('was considerate').
In the two-level framework, the generation of root-derived elements is semantically opaque (cf. Prediction (a)). In other words, elements derived from the same root via a lower-level process (i.e., the merger of a head directly with the root) are expected to exhibit arbitrary semantic differences. Assuming that roots denote regions in the embedding space, semantic opacity corresponds to an expectation that for any root $\sqrt{x}$, and any set of templates $\{t_1, t_2, \ldots, t_i\}$, the vectors corresponding to the words derived from applying the templates to the root (i.e., $\{\overrightarrow{t_1(\sqrt{x})}, \overrightarrow{t_2(\sqrt{x})}, \ldots, \overrightarrow{t_i(\sqrt{x})}\}$) will be arbitrarily distributed over the region designated by the root. The generation of word-derived elements, on the other hand, is argued to be semantically restricted and transparent (cf. Prediction (b)). More specifically, if $Y$ is an element derived by merging a functional head with an element that already contains a functional head $X$, then the meaning of $Y$ is expected to be close to the meaning of $X$ in a systematic way [2]. In other words, if the merger of the first functional head can lead to a vector $X$ located anywhere within the region denoted by the root (Prediction (a)), the merger of a second head on $X$ (yielding $Y$) should lead to a representation $Y$ that is in the close vicinity of $X$ (Prediction (b)). An illustration of this interpretation of the two-level model predictions is provided in Fig. 4.
3.3 Testing the two-level model in the context of Hebrew word embeddings
Let $S$ denote cosine similarity, $\vec{N}_{\surd}$ the vector of a root-derived noun, $\vec{V}^{(i)}_{\surd}$ ($i = 1, \ldots, k$) the vectors of the $k$ root-derived verbs sharing its root, and $\vec{V}^{N}_{\surd}$ the vector of the denominal verb derived from that noun. Our main hypothesis is then:
$$\forall \surd : \quad \frac{1}{k} \sum_{i=1}^{k} S\!\left(\vec{N}_{\surd}, \vec{V}^{(i)}_{\surd}\right) < S\!\left(\vec{N}_{\surd}, \vec{V}^{N}_{\surd}\right) \tag{1}$$
Hypothesis 1
Assuming that vectors of root-derived verbs are arbitrarily distributed across the region defined by the root, some root-derived verbs might be accidentally closer to the root-derived noun than the denominal verb derived from that noun. By using the mean similarity between the root-derived noun and the various root-derived verbs, the prediction is rendered compatible with this possibility. However, it might also be worthwhile to test a stronger prediction, according to which the similarity between a noun and a denominal verb derived from it should be greater than the similarity between the noun and the root-derived verb that is maximally similar to the noun. This is formalized in Eq. 2 below.
$$\forall \surd : \quad \max_{i \in [1,k]} S\!\left(\vec{N}_{\surd}, \vec{V}^{(i)}_{\surd}\right) < S\!\left(\vec{N}_{\surd}, \vec{V}^{N}_{\surd}\right) \tag{2}$$
Hypothesis 2

4 Implementation and results

4.1 Dataset creation
We generated and tested Hebrew data in order to validate our two hypotheses. Each data point in our dataset contained (1) a root-derived noun, (2) the denominal verb derived from it, and (3) the root-derived verbs sharing the noun's root. A list of nominal patterns containing templatic consonants was constructed using introspection and previous linguistic papers on the subject ([5,2,8]). Each nominal template was mapped to the verbal template which incorporates the nominal consonant into the verbal form. This is illustrated in Tab. 5 (partial list). Given that Modern Hebrew lacks vowel marking in the orthography, and therefore involves a high rate of ambiguity (cf. [37]), we used the infinitival form of the verbal templates, which are not ambiguous with nominal elements in the language even in the absence of vowel markings.
A list of nouns instantiating the relevant nominal templates was generated by matching nouns from a PoS-tagged Hebrew corpus, The Knesset Meetings Corpus 5 , against the various nominal templates.
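As a purely hypothetical illustration of this matching step (the template inventory, the regular expression, and the Hebrew example are our own assumptions, not the authors' code), a nominal template with a templatic consonant can be encoded as a regex over unvocalized Hebrew strings:

```python
import re

# Map a template name to a regex; each (.) captures one root consonant.
TEMPLATES = {
    "taCCiC": re.compile(r"^ת(.)(.)י(.)$"),  # e.g. taxSiv 'calculation'
}

def match_template(noun):
    """Return the template name and root consonants if the noun fits one."""
    for name, rx in TEMPLATES.items():
        m = rx.match(noun)
        if m:
            return name, m.groups()
    return None

print(match_template("תחשיב"))  # -> ('taCCiC', ('ח', 'ש', 'ב'))
```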
Candidate denominal verbs corresponding to each noun could be subsequently created, using the template mapping established in Tab. 5. Root-derived verbs were generated using the five infinitival verbal templates of Hebrew, corresponding to the seven inflected templates in Tab. 1. This process generated a list of 1435 potential data points. Given that not all nouns can productively give rise to denominal verbs, most of these were not actual verbs of Hebrew, or did not have a root that could productively combine with other verbal templates. We first eliminated the data points that were obviously not part of the grammar, by checking whether the denominal (or any inflected form thereof) could be found in the list of verbs extracted from the Knesset dataset. This first filtering step ruled out 1322 data points, and left us with 113 data points to inspect further. We manually discarded the remaining defective items, ending up with a list of 66 denominal verbs.

4.2 Word embeddings

To convert the words in our dataset into vectors, we used pretrained models from fastText [19] and BERT (AlephBERT) [35], and trained GloVe and Word2Vec 8 models on Hebrew Wikipedia dumps. The characteristics of those various embeddings are given in Tab. 6.
The AlephBERT embedding of a word was obtained by feeding the model individual tokenized words, one at a time, and summing the hidden states of the last 4 layers. Tokenization was performed using a dedicated function from the BERT library. If a given word was represented using several tokens, then the representation of the word was obtained by averaging the representations of the individual tokens.
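A sketch of this procedure with the HuggingFace transformers library; the checkpoint name `onlplab/alephbert-base` and the exact sub-token handling are our assumptions about the setup, not the authors' code:

```python
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "onlplab/alephbert-base"  # assumed AlephBERT checkpoint
tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID, output_hidden_states=True).eval()

def word_vector(word):
    """Sum the last 4 hidden layers, then average over the word's sub-tokens."""
    enc = tok(word, return_tensors="pt")
    with torch.no_grad():
        layers = model(**enc).hidden_states       # tuple of (1, T, 768) tensors
    summed = torch.stack(layers[-4:]).sum(dim=0)  # (1, T, 768)
    return summed[0, 1:-1].mean(dim=0)            # drop [CLS]/[SEP], average
```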
Since the dimensions of the embeddings were close to our dataset's total size, we reduced the space using Principal Component Analysis (PCA, [30]) prior to computing the similarities and testing the hypotheses. We relied on the Guttman-Kaiser criterion [20] to determine the target dimension for each space. Before computing the similarities, we chose to plot a few data points in a 2D space. For the visualizations to be as meaningful and readable as possible, we used PCA with a cosine kernel on each separate data point. A few such plots are represented in Fig. 5.

[Figure 5, panel (c): Noun: 'frame'; Denominal: 'to frame']

7 We trained GloVe using the default dimension of 50, then switched to 100 to allow for a better comparison with the other models, which have similar dimensions.
7 As BERT models do not assign words to a fixed vector.
8 For training the Word2Vec model, we used the Skip-Gram architecture.
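A sketch of the dimensionality-reduction step; the Guttman-Kaiser reading used here (keep the components whose eigenvalue exceeds the mean eigenvalue) is one common formulation and an assumption on our part:

```python
import numpy as np
from sklearn.decomposition import PCA

def reduce_guttman_kaiser(X):
    """Reduce X with PCA, keeping components whose explained variance
    (eigenvalue) is above the mean eigenvalue (Guttman-Kaiser criterion)."""
    pca = PCA().fit(X)
    ev = pca.explained_variance_
    k = int(np.sum(ev > ev.mean()))
    return PCA(n_components=k).fit_transform(X), k
```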
4.3 Computation of the similarities
For each data point, our main hypothesis predicts that the cosine similarity between the noun and the denominal verb should be higher than the mean cosine similarity between the noun and all the other verbs sharing the same root. The stronger hypothesis predicts that the same kind of inequality holds when the "mean" operator is replaced by a "max" operator. In both cases, for each data point, the difference between two measures of similarity (noun/denominal; noun/other root-derived element) is expected to be positive. The distributions of these differences are represented in Fig. 6, for the main hypothesis and the various models we tested. Those plots suggest that the main hypothesis is verified, since all distributions seem to have a mean and median above 0. In the next section, we test the significance of those empirical observations.
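These per-data-point quantities are straightforward to compute; a minimal sketch (function and variable names are ours):

```python
import numpy as np

def cosine(u, v):
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def similarity_deltas(noun_vec, denominal_vec, root_verb_vecs):
    """Return S(noun, denominal) minus the mean (H1) and the max (H2) of
    S(noun, v) over the root-derived verbs v; both should be positive."""
    s_denom = cosine(noun_vec, denominal_vec)
    s_root = [cosine(noun_vec, v) for v in root_verb_vecs]
    return s_denom - np.mean(s_root), s_denom - np.max(s_root)
```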
4.4 Tests and results
To test whether denominals were significantly more similar to their corresponding nouns than other root-derived verbs, we performed (non-parametric) one-tailed Wilcoxon tests for matched pairs 9 on the data described and plotted in the previous section. We did not feel the need to perform any correction on the p-values, even though 2 hypotheses were tested for each model, because H1 was entailed by H2 by construction (so a significant p-value for H2 implies a significant p-value for H1 as well). The p-values and effect sizes are compiled in Tab. 7. The effect sizes correspond to Cliff's ∆ [11], which is a robust measure for non-parametric samples.

Table 7: p-values and effect sizes (Cliff's ∆) for H1 and H2 and the 4 embedding models
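A sketch of this statistical test with SciPy; the pairing of the two similarity measures per data point follows the description above, and the function names are ours:

```python
import numpy as np
from scipy.stats import wilcoxon

def cliffs_delta(x, y):
    """Cliff's delta: P(x > y) - P(x < y) over all pairs of observations."""
    d = np.asarray(x)[:, None] - np.asarray(y)[None, :]
    return (np.sum(d > 0) - np.sum(d < 0)) / d.size

def test_hypothesis(noun_denom_sims, noun_rootverb_sims):
    """One-tailed Wilcoxon signed-rank test for matched pairs: is the
    noun-denominal similarity greater than the competing similarity?"""
    diffs = np.asarray(noun_denom_sims) - np.asarray(noun_rootverb_sims)
    stat, p = wilcoxon(diffs, alternative="greater")
    return p, cliffs_delta(noun_denom_sims, noun_rootverb_sims)
```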
The fact that GloVe100, but not GloVe50, verified both hypotheses is somewhat interesting, as this might mean that a certain richness is needed in the original embedding space (despite the fact that this space is subsequently reduced via PCA), in order to capture the relevant generalization. This claim however, seems to be disproved by the results of AlephBERT, since this model failed to verify H2 just like GloVe50, while having the highest original dimension (768). Moreover, the failure of AlephBERT on H2 does not seem to be caused by the dimension reduction process (PCA) being too permissive in keeping too many irrelevant dimensions. Indeed, reducing the space further to the arbitrary dimension of 50 (which retained 71% of the explained variance) did not change the overall outcome -H2 was still rejected. Rather, the inefficiency of AlephBERT may be caused by the relative misuse of that model in the context of our experiment; sequential models such as BERT perform well on contextualized data, but here, the model was fed with isolated words, and could not benefit from the presence of surrounding words to enrich its representations.
Discussion, conclusion, and further questions
Our results seem to corroborate the claim of the two-level model that word-derived elements are systematically semantically closer to the word from which they derive than elements derived from the same root are to that word. This prediction was verified in all the word embeddings we tested, provided that the original dimension was high enough. This result complements Brice's contribution (cf. [8]) by establishing that morphological inclusion has some real semantic import, in addition to having consequences in terms of priming. It also gives support to the claim that word embedding models might not only capture superficial linguistic regularities, and that, at least in some very specific corners of the language, those models show behaviors similar to humans'.
However, there is one potential confound in the way we tested our prediction. Static word embedding models obtain a single global representation for each word [25]. Therefore, those models fail to capture the different meanings of ambiguous words. As mentioned earlier, the absence of vowel marking in Modern Hebrew orthography (coupled with the "double life" of certain letters that have multiple pronunciations) renders many written words in Hebrew highly ambiguous. When it comes to denominal verbs, matters get even worse, as certain nominal templates that give rise to such verbs are systematically ambiguous with certain inflections of a corresponding root-derived verb or with an inflection of the denominal verb itself. The systematically ambiguous templates are listed in Tab. 8. This makes testing predictions regarding Hebrew in a static word embedding model problematic to begin with, and even more so when it comes to predictions regarding Hebrew denominal verbs. One way to minimize ambiguity in our data was to use the forms of each word that are least ambiguous. For instance, to resolve the ambiguity in (i), we used the plural inflections of nouns in the taCCiC template, which are no longer ambiguous with any root-derived verbs. This fix, however, does not resolve the ambiguity in (ii), as the plural form of the noun in that case is still ambiguous, this time with a plural root-derived verb. We also tried to minimize ambiguity by using verbs in their infinitival form, as infinitival verbs are not ambiguous with any nouns. However, this does not fully resolve the confound, as the infinitival form of verbs in Hebrew involves concatenation of the prefix /le-/ to some combination of the root consonants with other templatic information, and the prefix /le-/ itself is ambiguous between an infinitival marker and the preposition 'to'. Therefore, the orthographic representation of some of our denominal verbs is ambiguous between a verbal interpretation and the prepositional interpretation 'to N', where N is the noun from which the denominal is derived. It is unclear to us how the possible ambiguities discussed here influenced our results.
Another way to avoid the problems imposed on us by the high ambiguity rates in Modern Hebrew could be to test our hypotheses in a proper contextual word embedding, obtained by feeding AlephBERT with words put in disambiguating contexts. However, choosing this solution raises the issue of designing a relevant context for each target word. First, choosing the right context for a given word is a subjective task that might make our experimental setup biased, or, at least, less controlled. Second, verbs derived from the same root but associated with different templates have different valences and senses, and require complements of different kinds. Those fundamental structural differences would add extra noise that would influence the similarity comparisons, and could not be reasonably counterbalanced. We have yet to investigate how to use a proper contextual embedding model while overcoming this concern.
Figure 1: Basic decomposition of the word formation process
Figure 2: Compositional vs non-compositional berry-words
Figure 3: The structure of denominal vs root-derived verbs
Figure 4: Expected distribution of root-derived vs word-derived elements within a simplified embedding space.
When it comes to Hebrew denominal verbs, the prediction of our interpretation of the two-level model in the context of word embeddings is the following. Given a root ($\surd$); a noun derived from it via a lower-level operation ($N_{\surd}$); a denominal verb derived from that noun via an upper-level operation ($V_{N_{\surd}}$); and finally, a verb derived directly from the root via a lower-level operation ($V_{\surd}$), we expect $\vec{V}_{N_{\surd}}$ to be generally 3 closer to $\vec{N}_{\surd}$ within the embedding space than $\vec{V}_{\surd}$ is. The exact nature of the root-derived verb $V_{\surd}$ remains to be fleshed out, however. Indeed, if a given root $\surd$ normally yields a single root-derived noun ($N_{\surd}$), it can in principle give rise to many different root-derived verbs $\{V^{(1)}_{\surd}, \ldots, V^{(k)}_{\surd}\}$, some of them being closer to $N_{\surd}$ than others. Assuming the root-derived verbs are somewhat uniformly distributed across the region defined by $\surd$, we expect the mean similarity between $\vec{N}_{\surd}$ and each of the $\vec{V}^{(i)}_{\surd}$ to be lower than the similarity between $\vec{N}_{\surd}$ and the denominal derived from it, $\vec{V}_{N_{\surd}}$. This is formalized in Eq. 1 below (where S stands for cosine similarity).
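Spelled out from the definitions above (our rendering; $S$ is cosine similarity and $k$ the number of root-derived verbs for the given root), the main inequality and the stronger 'max' variant used later read:

```latex
% Main hypothesis (Eq. 1):
S\big(\vec{N}_{\surd},\, \vec{V}_{N_{\surd}}\big) \;>\; \frac{1}{k}\sum_{i=1}^{k} S\big(\vec{N}_{\surd},\, \vec{V}^{(i)}_{\surd}\big)

% Stronger variant: replace the mean with a max
S\big(\vec{N}_{\surd},\, \vec{V}_{N_{\surd}}\big) \;>\; \max_{1 \le i \le k} S\big(\vec{N}_{\surd},\, \vec{V}^{(i)}_{\surd}\big)
```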
Figure 5: 2D projection of a few data points (PCA, cosine kernel, fastText model)
Figure 6: Paired similarity differences for various models under H1 (main hypothesis)
Table 8 examples: 3.M.SG present root-derived verb in CiCCeC, e.g. mexašev 'calculates', vs. 3.M.SG past denominal verb in CiCCeC, e.g. mixšev 'computerized'; C(a)CC(a)n, C(a)C(a)C(a)t; noun with templatic consonant, e.g. ţalaxat 'plate', vs. 3.M.SG past denominal verb in CiCCeC, e.g. ţilxet 'plated'.
Figure 2 panels: (a) Compositional case: the two sub-words (e.g. √blue/√black and √berry) are merged with their respective heads (a, n) before being compounded. (b) Non-compositional case: one of the sub-words (e.g. √cran, √boysen, √huckle) is not merged with a functional head before compounding, and thus remains semantically opaque.
Table 2: Nominal and adjectival patterns
Table 3: The root √xSv and the templatic consonants
Table 4: Example stimulus from [8]

Each data point comprised (1) a noun with a templatic consonant, (2) a denominal verb containing the templatic consonant from the noun, in addition to the three root consonants (cf. Tab. 3), and (3) a list of verbs derived directly from the same root as the noun (and thus devoid of templatic consonants). 4 Each data point contained between one and five root-derived verbs, depending on how productively the given root combined with the verbal templates.
Nominal pattern              Denominal pattern
tiCCoCet, tiCCoCa, taCCiC    letaCCeC
CeCCon                       leCaCCen, lehitCaCCen
maCCeC, miCCeCet, miCCaC     lemaCCeC, lehitmaCCeC
šaCCeCet                     lešaCCeC, lehištaCCeC
CaCaCat                      leCaCCet, lehitCaCCet

Table 5: Templates used for data generation
Table 6: Characteristics of the models
Table 8: Systematically ambiguous patterns
1 In the prose, we will henceforth refer only to denominal verbs, but the observations apply to de-adjectival ones as well.
2 A possible way to define this region in mathematical terms would be to use the notion of convex hull, although our analysis does not depend on any specific implementation thereof.
3 We think that it is too restrictive to state that this property should hold for any given root, due to the arbitrary character of lower-level morphological operations. Since lower-level operations are assumed to map root-derived elements somewhat randomly within the subspace defined by the root, a root-derived verb might accidentally end up very close to a root-derived noun (meaning, closer to this noun than the noun-derived denominal is). Our modeling, however, predicts that this configuration should be rare enough for our inequalities to have some statistical significance.
4 It is worth mentioning that a single noun could in certain cases give rise to two data points, when the noun's root happened to be compatible with two denominal templates.
5 This corpus gathers protocols of sessions in the Israeli parliament between January 2004 and November 2005. The particular archive we used was kneset16.
9 We preferred a non-parametric test as opposed to a standard t-test because the similarity plots suggested that the distributions did not satisfy the t-test assumptions. This was confirmed by Levene tests conducted prior to performing the main tests, for all models but AlephBERT.
10 Vowels that are not orthographically represented are put in parentheses.
References
[1] Carl Allen & Timothy M. Hospedales (2019): Analogies Explained: Towards Understanding Word Embeddings. CoRR abs/1901.09813, doi:10.48550/arXiv.1901.09813.
[2] Maya Arad (2003): Locality Constraints on the Interpretation of Roots: The Case of Hebrew Denominal Verbs. Natural Language and Linguistic Theory 21(4), pp. 737-778, doi:10.1023/a:1025533719905.
[3] Mark Aronoff (1976): Word Formation in Generative Grammar. Linguistic Inquiry Monographs, MIT Press.
[4] Sanjeev Arora, Yuanzhi Li, Yingyu Liang, Tengyu Ma & Andrej Risteski (2016): A Latent Variable Model Approach to PMI-based Word Embeddings. Transactions of the Association for Computational Linguistics 4, pp. 385-399, doi:10.1162/tacl_a_00106.
[5] Outi Bat-El (1994): Stem Modification and Cluster Transfer in Modern Hebrew. Natural Language and Linguistic Theory 12(4), pp. 571-596, doi:10.1007/BF00992928.
[6] J. D. Bobaljik (2012): Universals in Comparative Morphology: Suppletion, Superlatives, and the Structure of Words. Current Studies in Linguistics, MIT Press, doi:10.7551/mitpress/9069.001.0001.
[7] Piotr Bojanowski, Edouard Grave, Armand Joulin & Tomas Mikolov (2016): Enriching Word Vectors with Subword Information. arXiv preprint arXiv:1607.04606, doi:10.48550/arXiv.1607.04606.
[8] Henry Brice (2016): The root and word distinction: an experimental study of Hebrew denominal verbs. Morphology 27(2), pp. 159-177, doi:10.1007/s11525-016-9297-0.
[9] Noam Chomsky (1970): Remarks on Nominalization. In R. Jacobs & P. S. Rosenbaum, editors: Readings in English Transformational Grammar, Ginn, Waltham, pp. 184-221.
[10] Noam Chomsky (1973): Conditions on Transformations. In S. R. Anderson & P. Kiparsky, editors: A Festschrift for Morris Halle, Holt, Rinehart & Winston, New York, pp. 232-286.
[11] Norman Cliff (1993): Dominance statistics: Ordinal analyses to answer ordinal questions. Psychological Bulletin 114(3), pp. 494-509, doi:10.1037/0033-2909.114.3.494.
[12] Jacob Devlin, Ming-Wei Chang, Kenton Lee & Kristina Toutanova (2018): BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. CoRR abs/1810.04805, doi:10.48550/arXiv.1810.04805.
[13] David Embick (2010): Localism versus globalism in morphology and phonology. MIT Press, Cambridge, doi:10.7551/mitpress/9780262014229.001.0001.
[14] Kawin Ethayarajh, David Duvenaud & Graeme Hirst (2018): Towards Understanding Linear Word Analogies. doi:10.48550/ARXIV.1810.04882.
[15] Ram Frost, Avital Deutsch & Kenneth I. Forster (2000): Decomposing morphologically complex words in a nonlinear morphology. Journal of Experimental Psychology: Learning, Memory, and Cognition 26(3), pp. 751-765, doi:10.1037/0278-7393.26.3.751.
[16] Ram Frost, Kenneth I. Forster & Avital Deutsch (1997): What can we learn from the morphology of Hebrew? A masked-priming investigation of morphological representation. Journal of Experimental Psychology: Learning, Memory, and Cognition 23(4), pp. 829-856, doi:10.1037/0278-7393.23.4.829.
[17] Ram Frost, Tamar Kugler, Avital Deutsch & Kenneth I. Forster (2005): Orthographic Structure Versus Morphological Structure: Principles of Lexical Organization in a Given Language. Journal of Experimental Psychology: Learning, Memory, and Cognition 31(6), pp. 1293-1326, doi:10.1037/0278-7393.31.6.1293.
[18] Alex Gittens, Dimitris Achlioptas & Michael W. Mahoney (2017): Skip-Gram - Zipf + Uniform = Vector Additivity. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Association for Computational Linguistics, Vancouver, Canada, pp. 69-76, doi:10.18653/v1/P17-1007.
[19] Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Armand Joulin & Tomás Mikolov (2018): Learning Word Vectors for 157 Languages. CoRR abs/1802.06893, doi:10.48550/arXiv.1802.06893.
[20] Louis Guttman (1954): Some necessary conditions for common-factor analysis. Psychometrika 19(2), pp. 149-161, doi:10.1007/bf02289162.
[21] Morris Halle & Alec Marantz (1993): Distributed Morphology and the Pieces of Inflection. In: The View from Building 20, MIT Press, Cambridge, MA, pp. 111-176.
[22] Heidi Harley (2014): On the identity of roots. Theoretical Linguistics 40(3-4), pp. 225-276, doi:10.1515/tl-2014-0010.
[23] Sándor G. J. Hervey & Jan W. F. Mulder (1973): Pseudo-Composites and Pseudo-Words: Sufficient and Necessary Criteria for Morphological Analysis. La Linguistique 9(1), pp. 41-70.
[24] D. Jurafsky & J. H. Martin (2000): Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. Prentice Hall Series in Artificial Intelligence, Pearson Prentice Hall.
[25] Qi Liu, Matt J. Kusner & Phil Blunsom (2020): A Survey on Contextual Embeddings. CoRR abs/2003.07278, doi:10.48550/arXiv.2003.07278.
[26] Alec Marantz (2000): Roots: the universality of root and pattern morphology. In: Conference on Afro-Asiatic Languages, University of Paris VII, 3, p. 14.
[27] Tomás Mikolov, Kai Chen, Greg Corrado & Jeffrey Dean (2013): Efficient Estimation of Word Representations in Vector Space. In Yoshua Bengio & Yann LeCun, editors: 1st International Conference on Learning Representations (ICLR 2013), Workshop Track Proceedings, doi:10.48550/arXiv.1301.3781.
[28] Sathvik Nair, Mahesh Srinivasan & Stephan C. Meylan (2020): Contextualized Word Embeddings Encode Aspects of Human-Like Word Sense Knowledge. CoRR abs/2010.13057, doi:10.48550/arXiv.2010.13057.
[29] James H. Neely (2012): Semantic priming effects in visual word recognition: A selective review of current findings and theories. In: Basic Processes in Reading, pp. 272-344.
[30] Karl Pearson (1901): LIII. On lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science 2(11), pp. 559-572, doi:10.1080/14786440109462720.
[31] Jeffrey Pennington, Richard Socher & Christopher Manning (2014): GloVe: Global Vectors for Word Representation. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics, Doha, Qatar, pp. 1532-1543, doi:10.3115/v1/D14-1162.
[32] Manuel Perea & Eva Rosa (2002): The effects of associative and semantic priming in the lexical decision task. Psychological Research 66(3), pp. 180-194, doi:10.1007/s00426-002-0086-5.
[33] Paul Martin Postal (1969): Anaphoric Islands. In: Chicago Linguistic Society 5, pp. 205-239.
[34] Jean-François Prunet, Renée Béland & Ali Idrissi (2000): The Mental Representation of Semitic Words. Linguistic Inquiry 31(4), pp. 609-648, doi:10.1162/002438900554497.
[35] Amit Seker, Elron Bandel, Dan Bareket, Idan Brusilovsky, Refael Shaked Greenfeld & Reut Tsarfaty (2021): AlephBERT: A Hebrew Large Pre-Trained Language Model to Start-off your Hebrew NLP Application With. CoRR abs/2104.04052, doi:10.48550/arXiv.2104.04052.
[36] Yishai Tobin (2004): Hebrew (Semitic). In Geert Booij, Christian Lehmann, Joachim Mugdan & Stavros Skopeteas, editors: Morphologie/Morphology, de Gruyter, Berlin and New York, pp. 1343-1358, doi:10.1515/9783110172782.2.16.1343.
[37] Eran Tomer (2012): Automatic Hebrew Text Vocalization. Master's thesis, Ben Gurion University of the Negev.
[38] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser & Illia Polosukhin (2017): Attention Is All You Need. CoRR abs/1706.03762, doi:10.48550/arXiv.1706.03762.
[39] Hadas Velan & Ram Frost (2007): Cambridge University versus Hebrew University: The impact of letter transposition on reading English and Hebrew. Psychonomic Bulletin & Review 14(5), pp. 913-918, doi:10.3758/bf03194121.
[40] Hadas Velan & Ram Frost (2011): Words with and without internal structure: What determines the nature of orthographic and morphological processing? Cognition 118(2), pp. 141-156, doi:10.1016/j.cognition.2010.11.013.
JaMIE: A Pipeline Japanese Medical Information Extraction System with Novel Relation Annotation

Fei Cheng (Kyoto University, Kyoto, Japan)
Shuntaro Yada (Nara Institute of Science and Technology, Nara, Japan)
Ribeka Tanaka (Ochanomizu University, Tokyo, Japan) tanaka.ribeka@is.ocha.ac.jp
Eiji Aramaki (Nara Institute of Science and Technology, Nara, Japan) aramaki@is.naist.jp
Sadao Kurohashi (Kyoto University, Kyoto, Japan)

Proceedings of the 13th Conference on Language Resources and Evaluation (LREC 2022), Marseille, June 2022. European Language Resources Association (ELRA), licensed under CC-BY-NC-4.0.
Keywords: Medical Information Extraction, Corpus Annotation, Relation Extraction, Open-access Toolkit
In the field of Japanese medical information extraction, few analyzing tools are available, and relation extraction is still an under-explored topic. In this paper, we first propose a novel relation annotation schema for investigating the medical and temporal relations between medical entities in Japanese medical reports. We experiment with practical annotation scenarios by separately annotating two different types of reports. We design a pipeline system with three components for recognizing medical entities, classifying entity modalities, and extracting relations. The empirical results show accurate analyzing performance and suggest satisfactory annotation quality, the superiority of the latest contextual embedding models, and a feasible annotation strategy for high-accuracy demand.
Introduction
Electronic medical record systems have been widely adopted in hospitals. In the past decade, research efforts have been devoted to automated Information Extraction (IE) from raw medical reports. This approach should be able to liberate users from the burden of reading and understanding large volumes of records manually. While substantial progress has already been made in medical IE, it still suffers from the following limitations. First, languages are natural boundaries that hinder existing research from being reused across languages; the development of English corpora and approaches does not necessarily reflect the progress in other languages. Morita et al. (2013), Aramaki et al. (2014), and Aramaki et al. (2016) present a series of Japanese clinical IE shared tasks. However, more semantic-aware tasks such as medical relation extraction (Uzuner et al., 2011) and temporal relation extraction (Bethard et al., 2017) are still undeveloped. Second, most existing medical IE datasets focus on general report content such as discharge summaries, instead of more specific report types and diseases. Such settings potentially sacrifice the accuracy of analyzing specific report types, such as radiography interpretation reports. In this work, we first propose a novel relation annotation schema for investigating the medical and temporal relations in Japanese medical reports. Then, we intend to explore the correlation between the annotation efforts on specific report types and their analyzing accuracy, which is especially in demand for practical medical applications. Therefore, we target the comparison of analyzing two report types involved with diseases of high death rates: (1) specific radiography interpretation reports of lung cancer (LC), and (2) medical history reports (containing multiple types of reports relevant to a patient) of idiopathic pulmonary fibrosis (IPF). The relation annotation is based on the existing entities presented by Yada et al. (2020), who annotated the medical entities (e.g. disease, anatomical) and their modality information (e.g. positive, suspicious) in Japanese medical reports.
* These authors contributed equally to this work.
While rich English NLP tools for medical IE have been developed, such as cTAKES (Savova et al., 2010) and MetaMap (Aronson and Lang, 2010), few Japanese tools were available until MedEx/J (Aramaki et al., 2018). MedEx/J extracts only diseases and their negation information. In this paper, we present JaMIE: a pipeline Japanese Medical IE system, which can extract a wider range of medical information, including medical entities, entity modalities, and relations, from raw medical reports.
In summary, we make the following three contributions:
• We present a novel annotation schema for both medical and temporal relations in Japanese medical reports.
• We manually annotate the relations for two types of reports and empirically analyze their performance and the desired annotation amount.
• We release an open-access toolkit, JaMIE, for automatically and accurately annotating medical entities (F1: 95.65/85.49), entity modalities (F1: 94.10/78.06), and relations (F1: 86.53/71.04) for the two report types.
Although the annotated corpus cannot be made public due to the increased anonymization level, the system code and trained models are to be released. 1
Relation Annotation
On top of the entity and modality annotation above, we designed relation types between two entities. They can be categorized into medical relations and temporal relations. An example of each relation type is presented in Table 1.
Medical Relations
A relation(X, Y) denotes that an entity of type <X> holds a relation toward another entity of type <Y>, in which <X> and <Y> can be any entity type defined above (including the case where <X> is the same type as <Y>). change: A <C> entity changes the status of another entity, the type of which can be <D>, <A>, or <T/M-key>. A <C> is often presented as 'dilate', 'shrink', 'appear', etc. compare: A <C> entity's change is compared to a certain point <Y>, typically <TIMEX3>. feature: A <F> entity describes a certain entity <Y>.
A <F> is often presented as 'significant', 'mild', the size (of a tumor), etc. region: An entity of an object includes or contains another object entity (often <D> or <A>).
value: The correspondence relation between <T/M-key> and <T/M-val>. In rare cases, however, other entities of the types <TIMEX3> and <D> may correspond to the value of a <X-key> entity.

1 https://github.com/racerandom/JaMIE/tree/demo

Figure 1: Visualization of temporal relations, i.e., on, before, after, start, and finish
Temporal Relations
Based on an existing medical temporal-relation annotation schema, THYME (Bethard et al., 2017), we propose the simplified temporal-relation set below. Note that any temporal relation is defined in the form relation(X, TIMEX3), where the type of <X> can also be another <TIMEX3> entity. Figure 1 portrays a visualized comparison among the proposed temporal relations. on: A <X> entity happens during the time span described by a <TIMEX3> entity. before: A <X> entity happens before a time span described by a <TIMEX3> entity. after: A <X> entity happens after a time span described by a <TIMEX3> entity. start: A <X> entity starts at a time span described by a <TIMEX3> entity. finish: A <X> entity finishes at a time span described by a <TIMEX3> entity. We show an XML-style radiography interpretation report example with the entity-level information and our relation annotation in Figure 2.
The test '<T-test>CT scan</T-test>' is executed 'on' the day '<TIMEX3>July 26, 2016</TIMEX3>'. A disease '<D>right pleural effusion</D>' is observed in the 'region' of the anatomical entity '<A>the upper lobe of the lung</A>'. A '<F>new</F>' disease '<D>nodules</D>' is in the 'region' of '<A>the lung field</A>'. The '<brel>' and '<trel>' tags distinguish the medical relations from the temporal relations. JaMIE supports this XML-style format for training models and outputting system predictions. The complete annotation guideline is available. 2

Figure 2: An annotated radiography interpretation report example (translated into English). Note that translation may lead to unnatural annotation; for instance, 'after the surgery' in the second sentence is a specific temporal expression often used in Japanese clinical reports, while it looks strange to be annotated with a time tag.
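As a minimal illustration of consuming this format, the snippet below pulls entity spans and types out of an XML-style report with a regular expression; relation tags and attributes are left out, since their exact serialization is not reproduced here:

```python
import re

# Matches tags such as <D>...</D>, <TIMEX3>...</TIMEX3>, <T-test>...</T-test>
ENTITY_TAG = re.compile(r"<(?P<type>[A-Za-z][\w-]*)>(?P<text>.*?)</(?P=type)>")

def extract_entities(report: str):
    """Return (entity_type, surface_text) pairs from an XML-style report."""
    return [(m.group("type"), m.group("text")) for m in ENTITY_TAG.finditer(report)]

print(extract_entities(
    "On <TIMEX3>July 26, 2016</TIMEX3>, a <T-test>CT scan</T-test> was performed."))
# [('TIMEX3', 'July 26, 2016'), ('T-test', 'CT scan')]
```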
Annotation
In practice, we annotated two datasets: 1,000 radiography interpretation reports of LC and 156 medical history reports of IPF. We annotated all reports in two passes. One annotator conducted the first-pass relation annotation for a report. In the second pass, the expert supervisor examined the annotation and led the final adjudication by discussing the inconsistencies with the first-pass annotator. This procedure balances quality and cost, since it does not fully rely on expert annotation. We separately calculate the Inter-Annotator Agreement (IAA) of the relation annotation (based on gold entity annotation) between two independent annotators on the same five reports randomly selected from each report type. The radiography interpretation reports achieve an IAA of F1 95.19 and Accuracy 91.75%. The medical history reports achieve an IAA of F1 70.58 and Accuracy 70.35%. Considering the low agreement of temporal relation annotation reported by Bethard et al. (2017), our annotation IAAs show that the overall annotation quality is guaranteed. The lower IAA on medical history reports also suggests that extracting relations from medical history reports is a more difficult task than from radiography interpretation reports. Table 2 shows the statistics of the relation annotation. Though the number of medical history reports is relatively smaller, they usually contain more content per report and a wider coverage of entity types. Considering that the popular English 2010 i2b2/VA medical dataset contains 170 documents (3,106 relations) for training, our annotation scale is comparable to or even larger than that. The results show very different relation type distributions in the two types of reports. As the medical history reports of IPF can be viewed as a mixture of several types of reports, such as radiography reports, examination reports, test results, etc., they show a more balanced coverage of relation types, while the radiography interpretation reports of LC are more narrowly distributed among disease-relevant relation types such as 'region' and 'feature'. Although our annotation experiment is conducted on Japanese medical reports, the annotation guideline is not limited to any specific language. Figure 3 shows the overview of our Japanese medical IE system with a pipeline process of three components: medical entity recognition, modality classification, and relation extraction. The overall implementation is based on PyTorch Transformers 3.
System Architecture of JaMIE
Sentence Encoder
Recent medical IE research (Si et al., 2019; Alsentzer et al., 2019; Peng et al., 2019) suggests that contextual pre-trained models such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) markedly outperform traditional word embedding methods (e.g., word2vec, GloVe, and fastText). In our pipeline system, we adopt a Japanese pre-trained BERT as the sentence encoder for retrieving token embeddings. Formally, a sentence $S = [x_0, x_1, x_2, ..., x_n]$ is encoded by a contextual BERT, or by word embeddings with a bidirectional Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997), as:

$$X = \mathrm{Encoder}([x_0, x_1, x_2, ..., x_n])$$
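A sketch of the BERT side of this encoder with the Hugging Face transformers API; a publicly available Japanese BERT checkpoint is used below as a stand-in, since the NICT model named in Section 4 is distributed separately:

```python
import torch
from transformers import AutoModel, AutoTokenizer

name = "cl-tohoku/bert-base-japanese"   # stand-in for NICT Japanese BERT
tokenizer = AutoTokenizer.from_pretrained(name)
encoder = AutoModel.from_pretrained(name)

sentence = "右肺上葉に結節影を認める。"   # example clinical sentence
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    X = encoder(**inputs).last_hidden_state  # (1, n_tokens, hidden): token embeddings
```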
Medical Entity Recognition
Medical entity recognition (MER) aims to predict the token spans of entities and their types from the text. We formulate medical entity recognition as sequence tagging with BIO (begin, inside, outside) tags. The outputs are constrained with a conditional random field (CRF) (Lafferty et al., 2001) layer. For a tag sequence $y = [y_0, y_1, y_2, ..., y_n]$, the probability of a sequence $y$ given $X$ is the softmax over all possible tag sequences:
$$P(y \mid X) = \frac{e^{s(X, y)}}{\sum_{\hat{y} \in Y} e^{s(X, \hat{y})}}$$
where the score function $s(X, y)$ represents the sum of the transition scores and tag probabilities. In practice, we adopt the CRF implementation pytorch-crf 4 on top of the sentence encoder.
4 https://pytorch-crf.readthedocs.io/en/stable/
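A minimal sketch of this tagging layer with the pytorch-crf package mentioned above (shapes and tag counts are illustrative):

```python
import torch
import torch.nn as nn
from torchcrf import CRF

num_tags, hidden = 9, 768            # BIO tags over entity types; BERT hidden size
emission_layer = nn.Linear(hidden, num_tags)
crf = CRF(num_tags, batch_first=True)

X = torch.randn(2, 12, hidden)       # encoder output: (batch, seq_len, hidden)
tags = torch.zeros(2, 12, dtype=torch.long)
mask = torch.ones(2, 12, dtype=torch.bool)

emissions = emission_layer(X)
loss = -crf(emissions, tags, mask=mask, reduction="mean")  # negative log-likelihood
best_paths = crf.decode(emissions, mask=mask)              # Viterbi decoding
```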
Modality Classification
The modality classification (MC) component classifies the modality types of the given entities. For a multi-token entity $E_i$ predicted by the MER model, we represent the entity embedding as the element-wise sum of the token embeddings in the entity span. To enrich the context for predicting the modality, we concatenate the entity embedding with the auxiliary entity type. The $i$-th modality prediction is:
$$y_i = \mathrm{softmax}(fc([E_i; E_i^{type}]))$$
where $E_i$ denotes the $i$-th entity embedding, $E_i^{type}$ denotes the embedding of the entity type predicted by the MER model, and $fc(\cdot)$ denotes a single fully-connected layer.
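A minimal sketch of this classifier (sizes and label counts are illustrative, not the paper's exact configuration):

```python
import torch
import torch.nn as nn

class ModalityClassifier(nn.Module):
    """Sum the token embeddings over an entity span, concatenate the
    entity-type embedding, and classify the modality."""
    def __init__(self, hidden=768, n_types=10, type_dim=32, n_modalities=5):
        super().__init__()
        self.type_emb = nn.Embedding(n_types, type_dim)
        self.fc = nn.Linear(hidden + type_dim, n_modalities)

    def forward(self, X, span, type_id):
        # X: (seq_len, hidden) token embeddings; span: (start, end) token indices
        entity = X[span[0]:span[1]].sum(dim=0)           # element-wise sum over span
        features = torch.cat([entity, self.type_emb(type_id)], dim=-1)
        return torch.softmax(self.fc(features), dim=-1)  # modality distribution
```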
Relation Extraction
The relation extraction (RE) component predicts the relations and their types between two named entities. Semantic and temporal relation extraction has been widely explored by recent work (Cheng and Miyao, 2017; Bekoulis et al., 2018; Cheng and Miyao, 2018; Zhang et al., 2020; Cheng et al., 2020; Zhong and Chen, 2021). For efficiency, we formulate the relation extraction problem as multiple head selection (Zhang et al., 2017) over the entities in the sentence. Given an entity $E_i$ in the sentence, the model predicts whether another entity $E_j$ is the head of $E_i$ with a relation $r_k$. The probability of a relation between two entities is defined as:
$$P(E_j, r_k \mid E_i; \theta) = \mathrm{sigmoid}(fc(E_j, r_k, E_i))$$
where $fc(\cdot)$ denotes a single fully-connected layer. An additional 'N' relation represents no relation between two entities. The final representation of an entity $E_i$ is the concatenation of the entity, entity type, and modality type embeddings.
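A sketch of the head-selection scoring; a single fully-connected layer that outputs one score per relation type for every ordered entity pair is one simple way to realize $fc(E_j, r_k, E_i)$ (dimensions are illustrative):

```python
import torch
import torch.nn as nn

class HeadSelection(nn.Module):
    """For every ordered entity pair (i, j) and relation r, score whether
    entity j is the head of entity i under relation r."""
    def __init__(self, ent_dim=832, n_relations=11):  # entity + type + modality dims
        super().__init__()
        self.fc = nn.Linear(2 * ent_dim, n_relations)  # one slot reserved for 'N'

    def forward(self, E):
        # E: (n_entities, ent_dim) final entity representations
        n = E.size(0)
        pairs = torch.cat([E.unsqueeze(1).expand(n, n, -1),
                           E.unsqueeze(0).expand(n, n, -1)], dim=-1)
        return torch.sigmoid(self.fc(pairs))           # (n, n, n_relations) probabilities
```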
Experiments
Settings
For each dataset, we conduct 5-fold cross-validation to evaluate the performance of our system. 10% of the training data is split off as the validation set for tuning the best hyper-parameters and checkpoints. At each stage of the pipeline, the current component is trained with the gold inputs. The Japanese text is segmented into tokens by MeCab (Kudo et al., 2004). We adopt NICT Japanese BERT 5 as the sentence encoder. The following hyper-parameters are empirically chosen: training epochs of 10, batch size of 16, and the AdamW optimizer with a learning rate of 5e-5. The best checkpoints on the validation set are saved to produce test results. Our model is also compatible with other Japanese morphological analyzers and pre-trained models, such as Juman++ (Tolmachev et al., 2018) and Ku-BERT 6.
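A sketch of this setup (the report indices and the model used here are stand-ins, not the actual JaMIE components):

```python
import numpy as np
import torch.nn as nn
from sklearn.model_selection import KFold
from torch.optim import AdamW

reports = np.arange(1000)                      # e.g. the 1,000 LC reports
for train_idx, test_idx in KFold(n_splits=5, shuffle=True).split(reports):
    n_val = max(1, len(train_idx) // 10)       # 10% of training data for validation
    val_idx, train_idx = train_idx[:n_val], train_idx[n_val:]
    model = nn.Linear(768, 9)                  # stand-in for a pipeline component
    optimizer = AdamW(model.parameters(), lr=5e-5)
    # train for 10 epochs with batch size 16, keep the checkpoint that
    # scores best on the validation split, then evaluate on test_idx
```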
Evaluation
Instead of applying the usual pipeline evaluation with gold inputs at each stage, we are more interested in the practical performance of the system and adopt the joint evaluation (Zheng et al., 2017):
• Medical entity recognition identifies medical entities from raw reports. We evaluate each {entity, entity type} against the reference.
• Modality classification classifies the modality types of the entities identified by the former stage. The evaluation is on each {entity, entity type, modality type}.
• Relation extraction extracts the relations between the entities identified by the former stages. The evaluation is on each triplet {head entity, relation, tail entity}.
We measure the micro-F1 of the system prediction against the gold reference in each pipeline stage.

Experiment Results
Main Performance of JaMIE
Table 3 shows our system performance on the two types of reports: radiography interpretation reports of LC and medical history reports of IPF. The radiography interpretation reports' performance suggests that, by concentrating annotation efforts on a specific report type, the system achieves high F1 with sufficient training data. Compared to the 95.30 MER F1 reported by Yada et al. (2020), our MER score outperforms theirs by 0.35 with the additional CRF layer. The RE model obtains 86.53 F1 on the radiography interpretation reports and 71.04 F1 on the medical history reports. We offer a baseline encoder with an LSTM 7 upon word2vec embeddings (Mikolov et al., 2013) trained on Japanese Wikipedia. We observe significant drops in all three tasks, especially in the final relation extraction. On both the radiography interpretation reports of LC and the medical history reports of IPF, the BERT-based RE models lead 'LSTM + word2vec' by approximately 10 points F1. We suggest that solving relation extraction requires long-range information between entities. BERT naturally models such long-range dependencies between any two tokens via the self-attention mechanism, while word2vec is trained with a fixed local window and an LSTM can accumulate failures over long sequences. While the medical history reports contain broader relation types and the data size is relatively smaller, the system still obtains satisfactory performance. In addition, we present each relation F1 in Table 5. Except for three rarely appearing relations, i.e. 'finish', 'after' and 'before', the F1 scores on the other types are balanced and match the statistics in Table 2. As for the radiography interpretation report results in Table 4, the major re-
7 LSTM and word2vec hidden sizes equal to 256.
However, the radiography interpretation reports are more densely distributed over relation types such as 'region' and 'feature' (Table 2), which usually means that fewer reports are needed to achieve similar overall accuracy compared to the medical history reports. In the scenario of demanding high accuracy for practical medical applications, the results suggest that the annotation strategy of starting from a specific type of report and gradually increasing the coverage of report types is more feasible.
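The triplet-level micro-F1 used in the joint evaluation above can be computed as in the following sketch, where predictions and references are per-report sets of hashable items such as (head, relation, tail) tuples:

```python
def micro_f1(pred_sets, gold_sets):
    """Micro-averaged F1: sum counts over all reports before computing P/R."""
    tp = sum(len(p & g) for p, g in zip(pred_sets, gold_sets))
    n_pred = sum(len(p) for p in pred_sets)
    n_gold = sum(len(g) for g in gold_sets)
    precision = tp / n_pred if n_pred else 0.0
    recall = tp / n_gold if n_gold else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```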
System Application
User Interface
JaMIE provides an easy-to-use Command-Line Interface (CLI). We design our training/testing scripts similarly to the official Transformers examples, in order to be friendly to Transformers users. We demonstrate how to train a relation model with the following script:
$ # Training
$ python clinical_pipeline_rel.py \
    --pretrained_model $JAPANESE_BERT \
    --saved_model $MODEL_TO_SAVE \
    --train_file $TRAIN_FILE \
    ...
Use Case
In the case of annotating raw medical reports with our trained models, users need to download the trained models from the JaMIE GitHub repository beforehand. Users then execute the pipeline 'test' scripts to annotate entities, modalities, and relations step by step. At each stage, the model generates a prediction that serves as the input of the next-stage model. The prediction is presented in the same XML style as shown in Figure 2. Our medical IE annotation schema serves to encode a wide range of general medical information, not limited to any specific disease, report type, or language. Users can manually annotate their medical reports by following our guideline, and can then apply the 'train' scripts to train the pipeline models on their newly annotated corpus for providing automatic annotation.
Conclusion
We propose a novel annotation schema for investigating medical and temporal relations between medical entities in Japanese medical reports. We empirically compare the annotation on two types of reports: specific radiography interpretation reports of LC and medical history reports of IPF. The system obtains overall satisfactory performance in the three tasks, supporting the valuable findings of good annotation quality, feasible annotation strategies for targeting report types, and the superior performance of the contextual BERT encoder. The system code and trained models on our annotation are open-access.
In the future, we plan to stick to LC and IPF, cover more specific report types involved with LC, and increase the annotation amount of medical history reports of IPF.
Figure 3: The overview of JaMIE.

3 https://github.com/huggingface
Category  Relation Type         Example
Medical   change(C, A)          The <A>intrahepatic bile ducts</A> are <C>dilated</C>.
          compare(C, TIMEX3)    <C>Not much has changed</C> since <TIMEX3>September 2003</TIMEX3>.
          feature(F, D)         No <F>pathologically significant</F> <D>lymph node enlargement</D>.
          region(A, D)          There are no <D>abnormalities</D> in the <A>liver</A>.
          value(T-key, T-val)   <T-key>Smoking</T-key>: <T-val>20 cigarettes</T-val>
Temporal  on(D, TIMEX3)         On <TIMEX3>Sep 20XX</TIMEX3>, diagnosed as <D>podagra</D>.
          before(CC, TIMEX3)    After <CC>visiting the cardiovascular department</CC>, she was hospitalized <TIMEX3>from April 11th to April 22nd, 2024</TIMEX3>.
          after(C, TIMEX3)      PSL 10mg/day had been kept since <TIMEX3>11 Aug</TIMEX3>, but it was <C>normalized</C>.
          start(M-key, TIMEX3)  <M-key>Equa</M-key> started at <TIMEX3>23 April</TIMEX3>.
          finish(R, TIMEX3)     On <TIMEX3>17 Nov</TIMEX3>, quitting <R>HOT</R>.

Table 1: The example of each relation type. Anatomical entities <A>, Features and measurements <F>, Change <C>, Time <TIMEX3>, Test <T-test/key/val>, Medicine <M-key/val>, Remedy <R>, Clinical Context <CC>. The complete entity and modality definition refers to the original paper.
2. Japanese Medical IE Annotation
2.1. Entity and Modality Annotation
We leverage an existing corpus (Yada et al., 2020) with entity and modality information annotated as the base for our relation annotation. The entity types are defined as follows: Diseases and symptoms <D>, Anatomical entities <A>, Features and measurements <F>, Change <C>, Time <TIMEX3>, Test <T-test/key/val>, Medicine <M-key/val>, Remedy <R>, and Clinical Context <CC> (cf. Table 1).
Table 2: The statistics of the relation annotation. 'Med' and 'Temp' denote the medical and temporal relations.

Report Type                              Encoder              MER F1  MC F1  RE F1
Radiography Interpretation Reports (LC)  LSTM + word2vec      93.63   93.01  77.88
                                         BERT                 95.65   94.10  86.53
                                         (Yada et al., 2020)  95.30   -      -
Medical History Reports (IPF)            LSTM + word2vec      82.73   75.26  60.42
                                         BERT                 85.49   78.06  71.04

Table 3: The main results for automatically analyzing two types of reports.
Table 4: Each relation F1 (BERT-based) in the radiography interpretation reports of LC.

Med REL   RE F1   Temp REL  RE F1
region    71.73   on        70.48
change    58.66   start     49.33
feature   60.54   finish    12.02
value     83.12   after     -
compare   75.47   before    11.38

Table 5: Each relation F1 (BERT-based) in the medical history reports of IPF.
Table 6: The RE performance comparison between the radiography interpretation reports and medical history reports with comparable training size.
2 Japanese: https://doi.org/10.6084/m9.figshare.16418787; English: https://doi.org/10.6084/m9.figshare.16418811. Some notations might be slightly different in the latest version.
Acknowledgements
We thank the reviewers for their helpful feedback. This work has been supported by the research grant JPMW21AC500 from the Ministry of Health, Labour and Welfare of Japan.

Bibliographical References
Alsentzer, E., Murphy, J. R., Boag, W., Weng, W., Jin, D., Naumann, T., and McDermott, M. B. A. (2019). Publicly available clinical BERT embeddings. CoRR, abs/1904.03323.
Aramaki, E., Morita, M., Kano, Y., and Ohkuma, T. (2014). Overview of the NTCIR-11 MedNLP-2 task. In Proceedings of the 11th NTCIR Workshop Meeting on Evaluation of Information Access Technologies.
Aramaki, E., Morita, M., Kano, Y., and Ohkuma, T. (2016). Overview of the NTCIR-12 MedNLPDoc task. In Proceedings of the 12th NTCIR Workshop Meeting on Evaluation of Information Access Technologies.
Aramaki, E., Yano, K., and Wakamiya, S. (2018). MedEx/J: A one-scan simple and fast NLP tool for Japanese clinical texts. In MEDINFO 2017: Precision Healthcare Through Informatics: Proceedings of the 16th World Congress on Medical and Health Informatics, volume 245, page 285. IOS Press.
Aronson, A. R. and Lang, F.-M. (2010). An overview of MetaMap: historical perspective and recent advances. Journal of the American Medical Informatics Association, 17(3):229-236.
Bekoulis, G., Deleu, J., Demeester, T., and Develder, C. (2018). Joint entity recognition and relation extraction as a multi-head selection problem. Expert Systems with Applications, 114:34-45.
Bethard, S., Savova, G., Palmer, M., and Pustejovsky, J. (2017). SemEval-2017 Task 12: Clinical TempEval. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 565-572. Association for Computational Linguistics.
Cheng, F. and Miyao, Y. (2017). Classifying temporal relations by bidirectional LSTM over dependency paths. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 1-6. Association for Computational Linguistics.
Cheng, F. and Miyao, Y. (2018). Inducing temporal relations from time anchor annotation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1833-1843. Association for Computational Linguistics.
Cheng, F., Asahara, M., Kobayashi, I., and Kurohashi, S. (2020). Dynamically updating event representations for temporal relation classification with multi-category learning. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1352-1357. Association for Computational Linguistics.
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186. Association for Computational Linguistics.
Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8):1735-1780.
Kudo, T., Yamamoto, K., and Matsumoto, Y. (2004). Applying conditional random fields to Japanese morphological analysis. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 230-237. Association for Computational Linguistics.
Lafferty, J. D., McCallum, A., and Pereira, F. C. N. (2001). Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, ICML '01, pages 282-289. Morgan Kaufmann Publishers Inc.
Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013). Efficient estimation of word representations in vector space.
Morita, M., Kano, Y., Ohkuma, T., Miyabe, M., and Aramaki, E. (2013). Overview of the NTCIR-10 MedNLP task. In Proceedings of NTCIR-10.
Peng, Y., Yan, S., and Lu, Z. (2019). Transfer learning in biomedical natural language processing: An evaluation of BERT and ELMo on ten benchmarking datasets. CoRR, abs/1906.05474.
Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., and Zettlemoyer, L. (2018). Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237. Association for Computational Linguistics.
Savova, G. K., Masanz, J. J., Ogren, P. V., Zheng, J., Sohn, S., Kipper-Schuler, K. C., and Chute, C. G. (2010). Mayo clinical Text Analysis and Knowledge Extraction System (cTAKES): architecture, component evaluation and applications. Journal of the American Medical Informatics Association, 17(5):507-513.
Si, Y., Wang, J., Xu, H., and Roberts, K. (2019). Enhancing clinical concept extraction with contextual embeddings. Journal of the American Medical Informatics Association, 26(11):1297-1304.
Tolmachev, A., Kawahara, D., and Kurohashi, S. (2018). Juman++: A morphological analysis toolkit for scriptio continua. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 54-59. Association for Computational Linguistics.
Uzuner, Ö., South, B. R., Shen, S., and DuVall, S. L. (2011). 2010 i2b2/VA challenge on concepts, assertions, and relations in clinical text. Journal of the American Medical Informatics Association, 18(5):552-556.
Yada, S., Joh, A., Tanaka, R., Cheng, F., Aramaki, E., and Kurohashi, S. (2020). Towards a versatile medical-annotation guideline feasible without heavy medical knowledge: Starting from critical lung diseases. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4565-4572. European Language Resources Association.
Zhang, X., Cheng, J., and Lapata, M. (2017). Dependency parsing as head selection. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 665-676. Association for Computational Linguistics.
Zhang, R. H., Liu, Q., Fan, A. X., Ji, H., Zeng, D., Cheng, F., Kawahara, D., and Kurohashi, S. (2020).
Minimize exposure bias of Seq2Seq models in joint entity and relation extraction. Findings of the Association for Computational Linguistics: EMNLP 2020. Online, November. Association for Computational LinguisticsMinimize exposure bias of Seq2Seq models in joint entity and relation extraction. In Findings of the As- sociation for Computational Linguistics: EMNLP 2020, pages 236-246, Online, November. Associa- tion for Computational Linguistics.
Joint extraction of entities and relations based on a novel tagging scheme. S Zheng, F Wang, H Bao, Y Hao, P Zhou, B Xu, Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. the 55th Annual Meeting of the Association for Computational LinguisticsVancouver, CanadaAssociation for Computational Linguistics1Long Papers)Zheng, S., Wang, F., Bao, H., Hao, Y., Zhou, P., and Xu, B. (2017). Joint extraction of entities and rela- tions based on a novel tagging scheme. In Proceed- ings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 1227-1236, Vancouver, Canada, July. Association for Computational Linguistics.
A frustratingly easy approach for entity and relation extraction. Z Zhong, D Chen, Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesOnlineAssociation for Computational LinguisticsZhong, Z. and Chen, D. (2021). A frustratingly easy approach for entity and relation extraction. In Pro- ceedings of the 2021 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 50-61, Online, June. Association for Computational Linguistics.
[
"NPRF: A Neural Pseudo Relevance Feedback Framework for Ad-hoc Information Retrieval",
"NPRF: A Neural Pseudo Relevance Feedback Framework for Ad-hoc Information Retrieval"
] | [
"Canjia Li \nUniversity of Chinese Academy of Sciences\nBeijingChina\n",
"Yingfei Sun \nUniversity of Chinese Academy of Sciences\nBeijingChina\n",
"Ben He \nUniversity of Chinese Academy of Sciences\nBeijingChina\n\nInstitute of Software\nChinese Academy of Sciences\nBeijingChina\n",
"Le Wang \nUniversity of Chinese Academy of Sciences\nBeijingChina\n\nComputer Network Information Center\nChinese Academy of Sciences\nBeijingChina\n",
"Kai Hui kai.hui@sap.com \nSAP SE\nBerlinGermany\n",
"Andrew Yates ayates@mpi-inf.mpg.de \nMax Planck Institute for Informatics\nSaarbrückenGermany\n",
"Le Sun sunle@iscas.ac.cn \nInstitute of Software\nChinese Academy of Sciences\nBeijingChina\n",
"Jungang Xu \nUniversity of Chinese Academy of Sciences\nBeijingChina\n"
] | [
"University of Chinese Academy of Sciences\nBeijingChina",
"University of Chinese Academy of Sciences\nBeijingChina",
"University of Chinese Academy of Sciences\nBeijingChina",
"Institute of Software\nChinese Academy of Sciences\nBeijingChina",
"University of Chinese Academy of Sciences\nBeijingChina",
"Computer Network Information Center\nChinese Academy of Sciences\nBeijingChina",
"SAP SE\nBerlinGermany",
"Max Planck Institute for Informatics\nSaarbrückenGermany",
"Institute of Software\nChinese Academy of Sciences\nBeijingChina",
"University of Chinese Academy of Sciences\nBeijingChina"
] | [
"Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing"
] | Pseudo relevance feedback (PRF) is commonly used to boost the performance of traditional information retrieval (IR) models by using top-ranked documents to identify and weight new query terms, thereby reducing the effect of query-document vocabulary mismatches. While neural retrieval models have recently demonstrated strong results for ad-hoc retrieval, combining them with PRF is not straightforward due to incompatibilities between existing PRF approaches and neural architectures. To bridge this gap, we propose an end-to-end neural PRF framework that can be used with existing neural IR models by embedding different neural models as building blocks. Extensive experiments on two standard test collections confirm the effectiveness of the proposed NPRF framework in improving the performance of two state-of-the-art neural IR models. | 10.18653/v1/d18-1478 | [
"https://www.aclweb.org/anthology/D18-1478.pdf"
] | 53,081,945 | 1810.12936 | 7173b45ee1ede20cf8c7f38903f7b744f1667322 |
NPRF: A Neural Pseudo Relevance Feedback Framework for Ad-hoc Information Retrieval

Canjia Li (University of Chinese Academy of Sciences, Beijing, China)
Yingfei Sun (University of Chinese Academy of Sciences, Beijing, China)
Ben He (University of Chinese Academy of Sciences, Beijing, China; Institute of Software, Chinese Academy of Sciences, Beijing, China)
Le Wang (University of Chinese Academy of Sciences, Beijing, China; Computer Network Information Center, Chinese Academy of Sciences, Beijing, China)
Kai Hui kai.hui@sap.com (SAP SE, Berlin, Germany)
Andrew Yates ayates@mpi-inf.mpg.de (Max Planck Institute for Informatics, Saarbrücken, Germany)
Le Sun sunle@iscas.ac.cn (Institute of Software, Chinese Academy of Sciences, Beijing, China)
Jungang Xu (University of Chinese Academy of Sciences, Beijing, China)

In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018.
Abstract

Pseudo relevance feedback (PRF) is commonly used to boost the performance of traditional information retrieval (IR) models by using top-ranked documents to identify and weight new query terms, thereby reducing the effect of query-document vocabulary mismatches. While neural retrieval models have recently demonstrated strong results for ad-hoc retrieval, combining them with PRF is not straightforward due to incompatibilities between existing PRF approaches and neural architectures. To bridge this gap, we propose an end-to-end neural PRF framework that can be used with existing neural IR models by embedding different neural models as building blocks. Extensive experiments on two standard test collections confirm the effectiveness of the proposed NPRF framework in improving the performance of two state-of-the-art neural IR models.
Introduction
Recent progress in neural information retrieval models (NIRMs) has highlighted promising performance on the ad-hoc search task. State-of-the-art NIRMs, such as DRMM (Guo et al., 2016), HiNT (Fan et al., 2018), (Conv)-KNRM (Xiong et al., 2017; Dai et al., 2018), and (Co)-PACRR (Hui et al., 2017, 2018), have successfully implemented insights from traditional IR models using neural building blocks. Meanwhile, existing IR research has already demonstrated the effectiveness of incorporating relevance signals from top-ranked documents through pseudo relevance feedback (PRF) models (Buckley and Robertson, 2008; Diaz et al., 2016). PRF models expand the query with terms selected from top-ranked documents, thereby boosting ranking performance by reducing the problem of vocabulary mismatch between the original query and documents (Rocchio, 1971). Existing neural IR models do not have a mechanism for treating expansion terms differently from the original query terms, however, making it non-trivial to combine them with existing PRF approaches. In addition, neural IR models differ in their architectures, making the development of a widely-applicable PRF approach a challenging task.
To bridge this gap, we propose a generic neural pseudo relevance feedback framework, coined NPRF, that enables the use of PRF with existing neural IR models. Given a query and a target document, the top-ranked documents from the initial ranking are consumed by NPRF, which expands the query by interpreting it from different perspectives. Given a target document to evaluate, NPRF produces a final relevance score by considering the target document's relevance to these top-ranked documents and to the original query.
The proposed NPRF framework can directly incorporate different established neural IR models, which serve as the concrete scorers in evaluating the relevance of a document relative to the top-ranked documents and to the query, without changing their architectures. We instantiate the NPRF framework using two state-of-the-art neural IR models, and we evaluate their performance on two widely-used TREC benchmark datasets for ad-hoc retrieval. Our results confirm that the NPRF framework can substantially improve the performance of both models. Moreover, both neural models perform similarly inside the NPRF framework despite the fact that without NPRF one model performed substantially worse than the other model. The contributions of this work are threefold: 1) the novel NPRF framework; 2) two instantiations of the NPRF framework using two state-of-the-art neural IR models; and 3) the experiments that confirm the effectiveness of the NPRF framework.
The rest of this paper is organized as follows. Section 2 presents the proposed NPRF framework in details. Following that, Section 3 describes the setup of the evaluation, and reports the results. Finally, Section 4 recaps existing literature, before drawing conclusions in Section 5.
Method
In this section, we introduce the proposed neural framework for pseudo relevance feedback (NPRF). Recall that existing unsupervised PRF models (Rocchio, 1971; Lavrenko and Croft, 2001; Ye et al., 2009) issue a query to obtain an initial ranking, identify promising terms from the top-m documents returned, and expand the original query with these terms. Rather than selecting the expanded terms within the top-m documents, NPRF uses these documents directly as expansion queries by considering the interactions between them and a target document. Thus, each document's ultimate relevance score depends on both its interactions with the original query and its interactions with these feedback documents.
Overview
Given a query q, NPRF estimates the relevance of a target document d relative to q as described in the following steps. The architecture is summarized in Figure 1. Akin to established neural IR models like DRMM, the description is based on a query-document pair, and a ranking can be produced by sorting the documents according to their scores.

- Create initial ranking. Given a document corpus, a ranking method rel_q(q, d) is applied to individual documents to obtain the top-m documents, denoted as D_q for q.

- Extract document interactions. To evaluate the relevance of d, each d_q in D_q is used to expand q, where d is compared against each d_q using a ranking method rel_d(d_q, d).

- Combine document interactions. The relevance scores rel_d(d_q, d) for individual d_q ∈ D_q are further weighted by rel_q(q, d_q), which serves as an estimator for the confidence of the contribution of d_q relative to q. The weighted combination of these relevance scores is used to produce a relevance score for d, denoted as rel_D(q, D_q, d).

While the same ranking model can be used for both rel_q(., .) and rel_d(., .), we denote them separately in the architecture. In our experiments, the widely-used unsupervised ranking method BM25 (Robertson et al., 1995) serves as rel_q(., .); meanwhile, two state-of-the-art neural IR relevance matching models, namely DRMM (Guo et al., 2016) and K-NRM (Xiong et al., 2017), serve as the ranking method rel_d(., .). However, it is worth noting that in principle rel_q and rel_d can be replaced with any ranking method, and the above choices mainly aim to demonstrate the effectiveness of the NPRF framework.
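The three steps above can be condensed into a short sketch. The following is a minimal illustration rather than the authors' released implementation; rel_q and rel_d are placeholder callables for the initial ranker (e.g., BM25) and the document-to-document matcher (e.g., DRMM or K-NRM), and the direct-summation combination described later in Section 2.2 is assumed.

```python
# Minimal sketch of the NPRF scoring procedure; all names are illustrative.
def nprf_score(q, d, corpus, rel_q, rel_d, m=10):
    # Step 1: initial ranking -- the top-m feedback documents for q.
    D_q = sorted(corpus, key=lambda doc: rel_q(q, doc), reverse=True)[:m]

    # Step 2: document-to-document interactions with the target d.
    interactions = [rel_d(d_q, d) for d_q in D_q]

    # Step 3: weight each interaction by the (min-max normalized) initial
    # relevance of d_q to q, then combine (direct-summation variant).
    weights = [rel_q(q, d_q) for d_q in D_q]
    lo, hi = min(weights), max(weights)
    weights = [(w - lo) / (hi - lo + 1e-9) for w in weights]
    return sum(s * (0.5 + 0.5 * w) for s, w in zip(interactions, weights))
```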
Model Architecture
The NPRF framework begins with an initial ranking for the input query q determined by rel_q(., .), which forms D_q, the set of the top-m documents. The ultimate query-document relevance score rel_D(q, D_q, d) is computed as follows.

Extracting document interactions. Given the target document d and each feedback document d_q ∈ D_q, rel_d(., .) is used to evaluate the relevance between d and d_q, resulting in m real-valued relevance scores, where each score corresponds to the estimated relevance of d according to one feedback document d_q.

As mentioned, two NIRMs are separately used to compute rel_d(d_q, d) in our experiments. Both models take as input the cosine similarities between each pair of terms in d_q and d, which are computed using pre-trained word embeddings as explained in Section 3.1. Given that both models consider only unigram matches and do not consider term dependencies, we first summarize d_q by retaining only the top-k terms according to their tf-idf scores, which speeds up training by reducing the document size and removing noisy terms. In our pilot experiments, the use of top-k tf-idf document summarization did not influence performance. For different d_q ∈ D_q, the same model is used as rel_d(., .) for different pairs of (d_q, d) by sharing model weights.
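As a rough illustration of this summarization step, the sketch below keeps a feedback document's top-k terms by tf-idf. The exact tf-idf weighting used in the paper is not specified; the log-idf form, the doc_freq mapping, and the function name are assumptions.

```python
import math
from collections import Counter

def topk_tfidf_terms(doc_terms, doc_freq, num_docs, k=20):
    """Summarize a feedback document d_q by its k highest tf-idf terms
    before feeding it into rel_d. doc_freq maps term -> document
    frequency over the collection of num_docs documents."""
    tf = Counter(doc_terms)
    idf = {t: math.log(num_docs / (1 + doc_freq.get(t, 0))) for t in tf}
    ranked = sorted(tf, key=lambda t: tf[t] * idf[t], reverse=True)
    return ranked[:k]
```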
Combining document interactions. When determining the relevance of a target document d, there exist two sources of relevance signals to consider: the target document's relevance relative to the feedback documents D_q and its relevance relative to the query q itself. In this step, we combine rel_d(d_q, d) for each d_q ∈ D_q into an overall feedback document relevance score rel_D(q, D_q, d). When combining the relevance scores, the agreement between q and each d_q is also important, since d_q may differ from q in terms of information needs. The relevance of d_q from the initial ranking, rel_q(q, d_q), is employed to quantify this agreement and weight each rel_d(d_q, d) accordingly.

When computing such agreements, it is necessary to remove the influence of the absolute ranges of the scores from the initial ranker. For example, ranking scores from a language model (Ponte and Croft, 1998) and from BM25 (Robertson et al., 1995) can differ substantially in their absolute ranges. To mitigate this, we use a smoothed min-max normalization to rescale rel_q(q, d_q) into the range [0.5, 1]. The min-max normalization is applied by considering min(rel_q(q, d_q) | d_q ∈ D_q) and max(rel_q(q, d_q) | d_q ∈ D_q). Hereafter, rel_q(q, d_q) is used to denote this relevance score after min-max normalization for brevity. The normalized relevance score is smoothed and then used to weight the relevance evaluation of d_q, producing a weighted document relevance score rel_d'(d_q, d) for each d_q ∈ D_q that reflects the relevance of d_q relative to q. This computation is described in the following equation:

    rel_d'(d_q, d) = rel_d(d_q, d) * (0.5 + 0.5 * rel_q(q, d_q))    (1)

As the last step, we propose two variants for combining the rel_d'(d_q, d) for different d_q into a single score rel_D(q, D_q, d): (i) performing a direct summation and (ii) using a feed-forward network with a hyperbolic tangent (tanh) non-linear activation. Namely, the first variant simply sums up the scores, whereas the second takes the ranking positions of individual feedback documents into account.
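The normalization, the weighting of Equation (1), and the two combination variants can be sketched together as follows. The five-neuron hidden layer follows the configuration stated in Section 3.1; the module name, batching convention, and remaining details are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class Combine(nn.Module):
    """Sketch of the two combination variants: direct summation
    (NPRF-ds) and a one-hidden-layer feed-forward network with tanh
    (NPRF-ff)."""
    def __init__(self, m=10, variant="ds"):
        super().__init__()
        self.variant = variant
        self.ff = nn.Sequential(nn.Linear(m, 5), nn.Tanh(), nn.Linear(5, 1))

    def forward(self, rel_d_scores, rel_q_scores):
        # Smoothed min-max normalization of the initial-ranker scores.
        lo = rel_q_scores.min(dim=-1, keepdim=True).values
        hi = rel_q_scores.max(dim=-1, keepdim=True).values
        norm = (rel_q_scores - lo) / (hi - lo + 1e-9)
        weighted = rel_d_scores * (0.5 + 0.5 * norm)   # Equation (1)
        if self.variant == "ds":
            return weighted.sum(dim=-1)                # direct summation
        return self.ff(weighted).squeeze(-1)           # feed-forward

# usage: combine = Combine(m=10, variant="ff")
#        score = combine(rel_d_scores, rel_q_scores)  # shapes (batch, m)
```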
Optimization and Training
Each training sample consists of a query q, a set of m feedback documents D_q, a relevant target document d+ and a non-relevant target document d- according to the ground truth. The Adam optimizer (Kingma and Ba, 2014) is used with a learning rate of 0.001 and a batch size of 20. Training normally converges within 30 epochs, with weights uniformly initialized. A hinge loss is employed for training, as shown below:

    loss(q, D_q, d+, d-) = max(0, 1 - rel_D(q, D_q, d+) + rel_D(q, D_q, d-))
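A PyTorch sketch of this pairwise hinge loss, assuming the model outputs scalar scores rel_D for the positive and negative target documents; the surrounding training-loop names are hypothetical.

```python
import torch

def nprf_hinge_loss(score_pos, score_neg):
    """Pairwise hinge loss from Section 2.3:
    max(0, 1 - rel_D(q, D_q, d+) + rel_D(q, D_q, d-))."""
    return torch.clamp(1.0 - score_pos + score_neg, min=0.0).mean()

# Usage sketch (optimizer settings follow the paper: Adam, lr=0.001):
# opt = torch.optim.Adam(model.parameters(), lr=0.001)
# loss = nprf_hinge_loss(model(q, Dq, d_pos), model(q, Dq, d_neg))
```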
Evaluation
Evaluation Setup
Dataset.
We evaluate our proposed NPRF framework on two standard test collections, namely, TREC1-3 (Harman, 1993) and Robust04 (Voorhees, 2004). TREC1-3 consists of 741,856 documents with 150 queries used in the TREC 1-3 ad-hoc search tasks (Harman, 1993, 1994, 1995). Robust04 contains 528,155 documents and 249 queries used in the TREC 2004 Robust track (Voorhees, 2004). We use these two collections to balance between the number of queries and the TREC pooling depth, i.e., 100 on both collections, allowing for sufficient training data. Manual relevance judgments are available on both collections, where both the relevant and non-relevant documents are labeled for each query.
Two versions of queries are included in our experiments: a short keyword query (title query), and a longer description query that restates the corresponding keyword query's information need in natural language (description query). We evaluate each type of query separately using the metrics Mean Average Precision at 1,000 (MAP), Precision at 20 (P@20) (Manning et al., 2008), and NDCG@20 (Järvelin and Kekäläinen, 2002).

Preprocessing. Stopword removal and Porter's stemmer are applied (Manning et al., 2008). The word embeddings are pre-trained on a pool of the top 2,000 documents returned by BM25 for individual queries, as suggested by (Diaz et al., 2016). The implementation of Word2Vec¹ from (Mikolov et al., 2013) is employed. In particular, we employ CBOW with the dimension set to 300, window size to 10, minimum count to 5, and a subsampling threshold of 10^-3. The CBOW model is trained for 10 iterations on the target corpus (a training sketch is given after the model list below).

Unsupervised ranking models serve as baselines for comparison. We use the open-source Terrier platform's (Macdonald et al., 2012) implementation of these ranking models:
-BM25 (Robertson et al., 1995), a classical probabilistic model, is employed as an unsupervised baseline. The hyper-parameters b and k1 are tuned by grid search. As mentioned in Section 2.1, BM25 also generates the initial rankings D_q, serving as rel_q(., .) in the NPRF framework.
-On top of BM25, we use an adapted version of Rocchio's query expansion (Ye et al., 2009), denoted as BM25+QE. Note that, as demonstrated in the results, BM25+QE's performance is comparable with that of the base neural IR models, including DRMM, K-NRM and PACRR. This illustrates the difficulty in making improvements on the TREC benchmarks through the use of deep learning methods. The hyper-parameters, including the number of feedback documents and the number of expansion terms, are optimized using grid search on training queries.
-In addition, QL+RM3, the query likelihood language model with the popular RM3 PRF (Lavrenko and Croft, 2001), is used as another unsupervised baseline.

Neural IR models are used for rel_d(., .). As mentioned in Section 2.1, two unigram neural IR models are employed in our experiments:
-DRMM. We employ the variant with the best effectiveness on Robust04 according to (Guo et al., 2016), namely DRMM_LCH×IDF, with the original configuration.
-K-NRM. Due to the lack of training data compared with the commercial data used by (Xiong et al., 2017), we employ a K-NRM variant with a frozen word embedding layer. To compensate for this substantial reduction in the number of learnable weights, we add an additional fully connected layer to the model. These changes lead to a small but competitive K-NRM variant, as demonstrated in (Hui et al., 2018).
-We additionally implement PACRR for the purpose of performing comparisons, but do not use PACRR to compute rel_d(., .) due to the computational costs. In particular, PACRR-firstk is employed, where the first 1,000 terms are used to compute the similarity matrices, and the original configuration from (Hui et al., 2017) is used.
-NIRM(QE) uses the modified query generated by the query expansion of BM25+QE (Ye et al., 2009) as input to the neural IR model. Both DRMM and K-NRM are used to instantiate NIRM(QE).
-Variants of the proposed NPRF approach. As indicated in Section 2.2, NPRF includes two variants that differ in the combination of the relevance scores from different d_q ∈ D_q: the variant NPRF-ff uses a feed-forward network with a hidden layer of five neurons to compute rel_D(q, D_q, d), and the other variant NPRF-ds performs a direct summation of the different relevance scores. For the purposes of comparison, we additionally introduce another variant, NPRF-ff', where the relevance of d_q to q is not considered in the combination by directly setting rel_d'(d_q, d) = rel_d(d_q, d) in place of Equation (1), thereafter combining the scores with a fully connected layer as in NPRF-ff. We combine each of the three NPRF variants with the DRMM and K-NRM models, and report results for all six variants. Our implementation of the NPRF framework is available to enable future comparisons.²
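Under the preprocessing settings above, the embedding pretraining could look like the following sketch, which uses gensim's Word2Vec as a stand-in for the original word2vec tool; the variable pseudo_feedback_docs, standing for tokenized documents pooled from the top-2,000 BM25 results, is hypothetical.

```python
from gensim.models import Word2Vec

# Hypothetical stand-in corpus; in the paper this pool is the top-2,000
# BM25 documents per query, tokenized after stopping and stemming.
pseudo_feedback_docs = [["airport", "security", "baggage", "passenger"]] * 200

w2v = Word2Vec(
    sentences=pseudo_feedback_docs,
    vector_size=300,   # embedding dimension
    window=10,         # context window size
    min_count=5,       # minimum term frequency
    sample=1e-3,       # subsampling threshold
    sg=0,              # sg=0 selects CBOW (sg=1 would be skip-gram)
    epochs=10,         # 10 training iterations
)
```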
Akin to (Xiong et al., 2017), the NIRM baselines and the proposed NPRF are employed to re-rank the search results from BM25. In particular, the top-10 documents from the unsupervised baseline are used as the pseudo relevance feedback documents D_q as input for NPRF, where each d_q ∈ D_q is represented by its top-20 terms with the highest tf-idf weights. As illustrated later in Section 3.3, NPRF's performance is stable over a wide range of settings for both parameters.

Cross-validation. Akin to (Hui et al., 2018), owing to the limited amount of labeled data, five-fold cross-validation is used to report the results by randomly splitting all queries into five equal partitions. In each fold, three partitions are used for training, one for validation, and one for testing. The model with the best MAP on the validation set is selected. We report the average performance on all test partitions. A two-tailed paired t-test is used to report statistical significance at the 95% confidence interval.
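The five-fold protocol can be sketched as follows; rotating the validation partition is an assumption, since the paper only states that three partitions train, one validates, and one tests in each fold.

```python
import random

def five_fold_splits(query_ids, seed=0):
    """Randomly split queries into five equal partitions; in each fold,
    three partitions train, one validates, and one tests."""
    ids = list(query_ids)
    random.Random(seed).shuffle(ids)
    parts = [ids[i::5] for i in range(5)]
    for k in range(5):
        test, valid = parts[k], parts[(k + 1) % 5]
        train = [q for i, p in enumerate(parts)
                 if i not in (k, (k + 1) % 5) for q in p]
        yield train, valid, test
```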
Results
Comparison to BM25. We first compare the proposed NPRF models with the unsupervised BM25. The results are summarized in Tables 1 and 2, where the best result in each column is highlighted in bold. From Tables 1 and 2, it can be seen that the proposed NPRF variants obtain significant improvements relative to BM25 on both test collections with both kinds of test queries. Moreover, the results imply that the use of different query types does not affect the effectiveness of NPRF, which consistently outperforms BM25.

Comparison to neural IR models. NPRF is further compared with different neural IR models, as summarized in Tables 3 and 4. It can be seen that NPRF regularly improves on top of the NIRM baselines. For both types of queries, NPRF-DRMM outperforms DRMM and NPRF-KNRM outperforms K-NRM when re-ranking BM25. Remarkably, the proposed NPRF is able to improve the weaker NIRM baseline. For instance, on Robust04, when using the description queries, DRMM and K-NRM obtain highly different results, with MAPs of 0.2630 and 0.1687 after re-ranking the initial results from BM25, respectively. When NPRF is used in conjunction with the NIRM models, however, the gap between the two models is closed; that is, MAP=0.2801 for NPRF-ds-DRMM and MAP=0.2800 for NPRF-ds-KNRM (see Table 4). This finding highlights that our proposed NPRF is robust with respect to the use of the two embedded NIRM models. A possible explanation for the poor performance of K-NRM on the two TREC collections is the lack of training data, as suggested in (Dai et al., 2018). While K-NRM could be improved by introducing weak supervision (Dai et al., 2018), we achieve the same goal by incorporating pseudo relevance feedback information without extra training data.
While the six NPRF variants exhibit similar results across both kinds of queries, NPRF-ds-DRMM in general achieves the best performance on Robust04, and NPRF-ds-KNRM appears to be the best variant on TREC1-3. In the meantime, NPRF-ds outperforms the NPRF-ff variants. One difference between the two methods is that NPRF-ff considers the position of each d_q in the D_q ranked documents, whereas NPRF-ds simply sums up the scores regardless of the positions. The fact that NPRF-ds performs better suggests that the ranking position within the D_q documents may not be a useful signal. In the remainder of this paper, we mainly report the results obtained by NPRF-ds.
Comparison to query expansion baselines. In Table 5, the proposed NPRF model is compared with three kinds of query expansion baselines, namely, the unsupervised BM25+QE (Ye et al., 2009), QL+RM3 (Lavrenko and Croft, 2001), and DRMM/K-NRM(QE), the neural IR models using expanded queries as input. According to Table 5, the unsupervised BM25+QE baseline appears to achieve better performance in terms of MAP@1k, owing to its use of query expansion to match relevant documents containing the expansion terms from the whole collection. On the other hand, NPRF-ds, which re-ranks the top-1,000 documents returned by BM25, outperforms the query expansion baselines in terms of early precision, as measured by either NDCG@20 or P@20. These measures on shallow rankings are particularly important for general IR applications, where the quality of the top-ranked results is crucial to user satisfaction. Moreover, our NPRF outperforms NIRM(QE) in most cases, indicating the benefit brought by wrapping the feedback information in a document-to-document matching framework as in NPRF, as opposed to directly adding unweighted expansion terms to the query. Recall that it is not straightforward to incorporate these expanded terms within the existing NIRMs' architectures, because the NIRMs do not distinguish between them and the original query terms.
Analysis
Parameter sensitivity. Moreover, we analyze factors that may influence NPRF's performance. We report results on NPRF-ds using title queries on Robust04 for the sake of brevity, but similar observations also hold for the other NPRF variants, as well as on TREC1-3. Figure 2 illustrates the sensitivity of NPRF relative to two parameters: the number of feedback documents m within D_q and the number of terms k that are used to summarize each d_q ∈ D_q. Specifically, Figure 2 shows the performance of NPRF-ds as the number of feedback documents m varies (top), and as the number of top terms k varies (bottom). The effectiveness of NPRF appears to be stable over a wide range of parameter configurations, where the proposed model consistently outperforms the BM25 baseline.

Case study. A major advantage of the proposed NPRF over existing neural IR models is that it allows for soft-matching query-related terms that are missing from both the query and the target document. Table 6 presents an illustrative example of soft matching in NPRF. From Table 6, it can be seen that there exist query-related terms in the top-10 documents returned by BM25 in the initial ranking. However, since those query-related terms are missing in both the query and the target document, they are not considered in the document-to-query matching and, consequently, the target document is ranked 122nd by BM25 despite the fact that it was judged relevant by a human assessor. In contrast, the NPRF framework allows for the soft-matching of terms that are missing in both the query and target document. As a result, the matching signals for the query terms and query-related terms in the target document are enhanced. This leads to enhanced effectiveness, with the target document now ranked in the 5th position.
In summary, the evaluation on two standard TREC test collections shows promising results obtained by our proposed NPRF approach, which outperforms state-of-the-art neural IR models in most cases. Overall, NPRF provides effective retrieval performance that is robust with respect to the two embedded neural models used for encoding the document-to-document interactions, the two kinds of queries with varied length, and a wide range of parameter configurations.

Table 3: Comparisons between NPRF and neural IR models on TREC1-3. Relative performances of NPRF-DRMM(KNRM) compared with DRMM (K-NRM) are in percentages, and statistically significant improvements are marked with †.
Related Work
Recently, several neural IR models (NIRMs) have been proposed to apply deep learning techniques in ad-hoc information retrieval. One of the essential ideas from prior work is to model the document-to-query interaction via neural networks, based on a matrix of document-to-query embedding term similarities, incorporating both the "exact matching" of terms appearing in both the document and query and the "soft matching" of different query and document term pairs that are semantically related.
DSSM, one of the earliest NIRMs, proposed in (Huang et al., 2013), employs a multi-layer neural network to project queries and documents into a common semantic space. The cosine similarity between a query and a document (document title) is used to produce a final relevance score for the query-document pair. CDSSM is a convolutional version of DSSM, which uses a convolutional neural network (CNN) and a max-pooling strategy to extract semantic matching features at the sentence level (Shen et al., 2014). (Pang et al., 2016) also employ a CNN to construct the MatchPyramid model, which learns hierarchical matching patterns between local interactions of a document-query pair. (Guo et al., 2016) argue that both DSSM and CDSSM are representation-focused models, and thus are better suited to capturing semantic matching than relevance matching (i.e., lexical matching), and propose the interaction-focused relevance model named DRMM. DRMM maps the local interactions between a query-document pair into a fixed-length histogram, from which the exact matching signals are distinguished from the other matching signals. These signals are fed into a feed-forward network and a term gating network to produce global relevance scores. Similar to DRMM, K-NRM (Xiong et al., 2017) builds its model on top of a matrix of local interaction signals, and utilizes multiple Gaussian kernels to obtain multi-level exact/soft matching features that are input into a ranking layer to produce the final ranking score. K-NRM is later improved by Conv-KNRM, which employs CNN filters to capture n-gram representations of queries and documents (Dai et al., 2018). DeepRank (Pang et al., 2017) models the relevance generation process by identifying query-centric contexts, processing them with a CNN or LSTM, and aggregating them to produce a final relevance score. Building upon DeepRank, (Fan et al., 2018) propose to model diverse relevance patterns by a data-driven method to allow relevance signals at different granularities to compete with each other for the final relevance assessment.

Table 5: Comparisons between NPRF and query expansion baselines on TREC1-3 and Robust04. Significant improvements over the best baseline are marked with †.

Table 6: An illustrative example of soft matching in NPRF. The target document FBIS3-23332, judged relevant, is ranked 122nd by BM25 for query 341 on Robust04, and is promoted to 5th by NPRF-ds-DRMM. The NPRF mechanism increases the chances of soft-matching query-related terms that appear in the top-ranked documents but are missing in both the query and the target document. Subsequently, the matching signals with the query terms and the query-related terms in the target document are enhanced.

TREC Query 341: airport security

Terms in the document at rank i (top-10 returned by BM25):
1. terrorist detect passenger check police scan
2. heathrow terrorist armed aviation police
3. detect airline passenger police scan flight weapon
4. aviation
5. detect baggage passenger
6. passenger bomb baggage terrorist explosive aviation scan flight weapon
7. baggage airline detect passenger scan flight weapon
8. baggage airline passenger flight
9. passenger police aviation
10. airline baggage aviation flight

Terms in target document FBIS3-23332: transec semtex airline ditma security baggage heathrow test device lockerbie klm bomb virgin airport loaded blobby transport detect inspector terrorist identify atlantic depressing passenger fail aircraft dummy check inert patchy stein norwich doll regard rupert lapse busiest loophole employee campaign blew procedure traveler passport reconcile glasgow investigate boeing bags bag harry successive smuggle conscious reconciliation tragedy board wire hidden...
Duet (Mitra et al., 2017) employs two separate deep neural networks to build a relevance ranking model, in which a local model estimates the relevance score according to exact matches between query and document terms, and a distributed model estimates relevance by learning dense lower-dimensional representations of query and document text. (Zamani et al., 2018) extend the Duet model by considering different fields within a document. (Hui et al., 2017) propose the PACRR model based on the idea that an appropriate combination of convolutional kernels and pooling operations can be used to successfully identify both unigram and n-gram query matches. PACRR is later improved upon by Co-PACRR, a context-aware variant that takes the local and global context of matching signals into account through the use of three new components (Hui et al., 2018). (Ran et al., 2017) propose a document-based neural relevance model that utilizes complemented medical records to address the mismatch problem in clinical decision support. (Nogueira and Cho, 2017) propose a reinforcement learning approach to reformulating a task-specific query. (Li et al., 2018) propose DAZER, a CNN-based neural model built upon interactions between seed words and words in a document for zero-shot document filtering with adversarial learning. (Ai et al., 2018) propose to refine document ranking by learning a deep listwise context model.
In summary, most existing neural IR models are based on query-document interaction signals and do not provide a mechanism for incorporating relevance feedback information. This work proposes an approach for incorporating relevance feedback information by embedding neural IR models within a neural pseudo relevance feedback framework, where the models consume feedback information via document-to-document interactions.
Conclusions
In this work, we proposed a neural pseudo relevance feedback framework (NPRF) for incorporating relevance feedback information into existing neural IR models (NIRMs). The NPRF framework uses feedback documents to better estimate relevance scores by considering individual feedback documents as different interpretations of the user's information need. On two standard TREC datasets, NPRF significantly improves the performance of two state-of-the-art NIRMs. Furthermore, NPRF was able to improve their performance across the two kinds of queries tested (namely, short keyword queries and verbal description queries in natural language). Finally, our analysis demonstrated the robustness of the NPRF framework over different parameter configurations.
Figure 1: Architecture of the proposed neural pseudo relevance feedback (NPRF) framework.

Figure 2: Performance of NPRF-ds with different numbers of PRF documents (top) and different numbers of terms used to summarize the feedback documents (bottom). The three marker shapes correspond to results measured by MAP, P@20 and NDCG@20, respectively, and the empty or solid symbols correspond to NPRF-ds-DRMM and NPRF-ds-KNRM. The three dotted lines, from bottom to top, are the BM25 baseline evaluated by MAP, P@20 and NDCG@20, respectively.
Table 1: Comparisons between NPRF and BM25 on the TREC1-3 dataset. Relative performances compared with BM25 are in percentages. Significant improvements relative to the baselines are marked with †.

| Model         | Title MAP        | Title P@20       | Title NDCG@20   | Desc. MAP        | Desc. P@20       | Desc. NDCG@20    |
| BM25          | 0.2408 (-)       | 0.4803 (-)       | 0.4947 (-)      | 0.2094 (-)       | 0.4613 (-)       | 0.4838 (-)       |
| NPRF-ff-DRMM  | 0.2669† (10.85%) | 0.5010 (4.31%)   | 0.5119 (3.47%)  | 0.2509† (19.80%) | 0.5257† (13.95%) | 0.5393† (11.46%) |
| NPRF-ff'-DRMM | 0.2671† (10.93%) | 0.5023† (4.59%)  | 0.5116 (3.42%)  | 0.2504† (19.58%) | 0.5163† (11.93%) | 0.5291† (9.37%)  |
| NPRF-ds-DRMM  | 0.2698† (12.03%) | 0.5187† (7.99%)  | 0.5282† (6.77%) | 0.2527† (20.67%) | 0.5283† (14.53%) | 0.5444† (12.52%) |
| NPRF-ff-KNRM  | 0.2633† (9.34%)  | 0.5033 (4.80%)   | 0.5171 (4.52%)  | 0.2486† (18.71%) | 0.5240† (13.59%) | 0.5398† (11.58%) |
| NPRF-ff'-KNRM | 0.2654† (10.22%) | 0.5077† (5.70%)  | 0.5216† (5.44%) | 0.2462† (17.60%) | 0.5197† (12.65%) | 0.5363† (10.84%) |
| NPRF-ds-KNRM  | 0.2707† (12.41%) | 0.5303† (10.42%) | 0.5406† (9.29%) | 0.2505† (19.61%) | 0.5270† (14.24%) | 0.5460† (12.87%) |
Table 2: Comparisons between NPRF and BM25 on the Robust04 dataset. Relative performances compared with BM25 are in percentages. Significant improvements relative to the baselines are marked with †.

| Model         | Title MAP        | Title P@20       | Title NDCG@20   | Desc. MAP        | Desc. P@20       | Desc. NDCG@20    |
| BM25          | 0.2533 (-)       | 0.3612 (-)       | 0.4158 (-)      | 0.2479 (-)       | 0.3514 (-)       | 0.4110 (-)       |
| NPRF-ff-DRMM  | 0.2823† (11.46%) | 0.3941† (9.11%)  | 0.4350† (4.62%) | 0.2766† (11.58%) | 0.3908† (11.21%) | 0.4421† (7.56%)  |
| NPRF-ff'-DRMM | 0.2837† (12.00%) | 0.3928† (8.74%)  | 0.4377† (5.27%) | 0.2774† (11.90%) | 0.3984† (13.38%) | 0.4493† (9.32%)  |
| NPRF-ds-DRMM  | 0.2904† (14.66%) | 0.4064† (12.52%) | 0.4502† (8.28%) | 0.2801† (12.95%) | 0.4026† (14.57%) | 0.4559† (10.92%) |
| NPRF-ff-KNRM  | 0.2809† (10.90%) | 0.3851† (6.62%)  | 0.4287 (3.11%)  | 0.2720† (9.71%)  | 0.3867† (10.06%) | 0.4356† (5.99%)  |
| NPRF-ff'-KNRM | 0.2815† (11.13%) | 0.3882† (7.48%)  | 0.4264 (2.55%)  | 0.2737† (10.39%) | 0.3892† (10.74%) | 0.4382† (6.61%)  |
| NPRF-ds-KNRM  | 0.2846† (12.36%) | 0.3926† (8.69%)  | 0.4327 (4.06%)  | 0.2800† (12.95%) | 0.3972† (13.03%) | 0.4477† (8.94%)  |
Table 4: Comparisons between NPRF and neural IR models on Robust04. Relative performances of NPRF-DRMM(KNRM) compared with DRMM (K-NRM) are in percentages, and statistically significant improvements are marked with †.
1 https://code.google.com/p/word2vec/
2 https://github.com/ucasir/NPRF
Acknowledgments

This work is supported in part by the National Natural Science Foundation of China (61433015/61472391), and the Beijing Natural Science Foundation under Grant No. 4162067/4142050.
References

Qingyao Ai, Keping Bi, Jiafeng Guo, and W. Bruce Croft. 2018. Learning a deep listwise context model for ranking refinement. In SIGIR, pages 135-144. ACM.

Chris Buckley and Stephen E. Robertson. 2008. Relevance feedback track overview: TREC 2008. In TREC. NIST.

Zhuyun Dai, Chenyan Xiong, Jamie Callan, and Zhiyuan Liu. 2018. Convolutional neural networks for soft-matching n-grams in ad-hoc search. In WSDM, pages 126-134. ACM.

Fernando Diaz, Bhaskar Mitra, and Nick Craswell. 2016. Query expansion with locally-trained word embeddings. In ACL. The Association for Computer Linguistics.

Yixing Fan, Jiafeng Guo, Yanyan Lan, Jun Xu, Chengxiang Zhai, and Xueqi Cheng. 2018. Modeling diverse relevance patterns in ad-hoc retrieval. In SIGIR, pages 375-384. ACM.

Jiafeng Guo, Yixing Fan, Qingyao Ai, and W. Bruce Croft. 2016. A deep relevance matching model for ad-hoc retrieval. In CIKM, pages 55-64. ACM.

Donna Harman. 1993. Overview of the first text retrieval conference. In SIGIR, pages 36-47. ACM.

Donna Harman. 1994. Overview of the third text retrieval conference (TREC-3). In TREC, volume Special Publication 500-225, pages 1-20. National Institute of Standards and Technology (NIST).

Donna Harman. 1995. Overview of the second text retrieval conference (TREC-2). Inf. Process. Manage., 31(3):271-289.

Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry P. Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In CIKM, pages 2333-2338. ACM.

Kai Hui, Andrew Yates, Klaus Berberich, and Gerard de Melo. 2017. PACRR: A position-aware neural IR model for relevance matching. In EMNLP, pages 1049-1058. Association for Computational Linguistics.

Kai Hui, Andrew Yates, Klaus Berberich, and Gerard de Melo. 2018. Co-PACRR: A context-aware neural IR model for ad-hoc retrieval. In WSDM, pages 279-287. ACM.

Kalervo Järvelin and Jaana Kekäläinen. 2002. Cumulated gain-based evaluation of IR techniques. ACM Trans. Inf. Syst., 20(4):422-446.

Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.

Victor Lavrenko and W. Bruce Croft. 2001. Relevance-based language models. In SIGIR, pages 120-127. ACM.

Chenliang Li, Wei Zhou, Feng Ji, Yu Duan, and Haiqing Chen. 2018. A deep relevance model for zero-shot document filtering. In ACL, pages 2300-2310. The Association for Computer Linguistics.

Craig Macdonald, Richard McCreadie, Rodrygo Santos, and Iadh Ounis. 2012. From puppy to maturity: Experiences in developing Terrier. In OSIR at SIGIR, pages 60-63.

Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze. 2008. Introduction to Information Retrieval. Cambridge University Press.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In NIPS, pages 3111-3119.

Bhaskar Mitra, Fernando Diaz, and Nick Craswell. 2017. Learning to match using local and distributed representations of text for web search. In WWW, pages 1291-1299. ACM.

Rodrigo Nogueira and Kyunghyun Cho. 2017. Task-oriented query reformulation with reinforcement learning. In EMNLP, pages 574-583. Association for Computational Linguistics.

Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, and Xueqi Cheng. 2016. A study of MatchPyramid models on ad-hoc retrieval. CoRR, abs/1606.04648.

Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Jingfang Xu, and Xueqi Cheng. 2017. DeepRank: A new deep architecture for relevance ranking in information retrieval. In CIKM, pages 257-266. ACM.

Jay M. Ponte and W. Bruce Croft. 1998. A language modeling approach to information retrieval. In SIGIR, pages 275-281. ACM.

Yanhua Ran, Ben He, Kai Hui, Jungang Xu, and Le Sun. 2017. A document-based neural relevance model for effective clinical decision support. In BIBM, pages 798-804. IEEE Computer Society.

Stephen E. Robertson, Steve Walker, Micheline Hancock-Beaulieu, Mike Gatford, and A. Payne. 1995. Okapi at TREC-4. In TREC, volume Special Publication 500-236. National Institute of Standards and Technology (NIST).

J. Rocchio. 1971. Relevance feedback in information retrieval. In Gerard Salton, editor, The SMART Retrieval System: Experiments in Automatic Document Processing, pages 313-323. Prentice Hall, Englewood Cliffs, New Jersey.

Yelong Shen, Xiaodong He, Jianfeng Gao, Li Deng, and Grégoire Mesnil. 2014. Learning semantic representations using convolutional neural networks for web search. In WWW (Companion Volume), pages 373-374. ACM.

Ellen M. Voorhees. 2004. Overview of the TREC 2004 robust track. In TREC, volume Special Publication 500-261. National Institute of Standards and Technology (NIST).

Chenyan Xiong, Zhuyun Dai, Jamie Callan, Zhiyuan Liu, and Russell Power. 2017. End-to-end neural ad-hoc ranking with kernel pooling. In SIGIR, pages 55-64. ACM.

Zheng Ye, Xiangji Huang, Ben He, and Hongfei Lin. 2009. York University at TREC 2009: Relevance feedback track. In TREC, volume Special Publication 500-278. National Institute of Standards and Technology (NIST).

Hamed Zamani, Bhaskar Mitra, Xia Song, Nick Craswell, and Saurabh Tiwary. 2018. Neural ranking models with multiple document fields. In WSDM, pages 700-708. ACM.
| [
"https://github.com/ucasir/NPRF"
] |
[
"BASPRO: a balanced script producer for speech corpus collection based on the genetic algorithm",
"BASPRO: a balanced script producer for speech corpus collection based on the genetic algorithm"
] | [
"Yu-Wen Chen ",
"Hsin-Min Wang ",
"Yu Tsao "
] | [] | [] | The performance of speech-processing models is heavily influenced by the speech corpus that is used for training and evaluation. In this study, we propose BAlanced Script PROducer (BASPRO) system, which can automatically construct a phonetically balanced and rich set of Chinese sentences for collecting Mandarin Chinese speech data. First, we used pretrained natural language processing systems to extract ten-character candidate sentences from a large corpus of Chinese news texts. Then, we applied a genetic algorithm-based method to select 20 phonetically balanced sentence sets, each containing 20 sentences, from the candidate sentences. Using BASPRO, we obtained a recording script called TMNews, which contains 400 tencharacter sentences. TMNews covers 84% of the syllables used in the real world. Moreover, the syllable distribution has 0.96 cosine similarity to the real-world syllable distribution. We converted the script into a speech corpus using two text-to-speech systems. Using the designed speech corpus, we tested the performances of speech enhancement (SE) and automatic speech recognition (ASR), which are one of the most important regression-and classificationbased speech processing tasks, respectively. The experimental results show that the SE and ASR models trained on the designed speech corpus outperform their counterparts trained on a randomly composed speech corpus.Index terms -corpus design, Mandarin Chinese speech corpus, phonetically balanced and rich corpus, recording script design, genetic algorithm. | 10.1561/116.00000155 | [
"https://export.arxiv.org/pdf/2301.04120v1.pdf"
] | 255,569,877 | 2301.04120 | 1af280327d487487e826aed393e1812829144568 |
BASPRO: a balanced script producer for speech corpus collection based on the genetic algorithm

Yu-Wen Chen
Hsin-Min Wang
Yu Tsao

Abstract

The performance of speech-processing models is heavily influenced by the speech corpus that is used for training and evaluation. In this study, we propose the BAlanced Script PROducer (BASPRO) system, which can automatically construct a phonetically balanced and rich set of Chinese sentences for collecting Mandarin Chinese speech data. First, we used pretrained natural language processing systems to extract ten-character candidate sentences from a large corpus of Chinese news texts. Then, we applied a genetic algorithm-based method to select 20 phonetically balanced sentence sets, each containing 20 sentences, from the candidate sentences. Using BASPRO, we obtained a recording script called TMNews, which contains 400 ten-character sentences. TMNews covers 84% of the syllables used in the real world. Moreover, the syllable distribution has 0.96 cosine similarity to the real-world syllable distribution. We converted the script into a speech corpus using two text-to-speech systems. Using the designed speech corpus, we tested the performances of speech enhancement (SE) and automatic speech recognition (ASR), which are among the most important regression- and classification-based speech processing tasks, respectively. The experimental results show that the SE and ASR models trained on the designed speech corpus outperform their counterparts trained on a randomly composed speech corpus.

Index terms: corpus design, Mandarin Chinese speech corpus, phonetically balanced and rich corpus, recording script design, genetic algorithm.
I. Introduction
Speech corpus plays a crucial role in the performance of speech-processing models. The speech corpus that is used to train and evaluate these models significantly affects their performance in real-world environments. Recently, massive amounts of data have been generated and collected. Therefore, models are often trained using a large amount of data to achieve better performance. However, not all research institutions can support such computing resources. Furthermore, the use of large amounts of data in listening tests to evaluate models is expensive and time-consuming. Moreover, for personalizing models, the amount of data that can be collected from new users is often limited. Therefore, a representative speech corpus is essential for training and testing.

Yu-Wen Chen is with the Research Center for Information Technology Innovation, Academia Sinica, Taiwan, and the Department of Computer Science, Columbia University, New York, United States. Yu Tsao is with the Research Center for Information Technology Innovation, Academia Sinica, Taiwan. Hsin-Min Wang is with the Institute of Information Science, Academia Sinica, Taiwan.
A representative speech corpus is often referred to as a phonetically balanced or rich corpus. Phonetic balance means that the frequencies of phonemes in the corpus are distributed as close as possible to the frequencies in real-world conditions, and a phonetically rich corpus implies that the dataset should cover as many allowed phonemes as possible. In previous studies, researchers have developed corpora of this type for multiple languages, such as Amharic [3], Arabic [4], Bangla [5], Urdu [6], Thai [7], Turkish [8], Mexican Spanish [9], Romanian [10] and Chinese [11]- [13].
Previously, phonetically balanced and rich corpora were designed by experts with linguistic backgrounds [14]-[16]. The experts manually wrote or chose sentences that could form a phonetically balanced corpus. However, creating a phonetically balanced and rich corpus in this manner is time-consuming and difficult. In addition, sentences written by the same person tend to be similar and lack variation. Moreover, this method cannot be used to generate corpora for specific knowledge domains.
Automatic methods have also been proposed in addition to manual development. Automatic methods usually begin with a large collection of sentences; an algorithm then selects sentences from the collection to form a corpus that meets the desired requirements. Selecting the desired set of sentences is an NP-hard set-covering optimization problem. In other words, evaluating all possible sets of sentences is computationally too complex to be solved within an acceptable time. To automatically compose a phonetically balanced corpus, [10] and [17] proposed randomly sampling and evaluating sentence groups and choosing the one that best meets the requirements. [3] and [11] proposed two-stage methods: the first stage selects important sentences that contain as many syllables as possible or consist of units that appear less frequently in the corpus; the second stage selects sentences that can achieve the desired statistical distribution. Additionally, [18] used the perplexity of each sentence as an indicator to generate a corpus. Most automatic methods are based on greedy algorithms [5], [6], [8], [12], [13]. Genetic algorithms (GA), a well-known approach for solving NP-hard problems, on the other hand, have not received much attention in speech corpus development.
In [19], the authors proposed a GA-based method to automatically form a phonetically balanced Chinese word list; nevertheless, this study focused on word lists rather than sentence lists. Only a few previous studies have used GA to automatically select sentence sets [20], [21]. Moreover, these GA-based methods focus on phonetic and prosodic enrichment rather than phonetic balance and enrichment. The development of GA-based Chinese speech corpora has not yet been thoroughly investigated.
Mandarin Chinese is a tonal syllabic language with five different tones, including four main tones and a neutral tone. Syllables that do not consider tone are denoted as base syllables. On the other hand, syllables that consider the tonal information are referred to as tonal syllables. Each syllable comprises an INITIAL (consonant) and a FINAL (vowel) and is represented by the pinyin system. The INITIAL and FINAL can be further decomposed into smaller acoustic units such as phonemes. Compared to phonemes, syllables are more intuitive to Mandarin Chinese speakers and are used more frequently. Therefore, we developed a tonal syllable-balanced and -rich (hereafter referred to as syllable-balanced) corpus to represent a phonetically balanced and rich corpus.
In this study, we propose an automatic method called BAlanced Script PROducer (BASPRO) 1 to compose a syllable-balanced Mandarin Chinese speech corpus. First, BASPRO uses pretrained natural language processing (NLP) systems to extract candidate sentences from a huge Chinese news text corpus. Subsequently, a syllable-balanced recording script is generated using a GA-based method. Finally, the script is converted into a speech corpus using two text-to-speech (TTS) systems. The syllable-balanced recording script developed in this study is called TMNews 2 because the sentences in the script are drawn from Mandarin Chinese news articles collected in Taiwan.
The contributions of this study are as follows.
• We propose BASPRO, which uses machine-learning-based NLP tools to process and extract candidate sentences from a collection of news articles.
• BASPRO employs a GA-based method to form a syllable-balanced recording script from the candidate sentences. Experimental results show that the proposed BASPRO system can effectively select sentences according to the designed optimization criteria.
• The proposed BASPRO system is flexible in terms of language, data domain, and script size. In addition, it allows the generated script to have multiple sets, each satisfying the desired requirements. For example, in this work, each of the 20 sets is syllable-balanced, and the sentences do not overlap between sets.
• We analyze the performance of speech-processing models trained on syllable-balanced (produced by BASPRO) and randomly composed speech corpora. Experimental results show that the speech-processing models trained on the syllable-balanced corpus perform better than those trained on the randomly composed corpus.
II. The Proposed BASPRO System
The proposed BASPRO system consists of three main phases: data processing, script-composing, and postprocessing. The input is articles crawled from the Internet, and the output is a syllable-balanced recording script. Speech corpora can be generated from recording scripts using TTS systems or by asking people to make recordings. Figure 1 shows a schematic of the BASPRO system. First, the data processing phase extracts candidate sentences from the collected news articles. Simultaneously, the syllable distribution of the collected articles is calculated, which is denoted as the real-world syllable distribution. The script-composing phase then generates a temporary syllable-balanced script from the candidate sentences. Finally, the postprocessing phase replaces unwanted sentences in the temporary script and produces the final script.

Fig. 1. Schematic diagram of the proposed BASPRO system. In the data processing phase, candidate sentences are extracted from the collected articles. The script-composing phase uses the real-world syllable distribution to compose a syllable-balanced script from the candidate sentences. Finally, the postprocessing phase replaces unwanted sentences in the script and produces the final script.
A. Data processing
In the data processing phase, the input is news articles crawled from the Internet, and the output is candidate sentences. All sentences in the recording script are selected from the candidate sentences. We used five filters in the data processing phase to extract candidate sentences: (1) general, (2) sensitive word, (3) part-of-speech (POS), (4) perplexity, and (5) intelligibility filters. The general filter removes sentences with non-Chinese characters and keeps sentences with exactly ten characters. The sensitive word filter then removes sentences containing sensitive words. In this study, we let the sentences have a fixed length and excluded sentences containing sensitive words, as these settings are often required for listening tests. In addition, we designed a POS filter, a perplexity filter, and an intelligibility filter to filter out incomprehensible sentences. Because the resulting corpus will be used for listening tasks, we do not want any sentences to be difficult to understand and thus affect the evaluation results.
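As an illustration, the general filter can be realized with a single regular expression. This is a minimal sketch, not the exact implementation in the released toolkit; the CJK Unicode range used here is an approximation of "Chinese characters only."

```python
import re

# Exactly ten characters from the CJK Unified Ideographs block; this range
# approximates "Chinese characters only" for illustration purposes.
TEN_CHAR_PATTERN = re.compile(r"[\u4e00-\u9fff]{10}")

def general_filter(sentences):
    """Keep ten-character sentences containing no non-Chinese symbols."""
    return [s for s in sentences if TEN_CHAR_PATTERN.fullmatch(s)]
```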
The POS is a category of lexical items with similar grammatical properties. Words assigned to the same POS often play similar roles in the grammatical structure of a sentence. We used POS as an indicator to exclude sentences that may not be suitable for listening tests. For example, a sentence containing a proper noun may be difficult to understand for someone who has never heard the word before, leading to a personal bias in listening tests. Meanwhile, sentences that start with a preposition, particle, or conjunction, and sentences that end with a preposition or conjunction are also inappropriate because they are usually not complete sentences. Therefore, we used two pretrained POS tagging systems to tag candidate sentences and remove sentences that met the above POS-based removal criteria.
Perplexity (PPL) is defined as the model's uncertainty regarding a sentence. Higher perplexity indicates that a sentence may be more difficult to understand. In this study, we used pre-trained BERT [22], a neural-network-based model trained with a masked language modeling objective, to compute the perplexity of each sentence. Given a sentence $W = (w_1, \ldots, w_i, \ldots, w_{|W|})$, $w_i$ is the $i$-th character in $W$. To calculate $W$'s perplexity, $w_i$ is replaced with the [MASK] token and predicted using all other characters in $W$, that is, $W_{\setminus i} = (w_1, \ldots, w_{i-1}, w_{i+1}, \ldots, w_{|W|})$. $P_{\text{BERT}}(w_i | W_{\setminus i})$ is the probability of $w_i$ given its context calculated by BERT. Then, the perplexity of sentence $W$ is defined as:

$$PPL(W) = -\frac{1}{|W|} \sum_{i=1}^{|W|} \log P_{\text{BERT}}(w_i | W_{\setminus i}) \qquad (1)$$
A high $PPL(W)$ indicates that $W$ contains characters that are difficult to predict from their context, suggesting that $W$ can be difficult to understand. We computed the perplexity of each sentence and analyzed the distribution of perplexity across all sentences to determine a threshold. The perplexity filter then removes sentences whose perplexity is above the threshold.
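A minimal sketch of how Eq. (1) can be computed with the HuggingFace transformers library [28]; the model name matches Table II, but the masking loop below is a simplified illustration rather than the exact released code.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForMaskedLM.from_pretrained("bert-base-chinese")
model.eval()

def pseudo_perplexity(sentence: str) -> float:
    """Eq. (1): negative mean masked log-probability of each character."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    log_probs = []
    for i in range(1, len(ids) - 1):  # skip the [CLS] and [SEP] tokens
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        log_probs.append(torch.log_softmax(logits, dim=-1)[ids[i]].item())
    return -sum(log_probs) / len(log_probs)
```

Sentences whose score exceeds the chosen threshold (4.0 in this work) would then be discarded.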
The last is the intelligibility filter, which removes sentences with low intelligibility scores. Figure 2 illustrates the calculation of the intelligibility score for a sentence. First, a TTS system was used to convert a sentence into a corresponding speech utterance. Subsequently, a pretrained automatic speech recognition (ASR) system is used to predict the content of the utterance. Finally, the Levenshtein distance between the sentence and the ASR prediction is used to measure the intelligibility of the sentence. If a sentence is difficult to understand, the TTS system may not be able to generate a correctly pronounced utterance because some characters have multiple pronunciations. In addition, previous research [23] showed that ASR predictions are highly correlated with human perception of intelligibility. In other words, if a sentence is confusing, the ASR system may fail to correctly recognize the corresponding speech utterance. Therefore, the distance between ASR prediction and the original sentence reflects the intelligibility of the sentence. The intelligibility score is defined as one minus the distance of the sentence divided by the length of the sentence. Therefore, a perfect ASR prediction will lead to an intelligibility score of 1.
Fig. 2. Illustration of the intelligibility score calculation. First, a sentence is converted into an utterance using a TTS system. Then, an ASR system is used to predict the content of the utterance. The distance between the sentence and the ASR prediction is used to calculate the intelligibility score.
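The distance calculation in Figure 2 can be sketched as follows, assuming the ASR prediction is already available as a string; the edit-distance helper is a standard dynamic-programming implementation.

```python
def levenshtein(a: str, b: str) -> int:
    """Standard dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def intelligibility_score(sentence: str, asr_prediction: str) -> float:
    """One minus the normalized edit distance; 1.0 means a perfect match."""
    return 1.0 - levenshtein(sentence, asr_prediction) / len(sentence)
```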
B. Script-composing
In the script-composing phase, we used the GA to select sentences from the candidate sentences to form a syllable-balanced recording script. The script consists of several sets, each containing a fixed number of sentences, and the sentences do not overlap between sets. First, we introduce the basic concept of the GA. Then, we present the proposed GA-based script-composing method.
1) Genetic algorithm (GA): The GA is inspired by natural selection-a process of eliminating the weak and leaving only the strong. In the GA, the population is a series of possible solutions named chromosomes. Chromosomes are composed of genes that represent specific items. A fitness function is used to evaluate each chromosome. The fitness score reflects how well a chromosome "fits" the problem; a higher fitness score indicates that the chromosome is a better solution.
The GA comprises five steps: (1) initialization, (2) fitness calculation, (3) selection, (4) crossover, and (5) mutation. The initialization step creates the initial population, and the fitness calculation step calculates the fitness score of each chromosome in the population. In the selection step, chromosomes with higher fitness scores have higher probabilities of leaving their offspring in the next generation. In the crossover step, a pair of selected chromosomes exchanges genes to form a new pair of chromosomes. Take one-point crossover as an example: a point called the "crossover point" is randomly chosen on both parents' chromosomes. Then, the genes to the right of the crossover point are swapped between the parent chromosomes, producing two new chromosomes that carry genetic information from both parents. Lastly, genes in chromosomes may change randomly during the mutation step.
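These steps can be summarized in a short skeleton. This sketch assumes caller-supplied fitness and crossover functions, an even population size, and, as described later for this work, omits the mutation step.

```python
import random

def genetic_algorithm(population, fitness, crossover, n_generations):
    """A minimal GA loop: fitness -> truncation selection -> crossover."""
    for _ in range(n_generations):
        # Fitness calculation and selection: keep the fittest 50 percent
        # of chromosomes and replicate them twice (truncation selection).
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[: len(ranked) // 2]
        population = survivors + survivors
        # Crossover: pair up chromosomes and exchange genes.
        random.shuffle(population)
        offspring = []
        for a, b in zip(population[0::2], population[1::2]):
            offspring.extend(crossover(a, b))
        population = offspring
    return max(population, key=fitness)
```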
2) The GA-based script-composing phase: Figure 3 shows the GA terms and the corresponding definitions in this study. The population comprises a collection of scripts. Each chromosome is a script and the best chromosome in the population is the target syllable-balanced script. A gene is a sentence that is swapped between chromosomes.
Fig. 3. GA terms and their corresponding definitions. The population is a collection of scripts, each chromosome is a script, each gene is a sentence, and n_p, n_s, and n denote the number of scripts in the population, the number of sets in a script, and the number of sentences in a set, respectively. Sentence i denotes the i-th sentence in the candidate sentence set. The sentences are randomly sampled from the candidate sentence set during initialization, and there are no duplicate sentences in each script.

Figure 4 illustrates the GA process. The initial population step generates multiple scripts, each consisting of random sentences. The fitness calculation step then calculates the fitness score of each script in the population. The selection step replaces scripts with lower fitness scores with scripts with higher fitness scores. The crossover step exchanges sentences between the scripts. This process stops when the population is dominated by one script and the maximum fitness score no longer increases. We skip the mutation step because it increases the complexity without improving the performance in our test.

3) Fitness calculation: The fitness calculation step evaluates how well a script satisfies the requirements. Specifically, a script with a higher fitness score is considered a better choice. In this study, the fitness score is defined as follows:
$$\text{Fitness score} = w_1 \times \text{script syllable distribution} + w_2 \times \text{script syllable coverage} + w_3 \times \text{set syllable distribution} \qquad (2)$$
where $w_1$, $w_2$, and $w_3$ are the weights.
Let $D_{script}$ be the syllable distribution of a script and $D_{real}$ be the real-world syllable distribution, where $D_{script} \in \mathbb{R}^s$, $D_{real} \in \mathbb{R}^s$, and $s$ is the number of distinct syllables in Mandarin Chinese. The script syllable distribution is the cosine similarity between $D_{script}$ and $D_{real}$:
$$\text{script syllable distribution} = \frac{D_{script} \cdot D_{real}}{\|D_{script}\| \, \|D_{real}\|} \qquad (3)$$
Similarly, the set syllable distribution is the average cosine similarity between the real-world syllable distribution and each set in the script.
$$\text{set syllable distribution} = \frac{1}{n_s} \sum_{i=1}^{n_s} \frac{D^i_{set} \cdot D_{real}}{\|D^i_{set}\| \, \|D_{real}\|} \qquad (4)$$
where $D^i_{set}$ is the syllable distribution of the $i$-th set in the script, and $n_s$ is the number of sets in the script. We include the set syllable distribution in the fitness score so that each set is representative and can be used individually. For example, each set can be used as a validation set in the training of a speech-processing model and as an indicator for selecting the best model. Additionally, each set can be used for model training when only a small amount of data is required.
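Putting Eqs. (2)-(4) together, a minimal NumPy sketch of the fitness computation might look as follows. Syllable distributions are assumed to be frequency vectors, and script syllable coverage (defined in the next paragraph) is taken as the fraction of distinct syllables that appear in the script; the default weights match the experimental settings reported later.

```python
import numpy as np

def cosine(p: np.ndarray, q: np.ndarray) -> float:
    """Cosine similarity between two syllable-frequency vectors."""
    return float(p @ q / (np.linalg.norm(p) * np.linalg.norm(q)))

def fitness_score(d_script, set_dists, d_real, w1=1.0, w2=2.0, w3=1.0):
    """Eq. (2): weighted sum of distribution similarity and coverage."""
    script_distribution = cosine(d_script, d_real)            # Eq. (3)
    coverage = np.count_nonzero(d_script) / d_script.size     # fraction covered
    set_distribution = np.mean([cosine(d, d_real) for d in set_dists])  # Eq. (4)
    return w1 * script_distribution + w2 * coverage + w3 * set_distribution
```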
Script syllable coverage is the fraction of all possible syllables covered in a script. For example, assuming that the number of distinct syllables in Mandarin Chinese is 1300, the script syllable coverage score of a script that contains 130 distinct syllables is 0.1 (i.e., 130/1300). Note that in this study, we consider tonal syllables instead of base syllables. In other words, the fitness function calculates the distribution and coverage of the tonal syllables.

4) Selection: The selection step realizes the "survival of the fittest." In other words, scripts with higher fitness scores are retained and replicated, whereas scripts with lower fitness scores are eliminated. In this study, the truncation selection method was used. Scripts were sorted by their fitness scores, and the fittest 50% of scripts were selected and replicated twice. Figure 5 shows the selection process.

Fig. 5. Illustration of the truncation-selection process. The scripts are sorted by their fitness scores, and then the fittest 50% of scripts are selected and replicated twice.

5) Crossover: The crossover step aims to combine the information of two scripts and then generate new scripts. In this study, we used sets as crossover units instead of complete scripts. This is because if we use scripts as crossover units, only one set in each script exchanges information at every iteration when using the one-point crossover; if we use sets as crossover units, every set in the script participates in the crossover at every iteration. Figure 6 shows an example of a crossover pair, and Figure 7 illustrates the crossover step. As shown in Figure 7, to avoid duplicate sentences in one script, sentences present in the other script are held and not swapped in the crossover step. If the number of duplicate sentences in the paired sets is not the same, we randomly select sentences such that the number of held sentences is the same in both sets. Finally, we apply a one-point crossover to the two sets. Note that holding the same number of sentences in both sets ensures that the two new sets have the same number of sentences after crossover.

Fig. 6. Illustration of the crossover pairs. The crossover step exchanges sentences between two sets with the same index.
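A sketch of the duplicate-aware crossover between two paired sets (the procedure of Fig. 7); sentences are represented by ids, and scripts are assumed to be flattened lists of all sentences they contain.

```python
import random

def crossover_sets(set_a, set_b, script_a, script_b):
    """One-point crossover between paired sets, holding duplicates as in Fig. 7."""
    # (2) Hold sentences that already appear in the other script.
    hold_a = [s for s in set_a if s in script_b]
    hold_b = [s for s in set_b if s in script_a]
    # (3) Randomly hold extra sentences so both sets hold the same number.
    n_hold = max(len(hold_a), len(hold_b))
    hold_a += random.sample([s for s in set_a if s not in hold_a], n_hold - len(hold_a))
    hold_b += random.sample([s for s in set_b if s not in hold_b], n_hold - len(hold_b))
    swap_a = [s for s in set_a if s not in hold_a]
    swap_b = [s for s in set_b if s not in hold_b]
    # (4) Apply the one-point crossover to the swappable parts.
    point = random.randrange(1, len(swap_a)) if len(swap_a) > 1 else 0
    new_a = hold_a + swap_a[:point] + swap_b[point:]
    new_b = hold_b + swap_b[:point] + swap_a[point:]
    return new_a, new_b
```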
C. Postprocessing
After the script-composing phase, we obtain a syllable-balanced script. However, we may still want to replace some sentences in the script because the data-processing phase does not ensure that all candidate sentences are suitable. For example, the sensitive word filter cannot remove newly invented sensitive words that are not included in the sensitive word list. In addition, POS tagging systems may give incorrect POS tags because even the best POS tagging system cannot guarantee 100% accuracy. Therefore, sentences that meet the POS removal criteria may not be removed as expected. Moreover, sentences with low perplexity and high intelligibility scores are not necessarily logical from the human perspective.
Therefore, in the postprocessing phase, we still need to manually label inappropriate sentences so that they can be replaced with more appropriate ones. The script generated in the script-composing phase is denoted as a temporary script. We propose two methods to replace unwanted sentences in a temporary script: (1) a GA-based method and (2) a greedy-based method. The GA-based method is similar to the GA in the script-composing phase. The only difference is the generation of scripts in the initial population. In postprocessing, all scripts in the initial population are initialized based on the temporary script, with unwanted sentences replaced with sentences randomly sampled from the candidate sentences. The rest of the GA steps are the same as those in the script-composing phase. For the greedy-based method, unwanted sentences are replaced one by one with the candidate sentences that achieve the highest fitness score. According to our empirical results, the greedy-based method is more suitable when there are only a few unwanted sentences in the temporary script, whereas the GA-based method is more suitable when there are many unwanted sentences in the script.
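The greedy-based method can be sketched as follows, assuming fitness() evaluates a flattened list of sentences; the released toolkit may differ in detail.

```python
def greedy_replace(script, unwanted, candidates, fitness):
    """Replace each unwanted sentence, one at a time, with the candidate
    sentence that maximizes the fitness score of the whole script."""
    for bad in unwanted:
        idx = script.index(bad)
        best_candidate, best_score = None, float("-inf")
        for cand in candidates:
            if cand in script:
                continue  # keep sentences unique within the script
            script[idx] = cand
            score = fitness(script)
            if score > best_score:
                best_candidate, best_score = cand, score
        script[idx] = best_candidate
    return script
```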
III. Experiments
In this section, we first present a statistical analysis of Mandarin speech units based on Chinese news articles collected from five major news media outlets in Taiwan in 2021. We then show that the proposed BASPRO system can effectively select sentences based on a specially designed fitness function to form a syllable-balanced script for collecting speech data. Finally, we demonstrate that speech processing models trained on a TTS-synthesized syllable-balanced speech corpus based on the syllable-balanced script can achieve better performance than their counterparts trained on a randomly composed speech corpus. Note that the "syllable distribution and coverage" in the experiments represent "tonal syllable distribution and coverage".
A. Analysis of news articles in Taiwan in 2021
We crawled news articles from five major news media sources in Taiwan in 2021, with a total Chinese character count of around 182,583,000. We used the Pypinyin tool [24] to identify the syllables of each character. See the Appendix for the list of INITIAL and FINAL in the Pypinyin tool, and the INITIAL, FINAL, and tone distribution in these news articles. There are 404 distinct base syllables and 1259 distinct tonal syllables, which are close to the number of distinct base syllables and tonal syllables reported in other studies [11], [25]. Note that there is no consensus on the exact number of base and tonal syllables in Mandarin Chinese. For example, the number of base and tonal syllables in [11] are 416 and 1345, respectively, while in [25] they are 407 and 1333, respectively.
B. Data processing experiment

1) Experimental settings of data processing: The general filter keeps only ten-character sentences. The POS filter removes sentences that satisfy the POS-based removal criteria according to CkipTagger [26] or DDParser [27]; the removal criteria used with CkipTagger and DDParser are listed in Table I. The perplexity filter removes sentences with perplexity higher than 4.0. The intelligibility filter keeps only sentences with an intelligibility score of 1.0. After the data-processing phase, the total number of candidate sentences was around 167,000. Table II lists the toolkits used in each data-processing step.

2) Experimental results of data processing: Table III lists several examples of sentences and their corresponding perplexities. The experimental results show that perplexity can reflect human perception to a certain extent. Specifically, sentences 1-1, 2-1, and 3-1 are literally similar to sentences 1-2, 2-2, and 3-2, respectively. Only a few characters in each sentence pair are different, and the pronunciations of the differing characters are similar. However, sentences 1-1, 2-1, and 3-1 are considered natural, while sentences 1-2, 2-2, and 3-2 contain typos or are illogical. According to the results in Table III, sentences 1-2, 2-2, and 3-2 have higher perplexity, while sentences 1-1, 2-1, and 3-1 have lower perplexity. Figure 8 shows the perplexity distribution for ten-character sentences in Mandarin Chinese news texts. The distribution of perplexity is right-skewed, with a mean of 2.336. According to Figure 8, we chose 4.0 as the perplexity threshold, which is approximately 1.5 standard deviations above the mean perplexity of all ten-character sentences. However, sometimes the perplexity does not correctly reflect whether a sentence is understandable. For example, sentence 4 in Table III is difficult to understand but has the lowest perplexity among the examples. Table IV lists examples of sentences and their corresponding intelligibility scores. "Ori" is the original input sentence, and "Pred" is the corresponding ASR prediction. The first and second examples show that the intelligibility filter can identify sentences with words that are not easy to understand. To avoid the need to replace many sentences in the postprocessing phase, the intelligibility filter removes all sentences with an intelligibility score lower than 1.0. In other words, the intelligibility filter only retains sentences with perfect ASR test results. However, like perplexity, the intelligibility score sometimes does not perfectly reflect human perception. For example, the third sentence in Table IV is not intuitive but has an intelligibility score of 1.0. As shown in Tables III and IV, the perplexity and intelligibility filters cannot remove all illogical sentences. Therefore, manual labeling is required during the postprocessing phase.
C. GA-based script-composing experiment
In this section, we demonstrate that the BASPRO system can effectively select sentences to form a recording script according to the designed fitness function. We set the number of sets in the script and the number of sentences in each set to 20; thus, the length of a chromosome was 400. The weight of script syllable coverage ($w_2$ in Eq. 2) was set to 2, whereas the weights of the script syllable distribution ($w_1$ in Eq. 2) and set syllable distribution ($w_3$ in Eq. 2) were set to 1. The population size was set to 25,000, and the GA was run until the maximum fitness score converged. Figure 9 shows the training curve of the GA. The maximum fitness score drops for some generations because scripts are split and remixed in the crossover step, which may lower the fitness score. However, overall, the fitness score increases with the number of generations and eventually converges. Figure 10 shows the distribution of syllables in the best scripts of the first and final generations, and in real-world texts. The results show that the syllable distribution of the best script in the final generation is much closer to the real-world syllable distribution than that of the best script in the first generation. The red region in Figure 10 indicates the effect of the script syllable coverage score on the fitness function. In the real world, the ratio of the frequency of syllables with indices 800 to 1200 to the frequency of all syllables is close to 0; therefore, when considering only the script syllable distribution and set syllable distribution in the fitness function, most syllables in this rare region would not be present in the best script of the final generation. However, because the fitness function includes script syllable coverage, more rare syllables are covered in the best script of the final generation, making the distributions of syllables indexed from 800 to 1200 in (b) and (c) significantly different. Table V compares the values of script syllable distribution, set syllable distribution, and script syllable coverage for the best scripts in the first and final generations. Note that because there are 20 sets in a script, for the set syllable distribution, the mean and standard deviation over the 20 sets were calculated. Clearly, all values increase with generation. As shown in the ablation study in Table VI, there is a trade-off between script syllable distribution, set syllable distribution, and syllable coverage. For example, if the fitness function only considers the script syllable distribution, the best final script can achieve a script syllable distribution value of 0.997; however, in this case, the script syllable coverage and set syllable distribution only reach 579 and 0.702, respectively.
Next, we compare the greedy- and GA-based replacement methods in the postprocessing phase. Figure 11 shows the fitness scores of the resulting scripts for different replacement percentages. Specifically, 80% means that 320 (i.e., 400 × 0.8) sentences in the script have been replaced with new sentences. The results show that if a large portion of sentences needs to be replaced, the GA-based method performs better than the greedy-based method. Conversely, if only a few sentences must be replaced, the greedy-based method outperforms the GA-based method.

Fig. 11. Comparison between the GA- and greedy-based replacement methods in the postprocessing phase. The greedy-based method outperforms the GA-based method when the replacement percentage is lower than 10%; however, as the replacement percentage increases, the GA-based method outperforms the greedy-based method.
Finally, Figure 12 compares the statistics of a script produced by the BASPRO system and the TMHINT Mandarin Chinese recording script [16] used in many previous studies. For a fair comparison, the number of sets and the number of sentences in each set were set to 32 and 10, respectively, following the TMHINT script. The top two panels of Figure 12 show that the BASPRO-produced script covers more syllables, while the bottom two panels show that the syllable distribution of the BASPRO-produced script is closer to the real-world syllable distribution.
D. Experiment on speech-processing tasks
In this section, we investigate whether speech-processing models trained on the syllable-balanced TMNews corpus can outperform their counterparts trained on a randomly composed corpus. We experiment on two common speech-processing tasks: speech enhancement (SE) and ASR.
1) Experimental settings for both tasks: To verify the usefulness of the proposed BASPRO system, we compared the performance of speech-processing models trained on syllable-balanced and randomly composed corpora. In the following experiments, CorpusBAL refers to a syllable-balanced corpus, whereas CorpusRAN refers to a randomly composed corpus. CorpusBAL was formed based on the syllable-balanced script TMNews, while CorpusRAN was formed using randomly selected sentences. Both CorpusBAL and CorpusRAN have large and small versions, denoted by Corpus(BAL,RAN) Large and Corpus(BAL,RAN) Small, respectively. The large and small corpora contain 20 and 5 sets, respectively, with 20 sentences in each set. That is, 400 sentences form a large corpus and 100 sentences form a small corpus.
For each sentence in the script, we used two TTS systems, GoogleTTS [29] and TTSkit [31], to generate the corresponding utterances. The utterances generated by GoogleTTS were female voices, while the utterances generated by TTSkit were male voices. As a result, the Corpus(BAL,RAN) Large corpus contains 800 utterances, and the Corpus(BAL,RAN) Small corpus contains 200 utterances.

Table VII lists the statistics of each speech corpus. The syllable distribution of CorpusBAL is closer to the real-world syllable distribution than that of CorpusRAN. In addition, CorpusBAL Small has better syllable coverage than CorpusRAN Large, although the number of sentences in CorpusBAL Small is only a quarter of that in CorpusRAN Large.

a) Experimental settings for the SE task: We trained the SE model on the small corpora and tested it on the large corpora. In practical applications, the test data are also larger than the training data; therefore, we believe that the experimental results under this setting can better reflect performance in a real environment.
For the training data, each clean utterance was contaminated with 25 noises randomly selected from 100 noise types [32] at -1, 1, 3, and 5 SNR levels. The training data thus contain 20,000 utterances (100 (sentences) × 2 (voice types) × 25 (noise types) × 4 (SNR levels)). The training data were divided into training and validation sets; the validation set contained 20% of the training data and was used to select the best model during training. Therefore, in our experiments, using this training-validation setup, we trained five models on a training corpus and reported the mean and standard deviation of the results evaluated on the testing corpus. For the test set, each clean utterance was contaminated with three noise types (white, street, and babble) at 2 and 4 SNR levels. The test set contains 4,800 utterances (400 (sentences) × 2 (voice types) × 3 (noise types) × 2 (SNR levels)).
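A common way to realize this contamination is to scale the noise so that a target SNR holds; the following sketch is illustrative and not the exact data-preparation code used here.

```python
import numpy as np

def mix_at_snr(clean: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Add noise to a clean waveform at the requested SNR (in dB)."""
    if len(noise) < len(clean):  # tile short noise clips to full length
        noise = np.tile(noise, int(np.ceil(len(clean) / len(noise))))
    noise = noise[: len(clean)]
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    # Solve for the noise gain that yields the target clean-to-noise ratio.
    scale = np.sqrt(clean_power / (noise_power * 10 ** (snr_db / 10)))
    return clean + scale * noise
```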
The corpora were evaluated using MetricGAN+ [33], [34], a state-of-the-art SE model. Because the input of MetricGAN+ is a spectrogram, the input signal was transformed into a spectrogram using a short-time Fourier transform (STFT) with a window length of 512 and a hop length of 256. In addition, the batch size was 32, the loss function was the L1 loss, and the optimizer was Adam with a learning rate of 0.001. The perceptual evaluation of speech quality (PESQ) [35] and short-time objective intelligibility (STOI) [36] were used as objective evaluation metrics.
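For reference, the stated STFT front-end corresponds to roughly the following PyTorch call; this is a sketch independent of the actual MetricGAN+ implementation, and the Hann window is an assumption on our part.

```python
import torch

def to_magnitude_spectrogram(waveform: torch.Tensor) -> torch.Tensor:
    """STFT with window length 512 and hop length 256, as described above."""
    spec = torch.stft(waveform, n_fft=512, hop_length=256,
                      window=torch.hann_window(512), return_complex=True)
    return spec.abs()  # magnitude spectrogram fed to the SE model
```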
b) Experimental settings for the ASR task: In the ASR experiments, we downloaded a pretrained transformer-based ASR model from SpeechBrain [37] and then fine-tuned it using the speech corpora collected in this study. The pretrained ASR model was trained on the AISHELL dataset, which is also a Mandarin speech corpus. We fine-tuned a pretrained model because our training speech was not sufficient to train an ASR model from scratch. In addition, this setup simulates the personalization of an ASR system, that is, fine-tuning an ASR system with a few recordings of a new user. Similar to the 80% training-20% validation setting in the SE task, given a training corpus, we obtained five models and reported the means and standard deviations of the evaluation results. For each training and validation split, we fine-tuned the model for 50 epochs and selected the best model using the validation set.
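Loading a pretrained SpeechBrain model for inference looks roughly as follows. The model identifier below is an assumed HuggingFace Hub name and may differ from the exact AISHELL checkpoint used here; fine-tuning additionally requires SpeechBrain's training recipes.

```python
from speechbrain.pretrained import EncoderDecoderASR

# The source string is an illustrative, assumed model identifier.
asr = EncoderDecoderASR.from_hparams(source="speechbrain/asr-transformer-aishell")
print(asr.transcribe_file("test_utterance.wav"))
```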
We used the pinyin error rate (PER), character error rate (CER), and sentence error rate (SER) to evaluate ASR performance. PER calculates the difference between the predicted and ground-truth syllable sequences; note that Pypinyin [24] was used to convert characters to tonal syllables before calculating PER. PER and CER were calculated using the Levenshtein distance. For SER, a predicted sentence is considered incorrect if any character is wrong.
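The three metrics can be sketched as follows, reusing a token-level edit distance; Pypinyin's Style.TONE3 appends the tone digit to each syllable (e.g., "ni3").

```python
from pypinyin import lazy_pinyin, Style

def edit_distance(ref, hyp):
    """Levenshtein distance over arbitrary token sequences."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1,
                            prev[j - 1] + (r != h)))
        prev = curr
    return prev[-1]

def per(ref_text: str, hyp_text: str) -> float:
    """Pinyin error rate over tonal syllables."""
    ref = lazy_pinyin(ref_text, style=Style.TONE3)
    hyp = lazy_pinyin(hyp_text, style=Style.TONE3)
    return edit_distance(ref, hyp) / len(ref)

def cer(ref_text: str, hyp_text: str) -> float:
    """Character error rate."""
    return edit_distance(list(ref_text), list(hyp_text)) / len(ref_text)

def ser(refs, hyps) -> float:
    """Sentence error rate: a sentence is wrong if any character differs."""
    return sum(r != h for r, h in zip(refs, hyps)) / len(refs)
```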
2) Experimental results for SE: The fitness function contains the set syllable distribution score because we want each set to be representative. We argue that the model selected with a small syllable-balanced validation set is more robust than the model selected with a small randomly composed validation set. Table X compares the performance of the SE models selected with different validation sets. The SE model was trained on CorpusBAL Small and tested on CorpusBAL Large and CorpusRAN Large. In Table X, valid:bal indicates that the validation set is a syllable-balanced set in CorpusBAL Small, whereas valid:ran indicates that the validation set consists of randomly selected sentences from CorpusBAL Small. The results show that the average performance of the SE models selected with a syllable-balanced validation set is better than that of the SE models selected with a randomly composed validation set.

3) Experimental results for ASR: Table XI shows the performance of the ASR models fine-tuned on CorpusBAL and CorpusRAN. First, the results reveal that fine-tuning an ASR model always improves ASR performance. In addition, the ASR models fine-tuned on CorpusBAL generally perform better than their corresponding models fine-tuned on CorpusRAN. This is because the CorpusBAL Large and CorpusBAL Small corpora cover relatively complete and rich pronunciations; thus, the ASR model can be fine-tuned comprehensively. However, we also see that when tested on CorpusRAN Small, the ASR model fine-tuned on CorpusBAL Large performs slightly worse than the ASR model fine-tuned on CorpusRAN Large. One possible explanation is that both CorpusBAL Large and CorpusRAN Large cover more syllables than CorpusRAN Small, as shown in Table VII. Therefore, fine-tuning the model with either corpus does not make a significant difference when testing on a small test set. However, such a biased small test set could mislead the model. When using a small corpus as a test set, more consideration should be given to pronunciation balance and coverage. Finally, the ASR performance tested on CorpusRAN is better than that tested on CorpusBAL, which is consistent with the SE experiments. This is because CorpusBAL covers more rare syllables and is, therefore, more challenging than CorpusRAN.

Table XII presents the corresponding t-test results. This evaluation shows that the performance of the two ASR models fine-tuned on the corpora of different scripts is significantly different across all evaluation metrics on the CorpusBAL Large and CorpusBAL Small testing data (p-value < 0.05). On the CorpusRAN Large testing data, the p-value for CER is 0.18967, which means that the performance difference is not significant; note that CER is the only case in which CorpusRAN Small performs better than CorpusBAL Small on CorpusRAN Large in Table XI. On the CorpusRAN Small testing data, the performance differences in PER, CER, and SER are not significant (p-value > 0.05). The experimental results show that syllable coverage and distribution should be considered for both training and testing data, especially when the amount of data is small.

Table XIII compares the performance of best-model selection using different validation sets. The ASR model was fine-tuned on CorpusBAL Small and tested on CorpusBAL Large and CorpusRAN Large. The best model was selected using a syllable-balanced set (cf. valid:bal in Table XIII) or a randomly selected sentence set (cf. valid:ran in Table XIII). The results show that the ASR model selected with a syllable-balanced validation set yields lower CER and SER than the ASR model selected with a randomly selected validation set.
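The significance tests above can be reproduced with SciPy's independent two-sample t-test; the numbers below are hypothetical placeholders standing in for the five per-run scores of each corpus, not the actual experimental values.

```python
from scipy import stats

# Hypothetical CER values of the five models trained on each corpus.
cer_bal = [15.4, 15.6, 15.3, 15.8, 15.6]
cer_ran = [16.5, 16.9, 16.6, 17.0, 16.5]
t_stat, p_value = stats.ttest_ind(cer_bal, cer_ran)
print(f"p = {p_value:.5f}")  # p < 0.05 indicates a significant difference
```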
IV. Conclusion
In this paper, we first presented a statistical analysis of Mandarin Chinese acoustic units based on a large corpus of news texts collected from the Internet. We then proposed the BASPRO system, which selects sentences from a large text corpus to compose a syllable-balanced recording script with similar statistics. The experimental results showed that the BASPRO system can effectively produce a syllable-balanced script based on the designed fitness function. Using BASPRO, we obtained a recording script called TMNews. Subsequently, we used TTS systems to convert the sentences in the TMNews script into utterances to form a speech corpus. Through SE and ASR experiments evaluated on speech corpora based on different recording scripts, we confirmed that SE and ASR models trained on the syllable-balanced speech corpus based on the TMNews script outperform those trained on a randomly composed speech corpus. In this study, we primarily focused on the design of audio-recording scripts rather than the audio recordings themselves. There are too many variations in recorded utterances, such as the recording device and the gender, age, and accent of the speaker; therefore, the recording setting is beyond the scope of this study, and we used synthetic speech with relatively simple characteristics for the SE and ASR evaluation experiments. Furthermore, the data-processing phase does not ensure that every candidate sentence is logical and appropriate from a human perspective, so manual screening is required during the postprocessing phase. In the future, we hope to develop a method that better reflects human understanding of sentence semantics and reduces human involvement in corpus design.
Fig. 4. Schematic diagram of the GA. The termination condition occurs when the maximum fitness score no longer increases after several generations.

Fig. 7. (1) The original sets before crossover. Sen i denotes the i-th sentence in the candidate sentence set. (2) Holding duplicate sentences. In this example, Sen 23 and Sen 2 are held because they exist in a set in Script B; similarly, Sen 43 is held because it already exists in a set in Script A. These sentences are not exchanged during the crossover process to avoid duplicate sentences in the script. If Sen 23 and Sen 2 were exchanged to Script B, there would be two Sen 23 and two Sen 2 in Script B. (3) Making the length the same. Because the numbers of duplicate sentences in Set 1 of Script A and Set 1 of Script B are not the same (i.e., two sentences in Set 1 of Script A and one sentence in Set 1 of Script B), we randomly hold one more sentence (Sen 9) in Set 1 of Script B. (4) Applying the one-point crossover.

Fig. 9. Training curve of the GA. Overall, the fitness score increases with the number of generations and then eventually converges.

Fig. 10. Distribution of syllables in the best scripts of the first and final generations and in real-world texts. The results show that the best script in the final generation has a syllable distribution that is closer to the real-world syllable distribution than the best script in the first generation. The red region reveals the effect of script syllable coverage; that is, more rare syllables are covered in the best script in the final generation.

Fig. 12. Comparison between the script produced by the BASPRO system and the TMHINT script. The maximum base syllable and (tonal) syllable coverage are 404 and 1259, respectively. The maximum corpus syllable distribution and syllable distribution scores are 1.
TABLE I
POS-BASED REMOVAL CRITERIA. DESCRIPTIONS OF THE POS TAGS CAN BE FOUND IN [26] AND [27].

Toolkit | Include | Start | End
CkipTagger [26] | 'Nb', 'Nc', 'FW' | 'DE', 'SHI', 'T' | 'Caa', 'Cab', 'Cba', 'Cbb', 'P', 'T'
DDParser [27] | 'LOC', 'ORG', 'TIME', 'PER', 'w', 'nz' | 'p', 'u', 'c' | 'xc', 'u'
TABLE II
DATA-PROCESSING TOOLKITS USED IN THIS STUDY

POS filter | Perplexity filter | Intelligibility filter | Syllable calculation
CkipTagger [26], DDParser [27] | Hugging Face [28] (bert-base-chinese) | Google-TTS [29], Google-ASR [30] | Pypinyin [24]
TABLE III
EXAMPLES OF SENTENCE PERPLEXITY ASSESSMENT

Index | Content | Manual selection | Perplexity
1-1 | 他寧願當一匹孤獨的狼 | ✓ | 2.501
1-2 | 那你願當一起孤獨的狼 | ✗ | 5.402
2-1 | 警方就聞到他渾身酒味 | ✓ | 3.427
2-2 | 喜歡就聞到他純身酒味 | ✗ | 6.091
3-1 | 候選人也積極掃街拜票 | ✓ | 2.758
3-2 | 候選人也積極少接待票 | ✗ | 5.913
4 | 達到與槓鈴跳舞的境界 | ✗ | 2.385

Fig. 8. Perplexity distribution for ten-character sentences in Mandarin Chinese news texts. The red dotted line represents the threshold used for the perplexity filter.
TABLE IV
EXAMPLES OF SENTENCE INTELLIGIBILITY ASSESSMENT

Index | Sentence | Score | Comment
1-Ori | 有一種果敢叫奮不顧身 | 0.8 | The word "果敢" is rarely used in daily conversation.
1-Pred | 有一種果感覺奮不顧身 | |
2-Ori | 災害來臨時除了盼天助 | 0.7 | The word "盼天助" is rarely used in daily conversation.
2-Pred | 災害來臨時除了看牽著 | |
3-Ori | 不科學的比例相當火辣 | 1.0 | The sentence means "the unrealistic (body) proportions are very hot" in English. Because the sentence omits the subject "body," it is not intuitive and hard to understand.
3-Pred | 不科學的比例相當火辣 | |
TABLE V
STATISTICS OF THE BEST SCRIPTS IN THE FIRST AND FINAL GENERATIONS

Generation | Syllable coverage | Script syllable distribution | Set syllable distribution
First | 668 | 0.894 | 0.622 (std: 0.033)
Final | 1120 | 0.964 | 0.751 (std: 0.019)

TABLE VI
ABLATION STUDY OF THE FITNESS FUNCTION

Fitness function | Syllable coverage | Script syllable distribution | Set syllable distribution
All | 1120 | 0.964 | 0.751 (std: 0.019)
Syllable coverage | 1122 | 0.827 | 0.494 (std: 0.035)
Script distribution | 579 | 0.997 | 0.702 (std: 0.042)
Set distribution | 343 | 0.943 | 0.889 (std: 0.003)
TABLE VII
STATISTICS OF THE SPEECH CORPORA

Corpus | Base syllable coverage | Syllable coverage | Script syllable distribution | Set syllable distribution
CorpusBAL Large (TMNews L) | 392 | 1061 | 0.970 | 0.743 (std: 0.020)
CorpusBAL Small (TMNews S) | 333 | 629 | 0.934 | 0.701 (std: 0.10)
CorpusRAN Large | 319 | 609 | 0.869 | 0.603 (std: 0.365)
CorpusRAN Small | 241 | 387 | 0.818 | 0.637 (std: 0.015)
Table VIII compares the performances of the SE models trained on CorpusBAL Small and CorpusRAN Small. The results show that the SE model trained on CorpusBAL Small outperformed the SE model trained on CorpusRAN Small in terms of both PESQ and STOI under all testing conditions. In addition, both models performed worse when tested on CorpusBAL Large than on CorpusRAN Large. This may be because CorpusBAL Large covers more syllables than CorpusRAN Large, thus making it a more challenging test corpus. Table IX presents the corresponding t-test results. The p-values of the STOI results on both CorpusBAL Large and CorpusRAN Large testing data are about 0.1, while the p-values of the PESQ results are about 0.5. That is, the improvement in the SE performance on STOI is more statistically significant than that on PESQ. This result may be because syllable coverage and distribution have a greater impact on intelligibility (STOI) than on quality (PESQ).
TABLE VIII
PERFORMANCE OF THE SE MODELS TRAINED ON CORPUSBAL AND CORPUSRAN

Testing \ Training | CorpusBAL Small (STOI / PESQ) | CorpusRAN Small (STOI / PESQ)
CorpusBAL Large | 0.832 (std: 0.0149) / 1.792 (std: 0.1154) | 0.793 (std: 0.0426) / 1.744 (std: 0.1068)
CorpusRAN Large | 0.832 (std: 0.0133) / 1.804 (std: 0.1182) | 0.796 (std: 0.0426) / 1.755 (std: 0.1101)

TABLE IX
T-TEST OF THE CORPUSBAL SMALL AND CORPUSRAN SMALL SE RESULTS

Testing data | STOI (p-value) | PESQ (p-value)
CorpusBAL Large | 0.10028 | 0.51936
CorpusRAN Large | 0.10524 | 0.46861
TABLE X
SE PERFORMANCE USING DIFFERENT VALIDATION SETS

Testing \ Training | CorpusBAL Small (valid:bal) (STOI / PESQ) | CorpusBAL Small (valid:ran) (STOI / PESQ)
CorpusBAL Large | 0.832 (std: 0.0149) / 1.792 (std: 0.1154) | 0.814 (std: 0.0266) / 1.790 (std: 0.0655)
CorpusRAN Large | 0.832 (std: 0.0133) / 1.804 (std: 0.1182) | 0.816 (std: 0.0265) / 1.802 (std: 0.0694)
TABLE XI
PERFORMANCE OF THE ASR MODELS TRAINED ON CORPUSBAL AND CORPUSRAN

Testing data | Training data | PER | CER | SER
CorpusBAL Large | w/o fine-tuning | 14.94 | 19.73 | 74.88
CorpusBAL Large | CorpusBAL Small | 9.658 (std: 0.259) | 15.544 (std: 0.212) | 67.648 (std: 0.957)
CorpusBAL Large | CorpusRAN Small | 10.738 (std: 0.277) | 16.696 (std: 0.264) | 70.324 (std: 1.311)
CorpusRAN Large | w/o fine-tuning | 8.78 | 11.69 | 55.62
CorpusRAN Large | CorpusBAL Small | 4.885 (std: 0.062) | 9.244 (std: 0.141) | 47.922 (std: 0.518)
CorpusRAN Large | CorpusRAN Small | 5.063 (std: 0.034) | 9.094 (std: 0.186) | 49.126 (std: 0.905)
CorpusBAL Small | w/o fine-tuning | 14.30 | 17.75 | 70.00
CorpusBAL Small | CorpusBAL Large | 6.61 (std: 0.163) | 11.84 (std: 0.397) | 56.70 (std: 1.823)
CorpusBAL Small | CorpusRAN Large | 7.91 (std: 0.357) | 13.08 (std: 0.529) | 61.30 (std: 3.114)
CorpusRAN Small | w/o fine-tuning | 8.55 | 12.30 | 53.50
CorpusRAN Small | CorpusBAL Large | 3.24 (std: 0.221) | 7.89 (std: 0.433) | 42.20 (std: 1.483)
CorpusRAN Small | CorpusRAN Large | 3.02 (std: 0.103) | 7.41 (std: 0.379) | 44.20 (std: 1.483)

TABLE XII
T-TEST OF THE CORPUSBAL AND CORPUSRAN ASR RESULTS

Testing data | PER (p-value) | CER (p-value) | SER (p-value)
CorpusBAL Large | 0.00022 | 0.00006 | 0.00617
CorpusRAN Large | 0.00051 | 0.18967 | 0.03256
CorpusBAL Small | 0.00008 | 0.00305 | 0.02148
CorpusRAN Small | 0.07949 | 0.09961 | 0.06559
TABLE XIII
ASR PERFORMANCE USING DIFFERENT VALIDATION SETS

Testing data | Training data | PER | CER | SER
CorpusBAL Large | CorpusBAL Small (valid:bal) | 9.658 (std: 0.259) | 15.544 (std: 0.212) | 67.648 (std: 0.957)
CorpusBAL Large | CorpusBAL Small (valid:ran) | 9.630 (std: 0.263) | 15.622 (std: 0.274) | 67.898 (std: 0.672)
CorpusRAN Large | CorpusBAL Small (valid:bal) | 4.885 (std: 0.062) | 9.244 (std: 0.141) | 47.922 (std: 0.518)
CorpusRAN Large | CorpusBAL Small (valid:ran) | 4.870 (std: 0.132) | 9.250 (std: 0.168) | 47.976 (std: 0.445)
1 The toolkit is available via: https://github.com/yuwchen/BASPRO
2 The script is available via: https://github.com/yuwchen/BASPRO/tree/main/TMNews
References

[1] J. Luo, J. Wang, N. Cheng, and J. Xiao, "Loss prediction: End-to-end active learning approach for speech recognition," in Proc. IJCNN 2021.
[2] M. A. Bashar and R. Nayak, "Active learning for effectively fine-tuning transfer learning to downstream task," ACM Transactions on Intelligent Systems and Technology, vol. 12, no. 2, pp. 1-24, 2021.
[3] S. T. Abate, W. Menzel, B. Tafila, et al., "An Amharic speech corpus for large vocabulary continuous speech recognition," in Proc. INTERSPEECH 2005.
[4] M. A. Abushariah, R. N. Ainon, R. Zainuddin, M. Elshafei, and O. O. Khalifa, "Phonetically rich and balanced text and speech corpora for Arabic language," Language Resources and Evaluation, vol. 46, no. 4, pp. 601-634, 2012.
[5] A. Ahmad, M. R. Selim, M. Z. Iqbal, and M. S. Rahman, "SUST TTS Corpus: a phonetically-balanced corpus for Bangla text-to-speech synthesis," Acoustical Science and Technology, vol. 42, no. 6, pp. 326-332, 2021.
[6] A. A. Raza, S. Hussain, H. Sarfraz, I. Ullah, and Z. Sarfraz, "Design and development of phonetically rich Urdu speech corpus," in Proc. O-COCOSDA 2009.
[7] C. Wutiwiwatchai, P. Cotsomrong, S. Suebvisai, and S. Kanokphara, "Phonetically distributed continuous speech corpus for Thai language," in Proc. LREC 2002.
[8] B. Bozkurt, O. Ozturk, and T. Dutoit, "Text design for TTS speech corpus building using a modified greedy selection," in Proc. Eurospeech 2003.
[9] E. Uraga and C. Gamboa, "VOXMEX speech database: design of a phonetically balanced corpus," in Proc. LREC 2004.
[10] M. Stȃnescu, H. Cucu, A. Buzo, and C. Burileanu, "ASR for low-resourced languages: building a phonetically balanced Romanian speech corpus," in Proc. EUSIPCO 2012.
[11] H.-M. Wang, "Statistical analysis of Mandarin acoustic units and automatic extraction of phonetically rich sentences based upon a very large Chinese text corpus," International Journal of Computational Linguistics & Chinese Language Processing, vol. 3, pp. 93-114, 1998.
[12] M.-S. Liang, R.-Y. Lyu, and Y.-C. Chiang, "An efficient algorithm to select phonetically balanced scripts for constructing a speech corpus," in Proc. NLP-KE 2003.
[13] J. T. F. L. M. Zhang and H. Jia, "Design of speech corpus for Mandarin text to speech," in Proc. The Blizzard Challenge 2008 Workshop.
[14] A. Kurematsu, K. Takeda, Y. Sagisaka, S. Katagiri, H. Kuwabara, and K. Shikano, "ATR Japanese speech database as a tool of speech recognition and synthesis," Speech Communication, vol. 9, no. 4, pp. 357-363, 1990.
[15] V. Zue, S. Seneff, and J. Glass, "Speech database development at MIT: TIMIT and beyond," Speech Communication, vol. 9, no. 4, pp. 351-356, 1990.
[16] M. Huang, "Development of Taiwan Mandarin hearing in noise test," Department of Speech Language Pathology and Audiology, National Taipei University of Nursing and Health Science, 2005.
[17] Y. R. Oh, Y. G. Kim, M. Kim, H. K. Kim, M. S. Lee, and H. J. Bae, "Phonetically balanced text corpus design using a similarity measure for a stereo super-wideband speech database," IEICE Transactions on Information and Systems, vol. 94, no. 7, pp. 1459-1466, 2011.
[18] L. Villaseñor-Pineda, M. Montes-y Gómez, D. Vaufreydaz, and J.-F. Serignat, "Experiments on the construction of a phonetically balanced corpus from the web," in Proc. CICLing 2004.
[19] K.-S. Tsai, L.-H. Tseng, C.-J. Wu, and S.-T. Young, "Development of a Mandarin monosyllable recognition test," Ear and Hearing, vol. 30, no. 1, pp. 90-99, 2009.
[20] M. V. Nicodem, I. C. Seara, R. Seara, and D. dos Anjos, "Recording script design for a Brazilian Portuguese TTS system aiming at a higher phonetic and prosodic variability," in Proc. ISSPA 2007.
[21] M. V. Nicodem, I. C. Seara, D. dos Anjos, and R. Seara, "Evolutionary-based design of a Brazilian Portuguese recording script for a concatenative synthesis system," in Proc. PROPOR 2008.
[22] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: pre-training of deep bidirectional transformers for language understanding," in Proc. NAACL 2019.
[23] Y.-W. Chen and Y. Tsao, "InQSS: a speech intelligibility and quality assessment model using a multi-task learning network," in Proc. INTERSPEECH 2022.
[24] "Pypinyin," 2022 (accessed on August 01, 2022). https://github.com/mozillazg/python-pinyin.
[25] H.-M. Wang, Y.-C. Chang, and L.-S. Lee, "Automatic selection of phonetically rich sentences from a Chinese text corpus," in Proc. ROCLING 1993.
[26] P.-H. Li, T.-J. Fu, and W.-Y. Ma, "Why attention? analyze BiLSTM deficiency and its remedies in the case of NER," in Proc. AAAI 2020.
[27] S. Zhang, L. Wang, K. Sun, and X. Xiao, "A practical Chinese dependency parser based on a large-scale dataset," 2020.
[28] T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, et al., "HuggingFace's transformers: state-of-the-art natural language processing," arXiv preprint arXiv:1910.03771, 2019.
[29] P.-N. Durette, "Google text-to-speech," 2022 (accessed on August 01, 2022). https://pypi.org/project/gTTS/.
[30] A. Zhang, "Speech recognition (version 3.8)," 2017 (accessed on August 01, 2022). https://github.com/Uberi/speech_recognition#readme.
[31] "Text to speech toolkit," 2022 (accessed on August 01, 2022). https://pypi.org/project/ttskit/.
[32] G. Hu and D. Wang, "A tandem algorithm for pitch estimation and voiced speech segregation," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 18, no. 8, pp. 2067-2079, 2010.
[33] S.-W. Fu, C.-F. Liao, Y. Tsao, and S.-D. Lin, "MetricGAN: generative adversarial networks based black-box metric scores optimization for speech enhancement," in Proc. ICML 2019.
[34] S.-W. Fu, C. Yu, T.-A. Hsieh, P. Plantinga, M. Ravanelli, X. Lu, and Y. Tsao, "MetricGAN+: an improved version of MetricGAN for speech enhancement," in Proc. INTERSPEECH 2021.
[35] A. W. Rix, J. G. Beerends, M. P. Hollier, and A. P. Hekstra, "Perceptual evaluation of speech quality (PESQ)-a new method for speech quality assessment of telephone networks and codecs," in Proc. ICASSP 2001.
[36] C. H. Taal, R. C. Hendriks, R. Heusdens, and J. Jensen, "An algorithm for intelligibility prediction of time-frequency weighted noisy speech," IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 7, pp. 2125-2136, 2011.
[37] M. Ravanelli, T. Parcollet, P. Plantinga, A. Rouhe, S. Cornell, L. Lugosch, C. Subakan, N. Dawalatabad, A. Heba, J. Zhong, J.-C. Chou, S.-L. Yeh, S.-W. Fu, C.-F. Liao, E. Rastorgueva, F. Grondin, W. Aris, H. Na, Y. Gao, R. D. Mori, and Y. Bengio, "SpeechBrain: a general-purpose speech toolkit," arXiv:2106.04624, 2021.
| [
"https://github.com/yuwchen/BASPRO",
"https://github.com/yuwchen/BASPRO/tree/",
"https://github.com/Uberi/speech"
] |
[
"Types of Out-of-Distribution Texts and How to Detect Them",
"Types of Out-of-Distribution Texts and How to Detect Them"
] | [
"Udit Arora \nNew York University ♣ Capital One\n\n",
"William Huang william.huang@capitalone.com \nNew York University ♣ Capital One\n\n",
"He He \nNew York University ♣ Capital One\n\n"
] | [
"New York University ♣ Capital One\n",
"New York University ♣ Capital One\n",
"New York University ♣ Capital One\n"
] | [
"Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing"
] | Despite agreement on the importance of detecting out-of-distribution (OOD) examples, there is little consensus on the formal definition of OOD examples and how to best detect them. We categorize these examples by whether they exhibit a background shift or a semantic shift, and find that the two major approaches to OOD detection, model calibration and density estimation (language modeling for text), have distinct behavior on these types of OOD data. Across 14 pairs of in-distribution and OOD English natural language understanding datasets, we find that density estimation methods consistently beat calibration methods in background shift settings, while performing worse in semantic shift settings. In addition, we find that both methods generally fail to detect examples from challenge data, highlighting a weak spot for current methods. Since no single method works well across all settings, our results call for an explicit definition of OOD examples when evaluating different detection methods. | 10.18653/v1/2021.emnlp-main.835 | [
"https://www.aclanthology.org/2021.emnlp-main.835.pdf"
] | 237,503,353 | 2109.06827 | 0bc557191c2e57f65a5aa8670df37177ac5813d4 |
Types of Out-of-Distribution Texts and How to Detect Them
Association for Computational Linguistics. Copyright Association for Computational Linguistics. November 7-11, 2021.
Udit Arora
New York University ♣ Capital One
William Huang william.huang@capitalone.com
New York University ♣ Capital One
He He
New York University ♣ Capital One
Types of Out-of-Distribution Texts and How to Detect Them
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, November 7-11, 2021, page 10687.
Despite agreement on the importance of detecting out-of-distribution (OOD) examples, there is little consensus on the formal definition of OOD examples and how to best detect them. We categorize these examples by whether they exhibit a background shift or a semantic shift, and find that the two major approaches to OOD detection, model calibration and density estimation (language modeling for text), have distinct behavior on these types of OOD data. Across 14 pairs of in-distribution and OOD English natural language understanding datasets, we find that density estimation methods consistently beat calibration methods in background shift settings, while performing worse in semantic shift settings. In addition, we find that both methods generally fail to detect examples from challenge data, highlighting a weak spot for current methods. Since no single method works well across all settings, our results call for an explicit definition of OOD examples when evaluating different detection methods.
Introduction
Current NLP models work well when the training and test distributions are the same (e.g. from the same benchmark dataset). However, it is common to encounter out-of-distribution (OOD) examples that diverge from the training data once the model is deployed to real settings. When training and test distributions differ, current models tend to produce unreliable or even catastrophic predictions that hurt user trust (Ribeiro et al., 2020). Therefore, it is important to identify OOD inputs so that we can modify models' inference-time behavior by abstaining, asking for human feedback, or gathering additional information (Amodei et al., 2016).
Current work in NLP either focuses on specific tasks like intent classification in task-oriented dialogue (Zheng et al., 2020), or on arbitrary in-distribution (ID) and OOD dataset pairs (Hendrycks et al., 2019, 2020b; Zhou and Chen, 2021), e.g. taking a sentiment classification dataset as ID and a natural language inference dataset as OOD. However, getting inputs intended for a different task is rare in realistic settings as users typically know the intended task. In practice, an example is considered OOD due to various reasons, e.g. being rare (Sagawa et al., 2020), out-of-domain (Daumé III, 2007), or adversarial (Carlini and Wagner, 2017). This broad range of distribution shifts makes it unreasonable to expect a detection algorithm to work well for arbitrary OOD examples without assumptions on the test distribution (Ahmed and Courville, 2020).

In this paper, we categorize OOD examples by common types of distribution shifts in NLP problems, inspired by Ren et al. (2019) and Hsu et al. (2020). Specifically, we assume an input (e.g. a movie review) can be represented as background features (e.g. genre) that are invariant across different labels, and semantic features (e.g. sentiment words) that are discriminative for the prediction task. Correspondingly, at test time we consider two types of OOD examples characterized by a major shift in the distribution of background and semantic features, respectively. While the two types of shifts often happen simultaneously, we note that there are realistic settings where distribution shift is dominated by one or the other. For example, background shift dominates when the domain or the style of the text changes (Pavlick and Tetreault, 2016), e.g. from news to tweets, and semantic shift dominates when unseen classes occur at test time, as in open-set classification (Scheirer et al., 2013). 1

* Work done while at New York University.
In addition, we analyze the detection performance on challenge datasets (McCoy et al., 2019a;Naik et al., 2018b) through the lens of background/semantic shift. We find that these challenge datasets provide interesting failure cases for both methods. Calibration methods completely fail when the model is over-confident due to spurious semantic features. While density estimation methods are slightly more robust, language models are easily fooled by repetitions that significantly increase the probability of a piece of text. Together, our findings suggest that better definitions of OOD and corresponding evaluation datasets are required for both model development and fair comparison of OOD detection methods. 1 We exclude task shift where the OOD examples are from a different task, e.g. textual entailment inputs for a text classification model, because it is less likely to happen in realistic settings where users are often aware of the intended use of the model.
Categorization of OOD Examples
Problem Statement
Consider classification tasks where each example consists of an input x ∈ X and its label y ∈ Y. In the task of OOD detection, we are given a training dataset D_train of (x, y) pairs sampled from the training data distribution p(x, y). At inference time, given an input x ∈ X, the goal of OOD detection is to identify whether x is a sample drawn from p(x, y).
Types of Distribution Shifts
As in (Ren et al., 2019), we assume that any representation of the input x, φ(x), can be decomposed into two independent and disjoint components: the background features φ_b(x) ∈ R^m and the semantic features φ_s(x) ∈ R^n. Formally, we have

φ(x) = [φ_s(x); φ_b(x)],    (1)
p(x) = p(φ_s(x)) p(φ_b(x)).    (2)

Further, we assume that φ_b(x) is independent of the label while φ_s(x) is not. Formally, for all y ∈ Y,

p(φ_b(x) | y) = p(φ_b(x)),    (3)
p(φ_s(x) | y) ≠ p(φ_s(x)).    (4)
Note that p refers to the ground truth distribution, as opposed to one learned by a model. Intuitively, the background features consist of population-level statistics that do not depend on the label, whereas the semantic features have a strong correlation with the label. A similar decomposition is also used in previous work on style transfer (Fu et al., 2018), where a sentence is decomposed into the content (semantic) and style (background) representations in the embedding space.
Based on this decomposition, we classify the types of OOD data as either semantic or background shift based on whether the distribution shift is driven by changes in φ_s(x) or φ_b(x), respectively. An example of background shift is a sentiment classification corpus with reviews from IMDB versus GoodReads, where phrases indicating positive reviews (e.g. "best", "beautifully") are roughly the same while the background phrases change significantly (e.g. "movie" vs "book"). On the other hand, semantic shift happens when we encounter unseen classes at test time, e.g. a dialogue system for booking flight tickets receiving a request for meal vouchers (Zheng et al., 2020), or a question-answering system handling unanswerable questions (Rajpurkar et al., 2018). We note that the two types of shifts may happen simultaneously in the real world, and our categorization is based on the most prominent type of shift.
OOD Detection Methods
To classify an input x ∈ X as ID or OOD, we produce a score s(x) and classify it as OOD if s(x) < γ, where γ is a pre-defined threshold. Most methods differ by how they define s(x). Below we describe two types of methods commonly used for OOD detection.
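As a minimal illustration, the detection rule shared by all of the methods below is just a threshold test over s(x); only the score differs. A sketch (the function name and the way γ is chosen are ours, not the paper's):

# Minimal sketch of the shared detection rule. In practice gamma would be
# tuned on held-out ID scores, e.g. to fix a target false-alarm rate.
def is_ood(score: float, gamma: float) -> bool:
    """Flag an input as OOD when its score s(x) falls below the threshold."""
    return score < gamma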
Calibration methods. These methods use the model's prediction confidence as the score. A well-calibrated model's confidence score reflects the likelihood of the predicted label being correct. Since the performance on OOD data is usually lower than on ID data, lower confidence suggests that the input is more likely to be OOD. The simplest method to obtain the confidence score is to directly use the conditional probability produced by a probabilistic classifier p_model, referred to as maximum softmax probability (MSP; Hendrycks and Gimpel, 2017). Formally,

s_MSP(x) = max_{k ∈ Y} p_model(y = k | x).    (5)
While there exist more sophisticated methods that take additional calibration steps, MSP proves to be a strong baseline, especially when p_model is fine-tuned from pretrained transformers (Hendrycks et al., 2020b; Desai and Durrett, 2020).
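For concreteness, a minimal sketch of the MSP score with a Hugging Face classifier, assuming a model already fine-tuned on the ID training split (the checkpoint path is a placeholder, not the authors' release):

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
# Placeholder path to a classifier fine-tuned on the ID training split.
clf = AutoModelForSequenceClassification.from_pretrained("./roberta-id-classifier")
clf.eval()

def msp_score(text: str) -> float:
    """Maximum softmax probability (Eq. 5): low values suggest OOD."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = clf(**inputs).logits
    return torch.softmax(logits, dim=-1).max().item()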
Density estimation methods. These methods use the likelihood of the input given by a density estimator as the score. For text or sequence data, a language model p_LM is typically used to estimate p(x) (Ren et al., 2019). To avoid bias due to the length of the sequence (see analysis in Appendix A), we use the token perplexity (PPL) as the score. Formally, given a sequence x = (x_1, . . . , x_T),

s_PPL(x) = exp( (1/T) Σ_{t=1}^{T} log p_LM(x_t | x_{1:t−1}) ).    (6)
While there are many works on density estimation methods using flow-based models in computer vision (e.g. Nalisnick et al., 2019a; Zhang et al., 2020a), there is limited work experimenting with density estimation methods for OOD detection on text (Lee et al., 2020).
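A matching sketch of the PPL score under the same assumptions (the fine-tuned GPT-2 path is a placeholder). Since the Hugging Face causal-LM loss is the mean negative log-likelihood per token, exp(-loss) recovers the score in Eq. (6):

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

gpt2_tokenizer = AutoTokenizer.from_pretrained("gpt2")
# Placeholder path to a GPT-2 checkpoint fine-tuned on the ID training split.
lm = AutoModelForCausalLM.from_pretrained("./gpt2-id-lm")
lm.eval()

def ppl_score(text: str) -> float:
    """exp of the mean token log-likelihood (Eq. 6): low values suggest OOD."""
    ids = gpt2_tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(input_ids=ids, labels=ids).loss  # mean NLL over shifted targets
    return torch.exp(-loss).item()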
Implicit assumptions on OOD. One key question in OOD detection is how the distribution shifts at test time, i.e. what characterizes the difference between ID and OOD examples. Without access to OOD data during training, the knowledge must be incorporated into the detector through some inductive bias. Calibration methods rely on p(y | x) estimated by a classifier, thus they are more influenced by the semantic features which are correlated with the label. We can see this formally by
p(y | x) ∝ p(x | y) p(y)    (7)
         = p(φ_b(x) | y) p(φ_s(x) | y) p(y)    (8)
         ∝ p(φ_s(x) | y) p(y).    (9)
In contrast, density estimation methods are sensitive to all components of the input, including both background and semantic features, even in situations where distribution shifts are predominantly driven by one particular type. In the following sections, we examine how these implicit assumptions impact performance on different ID/OOD pairs.
Simulation of Distribution Shifts
As an illustrative example, we construct a toy OOD detection problem using a binary classification setting similar to the one depicted in Figure 1. This allows us to remove estimation errors and study optimal calibration and density estimation detectors under controlled semantic and background shifts.
Data Generation
We generate the ID examples from a Gaussian Mixture Model (GMM):
y = 0 with probability 0.5, and y = 1 otherwise,    (10)
x | y = i ∼ N(µ_i, Σ).    (11)
The centroids are sets of semantic and background features such that µ_1 = [µ_s, µ_b] and µ_0 = [−µ_s, µ_b], where µ_s ∈ R^n and µ_b ∈ R^m. In the 2D case in Figure 1, this corresponds to the two Gaussian clusters, where the first component is the semantic feature and the second is the background feature.
In this case, we know the true calibrated score p(y | x) and the true density p(x) given any inputs.
Specifically, the optimal classifier is given by the Linear Discriminant Analysis (LDA) predictor. By setting Σ to the identity matrix, it corresponds to a linear classifier with weights [2µ_s, 0_b], where 0_b ∈ R^m is a vector of all 0s. For simplicity, we set µ_s = 1_s and µ_b = 0_b, where 1_s ∈ R^n is a vector of all 1s and 0_b ∈ R^m is a vector of all 0s.
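A sketch of this toy setup under the stated assumptions (identity covariance, so the exact p(y | x) and p(x) are available in closed form); the displacement magnitude used for the background shift below is illustrative:

import numpy as np
from scipy.special import expit, logsumexp
from scipy.stats import multivariate_normal
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, m = 40, 160                                    # semantic / background dims
mu1 = np.concatenate([np.ones(n), np.zeros(m)])   # class-1 centroid [mu_s, mu_b]
mu0 = np.concatenate([-np.ones(n), np.zeros(m)])  # class-0 centroid [-mu_s, mu_b]

def sample(m0, m1, size):
    """Draw from the GMM in Eqs. (10)-(11) with identity covariance."""
    y = rng.random(size) < 0.5
    centers = np.where(y[:, None], m1, m0)
    return centers + rng.standard_normal((size, n + m))

def density_score(x):
    """True log p(x) of the ID mixture (the density-estimation detector)."""
    lp = np.stack([multivariate_normal.logpdf(x, mu0),
                   multivariate_normal.logpdf(x, mu1)])
    return logsumexp(lp, axis=0) + np.log(0.5)

def calibration_score(x):
    """Optimal max_y p(y | x), i.e. the LDA confidence (the MSP analogue)."""
    p1 = expit(multivariate_normal.logpdf(x, mu1) -
               multivariate_normal.logpdf(x, mu0))
    return np.maximum(p1, 1.0 - p1)

# Background shift: translate both centroids along the background components.
z = np.concatenate([np.zeros(n), 2.0 * np.ones(m)])
id_x, ood_x = sample(mu0, mu1, 1000), sample(mu0 + z, mu1 + z, 1000)
is_id = np.concatenate([np.ones(1000), np.zeros(1000)])

for name, score in [("density", density_score), ("calibration", calibration_score)]:
    s = np.concatenate([score(id_x), score(ood_x)])
    print(name, roc_auc_score(is_id, s))  # density near 1.0, calibration near 0.5

Under a pure background shift the optimal confidence is unchanged because the LDA weights on the background components are 0, so the calibration AUROC stays near chance while the density score separates the two sets; replacing a fraction of the ID semantic features instead reproduces the semantic shift setting.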
Semantic Shift
We generate sets of OOD examples using a semantic shift by varying the overlap of ID and OOD semantic features. Formally, we vary the overlap rate r such that
r = |µ_s ∩ µ_s^Shift| / |µ_s|,    (12)

where µ_s, µ_s^Shift ∈ R^n are the sets of semantic features for ID and OOD, respectively, µ_s ∩ µ_s^Shift represents the common features between the two, and | · | denotes the number of elements.
We fix the total dimensions to n + m = 200 and set n = 40 (semantic features) and m = 160 (background features). Further, we vary r by increments of 10%. A smaller r (less overlap) indicates a stronger semantic shift. For each r, we randomly sample ID and OOD semantic features and report the mean over 20 trials with 95% confidence bands in Figure 2.
Background Shift
We generate sets of OOD examples using a background shift by applying a displacement vector z = [0_s, z_b] to the two means. Formally,

µ_i^Shift = µ_i + z,    (13)

where 0_s ∈ R^n is a vector of all 0s. We set z = α[0_s, 1_b], where 1_b ∈ R^m is a vector of all 1s.
Note that this shift corresponds to a translation of the ID distribution along the direction of µ_b. We set the total dimensions to n + m = 200 while varying the split between semantic (n) and background (m) components by increments of 20.

Simulation Results

Figure 2 shows the OOD detection performance of our simulated experiment. We use the Area Under the Receiver Operating Characteristic curve (AUROC) as our performance metric. We see that the calibration method generally outperforms density estimation in the semantic shift setting. Further, the performance gap between the two methods decreases as both methods approach near-perfect performance under large semantic shifts with no overlap in semantic features, and approach chance under no semantic shift with completely overlapping semantic features. However, the calibration method is unable to improve performance under background shifts in either regime, because the background features do not contribute to p(y | x): the LDA weights are 0 for these components (Section 4.1). We find these results in line with our expectations and use them to drive our intuition when evaluating both types of OOD detection methods for real text data.
Experiments and Analysis
We perform head-to-head comparisons of calibration and density estimation methods on 14 ID/OOD pairs categorized as either background shift or semantic shift, as well as 8 pairs from challenge datasets.
Setup
OOD detectors. Recall that the calibration method MSP relies on a classifier trained on the ID data. We fine-tune the RoBERTa model (Liu et al., 2019) on the ID data and compute its prediction probabilities (see Equation (5)). For the density estimation method PPL, we fine-tune GPT-2 (Radford et al., 2019) on the ID data and use perplexity as the OOD score (see Equation (6)). 2 To control for model size of the two methods, we choose RoBERTa-Base and GPT-2-Small, which have 110M and 117M parameters, respectively. We also experiment with two larger models, RoBERTa-Large and GPT-2-Medium, with 355M and 345M parameters, respectively. We evaluate the OOD detectors by AUROC and the False Alarm Rate at 95% Recall (FAR95), which measures the misclassification rate of ID examples at 95% OOD recall. Both metrics show similar trends (see Appendix B for FAR95 results).
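For concreteness, a sketch of the two metrics, assuming score arrays in which larger values indicate ID (the helper names are ours):

import numpy as np
from sklearn.metrics import roc_auc_score

def auroc(id_scores, ood_scores):
    """Area under the ROC curve for separating ID from OOD by the score."""
    y = np.concatenate([np.ones(len(id_scores)), np.zeros(len(ood_scores))])
    return roc_auc_score(y, np.concatenate([id_scores, ood_scores]))

def far95(id_scores, ood_scores):
    """ID misclassification rate at the threshold that recalls 95% of OOD."""
    gamma = np.percentile(ood_scores, 95)  # 95% of OOD scores fall below gamma
    return float(np.mean(id_scores < gamma))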
Training details. For RoBERTa, we fine-tune the model for 3 epochs on the training split of ID data with a learning rate of 1e-5 and a batch size of 16. For GPT-2, we fine-tune the model for 1 epoch on the training split of ID data for the language modeling task, using a learning rate of 5e-5 and a batch size of 8. 3
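A minimal sketch of the GPT-2 fine-tuning step with the hyperparameters above, assuming the ID training texts are available as a list of strings (the data loading and the output directory are placeholders, not the authors' exact pipeline):

from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

id_texts = ["..."]  # placeholder: the ID training split as raw strings
train_ds = Dataset.from_dict({"text": id_texts}).map(
    lambda batch: tokenizer(batch["text"], truncation=True),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-id-lm", num_train_epochs=1,
                           learning_rate=5e-5, per_device_train_batch_size=8),
    train_dataset=train_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()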
Oracle detectors. To get an estimate of the upper bound of OOD detection performance, we consider the situation where we have access to the OOD data and can directly learn an OOD classifier. Specifically, we train a logistic regression model with bag-of-words features using 80% of the test data and report results on the remaining 20%.
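A sketch of the oracle, which, unlike the detectors above, trains directly on labeled OOD data; the tiny text lists are placeholders:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

id_test_texts = ["id example one", "id example two", "id example three"]      # placeholders
ood_test_texts = ["ood example one", "ood example two", "ood example three"]  # placeholders
texts = id_test_texts + ood_test_texts
labels = [1] * len(id_test_texts) + [0] * len(ood_test_texts)

X_tr, X_te, y_tr, y_te = train_test_split(
    texts, labels, train_size=0.8, stratify=labels, random_state=0)
oracle = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
oracle.fit(X_tr, y_tr)
print(roc_auc_score(y_te, oracle.predict_proba(X_te)[:, 1]))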
Semantic Shift
Recall that the distribution of discriminative features changes in the semantic shift setting, i.e. p_train(φ_s(x)) ≠ p_test(φ_s(x)) (Section 2). We create semantic shift pairs by including test examples from classes unseen during training. Thus, semantic features useful for classifying the training data are not representative in the test set.
We use the News Category (Misra, 2018) and DBPedia Ontology Classification (Zhang et al., 2015) multiclass classification datasets to create two ID/OOD pairs. The News Category dataset consists of HuffPost news data. We use the examples from the five most frequent classes as ID (News Top-5) and the data from the remaining 36 classes as OOD (News Rest). The DBPedia Ontology Classification dataset consists of data from Wikipedia extracted from 14 non-overlapping classes of DBPedia 2014 (Lehmann et al., 2015).
We use examples from the first four classes by class number as ID (DBPedia Top-4) and the rest as OOD (DBPedia Rest).

Results. Table 1 shows the results for our semantic shift pairs. The calibration method consistently outperforms the density estimation method, indicating that calibration methods are better suited for scenarios with large semantic shifts, which is in line with our simulation results (Section 4).
Background Shift
Recall that background features (e.g. formality) do not depend on the label. Therefore, we consider domain shift in sentiment classification and natural language inference (NLI) datasets. For our analysis, we use the SST-2 (Socher et al., 2013), IMDB (Maas et al., 2011), and Yelp Polarity (Zhang et al., 2015) binary sentiment classification datasets. The SST-2 and IMDB datasets consist of movie reviews of different lengths. Meanwhile, the Yelp Polarity dataset contains reviews for different businesses, representing a domain shift from SST-2 and IMDB. Each of these datasets is used as ID/OOD, using the validation split of SST-2 and the test splits of IMDB and Yelp Polarity for evaluation.
We also use the SNLI (Bowman et al., 2015), MNLI (Williams et al., 2018) and RTE (from GLUE, Wang et al., 2018a) datasets. SNLI and MNLI consist of NLI examples sourced from different genres. RTE comprises examples sourced from a different domain. While there is some change in semantic information, since the task has two labels (entailment and non-entailment) as opposed to three (entailment, neutral and contradiction) in SNLI and MNLI, 4 domain/background shift is more prominent since the semantic features for the NLI task are similar. Each of these datasets is used as either ID or OOD, and we use the validation set of the OOD data for evaluation.

Results. Table 2 shows the results for binary sentiment classification and NLI domain shifts. The density estimation method consistently outperforms the calibration method (for all pairs except MNLI vs RTE), indicating that PPL is more sensitive to changes in background features. Further, in cases where the discriminative model generalizes well (as evident from the small difference in ID and OOD accuracy numbers), we find that the calibration method's performance is close to random (50) because a well-calibrated model also has higher confidence on its correct OOD predictions. We note that the discriminative models tend to generalize well here, hence it might be better to focus on domain adaptation instead of OOD detection when the shift is predominantly a background shift. We discuss this further in Section 6.
Analysis
Controlled distribution shifts. We use two controlled distribution shift experiments on real text data to further study the framework of semantic and background shifts. For background shift, we append different amounts of text from Wikitext (Merity et al., 2017) and Civil Comments (Borkan et al., 2019a) to SST-2 examples to create synthetic ID and OOD examples, respectively. We append the unrelated texts with lengths ∈ {25, 50, 100, 150, 200} words. For semantic shift, we use the News Category dataset and move classes from ID to OOD. We start with the top 40 ID classes by frequency and move classes in increments of 10. The ID coverage of semantic information decreases as more classes move to the OOD subset, resulting in a larger semantic shift.

Results. Figure 3 shows the AUROC score obtained from both methods for our controlled distribution shift experiments. We see that the density estimation method is more sensitive to the amount of synthetic background text than the calibration method, and that the calibration method is more sensitive to the number of ID/OOD classes. This is in line with our intuition about the shifts and the results we obtain from simulated data (Section 4).
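A sketch of the synthetic background-shift construction, assuming the unrelated corpus has already been loaded as a flat list of tokens (corpus loading is elided and the variable contents are placeholders):

import random

random.seed(0)
wiki_tokens = "placeholder Wikitext tokens go here".split()  # unrelated corpus

def add_background(example: str, k: int) -> str:
    """Append k unrelated words to an SST-2 example, k in {25, 50, 100, 150, 200}."""
    start = random.randrange(max(1, len(wiki_tokens) - k))
    return example + " " + " ".join(wiki_tokens[start:start + k])

synthetic_ood = [add_background(x, k=100)
                 for x in ["a crisp and emotionally charged film"]]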
Larger models. Table 3 shows the results using larger models for OOD detection. We observe that the larger discriminative model achieves a much higher score for the background shift pair, closing the gap with the language model performance.
We speculate that the larger model is able to learn some of the background features in its representation. The performance for the semantic shift pair is largely unchanged when using the larger models.
Challenge Data
Challenge datasets are designed to target either superficial heuristics adopted by a model (e.g. premise-hypothesis overlap) or model deficiencies (e.g. numerical reasoning in NLI), which creates significant challenges for deployed models (Ribeiro et al., 2020). It is therefore desirable to abstain on detected OOD examples. We consider the following challenge datasets.
Human-generated challenge data. Kaushik et al. (2020) crowdsourced a set of counterfactually-augmented IMDB examples (c-IMDB) by instructing annotators to minimally edit examples to yield counterfactual labels. This changes the distribution of semantic features with high correlation to labels such that p_train(φ_s(x)) ≠ p_test(φ_s(x)), creating a semantic shift. We consider IMDB as ID and c-IMDB as OOD, combining the training, validation, and test splits of c-IMDB for evaluation.
Rule-based challenge data. HANS (McCoy et al., 2019b) consists of template-based examples that have high premise-hypothesis overlap but are non-entailment, which mainly results in background shift due to the specific templates/syntax. Similarly, the Stress Test dataset (Naik et al., 2018a) is a set of automatically generated examples designed to evaluate common errors from NLI models. We categorize the type of distribution shifts from these test categories with respect to MNLI (ID) depending on whether they append "background" phrases to the ID examples or replace discriminative phrases (Table 4). Antonym (changing premise to obtain an antonymous hypothesis resulting in contradiction despite high overlap) and Numerical Reasoning (different semantic information than MNLI training set) constitute semantic shifts, as the set of semantic features now focus on specific types of entailment reasoning (e.g. antonymy and numerical representation). Negation (appending "and false is not true" to hypothesis), Spelling Errors (randomly introducing spelling errors in one premise word), Word Overlap (appending "and true is true" to each hypothesis), and Length Mismatch (appending a repetitive phrase "and true is true" five times to the premise) constitute background shifts because they introduce population level changes (e.g. appending "and true is true" to each hypothesis) that are unrelated to the entailment conditions of each example.
Failure case 1: spurious semantic features. Challenge data is often constructed to target spurious features (e.g. premise-hypothesis overlap for NLI) that are useful on the training set but do not correlate with the label in general, e.g. on the test set. Therefore, a discriminative model would be over-confident on the OOD examples because the spurious semantic features that were discriminative during training, while still prominent, are no longer predictive of the label. As a result, in Table 4, MSP struggles with most challenge data, achieving an AUROC score close to random (50). On the other hand, the density estimation method achieves almost perfect performance on HANS.
Failure case 2: small shifts. While density estimation methods perform better in background shift settings, our simulation results show that they still struggle to detect small shifts when the ID and OOD distributions largely overlap. Table 4 shows similar findings for Negation and Word Overlap Stress Test categories that append short phrases (e.g. "and true is true") to each ID hypothesis.
Failure case 3: repetition. For Antonym, Numerical Reasoning, and Length Mismatch, PPL performance is significantly worse than random, indicating that our language model assigns higher likelihoods to OOD than ID examples. These challenge examples contain highly repetitive phrases (e.g. appending "and true is true" five times in Length Mismatch, or high overlap between premise and hypothesis in Numerical Reasoning and Antonym), which is known to yield high likelihood under recursive language models (Holtzman et al., 2020). Thus repetition may be used as an attack on language model-based OOD detectors.
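A toy illustration of this failure mode, reusing ppl_score from the earlier PPL sketch (the sentences and the exact gap are illustrative and depend on the LM):

premise = "The bank opened a new branch downtown."
attacked = premise + " and true is true" * 5  # Length Mismatch-style suffix
# Repetitive continuations tend to receive high likelihood under recursive LMs,
# so the attacked string can score as *more* in-distribution than the original.
print(ppl_score(premise), ppl_score(attacked))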
Overall, the performance of both methods drops significantly on the challenge datasets. Among these, human-generated counterfactual data is the most difficult to detect, and rule-based challenge data can contain unnatural patterns that cause unexpected behavior.
Discussion
The performance of calibration and density estimation methods on OOD examples categorized along the lines of semantic and background shift provides us with insights that can be useful in improving OOD detection. This framework can be used to build better evaluation benchmarks that focus on different challenges in OOD detection. A choice between the two methods can also be made based on the anticipated distribution shift at test time, i.e., using calibration methods when detecting semantic shift is more important, and using density estimation methods to detect background shifts. However, we observe failure cases from challenge examples, with density estimation methods failing to detect OOD examples with repetition and small shifts, and calibration methods failing to detect most challenge examples. This indicates that these challenge examples constitute a type of OOD that targets the weaknesses of both approaches. This highlights the room for a more explicit definition of OOD to progress the development of OOD detection methods and create benchmarks that reflect realistic distribution shifts.
Related Work
Distribution shift in the wild. Most early works on OOD detection make no distinctions on the type of distribution shift observed at test time, and create synthetic ID/OOD pairs using different datasets based on the setup in Hendrycks and Gimpel (2017). Recently, there is an increasing interest in studying real-world distribution shifts (Ahmed and Courville, 2020; Hsu et al., 2020; Hendrycks et al., 2020a; Koh et al., 2020a). On these benchmarks with a diverse set of distribution shifts, no single detection method wins across the board. We explore the framework of characterization of distribution shifts along the two axes of semantic shift and background (or non-semantic) shift, shedding light on the performance of current methods.
OOD detection in NLP. Even though OOD detection is crucial in production (e.g. dialogue systems (Ryu et al., 2018)) and high-stake applications (e.g. healthcare (Borjali et al., 2020)), it has received relatively less attention in NLP until recently. Recent works evaluated/improved the calibration of pretrained transformer models (Hendrycks et al., 2020b; Goyal and Durrett, 2020; Kong et al., 2020; Zhou and Chen, 2021). They show that while pretrained transformers are better calibrated, making them better at detecting OOD data than previous models, there is scope for improvement. Our analysis reveals one limitation of calibration-based detection when faced with a background shift. Other works focus on specific tasks, including prototypical networks for low-resource text classification (Tan et al., 2019) and data augmentation for intent classification (Zheng et al., 2020).
Inductive bias in OOD detection. Our work shows that the effectiveness of a method largely depends on whether its assumption on the distribution shift matches the test data. One straightforward way to incorporate prior knowledge on the type of distribution shift is through augmenting similar OOD data during training, i.e., the so-called outlier exposure method (Hendrycks et al., 2019), which has been shown to be effective on question answering (Kamath et al., 2020). Given that the right type of OOD data can be difficult to obtain, another line of work uses a hybrid of calibration and density estimation methods to achieve a balance between capturing semantic features and background features. These models are usually trained with both a discriminative loss and a generative (or self-supervised) loss (Winkens et al., 2020; Zhang et al., 2020a; Nalisnick et al., 2019b).
Domain adaptation versus OOD detection. There are two ways of handling the effect of OOD data: 1) build models that perform well across domains (i.e., background shifts), i.e., domain adaptation (Chu and Wang, 2018; Kashyap et al., 2021), or 2) allow models to detect a shift in data distribution, and potentially abstain from making a prediction. In our setting (2), we want to guard against all types of OOD data without any access to it, unlike domain adaptation, which usually relies on access to OOD data. This setting can be more important than (1) for safety-critical applications, such as those in healthcare, because the potential cost of an incorrect prediction is greater, motivating a more conservative approach to handling OOD data by abstaining. This could also help improve performance in selective prediction (Kamath et al., 2020; Xin et al., 2021).
Conclusion
Despite the extensive literature on outlier and OOD detection, previous work in NLP tends to lack consensus on a rigorous definition of OOD examples, instead relying on arbitrary dataset pairs from different tasks. In our work, we approach this problem in natural text and simulated data by categorizing OOD examples as either background or semantic shifts and study the performance of two common OOD detection methods-calibration and density estimation. For both types of data, we find that density estimation methods outperform calibration methods under background shifts while the opposite is true under semantic shifts. However, we find several failure cases from challenge examples that target model shortcomings.
As explained in Section 2, we assume that φ_s and φ_b map x to two disjoint sets of components for simplicity. This assumption helps us simplify the framework and compare the two types of detection methods in relation to the two types of shifts. While this simplified framework explains much of the differences between the two methods, failure cases from challenge examples highlight the room for better frameworks and a more explicit definition of OOD to progress the development of OOD detection methods. Such a definition can inform the creation of benchmarks on OOD detection that reflect realistic distribution shifts.
Defining (or at least explicitly stating) the types of OOD examples that predictors are designed to target can also guide future modeling decisions between using calibration and density estimation methods, and help improve detection. Some promising directions include test-time fine-tuning (Sun et al., 2020) and data augmentation (Chen et al., 2020), which can be guided towards a specific type of distribution shift for improved detection performance against it. Finally, the methods we studied each work well for one type of shift, which motivates the use of hybrid models (Zhang et al., 2020b; Liu and Abbeel, 2020) that use both calibration and density estimation when both types of shift occur at the same time.
Ethical Considerations
As society continues to rely on automated machine learning systems to make important decisions that affect human lives, OOD detection becomes increasingly vital to ensure that these systems can detect natural shifts in domain and semantics. If medical chat-bots cannot recognize that new disease variants or rare co-morbidities are OOD while diagnosing patients, they will likely provide faulty and potentially harmful recommendations 5 if they don't contextualize their uncertainty. We believe that implementing OOD detection, especially for more challenging but commonly occurring semantic shifts should be part of any long-lasting production model.
In addition, OOD detection can be used to identify and alter model behavior when encountering data related to minority groups. For example, Koh et al. (2020b) present a modified version of the CivilComments dataset (Borkan et al., 2019b), with the task of identifying toxic user comments on online platforms. They consider domain annotations for each comment based on whether the comment mentions each of 8 demographic identities: male, female, LGBTQ, Christian, Muslim, other religions, Black and White. They note that a standard BERT-based model trained using ERM performs poorly on the worst group, with a 34.2% drop in accuracy as compared to the average. Such models may lead to unintended consequences like flagging a comment as toxic just because it mentions some demographic identities, or in other words, belongs to some domains. Our work can be useful in altering the inference-time behavior of such models upon detection of such domains, which constitute a larger degree of background shift. Of course, nefarious agents could use the same pipeline to alter model behavior to identify and discriminate against demographics that display such background shifts.
A Example Probability
We additionally evaluate our density estimation methods using log p(x) as a detection measure. In the case of text, log p(x) is defined as Σ_{i=1}^{t} log p(x_i | x_{<i}).
While PPL accounts for varying sequence lengths by averaging word likelihoods over the input sequence, log p(x) does not. Figure 4 shows that this difference significantly impacts performance. With IMDB as the ID data, using log p(x) fails for SST-2, achieving close to 100 FAR95 and near 0 AUROC. We suspect this is because IMDB examples are a full paragraph while SST-2 examples are one to two sentences: log p(x) would naturally be smaller for IMDB examples than these OOD examples, resulting in complete failure for simple thresholding methods measured by AUROC.

Figure 4: OOD detection performance as measured by AUROC using different measures for binary sentiment classification based background shift, using IMDB as ID data. We can see that using log p(x) as a measure is highly noisy due to its dependency on sequence lengths.
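A sketch of the unnormalized score for comparison, reusing gpt2_tokenizer and lm from the earlier PPL sketch; because log p(x) sums token log-likelihoods, longer inputs receive systematically lower scores regardless of how in-distribution they are:

import torch

def log_px(text: str) -> float:
    """Total sequence log-likelihood: length-dependent, unlike ppl_score."""
    ids = gpt2_tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(input_ids=ids, labels=ids).loss  # mean NLL over T-1 targets
    return (-loss * (ids.shape[1] - 1)).item()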
B FAR95 Results
We additionally evaluate the performance for all experiments using FAR95, which measures the false positive rate at 95% recall. In the context of OOD detection, this measure gives the misclassification rate of ID data at 95% recall of OOD classification, hence a lower value indicates better performance.
Tables 5, 6, 7 and 8 show the results obtained using FAR95 as a metric for the corresponding ID/OOD pairs used earlier. We observe that FAR95 results are in line with AUROC results except for DBPedia, in which case density estimation methods yield a better result. The difference may be a result of the accumulative nature of AUROC in contrast to FAR95, which is a point measurement.
C Background shift in MNLI Genres
MNLI is a crowd-sourced collection of sentence pairs for textual entailment sourced from 10 genres including Fiction, Government, Slate, Telephone, and Travel. We use examples from these five MNLI genres and separately consider each genre as ID and OOD, using the validation splits for evaluation. Table 9 shows the results for MNLI genres. The discriminative model generalizes well to other genres, and we find that the OOD detection performance of the calibration method is close to random (50) because of the higher confidence on correct OOD predictions by a well-calibrated model.

Table 9: Performance on background shifts caused by shift in MNLI genre. For each pair, the higher score obtained (by PPL or MSP) is in bold. We can see that the density estimation method using PPL significantly outperforms the calibration method.
Figure 1: Illustration of semantic shift and background shift in R^2. Each point consists of semantic features (x-axis) and background features (y-axis). OOD examples (red points) can shift in either direction. The background color indicates regions of ID (light) and OOD (dark) given by the density estimation method (left) and the calibration method (right). The calibration method fails to detect OOD examples due to background shift.

Figure 2: Area Under the Receiver Operating Characteristic curve (AUROC) of calibration (blue) and density estimation (orange) methods for OOD detection using our toy binary classification problem. The calibration method outperforms the density estimation method under larger semantic shifts while the opposite is true under larger background shifts.

Figure 3: AUROC of PPL (orange) and MSP (blue) for controlled background and semantic shift experiments. The density estimation method's performance improves as we increase the amount of background shift by appending longer texts, and the calibration method's performance increases as we increase the amount of semantic shift by moving more classes to OOD.
Table 1: Performance (AUROC) on semantic shifts, with the higher score (among PPL/MSP) in bold. We can see that the calibration method using MSP significantly outperforms the density estimation method.

ID              OOD            PPL    MSP    Oracle
News Top-5      News Rest      60.2   78.9   72.0
DBPedia Top-4   DBPedia Rest   75.4   88.8   99.6
Table 2: Performance on background shifts caused by shift in domain (AUROC and accuracy). For each pair, the higher score obtained (by PPL or MSP) is in bold. The density estimation method using PPL outperforms the calibration method.

ID      OOD     PPL    MSP    Oracle   OOD Acc. (Δ)    ID Acc.
SST-2   IMDB    97.9   66.2   100.0    92.0 (-1.8)     93.8
SST-2   Yelp    98.7   57.5    99.8    94.4 (+0.6)
IMDB    SST-2   96.9   82.6   100.0    89.2 (-6.3)     95.5
IMDB    Yelp    77.9   67.1   100.0    95.4 (-0.1)
Yelp    SST-2   98.9   85.9    99.8    88.9 (-9.3)     98.2
Yelp    IMDB    86.6   61.8   100.0    93.2 (-5.0)
SNLI    RTE     94.6   78.7    99.8    67.5 (-22.6)    90.1
SNLI    MNLI    96.7   75.6    99.7    77.9 (-12.2)
RTE     SNLI    81.2   45.1    99.7    82.0 (+6.9)     75.1
RTE     MNLI    81.4   55.5    97.0    77.3 (+2.2)
MNLI    SNLI    75.7   56.1    99.7    80.4 (-4.4)     84.8
MNLI    RTE     68.0   76.5    96.7    76.5 (-8.3)
Table 3: Performance of Base and Large models for a background shift pair and a semantic shift pair each, with the higher score in bold. The larger discriminative model helps close the performance gap between the calibration method and the density estimation method for background shift.
Table 4: AUROC scores obtained using PPL, MSP and Oracle for challenge data. The primary type of shift observed is indicated in the 'Shift' column. Higher performance (among MSP/PPL) for each pair is in bold. We can see that both methods struggle with most types of challenge data.

We consider the matched Negation, Spelling Errors, Word Overlap and Length Mismatch examples from the Stress Test as background shifts, and the Numerical Reasoning and Antonym examples as semantic shifts. We consider MNLI as ID for these challenge examples and use the validation split of HANS and MNLI for evaluation.
References

Faruk Ahmed and Aaron C. Courville. 2020. Detecting semantic anomalies. In AAAI 2020, pages 3154-3162. AAAI Press.

Dario Amodei, Chris Olah, Jacob Steinhardt, Paul F. Christiano, John Schulman, and Dan Mané. 2016. Concrete problems in AI safety. CoRR, abs/1606.06565.

Alireza Borjali, Martin Magneli, David Shin, Henrik Malchau, Orhun K. Muratoglu, and Kartik M. Varadarajan. 2020. Natural language processing with deep learning for medical adverse event detection from free-text medical narratives: A case study of detecting total hip replacement dislocation. CoRR, abs/2004.08333.

Daniel Borkan, Lucas Dixon, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2019a. Nuanced metrics for measuring unintended bias with real data for text classification. In Companion of The 2019 World Wide Web Conference, WWW 2019, pages 491-500. ACM.

Daniel Borkan, Lucas Dixon, Jeffrey Sorensen, Nithum Thain, and Lucy Vasserman. 2019b. Nuanced metrics for measuring unintended bias with real data for text classification. In Companion Proceedings of The 2019 World Wide Web Conference, WWW '19, pages 491-500. Association for Computing Machinery.

Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In EMNLP 2015, pages 632-642. The Association for Computational Linguistics.

Nicholas Carlini and David A. Wagner. 2017. Adversarial examples are not easily detected: Bypassing ten detection methods. In AISec@CCS 2017, pages 3-14. ACM.

Jiefeng Chen, Yixuan Li, Xi Wu, Yingyu Liang, and Somesh Jha. 2020. Robust out-of-distribution detection via informative outlier mining. CoRR, abs/2006.15207.

Chenhui Chu and Rui Wang. 2018. A survey of domain adaptation for neural machine translation. In COLING 2018, pages 1304-1319. Association for Computational Linguistics.

Hal Daumé III. 2007. Frustratingly easy domain adaptation. In ACL 2007. The Association for Computational Linguistics.

Shrey Desai and Greg Durrett. 2020. Calibration of pre-trained transformers. In EMNLP 2020, pages 295-302. Association for Computational Linguistics.

Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: Exploration and evaluation. In AAAI 2018, pages 663-670. AAAI Press.

Tanya Goyal and Greg Durrett. 2020. Evaluating factuality in generation with dependency-level entailment. In Findings of EMNLP 2020, pages 3592-3603. Association for Computational Linguistics.

Dan Hendrycks, Steven Basart, Mantas Mazeika, Mohammadreza Mostajabi, Jacob Steinhardt, and Dawn Song. 2020a. Scaling out-of-distribution detection for real-world settings. CoRR, abs/1911.11132.

Dan Hendrycks and Kevin Gimpel. 2017. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In ICLR 2017. OpenReview.net.

Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan, and Dawn Song. 2020b. Pretrained transformers improve out-of-distribution robustness. In ACL 2020, pages 2744-2751. Association for Computational Linguistics.

Dan Hendrycks, Mantas Mazeika, and Thomas G. Dietterich. 2019. Deep anomaly detection with outlier exposure. In ICLR 2019. OpenReview.net.
Table 5: FAR95 scores (↓) obtained using PPL, MSP and Oracle for semantic shifts, with the lower score (among PPL/MSP) in bold.

ID              OOD            PPL    MSP    Oracle
News Top-5      News Rest      88.5   75.7   80.4
DBPedia Top-4   DBPedia Rest   78.3   86.3    1.3
Table 6: FAR95 scores (↓) obtained using PPL, MSP and Oracle for background shifts caused by shift in domain. For each pair, the lower score obtained (by PPL or MSP) is in bold.
Table 7: FAR95 scores (↓) obtained using PPL, MSP and Oracle for background shifts caused by shift in MNLI genre. For each pair, the lower score obtained (by PPL or MSP) is in bold.

ID           OOD          PPL    MSP    Oracle
Fiction      Government   57.4   95.0    9.7
Fiction      Slate        66.0   92.7   37.7
Fiction      Telephone    29.1   93.3   36.0
Fiction      Travel       58.0   93.3   10.0
Government   Fiction      74.7   92.6    6.4
Government   Slate        70.7   92.1   13.7
Government   Telephone    35.2   95.5    6.2
Government   Travel       52.8   92.4    6.2
Slate        Fiction      90.6   96.2   32.2
Slate        Government   90.0   96.1   12.6
Slate        Telephone    57.4   96.0   22.7
Slate        Travel       83.3   95.8   16.8
Telephone    Fiction      54.2   93.3   32.5
Telephone    Government   50.9   93.7    8.5
Telephone    Slate        49.6   91.1   36.3
Telephone    Travel       44.6   91.4   10.7
Travel       Fiction      74.5   95.5   10.2
Travel       Government   69.0   94.4    7.8
Travel       Slate        75.9   93.8   16.8
Travel       Telephone    30.3   93.7    9.5
Table 8: FAR95 scores (↓) obtained using PPL, MSP and Oracle for challenge data. The primary type of shift observed is indicated in the 'Shift' column. The lower score (among MSP/PPL) for each pair is in bold.

ID     OOD             Shift        PPL     MSP    Oracle
IMDB   c-IMDB          Semantic     93.1    82.8   69.3
MNLI   HANS            Background    4.2    73.1    0.0
MNLI   Negation        Background   94.9    93.5    0.1
MNLI   Len. Mismatch   Background   98.3    95.0    0.1
MNLI   Spell. Error    Background   96.9    92.4    3.0
MNLI   Word Overlap    Background   96.0    94.4    1.1
MNLI   Antonym         Semantic    100.0    90.8    6.3
MNLI   Num. Reas.      Semantic     99.5    77.6    0.7
2 We also use the sentence probability (p(x)) as the score, but find it highly sensitive to sequence lengths (Appendix A).
3 Our code can be found at https://github.com/uditarora/ood-text-emnlp.
4 Both neutral and contradiction are considered as non-entailment when evaluating accuracy with RTE vs SNLI/MNLI or vice-versa.
5 https://www.nabla.com/blog/gpt-3/
Acknowledgements

We thank the anonymous reviewers, Ethan Perez, Angelica Chen and other members of the Machine Learning for Language Lab at New York University for their thoughtful suggestions on improving the paper. We also want to thank Diksha Meghwal, Vaibhav Gadodia and Ambuj Ojha for their help with an initial version of the project and experimentation setup.
Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In ICLR 2020. OpenReview.net.

Yen-Chang Hsu, Yilin Shen, Hongxia Jin, and Zsolt Kira. 2020. Generalized ODIN: detecting out-of-distribution image without learning from out-of-distribution data. In CVPR 2020, pages 10948-10957. IEEE.

Amita Kamath, Robin Jia, and Percy Liang. 2020. Selective question answering under domain shift. In ACL 2020, pages 5684-5696. Association for Computational Linguistics.

Abhinav Ramesh Kashyap, Devamanyu Hazarika, Min-Yen Kan, and Roger Zimmermann. 2021. Domain divergences: A survey and empirical analysis. In NAACL-HLT 2021, pages 1830-1849. Association for Computational Linguistics.

Divyansh Kaushik, Eduard H. Hovy, and Zachary Chase Lipton. 2020. Learning the difference that makes a difference with counterfactually-augmented data. In ICLR 2020. OpenReview.net.

Polina Kirichenko, Pavel Izmailov, and Andrew Gordon Wilson. 2020. Why normalizing flows fail to detect out-of-distribution data. In NeurIPS 2020.

Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Sara Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, and Percy Liang. 2020a. WILDS: A benchmark of in-the-wild distribution shifts. CoRR, abs/2012.07421.

Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu, Michihiro Yasunaga, Richard Lanas Phillips, Sara Beery, Jure Leskovec, Anshul Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, and Percy Liang. 2020b. WILDS: A benchmark of in-the-wild distribution shifts. CoRR, abs/2012.07421.

Lingkai Kong, Haoming Jiang, Yuchen Zhuang, Jie Lyu, Tuo Zhao, and Chao Zhang. 2020. Calibrated language model fine-tuning for in- and out-of-distribution data. In EMNLP 2020, pages 1326-1340. Association for Computational Linguistics.

Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. 2018. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In NeurIPS 2018, pages 7167-7177.

Nayeon Lee, Yejin Bang, Andrea Madotto, and Pascale Fung. 2020. Misinformation has high perplexity. CoRR, abs/2006.04666.

Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick van Kleef, Sören Auer, et al. 2015. DBpedia: a large-scale, multilingual knowledge base extracted from Wikipedia. Semantic Web, 6(2):167-195.

Shiyu Liang, Yixuan Li, and R. Srikant. 2018. Enhancing the reliability of out-of-distribution image detection in neural networks. In ICLR 2018. OpenReview.net.

Hao Liu and Pieter Abbeel. 2020. Hybrid discriminative-generative training via contrastive learning. CoRR, abs/2007.09070.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.

Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In ACL 2011, pages 142-150. Association for Computational Linguistics.

Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019a. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In ACL 2019, pages 3428-3448. Association for Computational Linguistics.

Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019b. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In ACL 2019, pages 3428-3448. Association for Computational Linguistics.
Pointer sentinel mixture models. Stephen Merity, Caiming Xiong, James Bradbury, Richard Socher, 5th International Conference on Learning Representations. Toulon, FranceConference Track Proceedings. Open-Review. netStephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017. Pointer sentinel mixture mod- els. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. Open- Review.net.
News category dataset. Rishabh Misra, 10.13140/RG.2.2.20331.18729Rishabh Misra. 2018. News category dataset.
Stress test evaluation for natural language inference. Aakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, Graham Neubig, Proceedings of the 27th International Conference on Computational Linguistics. the 27th International Conference on Computational LinguisticsSanta Fe, New Mexico, USAAssociation for Computational LinguisticsAakanksha Naik, Abhilasha Ravichander, Norman Sadeh, Carolyn Rose, and Graham Neubig. 2018a. Stress test evaluation for natural language inference. In Proceedings of the 27th International Conference on Computational Linguistics, pages 2340-2353, Santa Fe, New Mexico, USA. Association for Com- putational Linguistics.
Stress test evaluation for natural language inference. Aakanksha Naik, Abhilasha Ravichander, Norman M Sadeh, Carolyn Penstein Rosé, Graham Neubig, Proceedings of the 27th International Conference on Computational Linguistics. the 27th International Conference on Computational LinguisticsSanta Fe, New Mexico, USAAssociation for Computational LinguisticsAakanksha Naik, Abhilasha Ravichander, Norman M. Sadeh, Carolyn Penstein Rosé, and Graham Neubig. 2018b. Stress test evaluation for natural language inference. In Proceedings of the 27th International Conference on Computational Linguistics, COLING 2018, Santa Fe, New Mexico, USA, August 20-26, 2018, pages 2340-2353. Association for Computa- tional Linguistics.
Do deep generative models know what they don't know?. Eric T Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Görür, Balaji Lakshminarayanan, 7th International Conference on Learning Representations, ICLR 2019. New Orleans, LA, USAOpenReview.netEric T. Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Görür, and Balaji Lakshminarayanan. 2019a. Do deep generative models know what they don't know? In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net.
Hybrid models with deep and invertible features. Eric T Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Görür, Balaji Lakshminarayanan, PMLRProceedings of the 36th International Conference on Machine Learning, ICML 2019. the 36th International Conference on Machine Learning, ICML 2019Long Beach, California, USA97Eric T. Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Görür, and Balaji Lakshminarayanan. 2019b. Hybrid models with deep and invertible features. In Proceedings of the 36th International Confer- ence on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pages 4723-4732. PMLR.
An empirical analysis of formality in online communication. Ellie Pavlick, Joel R Tetreault, Trans. Assoc. Comput. Linguistics. 4Ellie Pavlick and Joel R. Tetreault. 2016. An empir- ical analysis of formality in online communication. Trans. Assoc. Comput. Linguistics, 4:61-74.
Language models are unsupervised multitask learners. A Radford, J Wu, R Child, D Luan, D Amodei, I Sutskever, A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. 2019. Language models are unsuper- vised multitask learners.
Know what you don't know: Unanswerable questions for SQuAD. P Rajpurkar, R Jia, P Liang, Association for Computational Linguistics (ACL). P. Rajpurkar, R. Jia, and P. Liang. 2018. Know what you don't know: Unanswerable questions for SQuAD. In Association for Computational Linguis- tics (ACL).
Likelihood ratios for outof-distribution detection. Jie Ren, Peter J Liu, Emily Fertig, Jasper Snoek, Ryan Poplin, Mark Depristo, Joshua Dillon, Balaji Lakshminarayanan, Advances in Neural Information Processing Systems. Curran Associates, Inc32Jie Ren, Peter J. Liu, Emily Fertig, Jasper Snoek, Ryan Poplin, Mark Depristo, Joshua Dillon, and Balaji Lakshminarayanan. 2019. Likelihood ratios for out- of-distribution detection. In Advances in Neural Information Processing Systems, volume 32, pages 14707-14718. Curran Associates, Inc.
Beyond accuracy: Behavioral testing of NLP models with checklist. Tongshuang Marco Túlio Ribeiro, Carlos Wu, Sameer Guestrin, Singh, 10.18653/v1/2020.acl-main.442Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online. the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, OnlineAssociation for Computational LinguisticsMarco Túlio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. 2020. Beyond accuracy: Behav- ioral testing of NLP models with checklist. In Pro- ceedings of the 58th Annual Meeting of the Associ- ation for Computational Linguistics, ACL 2020, On- line, July 5-10, 2020, pages 4902-4912. Association for Computational Linguistics.
Out-of-domain detection based on generative adversarial network. Seonghan Ryu, Sangjun Koo, Hwanjo Yu, Gary Geunbae Lee, 10.18653/v1/d18-1077Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsSeonghan Ryu, Sangjun Koo, Hwanjo Yu, and Gary Geunbae Lee. 2018. Out-of-domain detection based on generative adversarial network. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, Brussels, Bel- gium, October 31 -November 4, 2018, pages 714- 718. Association for Computational Linguistics.
Distributionally robust neural networks. Shiori Sagawa, Pang Wei Koh, Tatsunori B Hashimoto, Percy Liang, 8th International Conference on Learning Representations. Addis Ababa, Ethiopia2020OpenReview.netShiori Sagawa, Pang Wei Koh, Tatsunori B. Hashimoto, and Percy Liang. 2020. Distributionally robust neu- ral networks. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Toward open set recognition. J Walter, Anderson Scheirer, De Rezende, Archana Rocha, Terrance E Sapkota, Boult, 10.1109/TPAMI.2012.256IEEE Trans. Pattern Anal. Mach. Intell. 357Walter J. Scheirer, Anderson de Rezende Rocha, Archana Sapkota, and Terrance E. Boult. 2013. To- ward open set recognition. IEEE Trans. Pattern Anal. Mach. Intell., 35(7):1757-1772.
Recursive deep models for semantic compositionality over a sentiment treebank. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, Christopher Potts, Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013. the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013Seattle, Washington, USAACLA meeting of SIGDAT, a Special Interest Group of the ACLRichard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep mod- els for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Process- ing, EMNLP 2013, 18-21 October 2013, Grand Hy- att Seattle, Seattle, Washington, USA, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1631-1642. ACL.
Test-time training with self-supervision for generalization under distribution shifts. Yu Sun, Xiaolong Wang, Zhuang Liu, John Miller, Alexei A Efros, Moritz Hardt, PMLRProceedings of the 37th International Conference on Machine Learning. the 37th International Conference on Machine Learning2020Yu Sun, Xiaolong Wang, Zhuang Liu, John Miller, Alexei A. Efros, and Moritz Hardt. 2020. Test-time training with self-supervision for generalization un- der distribution shifts. In Proceedings of the 37th In- ternational Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 9229-9248. PMLR.
Out-ofdomain detection for low-resource text classification tasks. Ming Tan, Yang Yu, Haoyu Wang, Dakuo Wang, Saloni Potdar, Shiyu Chang, Mo Yu, 10.18653/v1/D19-1364Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language ProcessingHong Kong, ChinaAssociation for Computational LinguisticsMing Tan, Yang Yu, Haoyu Wang, Dakuo Wang, Sa- loni Potdar, Shiyu Chang, and Mo Yu. 2019. Out-of- domain detection for low-resource text classification tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Nat- ural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 3564-3570. Association for Computational Linguis- tics.
Glue: A multi-task benchmark and analysis platform for natural language understanding. A Wang, A Singh, J Michael, F Hill, O Levy, S R Bowman, arXiv:1804.07461arXiv preprintA. Wang, A. Singh, J. Michael, F. Hill, O. Levy, and S. R. Bowman. 2018a. Glue: A multi-task bench- mark and analysis platform for natural language un- derstanding. arXiv preprint arXiv:1804.07461.
GLUE: A multi-task benchmark and analysis platform for natural language understanding. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R Bowman, 10.18653/v1/w18-5446Proceedings of the Workshop: Analyzing and Interpreting Neural Networks for NLP. the Workshop: Analyzing and Interpreting Neural Networks for NLPBrussels, BelgiumAssociation for Computational LinguisticsAlex Wang, Amanpreet Singh, Julian Michael, Fe- lix Hill, Omer Levy, and Samuel R. Bowman. 2018b. GLUE: A multi-task benchmark and analysis platform for natural language understand- ing. In Proceedings of the Workshop: Analyzing and Interpreting Neural Networks for NLP, Black- boxNLP@EMNLP 2018, Brussels, Belgium, Novem- ber 1, 2018, pages 353-355. Association for Com- putational Linguistics.
A broad-coverage challenge corpus for sentence understanding through inference. Adina Williams, Nikita Nangia, Samuel Bowman, Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesAssociation for Computational Linguistics1Long PapersAdina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122. Association for Computational Linguistics.
Contrastive training for improved out-of-distribution detection. Jim Winkens, Rudy Bunel, Abhijit Guha Roy, Robert Stanforth, Vivek Natarajan, Joseph R Ledsam, Patricia Macwilliams, abs/2007.05566Pushmeet Kohli, Alan Karthikesalingam, Simon Kohl, A. Taylan Cemgil, S. M. Ali Eslami, and Olaf Ronneberger. 2020CoRRJim Winkens, Rudy Bunel, Abhijit Guha Roy, Robert Stanforth, Vivek Natarajan, Joseph R. Ledsam, Pa- tricia MacWilliams, Pushmeet Kohli, Alan Karthike- salingam, Simon Kohl, A. Taylan Cemgil, S. M. Ali Eslami, and Olaf Ronneberger. 2020. Contrastive training for improved out-of-distribution detection. CoRR, abs/2007.05566.
The art of abstention: Selective prediction and error regularization for natural language processing. Ji Xin, Raphael Tang, Yaoliang Yu, Jimmy Lin, 10.18653/v1/2021.acl-long.84Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021. the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021Association for Computational Linguistics1Virtual EventJi Xin, Raphael Tang, Yaoliang Yu, and Jimmy Lin. 2021. The art of abstention: Selective prediction and error regularization for natural language process- ing. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, pages 1040-1051. Association for Computational Linguistics.
Hybrid models for open set recognition. Hongjie Zhang, Ang Li, Jie Guo, Yanwen Guo, 10.1007/978-3-030-58580-8_7Computer Vision -ECCV 2020 -16th European Conference. Glasgow, UKSpringer12348Proceedings, Part IIIHongjie Zhang, Ang Li, Jie Guo, and Yanwen Guo. 2020a. Hybrid models for open set recognition. In Computer Vision -ECCV 2020 -16th European Con- ference, Glasgow, UK, August 23-28, 2020, Proceed- ings, Part III, volume 12348 of Lecture Notes in Computer Science, pages 102-117. Springer.
Hybrid models for open set recognition. Hongjie Zhang, Ang Li, Jie Guo, Yanwen Guo, 10.1007/978-3-030-58580-8_7Computer Vision -ECCV 2020 -16th European Conference. Glasgow, UKSpringer12348Proceedings, Part IIIHongjie Zhang, Ang Li, Jie Guo, and Yanwen Guo. 2020b. Hybrid models for open set recognition. In Computer Vision -ECCV 2020 -16th European Con- ference, Glasgow, UK, August 23-28, 2020, Proceed- ings, Part III, volume 12348 of Lecture Notes in Computer Science, pages 102-117. Springer.
Character-level convolutional networks for text classification. Xiang Zhang, Junbo Zhao, Yann Lecun, Advances in Neural Information Processing Systems. Curran Associates, Inc28Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text clas- sification. In Advances in Neural Information Pro- cessing Systems, volume 28, pages 649-657. Curran Associates, Inc.
Out-of-domain detection for natural language understanding in dialog systems. Yinhe Zheng, Guanyi Chen, Minlie Huang, 10.1109/TASLP.2020.2983593IEEE ACM Trans. Audio Speech Lang. Process. 28Yinhe Zheng, Guanyi Chen, and Minlie Huang. 2020. Out-of-domain detection for natural language under- standing in dialog systems. IEEE ACM Trans. Audio Speech Lang. Process., 28:1198-1209.
Contrastive out-of-distribution detection for pretrained transformers. Wenxuan Zhou, Muhao Chen, abs/2104.08812CoRRWenxuan Zhou and Muhao Chen. 2021. Contrastive out-of-distribution detection for pretrained trans- formers. CoRR, abs/2104.08812.
|
[
"Constituent Parsing as Sequence Labeling"
] | [
"Carlos Gómez-Rodríguez carlos.gomez@udc.es \nUniversidade da Coruña, FASTPARSE Lab, LyS Group\nDepartamento de Computación, Campus de Elviña s/n\n15071 A Coruña, Spain",
"David Vilares david.vilares@udc.es \nUniversidade da Coruña, FASTPARSE Lab, LyS Group\nDepartamento de Computación, Campus de Elviña s/n\n15071 A Coruña, Spain"
] | [
"Universidade da Coruña, FASTPARSE Lab, LyS Group, Departamento de Computación, Campus de Elviña s/n, 15071 A Coruña, Spain"
] | [
"Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing"
] | We introduce a method to reduce constituent parsing to sequence labeling. For each word w_t, it generates a label that encodes: (1) the number of ancestors in the tree that the words w_t and w_{t+1} have in common, and (2) the nonterminal symbol at the lowest common ancestor. We first prove that the proposed encoding function is injective for any tree without unary branches. In practice, the approach is made extensible to all constituency trees by collapsing unary branches. We then use the PTB and CTB treebanks as testbeds and propose a set of fast baselines. We achieve 90% F-score on the PTB test set, outperforming the Vinyals et al. (2015) sequence-to-sequence parser. In addition, sacrificing some accuracy, our approach achieves the fastest constituent parsing speeds reported to date on PTB by a wide margin. | 10.18653/v1/d18-1162 | [
"https://www.aclweb.org/anthology/D18-1162.pdf"
] | 53,047,545 | 1810.08994 | c3e4ab04f548cd4bfc4c941705d9e73b3c4492de |
Constituent Parsing as Sequence Labeling

Carlos Gómez-Rodríguez carlos.gomez@udc.es
David Vilares david.vilares@udc.es
Universidade da Coruña, FASTPARSE Lab, LyS Group
Departamento de Computación, Campus de Elviña s/n, 15071 A Coruña, Spain

Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018. Association for Computational Linguistics.
We introduce a method to reduce constituent parsing to sequence labeling. For each word w_t, it generates a label that encodes: (1) the number of ancestors in the tree that the words w_t and w_{t+1} have in common, and (2) the nonterminal symbol at the lowest common ancestor. We first prove that the proposed encoding function is injective for any tree without unary branches. In practice, the approach is made extensible to all constituency trees by collapsing unary branches. We then use the PTB and CTB treebanks as testbeds and propose a set of fast baselines. We achieve 90% F-score on the PTB test set, outperforming the Vinyals et al. (2015) sequence-to-sequence parser. In addition, sacrificing some accuracy, our approach achieves the fastest constituent parsing speeds reported to date on PTB by a wide margin.
Introduction
Constituent parsing is a core problem in NLP where the goal is to obtain the syntactic structure of sentences expressed as a phrase structure tree.
Traditionally, constituent-based parsers have been built relying on chart-based, statistical models (Collins, 1997;Charniak, 2000;Petrov et al., 2006), which are accurate but slow, with typical speeds well below 10 sentences per second on modern CPUs (Kummerfeld et al., 2012).
Several authors have proposed more efficient approaches that gain speed while preserving (or even improving) accuracy. Sagae and Lavie (2005) present a classifier for constituency parsing that runs in linear time by relying on a shift-reduce stack-based algorithm instead of a grammar. It is essentially an extension of transition-based dependency parsing (Nivre, 2003). This line of research has been polished through the years (Wang et al., 2006; Zhu et al., 2013; Dyer et al., 2016; Liu and Zhang, 2017; Fernández-González and Gómez-Rodríguez, 2018).
With an aim more related to our work, other authors have reduced constituency parsing to tasks that can be solved faster or in a more generic way. Fernández-González and Martins (2015) reduce phrase structure parsing to dependency parsing. They propose an intermediate representation where dependency labels from a head to its dependents encode the nonterminal symbol and an attachment order that is used to arrange nodes into constituents. Their approach makes it possible to use off-the-shelf dependency parsers for constituency parsing. In a different line, Vinyals et al. (2015) address the problem by relying on a sequence-to-sequence model where trees are linearized in a depth-first traversal order. Their solution can be seen as a machine translation model that maps a sequence of words into a parenthesized version of the tree. Choe and Charniak (2016) recast parsing as language modeling. They train a generative parser that obtains the phrasal structure of sentences by relying on the Vinyals et al. (2015) intuition and on the Zaremba et al. (2014) model to build the basic language modeling architecture.
More recently, Shen et al. (2018) propose an architecture to speed up the current state-of-the-art chart parsers trained with deep neural networks (Stern et al., 2017; Kitaev and Klein, 2018). They introduce the concept of syntactic distances, which specify the order in which the splitting points of a sentence will be selected. The model learns to predict such distances and then recursively partitions the input in a top-down fashion.
Contribution We propose a method to transform constituent parsing into sequence labeling. This reduces it to the complexity of tasks such as part-of-speech (PoS) tagging, chunking or named-entity recognition. The contribution is two-fold.

First, we describe a method to linearize a tree into a sequence of labels (§2) whose length is the length of the sentence minus one. 1 The label generated for each word encodes the number of common ancestors in the constituent tree between that word and the next, and the nonterminal symbol associated with the lowest common ancestor. We prove that the proposed encoding function is injective for any tree without unary branchings. After applying collapsing techniques, the method can also parse unary chains.

Second, we use this encoding to present different baselines that can effectively predict the structure of sentences (§3). To do so, we rely on a recurrent sequence labeling model based on BILSTMs (Hochreiter and Schmidhuber, 1997; Yang and Zhang, 2018). We also test other models inspired by classic approaches for other tagging tasks (Schmid, 1994; Sha and Pereira, 2003). We use the Penn Treebank (PTB) and the Penn Chinese Treebank (CTB) as testbeds.
The comparison against Vinyals et al. (2015), the closest work to ours, shows that our method is able to train more accurate parsers. This is in spite of the fact that our approach addresses constituent parsing as a sequence labeling problem, which is simpler than a sequence-to-sequence problem, where the output sequence has variable/unknown length. Despite being the first sequence labeling method for constituent parsing, our baselines achieve decent accuracy results in comparison to models coming from mature lines of research, and their speeds are the fastest reported to our knowledge.
2 Linearization of n-ary trees

Notation and Preliminaries In what follows, we use bold style to refer to vectors and matrices (e.g., x and W). Let w = [w_1, w_2, ..., w_{|w|}] be an input sequence of words, where w_i ∈ V. Let T_{|w|} be the set of constituent trees with |w| leaf nodes that have no unary branches. For now, we will assume that the constituent parsing problem consists in mapping each sentence w to a tree in T_{|w|}, i.e., we assume that correct parses have no unary branches. We will deal with unary branches later.

To reduce the problem to a sequence labeling task, we define a set of labels L that allows us to encode each tree in T_{|w|} as a unique sequence of labels in L^{|w|−1}, via an encoding function Φ_{|w|}: T_{|w|} → L^{|w|−1}. Then, we can reduce the constituent parsing problem to a sequence labeling task where the goal is to predict a function F_{|w|,θ}: V^{|w|} → L^{|w|−1}, where θ are the parameters to be learned. To parse a sentence, we label it and then decode the resulting label sequence into a constituent tree, i.e., we apply Φ^{−1}_{|w|} ∘ F_{|w|,θ}. For the method to be correct, we need the encoding of trees to be complete (every tree in T_{|w|} must be expressible as a label sequence, i.e., Φ_{|w|} must be a function, so we have full coverage of constituent trees) and injective (so that the inverse function Φ^{−1}_{|w|} is well-defined). Surjectivity is also desirable, so that the inverse is a function on L^{|w|−1} and the parser outputs a tree for any sequence of labels that the classifier can generate.

We now define our Φ_{|w|} and show that it is total and injective. Our encoding is not surjective per se. We handle ill-formed label sequences in §2.3.
The Encoding
Let w_i be a word located at position i in the sentence, for 1 ≤ i ≤ |w| − 1. We will assign it a 2-tuple label l_i = (n_i, c_i), where: n_i is an integer that encodes the number of common ancestors between w_i and w_{i+1}, and c_i is the nonterminal symbol at the lowest common ancestor.
Basic encodings The number of common ancestors may be encoded in several ways.

1. Absolute scale: The simplest encoding is to make n_i directly equal to the number of ancestors in common between w_i and w_{i+1}.

2. Relative scale: A second and better variant consists in making n_i represent the difference with respect to the number of ancestors encoded in n_{i−1}. Its main advantage is that the size of the label set is reduced considerably. Figure 1 shows an example of a tree linearized according to both absolute and relative scales.
Encoding for trees with exactly k children For trees where all branchings have exactly k children, it is possible to obtain an even more efficient linearization in terms of number of labels. To do so, we take the relative scale encoding as our starting point. If we build the tree incrementally in a left-to-right manner from the labels, when we find a negative n_i, we will need to attach the word w_{i+1} (or a new subtree with that word as its leftmost leaf) to the (−n_i + 2)-th node in the path going from w_i to the root. If every node must have exactly k children, there is only one valid negative value of n_i: the one pointing to the first node in said path that has not received its k-th child yet. Any smaller value would leave this node without enough children (which cannot be fixed later due to the left-to-right order in which we build the tree), and any larger value would create a node with too many children. Thus, we can map negative values to a single label. Figure 2 shows an example for the case of binarized trees (k = 2).

Links to root Another variant emerged from the empirical observation that some tokens that are usually linked to the root node (such as the final punctuation in Figure 1) were particularly difficult to learn for the simpler baselines. To successfully deal with these cases in practice, it makes sense to consider a simplified annotation scheme where a node is assigned a special tag (ROOT, c_i) when it is directly linked to the root of the tree.
From now on, unless otherwise specified, we use the relative scale without the simplification for trees with exactly k children. This will be the encoding used in the experiments (§4), because the size of its label set is significantly lower than that of the absolute one. Also, it works directly with non-binarized trees, in contrast to the encoding that we introduced for trees with exactly k children, which is described only for completeness and possible interest for future work. For the experiments (§4), we also use the special tag (ROOT, c_i) to further reduce the size of the label set and to simplify the classification of tokens connected to the root, where |n_i| is expected to be large.
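To make the encoding concrete, the following Python sketch computes both the absolute- and the relative-scale labels for a toy tree. The Tree class, the function names and the example sentence are our own illustration; they are not part of the paper or its released code.

class Tree:
    def __init__(self, label, children=None):
        self.label = label                  # nonterminal, or a word at a leaf
        self.children = children or []      # an empty list marks a leaf (word)

def leaf_paths(tree, path=()):
    """Yield, for each word, the tuple of its ancestor nodes from the root down."""
    if not tree.children:
        yield path
    else:
        for child in tree.children:
            yield from leaf_paths(child, path + (tree,))

def encode(tree, relative=True):
    """Return the |w|-1 labels (n_i, c_i) for a tree without unary branches."""
    paths = list(leaf_paths(tree))
    labels, prev = [], 0
    for left, right in zip(paths, paths[1:]):
        n = 0                               # common ancestors of w_i and w_{i+1}
        while n < min(len(left), len(right)) and left[n] is right[n]:
            n += 1
        c = left[n - 1].label               # lowest common ancestor (root always shared, so n >= 1)
        labels.append((n - prev, c) if relative else (n, c))
        prev = n
    return labels

# Toy sentence "the boy took the red toy ." with no unary branches:
w = lambda s: Tree(s)
toy = Tree("S", [Tree("NP", [w("the"), w("boy")]),
                 Tree("VP", [w("took"),
                             Tree("NP", [w("the"), w("red"), w("toy")])]),
                 w(".")])
print(encode(toy, relative=False))
# [(2, 'NP'), (1, 'S'), (2, 'VP'), (3, 'NP'), (3, 'NP'), (1, 'S')]
print(encode(toy))
# [(2, 'NP'), (-1, 'S'), (1, 'VP'), (1, 'NP'), (0, 'NP'), (-2, 'S')]

Under the simplified scheme above, the last relative label (−2, S), whose lowest common ancestor is the root, would instead be written (ROOT, S).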
Theoretical correctness
We now prove that Φ_{|w|} is a total function and injective for any tree in T_{|w|}. We recall that trees in this set have no unary branches; later (in §2.3) we describe how we deal with unary branches. To prove correctness, we use the relative scale. Correctness for the other scales follows trivially.

Completeness Every pair of nodes in a rooted tree has at least one common ancestor, and a unique lowest common ancestor. Hence, for any tree in T_{|w|}, the label l_i = (n_i, c_i) defined in Section 2.1 is well-defined and unique for each word w_i, 1 ≤ i ≤ |w| − 1; and thus Φ_{|w|} is a total function from T_{|w|} to L^{|w|−1}.
Injectivity The encoding method must ensure that any given sequence of labels corresponds to exactly one tree. Otherwise, we have to deal with ambiguity, which is not desirable.
For simplicity, we will prove injectivity in two steps. First, we will show that the encoding is injective if we ignore nonterminals (i.e., equivalently, that the encoding is injective for the set of trees resulting from replacing all the nonterminals in trees in T |w| with a generic nonterminal X). Then, we will show that it remains injective when we take nonterminals into account.
For the first part, let τ ∈ T_{|w|} be a tree where nonterminals take a generic value X. We represent the label of the i-th leaf node as •_i. Consider the representation of τ as a bracketed string, where a single-node tree with a node labeled A is represented by (A), and a tree rooted at R with child subtrees C_1 ... C_n is represented as (R(C_1 ... C_n)).

Each leaf node will appear in this string as a substring (•_i). Thus, the parenthesized string has the form α_0(•_1)α_1(•_2)...α_{|w|−1}(•_{|w|})α_{|w|}, where the α_i's are strings that can only contain brackets and nonterminals, as by construction there can be no leaf nodes between (•_i) and (•_{i+1}).

We now observe some properties of this parenthesized string. First, note that each of the substrings α_i must necessarily be composed of zero or more closing parentheses followed by zero or more opening parentheses with their corresponding nonterminal, i.e., it must be of the form [)]*[(X]*. This is because an opening parenthesis followed by a closing parenthesis would represent a leaf node, and there are no leaf nodes between (•_i) and (•_{i+1}) in the tree.

Thus, we can write α_i as α_{i)}α_{i(}, where α_{i)} is a string matching the expression [)]* and α_{i(} a string matching the expression [(X]*. With this, we can write the parenthesized string for τ as

α_{0)}α_{0(}(•_1)α_{1)}α_{1(}(•_2)α_{2)}α_{2(}...(•_{|w|})α_{|w|)}α_{|w|(}.

Let us now denote by β_i the string α_{i−1(}(•_i)α_{i)}. Then, taking into account that α_{0)} and α_{|w|(} are trivially empty in the previous expression due to bracket balancing, the expression for the tree becomes simply β_1β_2...β_{|w|}, where we know, by construction, that each β_i is of the form [(X]*(•_i)[)]*.

Since we have shown that each tree in T_{|w|} uniquely corresponds to a string β_1β_2...β_{|w|}, to show injectivity of the encoding it suffices to show that different values for a β_i generate different label sequences.

To show this, we can say more about the form of β_i: it must be either of the form [(X]*(•_i) or of the form (•_i)[)]*; i.e., it is not possible that β_i contains both opening parentheses before the leaf node and closing parentheses after the leaf node. This could only happen if the tree had a subtree of the form (X(•_i)), but this is not possible since we are forbidding unary branches.

Hence, we can identify each β_i with an integer number δ(β_i): 0 if β_i has neither opening nor closing parentheses outside the leaf node, +k if it has k opening parentheses, and −k if it has k closing parentheses. It is easy to see that δ(β_1)δ(β_2)...δ(β_{|w|−1}) corresponds to the values n_i in the relative-scale label encoding of the tree τ. To see this, note that the number of unclosed parentheses at the point right after β_i in the string exactly corresponds to the number of common ancestors between the i-th and (i+1)-th leaf nodes. A positive δ(β_i) = k corresponds to opening k parentheses before β_i, so the number of common ancestors of w_i and w_{i+1} will be k more than that of w_{i−1} and w_i. A negative δ(β_i) = −k corresponds to closing k parentheses after β_i, so the number of common ancestors will conversely decrease by k. A value of zero means no opening or closing parentheses, and no change in the number of common ancestors.

Thus, different parenthesized strings β_1β_2...β_{|w|} generate different label sequences, which proves injectivity ignoring nonterminals (note that δ(β_{|w|}) does not affect injectivity, as it is uniquely determined by the other values: it corresponds to closing all the parentheses that remain unclosed at that point).

It remains to show that injectivity still holds when nonterminals are taken into account. Since we have already proven that trees with different structure produce different values of n_i in the labels, it suffices to show that trees with the same structure, but different nonterminals, produce different values of c_i. Essentially, this reduces to showing that every nonterminal in the tree is mapped into a concrete c_i. That said, consider a tree τ ∈ T_{|w|} and some nonterminal X in τ. Since trees in T_{|w|} do not have unary branches, X has at least two children. Consider the rightmost word in the first child subtree, and call it w_i. Then, w_{i+1} is the leftmost word in the second child subtree, and X is the lowest common ancestor of w_i and w_{i+1}. Thus, c_i = X, and a tree with identical structure but a different nonterminal at that position will generate a label sequence with a different value of c_i. This concludes the proof of injectivity.
Limitations
We have shown that our proposed encoding is a total, injective function from trees without unary branches with yield of length |w| to sequences of |w| − 1 labels. This will serve as the basis for our reduction of constituent parsing to sequence labeling. However, to go from theory to practice, we need to overcome two limitations of the theoretical encoding: non-surjectivity and the inability to encode unary branches. Fortunately, both can be overcome with simple techniques.
Handling of unary branches The encoding function Φ_{|w|} cannot directly assign the nonterminal symbols of unary branches, as there is no pair of words (w_i, w_{i+1}) that has them in common. Figure 3 illustrates this with an example.
It is worth remarking that this is not a limitation of our encoding, but of any encoding that would facilitate constituent parsing as sequence labeling, as the number of nonterminal nodes in a tree with unary branches is not bounded by any function of |w|. The fact that our encoding works for trees without unary branches owes to the fact that such a tree cannot have more than |w| − 1 non-leaf nodes, and therefore it is always possible to encode all of them in labels associated with |w| − 1 leaf nodes.

Figure 3: An example of a tree that cannot be directly linearized with our approach. w_i and T_i abstract over words and PoS tags. Dotted lines represent incorrect branches after applying and inverting our encoding naively, without any adaptation for unaries. The nonterminal symbol of the second ancestor of w_2 (X) cannot be decoded, as no pair of words has X as its lowest common ancestor. A similar situation can be observed for the closest ancestor of w_5 (Z).
To overcome this issue, we follow a collapsing approach, as is common in parsers that need special treatment of unary chains (Finkel et al., 2008; Narayan and Cohen, 2016; Shen et al., 2018). For clarity, we use the name intermediate unary chains for unary chains that end in a nonterminal symbol (e.g., X → Y in Figure 3) and leaf unary chains for those that yield a PoS tag (e.g., Z → T_5). Intermediate unary chains are collapsed into a single chained symbol, which can be encoded by Φ_{|w|} as any other nonterminal symbol. On the other hand, leaf unary chains are collapsed together with the PoS tag, but these cannot be encoded and decoded by relying on Φ_{|w|}, as our encoding assumes a fixed sequence of leaf nodes and does not encode them explicitly. To overcome this, we propose two methods:
1. To use an extra function to enrich the PoS tags before applying our main sequence labeling function. This function is of the form Ψ_{|w|}: V^{|w|} → U^{|w|}, where U is the set of labels of the leaf unary chains (without including the PoS tags) plus a dummy label ∅. Ψ_{|w|} maps w_i to ∅ if there is no leaf unary chain at w_i, or to the collapsed label otherwise.

2. To extend our encoding function to predict them as a part of our labels l_i, by transforming them into 3-tuples (n_i, c_i, u_i), where u_i encodes the collapsed label of the leaf unary chain at w_i, if there is any, or none otherwise. We call this extended encoding function Φ̂_{|w|}.
The former requires running two passes of sequence labeling to deal with leaf unary chains. The latter avoids this, but the number of labels is larger and sparser. In §4 we discuss how these two approaches behave in terms of accuracy and speed. A sketch of the collapsing step itself is given below.
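The following is a minimal sketch of the collapsing step, reusing the toy Tree type from §2.1 and assuming that PoS tags are represented as preterminal nodes; joining the labels of a collapsed chain with "+" is our assumption, as the paper does not specify a delimiter.

def collapse_unaries(tree):
    """Collapse a unary chain X -> Y -> ... into one node labeled 'X+Y+...'."""
    labels = [tree.label]
    # follow the chain while the single child is itself an internal node, so that
    # a leaf unary chain collapses down to (and including) the PoS tag, as in Z+T_5
    while len(tree.children) == 1 and tree.children[0].children:
        tree = tree.children[0]
        labels.append(tree.label)
    return Tree("+".join(labels), [collapse_unaries(c) for c in tree.children])

Decoded trees can then be post-processed by splitting any collapsed symbol on the delimiter to restore the original chains.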
Non-surjectivity Our encoding, as defined formally in Section 2.1, is injective but not surjective, i.e., not every sequence of |w| − 1 labels of the form (n_i, c_i) corresponds to a tree in T_{|w|}. In particular, there are two situations where a label sequence formally has no tree, and thus Φ^{−1}_{|w|} is not formally defined and we have to use extra heuristics or processing to define it:

• Sequences with conflicting nonterminals. A nonterminal can be the lowest common ancestor of more than two pairs of contiguous words when branches are non-binary. For example, in the tree in Figure 1, the lowest common ancestor of both "the" and "red" and of "red" and "toy" is the same NP node. This translates into c_4 = NP, c_5 = NP in the label sequence. If we take that sequence and set c_5 = VP, we obtain a label sequence that does not strictly correspond to the encoding of any tree, as it contains a contradiction: two elements referencing the same node indicate different nonterminal labels. In practice, this problem is trivial to solve: when a label sequence encodes several conflicting nonterminals at a given position in the tree, we compute Φ^{−1}_{|w|} using the first such nonterminal and ignoring the rest.
• Sequences that produce unary structures. There are sequences of values n_i that do not correspond to a tree in T_{|w|} because the only tree structure satisfying the common-ancestor conditions of their values (the one built by generating the string of β_i's in the injectivity proof) contains unary branchings, causing the problem described above where we do not have a specification for every nonterminal. An example of this is the sequence (1, S), (3, Y), (1, S), (1, S) in absolute scaling, which was introduced in Figure 3. In practice, as unary chains have been previously collapsed, any generated unary node is considered not valid and removed. A decoding sketch applying both heuristics follows below.
Sequence Labeling
Sequence labeling is a structured prediction task that generates an output label for every token in an input sequence (Rei and Søgaard, 2018). Examples of practical NLP tasks that can be formulated under this framework are PoS tagging, chunking or named-entity recognition, which are in general fast. However, to our knowledge, there is no previous work on sequence labeling methods for constituent parsing, as an encoding allowing it was lacking so far. In this work, we consider a range of methods, from traditional models to state-of-the-art neural models for sequence labeling, to test whether they are valid to train constituency-based parsers following our approach. We give the essential details needed to comprehend the core of each approach, but will mainly treat them as black boxes, referring the reader to the references for a careful and detailed mathematical analysis of each method. Appendix A specifies additional hyperparameters for the tested models.
Preprocessing We add to every sentence both beginning and end tokens.
Traditional Sequence Labeling Methods
We consider two baselines to train our prediction function F_{|w|,θ}, based on popular sequence labeling methods used in NLP problems such as PoS tagging or shallow parsing (Schmid, 1994; Sha and Pereira, 2003).

Conditional Random Fields (Lafferty et al., 2001) Let CRF_{|w|,θ} be its prediction function; a CRF model computes conditional probability distributions of the form p(l|w) such that CRF_{|w|,θ}(w) = l̂ = argmax_{l′} p(l′|w). In our work, the inputs to the CRF are words and PoS tags. To represent a word w_i, we use information from the word itself and also contextual information from w_{[i−1:i+1]}. 2 In particular:
• We extract the word form (lowercased), the PoS tag and its prefix of length 2 from w_{[i−1:i+1]}. For these words we also include binary features: whether it is the first word, the last word, a number, and whether the word is capitalized or uppercased.

• Additionally, for w_i we look at its suffixes of both length 3 and length 2 (i.e., w_{i[−3:]} and w_{i[−2:]}).
To build our CRF models, we relied on the sklearn-crfsuite library 3 .
MultiLayer Perceptron (Rosenblatt, 1958) We use one hidden layer. Let MLP_{|w|,θ} be its prediction function; it treats sequence labeling as a set of independent predictions, one per word. The prediction for a word is computed as softmax(W_2 · relu(W_1 · x + b_1) + b_2), where x is the input vector and W_i and b_i are the weights and biases to be learned at layer i. We consider both a discrete (MLP_d) and an embedded (MLP_e) perceptron. For the former, we use as inputs the same set of features as for the CRF. For the latter, the vector x for w_i is defined as a concatenation of word and PoS tag embeddings from w_{[i−2:i+2]}. 4 To build our MLPs, we relied on Keras. 5
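As an illustration of this setup (one hidden layer plus the softmax output over labels), a minimal Keras sketch of the embedded perceptron MLP_e follows; the vocabulary sizes, embedding dimensions, window handling and the tensorflow.keras import are placeholders of ours, not the paper's actual configuration.

from tensorflow import keras
from tensorflow.keras import layers

WINDOW, N_WORDS, N_TAGS, N_LABELS = 5, 20000, 50, 300    # assumed sizes
w_in = keras.Input(shape=(WINDOW,), name="words")         # indices of w_{i-2..i+2}
p_in = keras.Input(shape=(WINDOW,), name="pos_tags")
w_emb = layers.Flatten()(layers.Embedding(N_WORDS, 100)(w_in))
p_emb = layers.Flatten()(layers.Embedding(N_TAGS, 20)(p_in))
x = layers.Concatenate()([w_emb, p_emb])                  # the input vector x
h = layers.Dense(128, activation="relu")(x)               # one hidden layer
out = layers.Dense(N_LABELS, activation="softmax")(h)     # one label per word
model = keras.Model([w_in, p_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")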
Sequence Labeling Neural Models
We use NCRF++ 6, a sequence labeling framework based on recurrent neural networks (RNNs) (Yang and Zhang, 2018), and more specifically on bidirectional long short-term memory networks (Hochreiter and Schmidhuber, 1997), which have been successfully applied to problems such as PoS tagging or dependency parsing (Plank et al., 2016; Kiperwasser and Goldberg, 2016). Let LSTM(x) be an abstraction of a standard long short-term memory network that processes the sequence x = [x_1, ..., x_{|x|}]. Then a BILSTM encoding of its i-th element, BILSTM(x, i), is defined as:

BILSTM(x, i) = h_i = h_i^l • h_i^r = LSTM^l(x_{[1:i]}) • LSTM^r(x_{[|x|:i]})

In the case of multilayer BILSTMs, the time-step outputs of BILSTM_m are fed as input to BILSTM_{m+1}. The output label for each w_i is finally predicted as softmax(W · h_i + b).
Given a sentence [w_1, w_2, ..., w_{|w|}], the input to the sequence model is a sequence of embedding vectors, one per word, where the vector for w_i is the concatenation w_i • p_i • ch_i of a word embedding w_i, a PoS tag embedding p_i, and a word representation ch_i obtained from an initial character embedding layer, also based on a BILSTM. Figure 4 shows the architecture of the network.
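A minimal PyTorch sketch of this architecture follows; the actual experiments use NCRF++, and all dimensions here, as well as taking the final states of the character BILSTM as ch_i, are simplifying assumptions of ours.

import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, n_words, n_tags, n_chars, n_labels,
                 d_word=100, d_tag=20, d_char=30, hidden=200, layers=2):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, d_word)
        self.tag_emb = nn.Embedding(n_tags, d_tag)
        self.char_emb = nn.Embedding(n_chars, d_char)
        # character-level BiLSTM producing one vector ch_i per word
        self.char_lstm = nn.LSTM(d_char, d_char, bidirectional=True,
                                 batch_first=True)
        self.lstm = nn.LSTM(d_word + d_tag + 2 * d_char, hidden,
                            num_layers=layers, bidirectional=True,
                            batch_first=True)
        self.out = nn.Linear(2 * hidden, n_labels)   # softmax applied in the loss

    def forward(self, words, tags, chars):
        # words, tags: (batch, seq); chars: (batch, seq, max_word_len)
        b, s, L = chars.shape
        ch = self.char_emb(chars).view(b * s, L, -1)
        _, (h, _) = self.char_lstm(ch)               # final states of both directions
        ch = torch.cat([h[0], h[1]], dim=-1).view(b, s, -1)
        x = torch.cat([self.word_emb(words), self.tag_emb(tags), ch], dim=-1)
        h_seq, _ = self.lstm(x)                      # h_i for every time step
        return self.out(h_seq)                       # one score vector per token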
Experiments
We report results on models trained using the relative scale encoding and the special tag (ROOT, c_i). As a reminder, to deal also with leaf unary chains, we proposed two methods in §2.3: to predict them by relying on both encoding functions Φ_{|w|} and Ψ_{|w|}, or to predict them as part of an enriched label given by the function Φ̂_{|w|}. For clarity, we name these models with the superscripts Ψ,Φ and Φ̂, respectively.
Datasets We use the Penn Treebank (Marcus et al., 1994) and its official splits: Sections 2 to 21 for training, 22 for development and 23 for testing. For the Chinese Penn Treebank (Xue et al., 2005), articles 001-270 and 440-1151 are used for training, articles 301-325 for development, and articles 271-300 for testing. We use the version of the corpus with the predicted PoS tags of Dyer et al. (2016). We train the Φ models on the output predicted by the corresponding Ψ model.
Metrics
We use the F-score from the EVALB script. Speed is measured in sentences per second. As the problem is reduced to sequence labeling, we briefly comment on the accuracy (percentage of correctly predicted labels) of our baselines.
Source code It can be found at https://github.com/aghie/tree2labels
Hardware The models are run on a single thread of a CPU 7 and on a consumer-grade GPU 8 .
In sequence-to-sequence work (Vinyals et al., 2015), the authors use a multi-core CPU (the number of threads was not specified), while we provide results on a single core for easier comparability. Parsing sentences on a CPU can be framed as an "embarrassingly parallel" problem (Hall et al., 2014), so speed can be made to scale linearly with the number of cores. We use the same batch size as Vinyals et al. (2015) for testing (128). 9

Results Table 1 shows the performance of our baselines on the PTB development set. It is worth noting that, since we are using different libraries to train the models, these might show some differences in terms of performance/speed beyond those expected in theory. For the BILSTM model we test:
• BILSTM_{m=1}: It uses neither pretrained word embeddings nor character embeddings. The number of layers m is set to 1.

• BILSTM_{m=1,e}: It adds pretrained word embeddings from GloVe (Pennington et al., 2014) for English and from the Gigaword corpus for Chinese (Liu and Zhang, 2017).

• BILSTM_{m=1,e,ch}: It includes character embeddings processed through a BILSTM.

• BILSTM_{m=2,e}: m is set to 2. No character embeddings.

• BILSTM_{m=2,e,ch}: m is set to 2.
7 An Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz.
8 A GeForce GTX 1080.
9 A larger batch will likely result in faster parsing when executing the model on a GPU, but not necessarily on a CPU.

The Ψ,Φ and the Φ̂ models obtain similar F-scores. When it comes to speed, the BILSTMs^Φ̂ are notably faster than the BILSTMs^{Ψ,Φ}. Φ̂ models are expected to be more efficient, as leaf unary chains are handled implicitly. In practice, Φ̂ is a more expensive function to compute than the original Φ, since the number of output labels is significantly larger, which reduces the expected gains with respect to the Ψ,Φ models. It is worth noting that our encoding is useful to train an MLP_e with a decent sense of phrase structure, while being very fast. Paying attention to the differences between F-score and accuracy for each baseline, we notice that the gap between them is larger for CRFs and MLPs. This shows the difficulties that these methods have, in comparison to the BILSTM approaches, in predicting the correct label when a word w_{i+1} has few common ancestors with w_i. For example, let (−10, X) be the right (relative-scale) label between w_i and w_{i+1}, and let l_1 = (−1, X) and l_2 = (−9, X) be two possible wrong labels. In terms of accuracy it makes no difference whether a model predicts l_1 or l_2, but in terms of constituent F-score the first will be much worse, as many more brackets will remain unmatched.
Tables 2 and 3 compare our best models against the state of the art on the PTB and CTB test sets. The performance corresponds to models without reranking strategies, unless otherwise specified.
Discussion
We are not aware of previous work that reduces constituency parsing to sequence labeling. The work that can be considered closest to ours is that of Vinyals et al. (2015), who address it as a sequence-to-sequence problem, where the output sequence has variable/unknown length. In this context, even a one-hidden-layer perceptron outperforms their 3-layer LSTM model without attention, while parsing hundreds of sentences per second. Our best models also outperform their 3-layer LSTM model with attention, and even a simple BILSTM model with pretrained GloVe embeddings obtains a similar performance. In terms of F-score, the proposed sequence labeling baselines still lag behind mature shift-reduce and chart parsers. In terms of speed, they are clearly faster than both CPU and GPU chart parsers and are at least on par with the fastest shift-reduce ones. Although with a significant loss of accuracy, if phrase structure is needed in large-scale tasks where the speed of current systems makes parsing infeasible (Gómez-Rodríguez, 2017; Gómez-Rodríguez et al., 2017), we can use the simpler, less accurate models to get speeds well above any parser reported to date.
It is also worth noting that in their recent work, published while this manuscript was under review, Shen et al. (2018) developed a mapping of binary trees with n leaves to sequences of n − 1 integers (Shen et al., 2018, Algorithm 1). This encoding is different from the ones presented here, as it is based on the height of lowest common ancestors in the tree, rather than their depth. While their purpose is also different from ours, as they use this mapping to generate training data for a parsing algorithm based on recursive partitioning using realvalued distances, their encoding could also be applied with our sequence labeling approach. However, it has the drawback that it only supports binarized trees, and some of its theoretical properties are worse for our goal, as the way to define the inverse of an arbitrary label sequence can be highly ambiguous: for example, a sequence of n−1 equal labels in this encoding can represent any binary tree with n leaves.
Conclusion
We presented a new parsing paradigm, based on a reduction of constituency parsing to sequence labeling. We first described a linearization function to transform a constituent tree (with n leaves) into a sequence of n − 1 labels that encodes it. We proved that this encoding function is total and injective for any tree without unary branches. We also discussed its limitations: how to deal with unary branches and non-surjectivity, and showed how these can be solved. We finally proposed a set of fast and strong baselines.
Figure 1: An example of a constituency tree linearized applying both absolute and relative scales.

Figure 2: An example of a binarized constituency tree, linearized applying both absolute and relative scales.

Figure 4: Architecture of the neural model.
Table 1: Performance of the proposed sequence labeling methods on the development set of the PTB. For the CRF models the complexity is quadratic with respect to the number of labels, which causes CRF^Φ̂ to be particularly slow.
Model                                   Data      Cores                  Sents/s  F-score
Sequence-to-sequence models
Vinyals et al. (2015)                   WSJ23     Multi-core             120      88.3
                                                  (number not specified)
Constituency parsing as dependency parsing
Fernández-González and Martins (2015)   WSJ23     1                      -        -
Chart-based parsers with GPU-specific implementation
Canny et al. (2013)                     WSJ(<30)  1                      250      -
Hall et al. (2014)                      WSJ(<40)  1                      404      -
Transition-based and other greedy constituent parsers
Zhu et al. (2013)                       WSJ23     1                      101      89.9
Zhu et al. (2013)+Padding               WSJ23     1                      90       90.4
Dyer et al. (2016) £                    WSJ23     1                      17       91.2
Fernández and Gómez-Rodríguez (2018)    WSJ23     1                      18       91.7
Stern et al. (2017)                     WSJ23     16 *                   76       91.8
Liu and Zhang (2017)                    WSJ23     -                      -        91.8
Shen et al. (2018)                      WSJ23     1                      111      91.8

Table 2: Comparison against the state of the art. Stern et al. (2017) report that they use a 16-core machine, but sentences are processed one-at-a-time; hence, they do not exploit inter-sentence parallelism, but they may gain some speed from intra-sentence parallelism. * and £ indicate that the speeds were extracted from Zhu et al. (2013) and Fernández and Gómez-Rodríguez (2018), respectively; the remaining speeds were reported in the papers themselves.
Model                                   F-score
MLP_e^{Ψ,Φ}                             63.1
MLP_e^{Φ̂}                              64.4
BILSTM_{m=2,e,ch}^{Ψ,Φ}                 84.4
BILSTM_{m=2,e,ch}^{Φ̂}                  84.1
BILSTM_{m=2,e}^{Ψ,Φ}                    84.4
BILSTM_{m=2,e}^{Φ̂}                     83.1
Zhu et al. (2013)                       82.6
Zhu et al. (2013)+P                     83.2
Dyer et al. (2016)                      84.6
Liu and Zhang (2017)                    86.1
Shen et al. (2018)                      86.5
Fernández and Gómez-Rodríguez (2018)    86.8

Table 3: Performance on the CTB test set.
1 A last dummy label is generated to fulfill the properties of sequence labeling tasks.
2 We tried contextual information beyond the immediate previous and next word, but the performance was similar.
3 https://sklearn-crfsuite.readthedocs.io/en/latest/
4 In contrast to the discrete input, larger contextual information was useful.
5 https://keras.io/
6 https://github.com/jiesutd/NCRFpp, with PyTorch.
Acknowledgments
This work has received funding from the European Research Council (ERC), under the European Union's Horizon 2020 research and innovation programme (FASTPARSE, grant agreement No 714150), from the TELEPARES-UDC project (FFI2014-51978-C2-2-R) and the ANSWER-ASAP project (TIN2017-85160-C2-1-R) from MINECO, and from Xunta de Galicia (ED431B 2017/01). We gratefully acknowledge NVIDIA Corporation for the donation of a GTX Titan X GPU.
References
John Canny, David Hall, and Dan Klein. 2013. A multi-teraflop constituency parser using GPUs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1898-1907.
Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of the 1st North American Chapter of the Association for Computational Linguistics Conference, pages 132-139. Association for Computational Linguistics.
Do Kook Choe and Eugene Charniak. 2016. Parsing as language modeling. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2331-2336, Austin, Texas. Association for Computational Linguistics.
Michael Collins. 1997. Three generative, lexicalised models for statistical parsing. In Proceedings of the Eighth Conference on European Chapter of the Association for Computational Linguistics, pages 16-23. Association for Computational Linguistics.
Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A. Smith. 2016. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 199-209. Association for Computational Linguistics.
Daniel Fernández-González and Carlos Gómez-Rodríguez. 2018. Faster shift-reduce constituent parsing with a non-binary, bottom-up strategy. ArXiv e-prints.
Daniel Fernández-González and André F. T. Martins. 2015. Parsing as reduction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1523-1533. Association for Computational Linguistics.
Jenny Rose Finkel, Alex Kleeman, and Christopher D. Manning. 2008. Efficient, feature-based, conditional random field parsing. In Proceedings of ACL-08: HLT, pages 959-967.
Carlos Gómez-Rodríguez. 2017. Towards fast natural language parsing: FASTPARSE ERC Starting Grant. Procesamiento del Lenguaje Natural, 59.
Carlos Gómez-Rodríguez, Iago Alonso-Alonso, and David Vilares. 2017. How important is syntactic parsing accuracy? An empirical evaluation on rule-based sentiment analysis. Artificial Intelligence Review.
David Hall, Taylor Berg-Kirkpatrick, and Dan Klein. 2014. Sparser, better, faster GPU parsing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 208-217.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.
Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional LSTM feature representations. Transactions of the Association for Computational Linguistics, 4:313-327.
Nikita Kitaev and Dan Klein. 2018. Constituency parsing with a self-attentive encoder. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Melbourne, Australia. Association for Computational Linguistics.
Jonathan K. Kummerfeld, David Hall, James R. Curran, and Dan Klein. 2012. Parser showdown at the Wall Street corral: An empirical investigation of error types in parser output. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1048-1059, Jeju Island, Korea. Association for Computational Linguistics.
John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, ICML '01, pages 282-289, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.
Jiangming Liu and Yue Zhang. 2017. In-order transition-based constituent parsing. Transactions of the Association for Computational Linguistics, 5:413-424.
Mitchell Marcus, Grace Kim, Mary Ann Marcinkiewicz, Robert MacIntyre, Ann Bies, Mark Ferguson, Karen Katz, and Britta Schasberger. 1994. The Penn Treebank: Annotating predicate argument structure. In Proceedings of the Workshop on Human Language Technology, HLT '94, pages 114-119, Stroudsburg, PA, USA. Association for Computational Linguistics.
Shashi Narayan and Shay B. Cohen. 2016. Optimizing spectral learning for parsing. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1546-1556, Berlin, Germany. Association for Computational Linguistics.
Joakim Nivre. 2003. An efficient algorithm for projective dependency parsing. In Proceedings of the 8th International Workshop on Parsing Technologies (IWPT), pages 149-160.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.
Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237. Association for Computational Linguistics.
Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, pages 433-440. Association for Computational Linguistics.
Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 404-411.
Barbara Plank, Anders Søgaard, and Yoav Goldberg. 2016. Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 412-418, Berlin, Germany. Association for Computational Linguistics.
Marek Rei and Anders Søgaard. 2018. Zero-shot sequence labeling: Transferring knowledge from sentences to tokens. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 293-302. Association for Computational Linguistics.
Frank Rosenblatt. 1958. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6):386.
Kenji Sagae and Alon Lavie. 2005. A classifier-based parser with linear run-time complexity. In Proceedings of the Ninth International Workshop on Parsing Technology, pages 125-132. Association for Computational Linguistics.
Helmut Schmid. 1994. Part-of-speech tagging with neural networks. In Proceedings of the 15th Conference on Computational Linguistics - Volume 1, COLING '94, pages 172-176, Stroudsburg, PA, USA. Association for Computational Linguistics.
Fei Sha and Fernando Pereira. 2003. Shallow parsing with conditional random fields. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1, pages 134-141. Association for Computational Linguistics.
Yikang Shen, Zhouhan Lin, Athul Paul Jacob, Alessandro Sordoni, Aaron Courville, and Yoshua Bengio. 2018. Straight to the tree: Constituency parsing with neural syntactic distance. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1171-1180. Association for Computational Linguistics.
Mitchell Stern, Jacob Andreas, and Dan Klein. 2017. A minimal span-based neural constituency parser. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 818-827, Vancouver, Canada. Association for Computational Linguistics.
Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In Advances in Neural Information Processing Systems, pages 2773-2781.
Mengqiu Wang, Kenji Sagae, and Teruko Mitamura. 2006. A fast, accurate deterministic parser for Chinese. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, ACL-44, pages 425-432, Stroudsburg, PA, USA. Association for Computational Linguistics.
Naiwen Xue, Fei Xia, Fu-Dong Chiou, and Marta Palmer. 2005. The Penn Chinese Treebank: Phrase structure annotation of a large corpus. Natural Language Engineering, 11(2):207-238.
Jie Yang and Yue Zhang. 2018. NCRF++: An open-source neural sequence labeling toolkit. In Proceedings of ACL 2018, System Demonstrations, pages 74-79, Melbourne, Australia. Association for Computational Linguistics.
Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2014. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329.
Muhua Zhu, Yue Zhang, Wenliang Chen, Min Zhang, and Jingbo Zhu. 2013. Fast and accurate shift-reduce constituent parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 434-443.
AI 2 : The next leap toward native language based and explainable machine learning framework
January 16, 2023
Jean-Sébastien Dessureault
Daniel Massicotte
Keywords: machine learning, framework, NLP, AI ethics, explainability
Abstract

The machine learning frameworks flourished in the last decades, allowing artificial intelligence to get out of academic circles and be applied to enterprise domains. This field has significantly advanced, but there is still some meaningful improvement to reach the subsequent expectations. The proposed framework, named AI 2 , uses a natural language interface that allows a non-specialist to benefit from machine learning algorithms without necessarily knowing how to program with a programming language. The primary contribution of the AI 2 framework allows a user to call the machine learning algorithms in English, making its interface easier to use. The second contribution is greenhouse gas (GHG) awareness: the framework has some strategies to evaluate the GHG generated by the algorithm to be called and to propose alternatives to find a solution without executing the energy-intensive algorithm. Another contribution is a preprocessing module that helps to describe and load data properly. Using an English text-based chatbot, this module guides the user to define every dataset so that it can be described, normalized, loaded and divided appropriately. The last contribution of this paper is about explainability. For decades, the scientific community has known that machine learning algorithms imply the famous black-box problem: traditional machine learning methods convert an input into an output without being able to justify this result. The proposed framework explains the algorithm's process with the proper texts, graphics and tables. The results, presented in five use cases, show applications ranging from the user's English command to the explained output. Ultimately, the AI 2 framework represents the next leap toward a native language-based, human-oriented machine learning framework.

1 Introduction

Two decades ago, some popular algorithms existed and were well documented in the scientific literature, but there was still no easy way to use them. Scientists had to read the equations and the algorithm before implementing it in the desired programming language. Every matrix had to be multiplied, and every derivative had to be computed by the scientist's code. In the last two decades, machine learning has finally flourished. One of the most meaningful frameworks was certainly TensorFlow [1]. This powerful tool helped the community accelerate development and democratize the machine learning field. It helped this field of knowledge reach a more comprehensive range of applicative projects instead of being restricted to academics.

A few years after the first version of TensorFlow, many others came to the machine learning community. Among the most popular: Scikit-Learn, CNTK, Torch, Matlab, and Keras [39]. In the last few years, a user-friendly framework with a graphical interface named Orange [8] became available, aiming to be even more accessible for the community, especially for the non-expert. While consistently more accessible over time, requiring less mathematics and fewer programming skills, none of those frameworks has made the ultimate step: the ability to communicate in the native human language.

Some recent studies compare the most popular machine learning software frameworks. For instance, framework performances have been recently analysed in [39]. For this same purpose of performance analysis, [38] divides frameworks into some topics (computational distribution, Tensor Processing Units and Field-Programmable Gate Arrays (FPGAs)). [42] compares machine learning frameworks on different hardware platforms, such as Raspberry Pi 3 B+, NVIDIA Jetson, MacBook Pro, Huawei Nexus 6P and Intel FogNode. Nguyen et al. in [27] have an essential paper regarding this current research: their work establishes evaluation criteria for supervised, unsupervised, and reinforcement learning, the three prominent families of machine learning, and presents an overview of machine learning frameworks with the advantages and disadvantages of each. Frameworks are applied in different domains. For instance, [30] applies one to the automated detection of arrhythmias in ECG segments, while [26] is a framework application in the health domain for smart patient monitoring and recommendation. The works of [25] and [3] present and compare explainable and interpretable frameworks.

This framework, called AI 2 , proposes a natural language interface. To the authors' best knowledge, there is no machine learning framework offering a Natural Language Processing (NLP) interface using a chatbot. This first AI 2 version proposes an English chatbot, but some other native languages might be proposed later. The NLP domain has flourished recently, especially when using the Transformers technology [33][37]. This recent NLP breakthrough created the opportunity to fill the last gap between humans and machine learning frameworks: the ability to communicate in the native human language. This last step has just been made with the proposed AI 2 framework.

A state-of-the-art, Transformer-based NLP agent can now correctly interpret users' English requests. Outperforming older methods like Recurrent Neural Networks (RNN) [18][35] and the Artificial Intelligence Markup Language (AIML) [24], Transformer technology [19] delivers better results. Transformer-based applications exist in multiple domains. For instance, [23] uses it for sentiment analysis, [16] evaluates a Transformer's ability to learn Italian syntax, and [6] proposes a chatbot that helps detect and classify fraud in a finance context. Bidirectional Encoder Representations from Transformers (BERT) [14][32][2] is a widely used NLP model. It performs exceptionally well when evaluating the context and understanding the intent behind the user's query [28].

Using the BERT NLP model, two pre-trained models have been used to build the AI 2 framework. The first one, BERT (BERT-large), is helpful to answer common questions like "Which dataset has been used?". The second one is RoBERTa (roberta-large) [21]; it is only used to answer Yes/No questions like "Is it a clustering problem?". Besides launching the requests, a minor contribution of AI 2 is its ability to preprocess the datasets using its NLP chatbot.

Even if the NLP interface is the main contribution of this paper, other contributions are also proposed. For instance, another contribution of the AI 2 framework is the awareness of greenhouse gases (GHG). CodeCarbon [22] recently proposed a library of functions for GHG awareness, and AI 2 integrates some of those functions and enhances them with machine learning methods. Based on [29], explainability is an essential contribution of this proposed framework. It aims to include ethics principles from the Institute for Ethical AI & Machine Learning [20], a UK-based research centre that develops frameworks supporting the responsible development, deployment and operation of machine learning systems.
Explainability is a concept intending to eliminate the "black box" problem, which has been addressed by Yoshua Bengio and Judea Pearl [15], two Turing Award winners. Over the last decade, ML has reached a certain level of maturity, and our expectations of machine learning have changed: there is a need to democratize the methods for non-expert users. Until recently, the scientific community was mostly concerned with lowering the error of ML algorithms, that is, with performance. Now the expectations are higher. The community still wants good results, but those results have to be obtained in an explainable, interpretable and ethical context: human well-being must be the main interest of ML systems, and the results must be explainable. For decades, the "black box" problem was neglected; now, there are methods to explain the results and make them understandable to a human. The expectations are also higher regarding accessibility to ML methods, GHG awareness and preprocessing. Disposing of the previously presented technologies and based on [13], the contributions of this framework aim to reach those expectations with the following targets: 1. democratizing ML frameworks using NLP methods; 2. being GHG aware, with a built-in structure to monitor emissions; 3. being more ethical, with a built-in structure that systematically explains the results; and 4. making the preprocessing of the data more accessible with an automated NLP-based chatbot.
The remainder of this paper is organized as follows: Section 2 describes the proposed methodology, Section 3 presents the results, Section 4 discusses the results and their meaning, and Section 5 concludes this research.
2 Methodology of the AI 2 framework

2.1 Architecture

Fig. 1 presents the architecture of the AI 2 framework. The NLP method, through a chatbot, allows communication with the framework methods and the data using the English language. The kernel of the AI 2 framework includes four types of methods: 1. preprocessing methods, 2. machine learning methods, 3. GHG methods, and 4. explainability methods.
The preprocessing interface method is executed systematically once for each dataset, when it is used for the first time. The chatbot guides the user throughout the process, which consists of a series of questions about the dataset and each feature/class: the chatbot asks about the type of each field and its normalization method. The machine learning methods are the classic supervised and unsupervised learning methods: classifiers, regressors, clustering, dimensionality reduction and a method to evaluate the importance of the features. There are also some new methods like the Decision Process for Dimensionality Reduction (DPDR) [10], the Decision Process for Dimensionality Reduction before Clustering (DPDRC) [12], and CK-Means [11]. There are also some functions ensuring the GHG awareness of the framework. Based on the CodeCarbon library [22], those functions compute the generated GHG for each request. Before launching a request, the GHG functions predict the GHG this request would generate, and they try to find equivalent past requests using clustering methods to spare the execution of the new request, thus saving the generation of GHG. The explainability methods offer a complement to the standard machine learning results: the user gets more than the expected results for his request, he gets a well-documented explanation for every result. The form of the explanation varies according to the algorithm and data used; some examples (like learning curves and the feature-importance graphic) are described in the use cases. The different machine learning methods are divided into three modules. Module 1 includes the preprocessing tools (encoding, normalization, data augmentation/imputation, graphics). Module 2 consists of the supervised learning tools (classifiers, regressors and the computation of the features' importance). Finally, Module 3 exploits the unsupervised learning tools (clustering and reduction of dimensionality methods). At last, all the results are given in two forms: the expected results and the explained results. AI 2's functions can also be called without its NLP interface: calling the Python functions directly without the English chatbot is very straightforward. The user is responsible for obtaining his own datasets; no sample dataset is included in this first version of AI 2.
2.2 NLP methods (chatbot)
A key characteristic of this machine learning framework is its ability to communicate with a user through a chatbot based on NLP. The chatbot used by the AI 2 user interface is built with Transformer technology and is thus a state-of-the-art NLP model. Fig. 2 presents the NLP architecture of the AI 2 framework. The chatbot uses two types of questions, requiring two different pre-trained NLP models. It is essential to note the difference between the datasets that can be processed by AI 2's methods and the pre-trained NLP models used by the chatbot. The standard NLP answering module, using bert-large-uncased-whole-word-masking-finetuned-squad, helps in responding to open questions like "What is the dataset?". As displayed in Table 1, this question is associated with the DATASET key; the chatbot will try every question having this key to fill out the dataset information. A typical answer to this request can be "iris", for the iris dataset. The pre-trained model bert-large-uncased-whole-word-masking-finetuned-squad [14] is used to answer this type of question. It is a model pretrained on English text using a Masked Language Modeling (MLM) objective.
The second type of question, handled with the RoBERTa-large pre-trained model [21], is the Yes/No question. A typical question would be "Is this a clustering problem?". The two possible answers are Yes and No, both associated with a certain level of confidence. As presented in Table 1, this question is associated with the PROBLEM key and the CLUSTERING return value: if the answer to this question is Yes, the chatbot returns CLUSTERING as the value used to fill out the information.
As mentioned earlier, every question related to a key is asked in these two forms. The NLP system returns an answer for each question along with a confidence level, and the answer with the best confidence is kept. The methodology used to train both pre-trained models, including the confidence formulas, is documented in [14] and [21]. The confidence consists of applying a softmax function to the logits values; the logits are the raw output of a BERT-based Transformer and yield a list of the most probable answers.
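For illustration, the open-question mechanism can be reproduced with the Hugging Face transformers library and the same pre-trained model named above; how AI 2 wires this internally is not published, so the snippet is a sketch under that assumption:

```python
from transformers import pipeline

# Same pre-trained extractive QA model as the one named in the paper
qa = pipeline("question-answering",
              model="bert-large-uncased-whole-word-masking-finetuned-squad")

command = "I want to perform a clustering using 3 clusters on the iris dataset."
result = qa(question="What is the dataset?", context=command)

# 'score' is the softmax confidence over the answer-span logits
print(result["answer"], result["score"])  # e.g. "iris" with its confidence
```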
The following describes how the chatbot works. The chatbot first asks "Please, enter your English command to the framework". The system specifies writing an English command to avoid confusion with the programming-language-based commands used in other frameworks. The expected command is the English instruction to the AI 2 framework; a typical command could be "I want to perform a clustering using 3 clusters on the iris dataset.". From this first user answer, the chatbot reads a Parameters.csv file storing the structure of the required keys, the returned values and the questions sent to the chatbot to access the information. There is no specific order for the keys in this file; the system will request the keys to get the related information. For now, there are 73 rows defined in this file. Those rows designate 19 keys and the questions to access them; several questions may retrieve each key. It is essential to understand that the framework uses those questions to extract pieces of information from the user command. Those questions are entirely transparent for the users. This file will grow with new releases of the AI 2 framework. Table 1 presents a sample of this file. The Key field identifies the information to retrieve. For instance, if AI 2 seeks the type of problem in the user's command, it will find all the PROBLEM rows. It will then interrogate the user's command with all the corresponding Questions fields and keep the answer having the highest level of confidence according to the Transformer. The answer to the question will be returned, except if it is a Yes/No question; in this case, the Return value field will be used. For instance, if the AI 2 system replies Yes to the question "Is this a clustering problem?", then the returned value will be CLUSTERING. The Type field indicates Y/N for Yes/No questions and Std. for standard questions.

Systematically, the chatbot will first try to fill the PROBLEM key: it must know what kind of problem it is. To find it out, the question list corresponding to the PROBLEM key is processed by the chatbot. If the chatbot can return the answer, the problem information (corresponding to PROBLEM in the Key field, Table 1) is filled. If the first question returns no answer, other questions (corresponding to the key) are tried to extract the information. If no answer can be found after having tried all the questions, the chatbot directly prompts the user: "No problem to resolve has been found in your text. Please clearly identify the type of problem to solve." Then, the algorithm goes to the second and third required keys: the DATASET key and the NB_CLST (number of clusters) key. The interface asks for every crucial piece of information; when a parameter is not mandatory, its default value is assumed. The same principle is repeated for every required parameter. An example of a complete sequence is illustrated in Table 2. Remember that the questions are not directly addressed to the user but to his command, aiming to extract meaningful information to execute the request.

Question | Answer | Returned value
Is this clustering? | Yes | CLUSTERING
(To extract the dataset) What is the dataset? | Iris | Iris
(To extract the number of clusters) How many groups? | (No suitable answer) | None
How many clusters? | 3 | 3

Table 2: Example of a complete extraction sequence.

In this example, the answer is No for the first four questions, since the command is not about reduction of dimensionality nor classification. Since the command is about a clustering problem, the answer is Yes to the question "Is this clustering?". Since the value Yes would not mean anything by itself, the corresponding return value (Table 1), CLUSTERING, is returned. After having extracted the problem type, the dataset name is required; the question "What is the dataset?" provides it. The answer is Iris, and the returned value is also Iris. The last required information is the number of clusters needed for the clustering algorithm. There are at least two ways of asking this question, since groups and clusters are synonyms. The question "How many groups?" is tried first; since the command uses the term clusters, no suitable answer is found for this question. The second question is "How many clusters?", and the answer and the returned value are 3. From this point, AI 2 has all the required information to launch a clustering algorithm using the Iris dataset and 3 clusters. Some more complete examples are shown in Section 3.
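The key-filling loop just described can be sketched as follows. The column names (Key, Questions, Return value, Type) come from the description of Parameters.csv above, while the function itself, and the use of a single QA callable for both question types, are simplifying assumptions rather than AI 2's actual source code:

```python
import csv

def extract_key(qa, command, key, parameters_path="Parameters.csv"):
    """Try every question registered for `key` and keep the answer with the
    highest confidence (hypothetical sketch of the described loop)."""
    best, best_row = None, None
    with open(parameters_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row["Key"] != key:
                continue
            result = qa(question=row["Questions"], context=command)
            if best is None or result["score"] > best["score"]:
                best, best_row = result, row
    if best is None:
        return None  # the chatbot then asks the user directly
    if best_row["Type"] == "Y/N":
        # Yes/No questions map to the Return value column instead of the answer
        answered_yes = best["answer"].strip().lower().startswith("yes")
        return best_row["Return value"] if answered_yes else None
    return best["answer"]
```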
2.3 Preprocess module
The preprocessing is done systematically once for each dataset, when it is used for the first time. AI 2 detects when no dataset configuration has yet been stored in a JSON file. The chatbot then asks for the correct configuration of every field, such as its name, its role in the dataset, and its normalization method. In the end, the dataset's configuration is stored in a JSON file, and the dataset is preprocessed and stored under the same file name with a _preprocessed suffix. The chatbot finally asks the user whether he wants to perform imputation of missing data and data augmentation. Fig. 3 presents the functionalities of the preprocess module. First, a dataset name is given to the module. If a preprocessed version of the dataset already exists, the module opens it and divides it into train and test data. If the preprocessed file does not exist, the system tries to find the corresponding JSON file. If the JSON file exists, the system uses it to build the preprocessed file. If it does not exist, the AI 2 chatbot guides the user through some questions about the fields, creates the final JSON file containing the structure of the dataset, and creates the preprocessed dataset from this JSON file. Ultimately, it also splits the data into train and test data.
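This decision flow can be sketched in a few lines. The sketch below follows the file-naming conventions stated above (a .json configuration and a _preprocessed.csv file per dataset), while the JSON schema, the 80/20 split and the function name are illustrative assumptions rather than AI 2's actual implementation:

```python
import json
from pathlib import Path
import pandas as pd
from sklearn.model_selection import train_test_split

def load_dataset(name: str):
    """Sketch of the Fig. 3 decision flow (illustrative, not AI 2's code)."""
    preprocessed = Path(f"{name}_preprocessed.csv")
    config_file = Path(f"{name}.json")
    if preprocessed.exists():
        # 1. A preprocessed version already exists: just load it
        data = pd.read_csv(preprocessed)
    elif config_file.exists():
        # 2. Build the preprocessed file from the stored JSON description
        config = json.loads(config_file.read_text())
        data = pd.read_csv(f"{name}.csv")[list(config["fields"])]
        data.to_csv(preprocessed, index=False)
    else:
        # 3. No configuration yet: the chatbot would interview the user here,
        #    then store the resulting description as JSON (simplified below)
        data = pd.read_csv(f"{name}.csv")
        config_file.write_text(json.dumps({"fields": list(data.columns)}))
        data.to_csv(preprocessed, index=False)
    return train_test_split(data, test_size=0.2)  # split ratio is an assumption
```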
2.4 Machine learning methods
Any framework requires a tremendous number of development hours. This framework is still in development, yet it already has some contributions to bring to the scientific community. Some well-known algorithms are included, covering most machine learning problems (prediction, classification, and others). Table 3 shows the algorithms included in AI 2.
Table 3: Algorithms included in AI 2.
Preprocessing (Module 1): IQR outlier detection; SMOTE data augmentation; KNNImputer imputation; xGEWFI evaluation metric.
Supervised learning (Module 2): Neural network regressor; Neural network classifier; Random Forest.
Unsupervised learning (Module 3): K-Means; CK-Means; Silhouette metric; PCA; DPDRC; DPDR; FRSD.

Excerpt of the preprocessing chatbot dialogue (see Section 2.3):
Let us preprocess the iris dataset. Please, answer the following questions:
What is the description of the iris dataset (ENTER to skip)?
> This dataset describes the features and the class of the iris dataset.
What is the name of the field 0? (...)
The preprocessing methods (Module 1) are regrouped into one callable function. This function can do the whole process of finding the outliers, augmenting the data and imputing the missing data. The recent explainable metric named xGEWFI [9] is used to evaluate the performance of the data generation (imputation and augmentation); it considers the importance of each feature and each feature's error to evaluate the global error of the data generation process. The Inter Quartile Range (IQR) algorithm is used to find the outliers. Data generation (augmentation and imputation of missing data) is performed with a SMOTE algorithm [7] and a KNNImputer [36], respectively.
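Since the text names the exact algorithms (IQR, SMOTE, KNNImputer), the whole Module 1 pipeline can be approximated with scikit-learn and imbalanced-learn. The snippet below is a minimal sketch on a synthetic imbalanced dataset, not the framework's own code:

```python
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.impute import KNNImputer
from imblearn.over_sampling import SMOTE

# Imbalanced toy dataset standing in for a user-supplied one
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=0, weights=[0.8, 0.2], random_state=1)
features = pd.DataFrame(X)

# IQR rule: flag values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] as outliers
q1, q3 = features.quantile(0.25), features.quantile(0.75)
iqr = q3 - q1
outliers = (features < q1 - 1.5 * iqr) | (features > q3 + 1.5 * iqr)
print("outliers per feature:", outliers.sum().tolist())

# Imputation of missing values from the nearest neighbours
imputed = KNNImputer(n_neighbors=5).fit_transform(features)

# Augmentation: SMOTE oversamples the minority class
X_aug, y_aug = SMOTE(random_state=1).fit_resample(imputed, y)
print("before:", np.bincount(y), "after:", np.bincount(y_aug))
```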
Some neural networks (multilayer perceptrons performing regression and classification) [31] are available as supervised learning functions (Module 2). A Random Forest (RF) algorithm [5] is used as a classifier and a regressor; it is also used to evaluate the importance of the features.
Some unsupervised learning methods (Module 3) are also available. The K-means algorithm [4] can be executed for clustering problems. The CK-Means algorithm [11] can be called to extract data from the cluster's intersection. The metric to evaluate the cluster consistencies of those first two algorithms is the Silhouette Index (SI) [34]. Concerning the dimensionality reduction, the Principal Component Analysis (PCA) algorithm [17] is included in the AI 2 framework. Two new decision processes are also included to help with the dimensionality reduction problems. 1. Decision Process for Dimensionality Reduction before Clustering (DPDRC) [12] and 2. Decision Process for Dimensionality Reduction (DPDR) [10]. Those two are used in unsupervised learning and supervised learning contexts, respectively. In an unsupervised learning context, Feature Ranking Process Based on Silhouette Decomposition (FRSD) [40] helps evaluate the importance of the features.
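For illustration, the clustering and cluster-consistency tools named here map directly onto scikit-learn primitives. The sketch below scores K-means partitions of the iris data with the Silhouette Index for a few candidate numbers of clusters (the candidate values of k are an arbitrary choice):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import silhouette_score

X = load_iris().data
for k in (2, 3, 4, 5):  # candidate numbers of clusters (illustrative)
    labels = KMeans(n_clusters=k, random_state=1, n_init=10).fit_predict(X)
    print(k, round(silhouette_score(X, labels), 3))
```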
2.5 GHG methods - CodeCarbon integration in AI 2
Climate change is an essential issue for humanity. It is our responsibility to be aware of it and to do everything that can be done to lower GHG emissions. We know that computer science, and particularly machine learning, can generate significant GHG while executing on CPUs and GPUs. The CodeCarbon library is an important initiative available to data scientists so that they can be aware of their impact on GHG. The following quote can be found on the CodeCarbon website (at pypi.org/project/codecarbon/), based on [22]:

"While computing currently represents roughly 0.5% of the world's energy consumption, that percentage is projected to grow beyond 2% in the coming years, which will entail a significant rise in global CO2 emissions if not done properly. Given this increase, it is important to quantify and track the extent and origin of this energy usage, and to minimize the emissions incurred as much as possible. For this purpose, we created CodeCarbon, a Python package for tracking the carbon emissions produced by various kinds of computer programs, from straightforward algorithms to deep neural networks. By taking into account your computing infrastructure, location, usage and running time, CodeCarbon can provide an estimate of how much CO2 you produced, and give you some comparisons with common modes of transportation to give you an order of magnitude."

The contribution of this paper is to embed this library's features in a machine learning framework, add some machine learning-based functions to predict the next request's amount of GHG, and try to spare its execution by proposing some alternatives. Fig. 6 explains those embedded GHG functionalities.
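For reference, CodeCarbon's tracker can wrap any block of code; the following sketch measures the emissions of a single clustering request in the way AI 2's GHG functions build upon (the dataset size and model are arbitrary examples):

```python
from codecarbon import EmissionsTracker
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=20_000, n_features=10, random_state=0)

tracker = EmissionsTracker()  # writes emissions.csv by default
tracker.start()
KMeans(n_clusters=3, n_init=10).fit(X)
emissions = tracker.stop()    # estimated kg CO2 for the tracked block
print(f"{emissions:.2e} kg CO2")
```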
Figure 6: GHG module architecture

First, every GHG statistic (request name, machine learning algorithm used, dataset, number of data, fields, elapsed time, GHG emissions) is stored in a file. When a user is about to launch a new request, the AI 2 framework uses this stored history to predict the amount of GHG the request will generate. A multilayer perceptron (MLP) is used to evaluate this GHG amount. This MLP has 5 hidden layers of 25 neurons each; it uses a relu activation function and an adam solver. Then, a k-means clustering algorithm is used to regroup every past request similar to the current one. The list is proposed to the user, so he can spare the execution when similar results are already available in the history. Knowing how much GHG will be generated and knowing the similar results of the past, the user finally decides whether or not to execute the new request. Fig. 10 presents an example of the information given by the chatbot concerning the GHG before launching a new request.
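The stated predictor (an MLP with 5 hidden layers of 25 neurons, relu activation and adam solver) corresponds to scikit-learn's MLPRegressor. The sketch below trains such a model on a synthetic request log of 1382 rows, matching the history size reported in Section 3.6; the chosen descriptors and the toy emission model are assumptions made only so the example runs:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Stand-in request history: (rows, features, elapsed seconds) -> kg CO2.
# Real AI 2 trains on its stored request log; this synthetic log is only
# here so the sketch runs end to end.
rng = np.random.default_rng(0)
X = rng.uniform([10_000, 5, 1], [50_000, 20, 60], size=(1382, 3))
y = 1e-9 * X[:, 0] * X[:, 1] + 1e-7 * X[:, 2]  # toy emission model

# Architecture stated in the paper: 5 hidden layers of 25 neurons, relu, adam
model = MLPRegressor(hidden_layer_sizes=(25,) * 5, activation="relu",
                     solver="adam", max_iter=2000, random_state=0).fit(X, y)
print(model.predict([[30_000, 12, 10]]))  # GHG forecast for the next request
```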
2.6 Explainability methods
The goal of this part is to get rid of the famous "black box" problem in machine learning. While most frameworks simply display the results of every executed algorithm, AI 2 systematically displays the ad-hoc graphics, tables and texts that ensure a better explainability for a particular algorithm. It could be some learning curves, some scalability curves, or some confusion matrices. For instance, for a clustering process, some stacked radar graphics (one per cluster) are produced, plus a Silhouette Index graphic that shows the clusters' consistency. A cluster table and a text (in LaTeX format) are also created to complete the explainability of the process. For each machine learning algorithm, the totality of the graphics, tables and texts is generated using the explain() method.
Predicted execution time (in sec): 4.498
Predicted generated GHG: 4.899e-05 kg CO2
Here are the most similar requests in case launching another request can be avoided.
Request _2022-11-21_21-23-43 using dataset make_blob
Request _2022-11-22_13-54-45 using dataset make_blob
Request _2022-11-22_14-29-32 using dataset make_blob
Launch the request (y/n)?
3 Results
The following presents five functional use cases that emphasise the singularity of the AI 2 framework. They show how a user can execute requests on this framework and what type of results are presented as output. The output graphics, tables and texts are not reproduced in this paper for two reasons: 1. it is not what this paper intends to demonstrate (for instance, there is no need to show the result of a simple K-means clustering process); 2. too many graphics, tables and texts would have been required. Cases 1 to 5 present a clustering, a reduction of dimensionality, a classification, a prediction, and an evaluation of the features' importance.
3.1 Case 1: Clustering
The first case is about a clustering process. As mentioned earlier, the user must write his query in English in the chatbot. For this first case, the following command has been entered: "I want to perform a clustering using iris dataset and having 3 clusters."
From the Parameters.csv file, a sample of which is presented in Table 4, the questions shown in Table 4 are generated by the chatbot to fill in the required information about a clustering process. At this first step, AI 2 transparently tries to find the answers in the command entered by the user. After this first step, if AI 2 misses some information, the chatbot asks for it until every critical piece of information is defined. For this example, the iris dataset is loaded and a k-means algorithm is launched with the parameter n_clusters = 3, using the default parameters random_state = 1 and init = "k-means++".
The primary results are displayed, presenting a data table along with the clusters, which is what most frameworks would do. With AI 2 , each complementary graphic, table and text can be obtained using the explain() method. In this first case, stacked radar graphics are generated for each cluster, allowing the user to visualize the profile of every cluster. It also generates a graphic of the Silhouette Index, showing and measuring the consistency of every cluster along with the mean over the whole clustering process. For each table and graphic, a short text describing it is generated in LaTeX format.
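For reference, the request of this first case reduces to the following scikit-learn call with exactly the parameters quoted above (n_clusters=3, random_state=1, init="k-means++"); computing per-cluster mean profiles, the values a stacked radar chart would display, is an illustrative addition:

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

iris = load_iris(as_frame=True)
km = KMeans(n_clusters=3, random_state=1, init="k-means++", n_init=10)
labels = km.fit_predict(iris.data)

# Mean feature profile of each cluster: the values a stacked radar
# chart like the one produced by explain() would display
profiles = iris.data.assign(cluster=labels).groupby("cluster").mean()
print(profiles)
```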
3.2 Case 2: Reduction of dimensionality
The second case is about the reduction of dimensionality. The entered command was: "reduction of dimensionality with iris dataset and having 3 components." The only required parameter is the targeted number of components that should be used to downsize the dataset; if this parameter is not specified in the command, the chatbot directly asks for it. Since it is defined in this command, AI 2 will extract three components of the dataset using the PCA algorithm. Again from the Parameters.csv file, the questions shown in Table 5 are generated by the chatbot to fill in the required information about a reduction of dimensionality process. The result is a dataset reduced to three principal components by the PCA algorithm. The explain() method generates two graphics: 1. the covariance heatmap of the initial features, and 2. a bar graph of the three extracted components' importance (explained variance ratio). For both graphics, a short LaTeX explanation is generated.
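This case corresponds to a plain PCA extraction; a minimal sketch reducing the iris features to three components and printing the explained variance ratio used for the bar graph:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X = load_iris().data
pca = PCA(n_components=3).fit(X)
X_reduced = pca.transform(X)

# Importance of each extracted component, as used for the bar graph
print(pca.explained_variance_ratio_)
```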
3.3 Case 3: Classification
The following case is about the typical problem of classification. For this case, a multi-sentence English command is given: "Perform a classification of the iris dataset. I want this request to be reproducible. Test [4.8,3.0,1.4,0.2] value."
The first sentence of the command is straightforward: it calls a classification of the iris dataset. To do so, AI 2 will call a multilayer perceptron (MLPClassifier from the Scikit-learn framework). All three sentences are written in a single command. The second sentence mentions that reproducible results are required: this sets the seed of the random_state parameter to the integer value 1, ensuring the request gives the same result every time. The opposite would have been a "random request", where the seed would have been set to None, allowing the request to give slightly different results due to random synaptic weight initialization. If nothing is specified, the request is reproducible. The final sentence commands to try some values; in other words, it aims to classify the specified values [4.8,3.0,1.4,0.2]. The questions in Table 6 are used to extract this information from the text command. The classification result is then shown. The training is done with cross-validation with the parameter k = 10: the whole dataset is split k times, and the subsets are used to validate the process. The training and validation scores are returned for each step of the cross-validation. While both scores are increasing, the training may continue. When the training score is still increasing while the validation score starts to decrease, it is precisely the right time to stop the training process: stopping before that moment results in underfitted training, and stopping after that point results in overfitted training. Calling the explain() method, a learning curve is generated for both the training score and the validation score based on the cross-validation.
A state-of-the-art method executes the neural network to classify the data. Earlier in the process, the training and test data were split, allowing the algorithm to train and to evaluate the performances. Performance graphics are also created, showing the performance of the training. Scalability graphics show the ratio of the number of processed samples to the processing time. As in the other cases, LaTeX texts are generated to explain every graphic.
Case 4: Prediction
This case aims to demonstrate the prediction feature of the AI 2 framework, using the MLPRegressor from Scikit-learn. It also shows how to preprocess a dataset before calling an algorithm. This preprocessing can be invoked from the chatbot. In this case, the following English command is given: Do the preprocess of the iris2 dataset. Note that the iris2 dataset is identical to the iris dataset, except that the class field is not included. Selecting the columns of a dataset is not included in this first version of AI 2 , but it will be in a future version. The iris2 dataset thus keeps four features: sepal length, sepal width, petal length and petal width. The value of the petal width must be predicted. When responding to the chatbot's questions, the user must specify that the first three fields are non-normalized features and the fourth is a regression value. After the questions are answered in the chatbot, the iris2.json file is created, containing the information about the configuration. The iris2_preprocessed.csv data file is also created, containing the preprocessed data. A second command can then be sent to AI 2 using the chatbot: I want to make a prediction using the iris2 dataset. Test [4.5,3.1,1.2]. The questions in Table 7 are extracted from the text command. Three graphics are generated to explain the results, as in Section 3.3: a learning curve is displayed to ensure there is no training under-fitting or over-fitting, a second graphic shows the performance of the training process, and a third graphic shows the scalability of the training. As always, LaTeX texts are created to explain the figures, ready to be cut and pasted into a LaTeX document.
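A minimal sketch of the regression call (the column layout of iris2_preprocessed.csv and max_iter are assumptions made for illustration):

import pandas as pd
from sklearn.neural_network import MLPRegressor

# File produced by the preprocessing module; assume the first three columns
# are the features and the fourth holds the regression value (petal width).
data = pd.read_csv("iris2_preprocessed.csv")
X, y = data.iloc[:, :3].values, data.iloc[:, 3].values

reg = MLPRegressor(random_state=1, max_iter=1000).fit(X, y)
print(reg.predict([[4.5, 3.1, 1.2]]))   # the "Test [...]" values of the command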
Case 5: Feature importance
This next case shows how to evaluate the feature importance in the AI 2 framework. The following command has been typed in AI 2 's chatbot: Find the importance of the features with the iris dataset. This command calls a Random Forest algorithm, more precisely the RandomForestClassifier or the RandomForestRegressor from the Scikit-learn framework. According to the configuration file's content (iris.json in this case), AI 2 detects whether the dataset is made for regression or for classification. In this case, iris is a dataset made for classification, so the RandomForestClassifier algorithm is used. From the Parameters.csv file, the questions shown in Table 8 are asked by the chatbot to fill in the information about the feature importance algorithm. The explain() method produces a graphic where the X axis represents the index of the features and the Y axis shows each feature's normalized level of importance. A LaTeX explanation text is generated as usual.
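A minimal sketch of the call selected for a classification dataset (the dataset loading is an illustrative assumption):

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

# iris.json marks this dataset as a classification dataset, so the
# classifier variant is chosen (RandomForestRegressor otherwise).
forest = RandomForestClassifier(random_state=1).fit(X, y)
print(forest.feature_importances_)   # normalized importances plotted by explain()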
Validation of the GHG prediction algorithms
As stated in 2.5, the AI 2 framework predicts the GHG emissions of each algorithm to be executed. The execution time is also predicted before calling the machine learning algorithm. To validate those predictions, a clustering algorithm has been called in a 50-iteration loop. For each execution, a random-sized dataset of 10,000 to 50,000 rows and 5 to 20 features has been used. Those datasets were generated by the make_blobs() function of the Scikit-learn framework. Fig. 8 shows the validation of the predicted and real values of the generated GHG: the X axis displays the 50 iterations, and the Y axis shows the level of GHG (in kgCO 2 units). The regression algorithm was trained on a dataset of 1382 rows containing the request history.
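The validation loop can be sketched as follows, assuming the CodeCarbon EmissionsTracker API and a k-means run as the measured algorithm (the GHG prediction step itself is omitted here):

import random
from codecarbon import EmissionsTracker
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

real_ghg = []
for i in range(50):                                  # 50 validation iterations
    n_rows = random.randint(10_000, 50_000)          # random-sized dataset
    n_feats = random.randint(5, 20)
    X, _ = make_blobs(n_samples=n_rows, n_features=n_feats)

    tracker = EmissionsTracker()                     # CodeCarbon measurement
    tracker.start()
    KMeans(n_clusters=3, n_init=10).fit(X)           # the clustering under test
    real_ghg.append(tracker.stop())                  # emissions in kg CO2 eq.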
Before launching another request that could be avoided, AI 2 lists the most similar past requests:

Request_2022-11-21_21-23-43 using dataset make_blob
Request_2022-11-22_13-54-45 using dataset make_blob
Request_2022-11-22_14-29-32 using dataset make_blob

Concerning the predicted and real GHG and execution time, it can be seen that the signal is reasonably reconstructed.
Finally, before launching each request, AI 2 proposes similar requests from the request history, extracted using a clustering process. Fig. 10 presents an example of the AI 2 propositions of similar requests.
Discussion
The first contribution of this paper is to present an accessible framework. With its state-of-the-art NLP methods, this machine learning framework is a pioneer in communicating with non-expert users in English. The Transformers technology allows the AI 2 framework to receive native-language commands that are extracted, parsed and executed. When an essential parameter is missing, AI 2 uses its chatbot to communicate with the user, asking them to enter the missing information. With this NLP interface, a user can exploit the AI 2 framework without knowing how to code in a programming language like Python.
The AI 2 framework is GHG-aware, and this is the second contribution of this paper. The CodeCarbon library is encapsulated in each of its ML functions, allowing the calculation of the GHG emitted by each executed algorithm. Those GHG records are kept in a register and used to predict, based on ML, the GHG that will be generated before the execution. AI 2 also proposes similar registered requests, also found using ML, to spare the execution and save GHG.
Unlike most other frameworks, AI 2 systematically encapsulates the most important forms of explanation about the data and the results. This aspect of the framework is crucial to solving the famous black-box problem, and it is the third contribution of this paper. Most machine learning frameworks do not systematically offer explainability with the results; AI 2 does. For each request, it generates graphics, tables and texts explaining the results and the data, thus making this framework more ethical than others.
The final contribution of this paper is data preprocessing. It usually takes time to code a suitable preprocessing of the data. The AI 2 framework proposes a method based on communication with the chatbot to automate this process. Guided by the AI 2 chatbot, the user can do some basic preprocessing of their datasets by establishing the datasets' structures. Once a structure is stored in a JSON file, the preprocessing module can generate a new preprocessed dataset.
Comparing AI 2 with other machine learning frameworks, what is the advantage of using it? For now, there are frameworks that are more complete and more sophisticated. The AI 2 framework targets non-expert users who need a machine learning algorithm to process their data. Typical AI 2 users would be, for instance, researchers, engineers, teachers and students in natural science. A significant part of the scientific community cannot program complex algorithms using a programming language, and an NLP interface is the best solution since it requires no programming skills. Table 9 shows a comparison between AI 2 and the other popular machine learning frameworks [13], according to five criteria: 1. NLP interface, 2. GHG awareness, 3. Explainability, 4. Preprocessing, and 5. Coding required. Note that some well-known frameworks may seem absent from the list: CNTK and Theano are no longer supported, and Caffe2 is merged with PyTorch. According to Table 9, we can regroup the frameworks into three categories: 1. the general, multi-purpose frameworks (Gluon, Keras, MXNet, Tensorflow, PyTorch, Matlab, Orange and Scikit-learn), 2. the explainability frameworks (AIX360, ELI5, LIME, SHAP, Skater, What-if Tool and XAI), and 3. the GHG-aware framework (CodeCarbon).
This table shows AI 2 's novelty. It is the only framework that combines all the studied criteria (NLP interface, GHG awareness, explainability, preprocessing, and no coding required). It is the first framework to offer an NLP interface for sending instructions to the framework. Several frameworks integrate the explainability of the data and the models, but no general, multi-purpose framework includes it. AI 2 is the next leap toward a native language-based, GHG-aware and explainable ML framework.
Conclusion
This framework proposes a tool enabling non-experts to use machine learning methods. It offers an NLP interface so the user can communicate with the framework through a chatbot. It encapsulates some very concrete functions to provide ecological awareness. It includes the principle of explainability, providing extended explanations of the results for the different algorithms. It finally allows preprocessing of the data using an English chatbot.
This framework could be the first draft of a long series of improvements, and there is future work to do for each of its contributions. Regarding the NLP interface, the framework can be improved by training the pre-trained Transformer on a machine learning-oriented text corpus, which will likely improve the NLP performance significantly. The chatbot method can also be optimized to minimize errors and better recognize the user's intentions, and the questions used to extract command information can be improved by increasing their quality and number. GHG awareness can also be improved: better methods can be found to minimize wasted energy, to refine the GHG estimation before calling an algorithm, and to cluster similar requests. There is a lot to do, but this framework has the merit of being aware of the climate change problem and of proposing a modest solution. The explanations available for each dataset and machine learning algorithm can also be improved in quantity and quality: some essential explanations are included in this framework, but more need to be systematically included. Regarding the preprocessing module, there are many things to add; for instance, additional normalization methods, row and column selection, and graphics to plot the data at the preprocessing stage. Finally, this framework contains a limited number of ML algorithms, and more can easily be added to the AI 2 framework.
Figure 1: Architecture of the AI 2 framework.

Figure 2: Architecture of the NLP interface in the AI 2 framework.

Figure 3: Preprocessing architecture.

Fig. 4 shows an example of a structure configuration JSON file. The included fields in JSON format are the following: dataset_name is the name of the dataset. dataset_description is a description of the dataset. feat_no is the number of the feature. feat_label is the label given to this feature. The type of the feature is given by feat_type; possible values are 1. Feature field, 2. Regression value field, 3. Class field, and 4. Class for neural network field (to be one-hot encoded). The last field is feat_normalization; possible values are 1. No normalization, and 2. MinMax normalization. Fig. 5 shows an example of an exchange between the chatbot and the user, aiming to preprocess the data.

Figure 4: iris.json structure file.

Figure 5: An example of the exchange between the chatbot and the user for the data preprocessing.

Figure 7: Information from the chatbot concerning the GHG before launching a new request.

Figure 8: Validation of the predicted and real GHG.

Fig. 9 displays the validation of the predicted and actual values of the execution time for every iteration of the loop. The X axis shows the 50 iterations, and the Y axis shows the execution time (in sec.).

Figure 9: Validation of the predicted and real execution time.

Figure 10: An example of the AI 2 propositions of similar requests.
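Based on the field list described above for Fig. 4, a minimal iris.json structure file could look like the following sketch (the exact nesting and values are illustrative assumptions):

{
  "dataset_name": "iris",
  "dataset_description": "Iris flower measurements",
  "features": [
    {"feat_no": 1, "feat_label": "Sepal length in cm",
     "feat_type": 1, "feat_normalization": 1},
    {"feat_no": 5, "feat_label": "Class",
     "feat_type": 3, "feat_normalization": 1}
  ]
}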
Table 1: Sample of the Parameters.csv file. Only data used for this example is presented (14 rows out of a total of 73).

Key      Type  Return value     Question to the command
PROBLEM  Y/N   DIMENSIONALITY   Is this about dimensionality?
PROBLEM  Y/N   DIMENSIONALITY   Is this about dimensionality reduction?
PROBLEM  Y/N   CLASSIFICATION   Is this about classification?
PROBLEM  Y/N   CLASSIFICATION   Is this a classification problem?
...      ...   ...              ...
PROBLEM  Y/N   CLUSTERING       Is this clustering?
PROBLEM  Y/N   CLUSTERING       Is this a clustering problem?
PROBLEM  Y/N   CLUSTERING       Is this regrouping?
PROBLEM  Y/N   CLUSTERING       Is this a regrouping problem?
PROBLEM  Y/N   CLUSTERING       Do you want to regroup data?
PROBLEM  Y/N   CLUSTERING       Do you want to cluster data?
DATASET  Std.                   What is the dataset?
DATASET  Std.                   Which data are used?
NB_CLST  Std.                   How many groups?
NB_CLST  Std.                   How many clusters?
...      ...   ...              ...
Table 2: Example of a typical command and the question sequence used to extract the information of the command: I want to perform a clustering using 3 clusters on the iris dataset.

Questions                                  Answer  Ret. value
(To extract the type of the problem)
Is this about dimensionality?              No      None
Is this about dimensionality reduction?    No      None
Is this about classification?              No      None
Is this a classification problem?          No      None
Is this clustering?                        Yes     CLUSTERING
(To extract the name of the dataset)
What is the dataset?                       iris    iris

(Excerpt of the preprocessing exchange shown in Fig. 5:)
(Value example: 5.1)
>Sepal length in cm
What is the type of field Sepal length in cm? (1. Feature 2. Predicted value 3. Class 4. Class (to be converted ONE-HOT for neural network))
>1
What is the normalization applied to Sepal length in cm? (1. None 2. MinMax)
>1
(... And so on for each feature and class.)
Saving dataset configuration...
The configuration is saved to iris.json
Processing to the file conversion...
The configuration is saved to iris_preprocessed.csv
Table 3: Machine learning algorithms included in the AI 2 framework.

No.  Module          Algorithm
1.   Pre-processing  IQR
2.                   SMOTE
3.                   KNNImputer
4.                   xGEWFI metric
5.                   ...
Table 4: Required information and questions to access it.

Key      Type  Return value  Questions
PROBLEM  Y/N   CLUSTERING    Is this clustering?
PROBLEM  Y/N   CLUSTERING    Is this a clustering problem?
PROBLEM  Y/N   CLUSTERING    Is this regrouping?
PROBLEM  Y/N   CLUSTERING    Is this a regrouping problem?
PROBLEM  Y/N   CLUSTERING    Do you want to regroup data?
PROBLEM  Y/N   CLUSTERING    Do you want to cluster data?
DATASET  Std.                What is the dataset?
DATASET  Std.                Which data are used?
NB_CLST  Std.                How many clusters?
NB_CLST  Std.                How many groups?
Table 5: Required information and questions to access it.

Key      Type  Return value    Questions
PROBLEM  Y/N   DIMENSIONALITY  Is this about dimensionality?
PROBLEM  Y/N   DIMENSIONALITY  Is this about dimensionality reduction?
PROBLEM  Y/N   DIMENSIONALITY  Is this about reduction of dimensionality?
PROBLEM  Y/N   DIMENSIONALITY  Is this a regrouping problem?
PROBLEM  Y/N   DIMENSIONALITY  Is this a dimensionality problem?
PROBLEM  Y/N   DIMENSIONALITY  Is this a dimensionality reduction problem?
DATASET  Std.                  What is the dataset?
DATASET  Std.                  Which data are used?
NB_CMPS  Std.                  How many components?
Table 6: Required information and questions to access it.

Key      Type  Return value    Questions
PROBLEM  Y/N   CLASSIFICATION  Is this about classification?
PROBLEM  Y/N   CLASSIFICATION  Is this a classification problem?
PROBLEM  Y/N   CLASSIFICATION  Do you want to classify data?
DATASET  Std.                  What is the dataset?
DATASET  Std.                  Which data are used?
RANDOM   Y/N   RANDOM          Is this a random request?
RANDOM   Y/N   REPRODUCTIBLE   Is this a reproducible request?
TEST     Std.                  What are the test values?
TEST     Std.                  What values do you want to be tested?
Table 7: Required information and questions to access it.

Key      Type  Return value  Questions
PROBLEM  Y/N   PREDICTION    Do you want to make a prediction?
PROBLEM  Y/N   PREDICTION    Is this a prediction problem?
PROBLEM  Y/N   PREDICTION    Do you want to predict something?
DATASET  Std.                What is the dataset?
DATASET  Std.                Which data are used?
TEST     Std.                What are the test values?
TEST     Std.                What values do you want to be tested?
Table 8: Required information and questions to access it.

Key      Type  Return value  Questions
PROBLEM  Y/N   FEAT_IMP      Is this about feature importance?
PROBLEM  Y/N   FEAT_IMP      Is this about the importance of the features?
PROBLEM  Y/N   FEAT_IMP      Is this a feature importance problem?
PROBLEM  Y/N   FEAT_IMP      Do you want to know the feature importance?
DATASET  Std.                What is the dataset?
DATASET  Std.                Which data are used?
Table 9: Comparison of the popular machine learning frameworks, specialized frameworks, and AI 2.

Framework     NLP  GHG aware  Explain.  Prepro.  Code req.  Ref.
AIX360        NO   NO         YES       NO       YES        [3]
ELI5          NO   NO         YES       NO       YES        [3]
Gluon         NO   NO         NO        NO       YES        [27]
Keras         NO   NO         NO        NO       YES        [27]
LIME          NO   NO         YES       NO       YES        [25]
Matlab        NO   NO         NO        NO       YES        [27]
MXNet         NO   NO         NO        NO       YES        [41]
Orange        NO   NO         NO        NO       NO         [8]
PyTorch       NO   NO         NO        NO       YES        [27]
Scikit-learn  NO   NO         NO        NO       YES        [27]
SHAP          NO   NO         YES       NO       YES        [25]
Skater        NO   NO         YES       NO       YES        [3]
Tensorflow    NO   NO         NO        NO       YES        [41]
What-if Tool  NO   NO         YES       NO       YES        [25]
XAI           NO   NO         YES       NO       YES        [25]
CodeCarbon    NO   YES        NO        NO       NO         [22]
AI 2          YES  YES        YES       YES      NO
Acknowledgment

This work has been supported by the "Cellule d'expertise en robotique et intelligence artificielle" of the Cégep de Trois-Rivières and the Natural Sciences and Engineering Research Council.
[1] Martín Abadi et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems.
[2] Acheampong Francisca Adoma, Nunoo-Mensah Henry, and Wenyu Chen. Comparative analyses of BERT, RoBERTa, DistilBERT, and XLNet for text-based emotion recognition. In 2020 17th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP), pages 117-121, 2020.
[3] Namita Agarwal and Saikat Das. Interpretable machine learning tools: A survey. In 2020 IEEE Symposium Series on Computational Intelligence (SSCI), pages 1528-1534.
[4] Mohiuddin Ahmed, Raihan Seraj, and Syed Mohammed Shamsul Islam. The k-means algorithm: A comprehensive survey and performance evaluation. Electronics, 9(8):1295, 2020.
[5] Gérard Biau and Erwan Scornet. A random forest guided tour. TEST, 25(2):197-227, 2016.
[6] Jia-Wei Chang, Neil Yen, and Jason C. Hung. Design of a NLP-empowered finance fraud awareness model: The anti-fraud chatbot for fraud detection and fraud classification as an instance. 13(10):4663-4679.
[7] Nitesh V. Chawla, Kevin W. Bowyer, Lawrence O. Hall, and W. Philip Kegelmeyer. SMOTE: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16(1):321-357, 2002.
[8] Janez Demšar et al. Orange: Data mining toolbox in Python. The Journal of Machine Learning Research, 14(1):2349-2353, 2013.
[9] Jean-Sébastien Dessureault and Daniel Massicotte. Explainable global error weighted on feature importance: The xGEWFI metric to evaluate the error of data imputation and data augmentation. arXiv:2206.08980.
[10] Jean-Sébastien Dessureault and Daniel Massicotte. DPDR: A novel machine learning method for the decision process for dimensionality reduction. arXiv:2206.08974, 2022.
[11] Jean-Sébastien Dessureault and Daniel Massicotte. ck-means, a novel unsupervised learning method that combines fuzzy and crispy clustering methods to extract intersecting data. arXiv:2206.08982, 2022.
[12] Jean-Sébastien Dessureault and Daniel Massicotte. DPDRC, a novel machine learning method about the decision process for dimensionality reduction before clustering. AI, 3(1):1-21, 2022.
[13] Jean-Sébastien Dessureault and Daniel Massicotte. AI2: A novel explainable machine learning framework using an NLP interface.
[14] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding, 2019.
[15] Lisa R. Goldberg. The Book of Why: The New Science of Cause and Effect, volume 19. Routledge, 2019.
[16] Raffaele Guarasci, Stefano Silvestri, Giuseppe De Pietro, Hamido Fujita, and Massimo Esposito. Assessing BERT's ability to learn Italian syntax: A study on null-subject and agreement phenomena.
[17] Ian T. Jolliffe and Jorge Cadima. Principal component analysis: A review and recent developments. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 2016.
[18] M. I. Jordan. Serial order: A parallel distributed processing approach. Technical report AD-A-173989/5/XAB; ICS-8604, June 1985-March 1986.
[19] Shigeki Karita et al. A comparative study on Transformer vs RNN in speech applications. In 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 449-456, 2019.
[20] The Institute for Ethical AI & Machine Learning. The institute for ethical AI & machine learning.
[21] Yinhan Liu et al. RoBERTa: A robustly optimized BERT pretraining approach.
[22] Kadan Lottick, Silvia Susai, Sorelle A. Friedler, and Jonathan P. Wilson. Energy usage reports: Environmental awareness as part of algorithmic accountability.
[23] Shivani Malhotra, Vinay Kumar, and Alpana Agarwal. Bidirectional transfer learning model for sentiment analysis of natural language. 12(11):10267-10287.
[24] Maria das Graças Bruno Marietto et al. Artificial Intelligence MArkup Language: A brief tutorial.
[25] Sina Mohseni, Niloofar Zarei, and Eric D. Ragan. A multidisciplinary survey and framework for design and evaluation of explainable AI systems.
[26] Anand Motwani, Piyush Kumar Shukla, and Mahesh Pawar. Novel framework based on deep learning and cloud analytics for smart patient monitoring and recommendation (SPMR).
[27] Giang Nguyen et al. Machine learning and deep learning frameworks and libraries for large-scale data mining: A survey. Artificial Intelligence Review, 52(1):77-124, 2019.
[28] Long Ouyang et al. Training language models to follow instructions with human feedback.
[29] Sebastian Palacio, Adriano Lucieri, Mohsin Munir, Sheraz Ahmed, Jörn Hees, and Andreas Dengel. XAI handbook: Towards a unified framework for explainable AI, 2021.
[30] The-Hanh Pham, Vinitha Sree, John Mapes, Sumeet Dua, Oh Shu Lih, Joel E. W. Koh, Edward J. Ciaccio, and U. Rajendra Acharya. A novel machine learning framework for automated detection of arrhythmias in ECG segments. 12(11):10145-10162.
[31] Hassan Ramchoun, Youssef Ghanou, Mohamed Ettaouil, and Mohammed Amine Janati Idrissi. Multilayer perceptron: Architecture optimization and training. International Journal of Interactive Multimedia and Artificial Intelligence (IJIMAI), 2016.
[32] Denis Rothman. Transformers for Natural Language Processing: Build Innovative Deep Neural Network Architectures for NLP with Python, PyTorch, TensorFlow, BERT, RoBERTa, and More. Packt Publishing Ltd.
[33] Denis Rothman. Transformers for Natural Language Processing: Build innovative deep neural network architectures for NLP with Python, PyTorch, TensorFlow, BERT, RoBERTa, and more. Packt Publishing Ltd, 2021.
[34] Peter J. Rousseeuw. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics, 20:53-65, 1987.
[35] David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning internal representations by error propagation.
[36] Olga G. Troyanskaya, David Botstein, and Russ B. Altman. Missing value estimation. In Daniel P. Berrar, Werner Dubitzky, and Martin Granzow, editors, A Practical Approach to Microarray Data Analysis, pages 65-75. Springer US, 2003.
[37] Ashish Vaswani et al. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.
[38] Joost Verbraeken et al. A survey on distributed machine learning. ACM Computing Surveys, 53(2):30:1-30:33, 2020.
[39] Zhaobin Wang, Ke Liu, Jian Li, Ying Zhu, and Yaonan Zhang. Various frameworks and libraries of machine learning and deep learning: A survey. Archives of Computational Methods in Engineering, 2019.
[40] Jaehong Yu, Hua Zhong, and Seoung Bum Kim. An ensemble feature ranking algorithm for clustering analysis. Journal of Classification, 37(2):462-489, 2020.
[41] Kuo Zhang, Salem Alqahtani, and Murat Demirbas. A comparison of distributed machine learning platforms. In 2017 26th International Conference on Computer Communication and Networks (ICCCN), pages 1-9, 2017.
[42] Xingzhou Zhang, Yifan Wang, and Weisong Shi. pCAMP: Performance comparison of machine learning packages on the edges. 2018.
| [] |
[
"Cross-Lingual Low-Resource Set-to-Description Retrieval for Global E-Commerce",
"Cross-Lingual Low-Resource Set-to-Description Retrieval for Global E-Commerce"
] | [
"Juntao Li \nCenter for Data Science\nAcademy for Advanced Interdisciplinary Studies\nPeking University\nBeijingChina\n\nWangxuan Institute of Computer Technology\nPeking University\nBeijingChina\n",
"Chang Liu \nCenter for Data Science\nAcademy for Advanced Interdisciplinary Studies\nPeking University\nBeijingChina\n\nWangxuan Institute of Computer Technology\nPeking University\nBeijingChina\n",
"Jian Wang \nDAMO Academy\nAlibaba Group\n\n",
"Lidong Bing binglidong@gmail.com \nDAMO Academy\nAlibaba Group\n\n",
"Hongsong Li hongsong.lhs@alibaba-inc.com ",
"Xiaozhong Liu \nDAMO Academy\nAlibaba Group\n\n\nIndiana University\nBloomingtonUSA\n",
"Dongyan Zhao \nCenter for Data Science\nAcademy for Advanced Interdisciplinary Studies\nPeking University\nBeijingChina\n\nWangxuan Institute of Computer Technology\nPeking University\nBeijingChina\n",
"Rui Yan ruiyan@pku.edu.cn \nCenter for Data Science\nAcademy for Advanced Interdisciplinary Studies\nPeking University\nBeijingChina\n\nWangxuan Institute of Computer Technology\nPeking University\nBeijingChina\n"
] | [
"Center for Data Science\nAcademy for Advanced Interdisciplinary Studies\nPeking University\nBeijingChina",
"Wangxuan Institute of Computer Technology\nPeking University\nBeijingChina",
"Center for Data Science\nAcademy for Advanced Interdisciplinary Studies\nPeking University\nBeijingChina",
"Wangxuan Institute of Computer Technology\nPeking University\nBeijingChina",
"DAMO Academy\nAlibaba Group\n",
"DAMO Academy\nAlibaba Group\n",
"DAMO Academy\nAlibaba Group\n",
"Indiana University\nBloomingtonUSA",
"Center for Data Science\nAcademy for Advanced Interdisciplinary Studies\nPeking University\nBeijingChina",
"Wangxuan Institute of Computer Technology\nPeking University\nBeijingChina",
"Center for Data Science\nAcademy for Advanced Interdisciplinary Studies\nPeking University\nBeijingChina",
"Wangxuan Institute of Computer Technology\nPeking University\nBeijingChina"
] | [] | With the prosperous of cross-border e-commerce, there is an urgent demand for designing intelligent approaches for assisting e-commerce sellers to offer local products for consumers from all over the world. In this paper, we explore a new task of cross-lingual information retrieval, i.e., cross-lingual set-todescription retrieval in cross-border e-commerce, which involves matching product attribute sets in the source language with persuasive product descriptions in the target language. We manually collect a new and high-quality paired dataset, where each pair contains an unordered product attribute set in the source language and an informative product description in the target language. As the dataset construction process is both time-consuming and costly, the new dataset only comprises of 13.5k pairs, which is a low-resource setting and can be viewed as a challenging testbed for model development and evaluation in cross-border e-commerce. To tackle this cross-lingual set-to-description retrieval task, we propose a novel cross-lingual matching network (CLMN) with the enhancement of context-dependent cross-lingual mapping upon the pre-trained monolingual BERT representations. Experimental results indicate that our proposed CLMN yields impressive results on the challenging task and the contextdependent cross-lingual mapping on BERT yields noticeable improvement over the pre-trained multi-lingual BERT model. | 10.1609/aaai.v34i05.6335 | null | 212,810,890 | 2005.08188 | d4ce1ddaf6fa8f30294ac77c44c7f21a05da06aa |
Cross-Lingual Low-Resource Set-to-Description Retrieval for Global E-Commerce
Juntao Li
Center for Data Science
Academy for Advanced Interdisciplinary Studies
Peking University
BeijingChina
Wangxuan Institute of Computer Technology
Peking University
BeijingChina
Chang Liu
Center for Data Science
Academy for Advanced Interdisciplinary Studies
Peking University
BeijingChina
Wangxuan Institute of Computer Technology
Peking University
BeijingChina
Jian Wang
DAMO Academy
Alibaba Group
Lidong Bing binglidong@gmail.com
DAMO Academy
Alibaba Group
Hongsong Li hongsong.lhs@alibaba-inc.com
Xiaozhong Liu
DAMO Academy
Alibaba Group
Indiana University
BloomingtonUSA
Dongyan Zhao
Center for Data Science
Academy for Advanced Interdisciplinary Studies
Peking University
BeijingChina
Wangxuan Institute of Computer Technology
Peking University
BeijingChina
Rui Yan ruiyan@pku.edu.cn
Center for Data Science
Academy for Advanced Interdisciplinary Studies
Peking University
BeijingChina
Wangxuan Institute of Computer Technology
Peking University
BeijingChina
Cross-Lingual Low-Resource Set-to-Description Retrieval for Global E-Commerce
With the prosperity of cross-border e-commerce, there is an urgent demand for designing intelligent approaches to assist e-commerce sellers in offering local products to consumers from all over the world. In this paper, we explore a new task of cross-lingual information retrieval, i.e., cross-lingual set-to-description retrieval in cross-border e-commerce, which involves matching product attribute sets in the source language with persuasive product descriptions in the target language. We manually collect a new and high-quality paired dataset, where each pair contains an unordered product attribute set in the source language and an informative product description in the target language. As the dataset construction process is both time-consuming and costly, the new dataset only comprises 13.5k pairs, which is a low-resource setting and can be viewed as a challenging testbed for model development and evaluation in cross-border e-commerce. To tackle this cross-lingual set-to-description retrieval task, we propose a novel cross-lingual matching network (CLMN) with the enhancement of context-dependent cross-lingual mapping upon the pre-trained monolingual BERT representations. Experimental results indicate that our proposed CLMN yields impressive results on the challenging task and that the context-dependent cross-lingual mapping on BERT yields a noticeable improvement over the pre-trained multi-lingual BERT model.
Introduction
Cross-border online shopping has become popular in the past few years, and e-commerce platforms have started to pay more effort to improving their user experience. To increase the conversion rate, one critical aspect is how to display products to overseas customers with persuasive and informative descriptions. Fortunately, there already exist various local and cross-national e-commerce platforms, e.g., Amazon, eBay, Taobao, Lazada, with multi-lingual persuasive descriptions of products from all over the world, making it possible to retrieve a feasible product description in the foreign language for a local product. Compared with generation-based methods, retrieved product descriptions are more understandable and accessible to overseas users, and training a retrieval model requires less paired data. Accordingly, we formulate this problem as a cross-lingual information retrieval (CLIR) task for global e-commerce, i.e., ranking foreign product descriptions against a local product with an attribute set (Litschko et al. 2018; Zhang et al. 2019).

Figure 1: An example training pair sampled from the collected dataset, where a Chinese product attribute set is given for retrieving a proper persuasive description in English.
Conventional CLIR systems mainly comprise two components: machine translation and monolingual information retrieval. According to the translation direction, these systems are further categorized into translating the local language into the foreign one and translating the foreign language into the local one. By doing so, the CLIR task is converted into a monolingual setting through the machine translation system (Nie 2010; Rücklé, Swarnkar, and Gurevych 2019). Although these systems are conceptually simple and natural, they are restricted by the performance of the machine translation system. Thus, directly modeling CLIR with recent deep neural information retrieval models is promising (McDonald, Brokos, and Androutsopoulos 2018). However, while performing well, existing deep IR models require a large amount of labeled training data, which is expensive and costly in our cross-lingual retrieval scenario.
In this paper, we explore leveraging a deep neural model to directly solve the low-resource cross-lingual information retrieval task without a machine translation process. Herein, the critical component of the deep IR model is to capture the matching score between product attribute sets in the source language and product descriptions in the target language. As there is no existing open dataset for CLIR in e-commerce, we hired domain experts to collect a heuristic dataset with 13.5K pairs to facilitate the development of cross-lingual retrieval in e-commerce and address the shortage of datasets. Note that since present retrieval/searching systems of cross-border e-commerce platforms are not yet applicable for obtaining product description candidates in the target language for re-ranking, we mainly focus on designing a matching model to solve this low-resource cross-lingual problem directly. Figure 1 illustrates an example in the collected dataset, where an attribute set (i.e., keywords) of a product in the source language has a matched persuasive description in the target language. Specifically, the product attribute sets are collected from a local e-commerce platform, where various overlaps and relations exist inside the sets to improve the clicking rate. The corresponding heuristic product descriptions in the target language are created by domain experts and only leverage part of the attribute set, considering the overlaps and relations inside the set.
To address the obstacle of lacking paired data and the unordered product attribute sets, we propose a novel model with the enhancement of unsupervised monolingual pre-training, semi-supervised cross-lingual alignment, and a supervised fine-tuned matching model. Concretely, we propose to leverage the deep attention module in neural translation (Vaswani et al. 2017) and response selection for modeling the unordered product attributes and the copious cross-lingual correlations between the matched pairs in two different languages. Besides, motivated by the appealing performance of unsupervised monolingual pre-trained models (e.g., ELMo (Peters et al. 2018), BERT (Devlin et al. 2019)), we further incorporate a semi-supervised context-dependent cross-lingual mapping mechanism upon the monolingual BERT representations. By doing so, the data scarcity issue is significantly mitigated.
In a nutshell, our contributions are as follows.
(1) We gathered a new and challenging heuristic dataset for cross-lingual retrieval in e-commerce. The dataset is available upon request. (2) We proposed a cross-lingual matching network (CLMN) for addressing this task, which incorporates a deep attention mechanism for modeling unordered attribute sets and cross-lingual correlations, and a context-dependent cross-lingual mapping learned upon BERT. The code is available at https://github.com/LiuChang97/CLMN. (3) We conducted extensive experiments on the challenging dataset, including evaluating the performance of fine-tuned multilingual BERT representations, exploring the performance of strong machine translation based CLIR methods, and comparing with the state-of-the-art direct CLIR models. (4) Experimental results indicate that our proposed CLMN model achieves promising performance on the low-resource cross-lingual retrieval task. With the enhancement of context-dependent mapping upon BERT, our direct cross-lingual retrieval model achieves a significant performance improvement over the fine-tuned multilingual BERT and state-of-the-art CLIR models.
Related Work

Information Retrieval Models
Information retrieval models for cross-lingual tasks can be roughly categorized into two groups. The first one is to learn matching signals across different language spaces directly. For instance, bag-of-words addition has been proved effective for matching degree calculation (Vulić and Moens 2015), where each text is represented by adding its bilingual word embedding representations together. With the enhancement of the TF-IDF algorithm, the bag-of-words addition model further achieves a performance improvement, and the term-by-term query translation model yields the state-of-the-art performance for unsupervised cross-lingual retrieval (Litschko et al. 2018). Another group of methods transfers the cross-lingual retrieval task to a monolingual retrieval task by using machine translation systems (Schuster et al. 2019), e.g., combining a cross-language tree kernel with a neural machine translation system for retrieval (Da San Martino et al. 2017), or learning language-invariant representations for cross-lingual question re-ranking (Joty et al. 2017).
Low-Resource Cross-Lingual Learning
Existing low-resource cross-lingual learning methods mainly contain three elements, i.e., an off-the-shelf monolingual word embedding training algorithm (Bojanowski et al. 2017), cross-lingual mapping learning, and a refinement procedure. In cross-lingual mapping learning, a linear mapping between the source embedding space and the target embedding space is learned in an adversarial fashion (Conneau et al. 2018a). To enhance the quality of the learned bilingual word embeddings, various refinement strategies have been proposed, such as synthetic parallel vocabulary building (Artetxe, Labaka, and Agirre 2017), orthogonal constraints (Smith et al. 2017), cross-domain similarity local scaling (Søgaard, Ruder, and Vulić 2018), self-boosting (Artetxe, Labaka, and Agirre 2018), and byte-pair encodings (Sennrich, Haddow, and Birch 2016; Lample et al. 2018). Alternatively, a context-dependent cross-lingual representation mapping based on pre-trained ELMo (Peters et al. 2018) was recently proposed to boost the performance of cross-lingual learning (Schuster et al. 2019). Unlike previous work, we propose to train a cross-lingual mapping upon the context-dependent monolingual BERT with an effective refinement strategy.
Existing Datasets
There already exist several cross-lingual text retrieval datasets. Most of them are collected from the open domain and comprise comparable/parallel document pairs, e.g., the WaCky translation dataset (Baroni et al. 2009; Joulin et al. 2018). A recent work (Rücklé, Swarnkar, and Gurevych 2019) extends open-domain cross-lingual question retrieval to the task-oriented domain, i.e., constructing a dataset upon StackOverflow. To facilitate the development of cross-lingual information retrieval in cross-border e-commerce, we, for the first time, create a high-quality heuristic dataset from real commercial applications.
Our Proposed Model
In our cross-lingual set-to-description retrieval task, the main purpose is to train a cross-lingual matching model that computes the relevance score between a product attribute set in the source language and a product description in the target language. To formulate this task, we utilize the following notations. A dataset with paired product attribute sets and descriptions $D = \{(a_i, d_i, y_i)\}_{i=1}^{N}$ is first given, where $a_i$, $d_i$, $y_i$ represent a product attribute set in the source language, a description in the target language, and the corresponding relevance label $y_i \in \{0, 1\}$, where $y_i = 1$ denotes that the given description candidate is a proper one for the given attribute set and $y_i = 0$ otherwise. Our task is defined as learning a matching model from the given dataset that can output a relevance score for a set-description pair. As shown in Figure 2, our proposed cross-lingual matching network (CLMN) consists of encoding, bidirectional cross-lingual alignment, and matching.
Encoding
As unsupervised monolingual pre-trained models have achieved promising results on various NLU tasks (Peters et al. 2018; Devlin et al. 2019), we propose to leverage the pre-trained BERT for obtaining the contextualized word representations. In detail, we utilize the pre-trained Chinese BERT and the pre-trained English BERT to encode each product attribute set (Chinese) and each product description (English) into two separate monolingual semantic spaces. Their monolingual contextualized representations are then bidirectionally aligned into the same semantic spaces by context-dependent bilingual mapping.
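As a sketch, obtaining such contextualized representations with the HuggingFace transformers library could look as follows (the exact checkpoint and library are assumptions; the paper only specifies pre-trained Chinese and English BERT):

import torch
from transformers import BertModel, BertTokenizer

# Monolingual Chinese encoder; the checkpoint name is an assumption.
tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
bert = BertModel.from_pretrained("bert-base-chinese")

inputs = tokenizer("连衣裙 夏季 短袖", return_tensors="pt")   # a product attribute set
with torch.no_grad():
    out = bert(**inputs, output_hidden_states=True)
# Contextualized word representations; the implementation described later
# uses the outputs of the second-to-last layer.
attr_repr = out.hidden_states[-2]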
Bidirectional Cross-lingual Alignment
Inspired by the success of cross-lingual alignment learning (Conneau et al. 2018a), we propose to map the contextualized monolingual embedding representations to a shared bilingual space so as to extract semantic information and matching features of a product attribute set and a description. Different from the case of context-independent word embeddings, the contextualized embedding of a word varies with the surrounding contexts. In order to draw on the effective off-the-shelf cross-lingual alignment algorithms, we bridge this gap by considering the averaged contextualized representation as the anchor (i.e., the static embedding) for each word in a monolingual corpus, following Schuster et al. (2019). Then we adopt MUSE (Conneau et al. 2018a) to learn the cross-lingual alignment matrix $W$ as the initialization of the mapping matrix of our model. To better facilitate the downstream matching task, we propose to update $W$ during the training process. As demonstrated by Smith et al. (2017), imposing an orthogonal constraint on the mapping matrix leads to better performance. In order to ensure that $W$ stays close to an orthogonal matrix, we apply an additional constraint during parameter updating (Cisse et al. 2017; Conneau et al. 2018a), denoted as:
$W \leftarrow (1 + \beta)W - \beta (W W^{\top}) W \qquad (1)$
As illustrated in Figure 2, we conduct bidirectional cross-lingual alignment on the contextualized monolingual embeddings. We first learn the alignment mapping matrix $W_{ch2en}$ from the Chinese space to the English space. Then, we learn the alignment mapping matrix $W_{en2ch}$ from the English space to the Chinese space. By doing so, we have two groups of representations of $a_i$ and $d_i$ in the bilingual space, i.e., $a_{en,i}$ and $d_{en,i}$ in the English semantic space and $a_{ch,i}$ and $d_{ch,i}$ in the Chinese semantic space.
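A minimal sketch of the constrained update of Eq. (1), applied after each gradient step (the value β = 0.01 follows Conneau et al. 2018a; this is an illustration, not the authors' exact code):

import torch

def orthogonalize(W: torch.Tensor, beta: float = 0.01) -> torch.Tensor:
    # One application of Eq. (1): pulls the mapping matrix W back
    # toward the manifold of orthogonal matrices.
    return (1 + beta) * W - beta * (W @ W.t()) @ W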
Matching Model
Given that the product attribute sets are unordered and there exist abundant correlations between product attributes and descriptions, we propose to use a fully attention-based module, including self-attention and cross-attention, to capture this dependency information. Specifically, the attention-based module is constructed upon the aligned bilingual representations of each product attribute set and description. The attentive module takes three inputs, namely the query, the key, and the value, denoted as Q, K, V respectively. First, the attentive module uses each word in the query to attend to each word in the key through the scaled dot-product attention mechanism. Then, the obtained attention score is applied to the value V to form a new representation of Q, which is formulated as
$\mathrm{Att}(Q, K, V) = \mathrm{softmax}\left(\frac{Q K^{\top}}{\sqrt{d}}\right) V \qquad (2)$
Thus, each word in the query Q is represented by the joint meaning of its similar words in V. In practice, the key K and the value V are set to be identical. Q = V corresponds to the self-attention representation of the query; otherwise, we obtain the cross-attention representation of the query. We stack the attention modules L times to better capture different granularities of representations. Accordingly, we obtain the self-attention and the cross-attention representations $A_l^r$ and $D_l^r$, where $1 \le l \le L$ and $r \in \{self, cross\}$. To perform interaction between the two sets of representations, we first calculate the scaled interaction matrix $M_l^r$, denoted as:
$M_l^r = \frac{A_l^r \, (D_l^r)^{\top}}{\sqrt{d}} \qquad (3)$
We then apply the softmax function along both the row and column directions of $M_l^r$ to get bi-directional attentive weights and obtain the bi-directional attentive aligned representations:
$\bar{A}_l^r = \mathrm{softmax}(M_l^r) \cdot D_l^r, \qquad \bar{D}_l^r = \mathrm{softmax}((M_l^r)^{\top}) \cdot A_l^r \qquad (4)$
Next, we calculate the interaction between $A_l^r$ and $\bar{A}_l^r$ as well as between $D_l^r$ and $\bar{D}_l^r$ by element-wise multiplication, subtraction, and concatenation, followed by a fully connected layer with ReLU activation denoted as $g(\cdot)$:
$\tilde{A}_l^r = g([A_l^r \odot \bar{A}_l^r;\, A_l^r - \bar{A}_l^r;\, A_l^r;\, \bar{A}_l^r]), \qquad \tilde{D}_l^r = g([D_l^r \odot \bar{D}_l^r;\, D_l^r - \bar{D}_l^r;\, D_l^r;\, \bar{D}_l^r]) \qquad (5)$
Subsequently, we consider the rows of $\tilde{A}_l^r$ and $\tilde{D}_l^r$ as time steps, summarize the interaction information with two GRU layers, and concatenate the last hidden states of both as the representation of the overall interaction features of $A_l^r$ and $D_l^r$. Finally, we gather all the interaction information from the interactions between all the $A_l^r$-$D_l^r$ pairs with a fully connected layer followed by a sigmoid function to obtain the matching scores. We denote the predictions of the matching models in the Chinese semantic space and the English semantic space as score1 and score2, respectively.
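Eqs. (2)-(4) translate directly into a few tensor operations; a minimal PyTorch sketch (batch dimensions omitted for clarity):

import torch
import torch.nn.functional as F

def attention(Q, K, V):
    # Scaled dot-product attention of Eq. (2).
    d = Q.size(-1)
    scores = Q @ K.transpose(-1, -2) / d ** 0.5
    return F.softmax(scores, dim=-1) @ V

def bidirectional_align(A, D):
    # Eqs. (3)-(4): scaled interaction matrix, then bi-directional
    # attentive alignment of attribute (A) and description (D) representations.
    M = A @ D.transpose(-1, -2) / A.size(-1) ** 0.5
    A_bar = F.softmax(M, dim=-1) @ D
    D_bar = F.softmax(M.transpose(-1, -2), dim=-1) @ A
    return A_bar, D_bar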
Learning and Prediction
Recall that we have two separate matching models with the same structure but with inputs from different aligned semantic spaces. We jointly update the two matching models as well as the two alignment matrices with the same batches of training cases so as to make full use of the forward pass of BERT, which is time-consuming. In learning each matching model $f(\cdot)$, the objective is to minimize the cross entropy over dataset D, formulated as:
$\mathcal{L} = -\sum_{i=1}^{N} \big[ y_i \log(f(a_i, d_i)) + (1 - y_i)\log(1 - f(a_i, d_i)) \big] \qquad (6)$
where N refers to the number of training pairs, $y_i$ is the label of a product description, and $f(\cdot)$ is the matching model.
In prediction, we utilize the addition of score1 and score2 as the final relevance score between a product attribute set and a description.
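A minimal sketch of the objective of Eq. (6) and the score combination at prediction time (illustrative, assuming the model outputs are already sigmoid probabilities):

import torch
import torch.nn.functional as F

def matching_loss(score: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Binary cross entropy of Eq. (6) for one matching model;
    # `score` is the sigmoid output f(a_i, d_i) of that model.
    return F.binary_cross_entropy(score, labels)

# At prediction time, the final relevance score is the sum of the two models:
# final_score = score1 + score2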
Experimental Setup
CLIR-EC Dataset
We construct a heuristic dataset, CLIR-EC (Cross-Lingual Information Retrieval for E-Commerce), to explore how to retrieve informative product descriptions in the target language given product attribute sets in the source language. Specifically, we hired 20 annotators with both proficient language skills (with master's degrees majoring in translation) and domain knowledge (at least one year of experience in e-commerce data annotation) to write a proper informative product description in the target language given a product attribute set in the source language. As there exist multi-level overlaps and correlations inside a product attribute set, we asked annotators to choose part of the pivotal attributes of a set for creating a fluent and persuasive product description. We also employed 3 native speakers to conduct proofreading and editing on the collected product descriptions. By doing so, each product attribute set in the source language is paired with a product description in the target language, which is used for training the CLMN model. In total, we collected 13.5K training pairs, and the collection process took 35 days. More details can be seen in Table 1. We randomly split the dataset into 10,500/1,000/2,000 pairs for training/validating/testing the CLMN model. We also launched another human evaluation to verify the quality of the manually created dataset. Concretely, we employed 5 evaluators with proficient language skills and rich cross-national shopping experience to judge the created product descriptions from the following four perspectives: 1) information integrity, i.e., whether the descriptions include the most pivotal information in the product attributes; 2) grammaticality, i.e., whether the descriptions are fluent and grammatical; 3) matching degree, i.e., whether the description correlates well with the given attributes; 4) sentiment polarity, i.e., whether the descriptions present a positive sentiment to users. Each annotator is asked to rate a product description on these aspects on a scale of 1 (very bad) to 5 (excellent). We randomly sampled 500 pairs for quality verification. Table 2 presents the results of the human verification, which are very promising.
Other Datasets
Besides, we also automatically collect two large monolingual corpora from e-commerce platforms, i.e., from eBay 1 with product descriptions in the target language (English), and from Taobao 2 with product attribute sets in the source language (Chinese), for learning the cross-lingual alignment, including bilingual word embeddings and context-dependent bilingual word representations upon BERT.
Comparison Methods
To thoroughly evaluate the performance of our proposed CLMN model, we leverage three groups of baselines: advanced unsupervised cross-lingual IR models, combinations of a machine translation system with monolingual IR models, and direct cross-lingual retrieval models.
Unsupervised CLIR Models We utilize the state-of-the-art unsupervised cross-lingual retrieval model and its extensions (Litschko et al. 2018), including the term-by-term query translation model (TbTQT), the bilingual word embedding aggregation model (BWE-AGG), inverse document frequency weighted BWE-AGG (BWE-IDF), and Bigram matching.
Translation Based CLIR Models We follow the recently proposed CLIR model (Rücklé, Swarnkar, and Gurevych 2019) to transfer the cross-lingual information retrieval task to a monolingual one via a machine translation model. In our setting, we use a SOTA commercial translation system (Google API) to translate the unordered product attribute sets in the source language into the target language, and then we utilize the obtained paired data to train the following monolingual IR models for retrieval. SMN, a strong model for response selection (Wu et al. 2017). It first learns matching vectors between the translated attribute sets and the product descriptions with a CNN network; the learned matching vectors are then aggregated by an RNN to calculate the final matching score.
DAM, a strong text matching network with only attention modules, adapted from the response selection task. This model is used for studying whether attention modules can model unordered attribute sets and their matching degrees with descriptions.
CSRAN (Tay, Tuan, and Hui 2018), a state-of-the-art monolingual IR model. This model first adopts stacked recurrent encoders with refinement to learn representations. Then, it calculates interactions based on the attentively aligned representations and aggregates the interaction features via a multi-layered LSTM to obtain matching scores.
Direct CLIR Models We also leverage state-of-the-art direct cross-lingual IR models for comparison.
ML-BERT (Devlin et al. 2019), the multilingual version of BERT 3, which is trained on 104 languages and has achieved SOTA performance in cross-lingual text pair classification (Conneau et al. 2018b). We further fine-tune the pre-trained multilingual BERT on our collected datasets.
POSIT-DRMM (Zhang et al. 2019), a recently proposed cross-lingual document retrieval model designed to address the low-resource issue in CLIR. This model incorporates bilingual representations to capture and aggregate matching signals between an input query in the source language and a document in the target language.
Implementation Details
Following the conventional settings in related research, we use the CLIR-EC dataset to prepare supervised positive and negative sample pairs for training the proposed CLMN model. Each positive sample pair consists of a product attribute set and the matched product description, while a negative sample pair comprises a product attribute set and a mismatched product description sampled from all descriptions in the CLIR-EC dataset. The proportion of positive to negative samples is 1:1 in the training and validation sets and 1:9 in the test set. We limit the lengths of product attributes and descriptions to 50 and 100 words, respectively. The static monolingual and bilingual word embeddings pre-trained on the Taobao and eBay corpora, used by all non-BERT models, have dimension 300. For our BERT-based CLMN model, we use the 768-dimensional outputs of the second-to-last BERT layer and keep the dimensions of the encoder outputs and the interaction outputs the same as the BERT dimension. We use L = 2 stacked attention modules for attributes and descriptions, and the parameters of the encoders for attributes and descriptions are shared. We set the mini-batch size to 50 and adopt the Adam optimizer (Kingma and Ba 2014) with an initial learning rate of 3e-4 for training. For each model, the checkpoint with the best performance on the validation set is chosen.
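To make the sampling scheme concrete, the following is a minimal sketch of how the 1:1 (training/validation) and 1:9 (test) positive-to-negative pair construction could be implemented; the function and variable names (build_samples, train_pairs, test_pairs) are our own illustration, not the authors' released code.

```python
import random

def build_samples(pairs, num_negatives):
    """Pair each attribute set with its gold description (label 1) and with
    `num_negatives` descriptions sampled from other products (label 0)."""
    descriptions = [d for _, d in pairs]
    samples = []
    for attributes, description in pairs:
        samples.append((attributes, description, 1))
        pool = [d for d in descriptions if d is not description]
        for neg in random.sample(pool, num_negatives):
            samples.append((attributes, neg, 0))
    return samples

train_samples = build_samples(train_pairs, num_negatives=1)  # 1:1
test_samples = build_samples(test_pairs, num_negatives=9)    # 1:9
```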
Evaluation Metrics
To evaluate model performance, we follow the conventional settings in related work (Wu et al. 2017; Zhang et al. 2019). Specifically, we first calculate the matching scores between a product attribute set and the product description candidates, and then rank the candidates by matching score to compute the following automatic metrics: mean reciprocal rank (MRR) (Voorhees et al. 1999) and recall at position k in n candidates (Rn@k).
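For reference, the two metrics can be computed from ranked candidate lists as follows; this is a generic sketch assuming one relevant description per attribute set and relevance labels sorted by descending matching score, not code from the paper.

```python
def recall_at_k(ranked_labels, k):
    """R_n@k for one query: 1 if a relevant candidate appears in the top k."""
    return float(sum(ranked_labels[:k]) > 0)

def mean_reciprocal_rank(all_ranked_labels):
    """MRR over queries; each element is a 0/1 relevance list sorted by score."""
    reciprocal_ranks = []
    for labels in all_ranked_labels:
        rank = next((i + 1 for i, l in enumerate(labels) if l == 1), None)
        reciprocal_ranks.append(1.0 / rank if rank else 0.0)
    return sum(reciprocal_ranks) / len(reciprocal_ranks)
```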
Experimental Results
Table 3: Evaluation results of baselines and our models. CLMN− refers to CLMN without using pre-trained BERT.
3 https://github.com/google-research/bert
Table 3 summarizes the evaluation results of the different cross-lingual retrieval models. Overall, our proposed CLMN model achieves the best performance on all five evaluation metrics. For simplicity, we mainly discuss R10@1 in the following. In detail, the advanced unsupervised CLIR models achieve only fair performance without the collected CLIR-EC dataset. By utilizing paired data, the translation-based CLIR models significantly outperform the SOTA unsupervised methods and achieve decent performance on each metric. We observe from Table 3 that the attention-based translation CLIR model DAM achieves a noticeable improvement over the SMN model, which demonstrates the effectiveness of attention modules for modeling unordered product attribute sets. Moreover, directly modeling the cross-lingual information retrieval task further introduces a significant performance improvement over the translation-based models. Both the recently proposed POSIT-DRMM and the ML-BERT model yield a large increase over the strong translation-based models. We also observe that our proposed CLMN− matching model is better than the other matching models and even achieves results comparable to the fine-tuned multilingual BERT. With the enhancement of monolingual BERT, our proposed CLMN model outperforms the fine-tuned multilingual BERT model.
Ablation Study
Table 4 presents the experimental results of the ablation study. We observe that initializing with monolingual BERT is superior to training domain-specific word embeddings for our task. All models initialized with monolingual BERT achieve impressive results in Table 4, significantly better than pre-training domain-specific word embeddings. We attribute this observation to the relatively small size of the domain-specific monolingual data, which is far from ideal for training a decent model. As for the influence of cross-lingual alignment, it can effectively capture the correlations between the two languages and thus introduces a performance improvement. Besides, we also notice that the context-dependent mapping strategy is more effective in the English bilingual space. As reflected in Table 4, the CLMN-ch2en model yields an observable improvement over the CLMN-en2ch model under both settings. This is perhaps because the relatively short, unordered Chinese product attribute sets can be mapped to another language space more easily than the relatively long product descriptions with lexical and syntactic structures.
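As background on the alignment directions compared above, a standard way to map one embedding space into another (en2ch vs. ch2en) is the orthogonal Procrustes solution over a seed dictionary; the sketch below shows that common recipe, with zh_seed_vecs/en_seed_vecs as assumed seed matrices, and is not the exact context-dependent refinement used for CLMN.

```python
import numpy as np

def procrustes_mapping(src_vecs, tgt_vecs):
    """Orthogonal Procrustes: W = argmin ||XW - Y||_F over orthogonal W,
    where corresponding rows of X (source) and Y (target) are seed pairs."""
    u, _, vt = np.linalg.svd(src_vecs.T @ tgt_vecs)
    return u @ vt

# Map Chinese vectors into the English space (ch2en); swap the
# arguments to obtain the opposite direction (en2ch).
W = procrustes_mapping(zh_seed_vecs, en_seed_vecs)
zh_in_en_space = zh_all_vecs @ W
```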
The Effect of Paired Data Size
We also conducted experiments to study the influence of the paired data size on direct cross-lingual retrieval. As illustrated in Figure 4, when the number of training pairs is less than 4,000, the performance of cross-lingual information retrieval is positively correlated with the number of cross-lingual training pairs. When the number of cross-lingual training pairs is larger than 4,000, model performance gains only a slight improvement from more training pairs.
Bad Case Analysis
To understand the limitations of our proposed CLMN model, we thoroughly analyze typical bad cases from the test set that fail on R10@5. Figure 5 presents a typical example in which our proposed model fails in the cross-lingual retrieval task. We observe that the main error pattern is caused by phrases that occur frequently on e-commerce platforms, such as "gentleman", "leisure", "Korean-style", "workplace", and "male". The system pays more attention to these words and leaves the key information unobserved; e.g., the trained model retrieves a product description of a "suit" for the product "tie". Such an error pattern is related to the search behavior of customers on e-commerce platforms. Usually, users are not exactly sure about their target products; they prefer to input a broad concept such as "man" or "Korean-style" rather than "6cm black tie". To improve the search conversion rate, sellers on e-commerce platforms add these general keywords to their product attributes. As a result, potential solutions to this issue rely on filtering out this general information and augmenting the matching signals of the important product attributes.
Figure 5: A typical bad case example in our task where the proposed CLMN model fails in R10@5.
Conclusion
In this paper, we address the task of cross-lingual set-to-description retrieval in e-commerce, which involves automatically retrieving a persuasive product description in the target language given a product attribute set in the source language. We collected a high-quality heuristic dataset with 13.5K pairs created by experts, together with two large monolingual auxiliary corpora, and share them with the community. We further propose a novel cross-lingual matching network (CLMN) with a refined context-dependent cross-lingual mapping upon pre-trained BERT to address this low-resource cross-lingual retrieval task directly. Experimental results indicate that our proposed CLMN is effective for this challenging task. In future work, we will evaluate our model on other open datasets for cross-lingual information retrieval.
Figure 2: The overall architecture of our proposed CLMN model.
Prior studies have built cross-lingual retrieval datasets from benchmark CLEF corpora (Vulić and Moens 2015), the AskUbuntu benchmark corpus for QA (Dos Santos et al. 2015; Barzilay et al. 2016), Arabic-English language pairs (Da San Martino et al. 2017), text stream alignment (Ge et al. 2018), and an English-German cross-lingual information retrieval dataset based on English-German Wikipedia (Schamoni et al. 2014). Later on, Sasaki et al. (2018) followed the earlier dataset construction method (Schamoni et al. 2014) and collected 25 large-scale cross-lingual datasets based on Wikipedia.
Figure 3: The detailed illustration of the matching model.
Figure 4: Trade-off between the size of the CLIR-EC corpus and model performance.
Table 1: The statistical results of the datasets used in experiments. Descri. refers to product descriptions; Attri. refers to product attribute sets.

                  Taobao   eBay    Attri.   Descri.
# Samples         313K     118K    13.5K    13.5K
# Total Words     17.6M    8.92M   233K     639K
Average Length    56.2     75.8    17.2     47.3
Vocabulary Size   163K     50.8K   16.6K    15.5K
Table 1 illustrates the detailed data statistics. Overall, there are 313K samples and 17.6 million words in the Taobao dataset, while there are 118K samples and 8.92 million words in the eBay dataset. The average document lengths of the Taobao and eBay corpora are 75.8 and 40.8, respectively. To preprocess these corpora, we use NLTK for tokenization. All words in both datasets are retained, which results in a vocabulary size of 162,506 for Taobao and 59,233 for eBay.
1 https://www.ebay.com/
2 https://www.taobao.com/
Table 2: Evaluation results of data quality verification, where kappa values refer to the inter-annotator agreement.
Table 4: Ablation study of cross-lingual alignment, where mono refers to without cross-lingual alignment learning, en2ch refers to calculating the matching score in the Chinese bilingual space, and ch2en refers to computing the relevance score in the English bilingual space.

Model          R2@1   R10@1   R10@2   R10@5   MRR
Initialize with word embedding
CLMN−-mono     95.8   78.3    91.6    99.2    87.3
CLMN−-en2ch    96.3   81.5    93.1    99.5    89.2
CLMN−-ch2en    96.7   81.8    93.4    99.5    89.5
CLMN−          97.1   83.5    94.7    99.6    90.5
Initialize with BERT
CLMN-mono      96.5   80.7    93.8    99.8    89.1
CLMN-en2ch     97.2   84.6    95.4    99.7    91.4
CLMN-ch2en     97.5   85.5    94.8    99.8    91.7
CLMN           97.8   86.8    95.5    99.8    92.3
Acknowledgments
We thank the reviewers for their constructive comments. This work was supported by the National Key Research and Development Program of China (No. 2017YFC0804001) and the National Science Foundation of China (NSFC No. 61876196 and NSFC No. 61672058). Rui Yan was sponsored by an Alibaba Innovative Research (AIR) Grant and the Beijing Academy of Artificial Intelligence (BAAI).
References
Artetxe, M.; Labaka, G.; and Agirre, E. 2017. Learning bilingual word embeddings with (almost) no bilingual data. In ACL, 451-462.
Artetxe, M.; Labaka, G.; and Agirre, E. 2018. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In ACL, 789-798.
Baroni, M.; Bernardini, S.; Ferraresi, A.; and Zanchetta, E. 2009. The WaCky wide web: A collection of very large linguistically processed web-crawled corpora. Language Resources and Evaluation 43(3):209-226.
Lei, T.; Joshi, H.; Barzilay, R.; Jaakkola, T.; Tymoshenko, K.; Moschitti, A.; and Màrquez, L. 2016. Semi-supervised question retrieval with gated convolutions. In NAACL-HLT, 1279-1289.
Bojanowski, P.; Grave, E.; Joulin, A.; and Mikolov, T. 2017. Enriching word vectors with subword information. TACL 5:135-146.
Cisse, M.; Bojanowski, P.; Grave, E.; Dauphin, Y.; and Usunier, N. 2017. Parseval networks: Improving robustness to adversarial examples. In ICML, 854-863.
Conneau, A.; Lample, G.; Ranzato, M.; Denoyer, L.; and Jégou, H. 2018a. Word translation without parallel data. In ICLR.
Conneau, A.; Lample, G.; Rinott, R.; Williams, A.; Bowman, S. R.; Schwenk, H.; and Stoyanov, V. 2018b. XNLI: Evaluating cross-lingual sentence representations. arXiv preprint arXiv:1809.05053.
Da San Martino, G.; Romeo, S.; Barrón-Cedeño, A.; Joty, S.; Màrquez, L.; Moschitti, A.; and Nakov, P. 2017. Cross-language question re-ranking. In SIGIR, 1145-1148.
Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT.
Dos Santos, C.; Barbosa, L.; Bogdanova, D.; and Zadrozny, B. 2015. Learning hybrid representations to retrieve semantically equivalent questions. In ACL-IJCNLP (Short Papers), 694-699.
Ge, T.; Dou, Q.; Ji, H.; Cui, L.; Chang, B.; Sui, Z.; Wei, F.; and Zhou, M. 2018. Fine-grained coordinated cross-lingual text stream alignment for endless language knowledge acquisition. In EMNLP, 2496-2506.
Hui, K.; Yates, A.; Berberich, K.; and De Melo, G. 2018. Co-PACRR: A context-aware neural IR model for ad-hoc retrieval. In WSDM, 279-287.
Joty, S.; Nakov, P.; Màrquez, L.; and Jaradat, I. 2017. Cross-language learning with adversarial neural networks. In CoNLL, 226-237.
Joulin, A.; Bojanowski, P.; Mikolov, T.; Jégou, H.; and Grave, E. 2018. Loss in translation: Learning bilingual word mapping with a retrieval criterion. In EMNLP, 2979-2984.
Kingma, D. P., and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Lample, G.; Ott, M.; Conneau, A.; Denoyer, L.; et al. 2018. Phrase-based & neural unsupervised machine translation. In EMNLP, 5039-5049.
Litschko, R.; Glavaš, G.; Ponzetto, S. P.; and Vulić, I. 2018. Unsupervised cross-lingual information retrieval using monolingual data only. In SIGIR.
McDonald, R.; Brokos, G.; and Androutsopoulos, I. 2018. Deep relevance ranking using enhanced document-query interactions. In EMNLP, 1849-1860.
Nie, J.-Y. 2010. Cross-language information retrieval. Synthesis Lectures on Human Language Technologies 3(1):1-125.
Peters, M. E.; Neumann, M.; Iyyer, M.; Gardner, M.; Clark, C.; Lee, K.; and Zettlemoyer, L. 2018. Deep contextualized word representations. In NAACL-HLT, 2227-2237.
Rücklé, A.; Swarnkar, K.; and Gurevych, I. 2019. Improved cross-lingual question retrieval for community question answering. In The World Wide Web Conference, 3179-3186.
Sasaki, S.; Sun, S.; Schamoni, S.; Duh, K.; and Inui, K. 2018. Cross-lingual learning-to-rank with shared representations. In NAACL-HLT (Short Papers), 458-463.
Schamoni, S.; Hieber, F.; Sokolov, A.; and Riezler, S. 2014. Learning translational and knowledge-based similarities from relevance rankings for cross-language retrieval. In ACL (Short Papers), 488-494.
Schuster, T.; Ram, O.; Barzilay, R.; and Globerson, A. 2019. Cross-lingual alignment of contextual word embeddings, with applications to zero-shot dependency parsing. arXiv preprint arXiv:1902.09492.
Sennrich, R.; Haddow, B.; and Birch, A. 2016. Neural machine translation of rare words with subword units. In ACL, 1715-1725.
Smith, S. L.; Turban, D. H.; Hamblin, S.; and Hammerla, N. Y. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. arXiv preprint arXiv:1702.03859.
Søgaard, A.; Ruder, S.; and Vulić, I. 2018. On the limitations of unsupervised bilingual dictionary induction. In ACL, 778-788.
Tay, Y.; Tuan, L. A.; and Hui, S. C. 2018. Co-stack residual affinity networks with multi-level attention refinement for matching text sequences. In EMNLP.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. In NIPS, 5998-6008.
Voorhees, E. M., et al. 1999. The TREC-8 question answering track report. In TREC, volume 99, 77-82.
Vulić, I., and Moens, M.-F. 2015. Monolingual and cross-lingual information retrieval models based on (bilingual) word embeddings. In SIGIR, 363-372.
Wu, Y.; Wu, W.; Xing, C.; Zhou, M.; and Li, Z. 2017. Sequential matching network: A new architecture for multi-turn response selection in retrieval-based chatbots. In ACL, 496-505.
Zhang, R.; Westerfield, C.; Shim, S.; Bingham, G.; Fabbri, A.; Verma, N.; Hu, W.; and Radev, D. 2019. Improving low-resource cross-lingual document retrieval by reranking with deep bilingual representations. In ACL.
Zhou, X.; Li, L.; Dong, D.; Liu, Y.; Chen, Y.; Zhao, W. X.; Yu, D.; and Wu, H. 2018. Multi-turn response selection for chatbots with deep attention matching network. In ACL, 1118-1127.
Code: https://github.com/LiuChang97/CLMN
Improved and Robust Controversy Detection in General Web Pages Using Semantic Approaches under Large Scale Conditions

Jasper Linmans (jasperlinmans@gmail.com), Bob van de Velde (rnvandevelde@gmail.com), Evangelos Kanoulas (ekanoulas@gmail.com)
University of Amsterdam

The 27th ACM International Conference on Information and Knowledge Management (CIKM '18), October 22-26, 2018, Torino, Italy. https://doi.org/10.1145/3269206.3269301
Detecting controversy in general web pages is a daunting task, but increasingly essential to efficiently moderate discussions and effectively filter problematic content. Unfortunately, controversies occur across many topics and domains, with great changes over time. This paper investigates neural classifiers as a more robust methodology for controversy detection in general web pages. Current models have often cast controversy detection on general web pages as Wikipedia linking, or exact lexical matching tasks. The diverse and changing nature of controversies suggest that semantic approaches are better able to detect controversy. We train neural networks that can capture semantic information from texts using weak signal data. By leveraging the semantic properties of word embeddings we robustly improve on existing controversy detection methods. To evaluate model stability over time and to unseen topics, we asses model performance under varying training conditions to test crosstemporal, cross-topic, cross-domain performance and annotator congruence. In doing so, we demonstrate that weak-signal based neural approaches are closer to human estimates of controversy and are more robust to the inherent variability of controversies.
INTRODUCTION & PRIOR WORK
Controversy detection is an increasingly important task. Controversial content can signal the need for moderation on social platforms, either to prevent conflict between users or limit the spread of misinformation. More generally, controversies provide insight into societies [4]. Often, the controversial content is outside the direct control of a platform on which it is shared, mentioned or discussed. This raises the requirement of generally applicable methods to gauge controversial content on the web for moderation purposes. Unfortunately, what is controversial changes, and may lie more in the way topics are discussed rather than what is discussed, making it difficult to detect controversies in a robust fashion. We take the task of controversy detection and evaluate robustness of different methodologies with respect to the varying nature of controversies.
Prior work on detecting controversies has taken three kinds of approaches: 1) lexical approaches, which seek to detect controversies through signal terms, either through bag-of-words classifiers, lexicons, or lexicon-based language models [6]; 2) explicit modeling of controversy through platform-specific features, often in Wikipedia or social-media settings. Features such as mutual reverts [14], user-provided flags [2], interaction networks [12] or stance distributions [7] have been used as platform-specific indicators of controversies. The downside of these approaches is their lack of generalizability due to their platform-specific nature; 3) matching models that combine lexical and explicit modelling approaches by looking at lexical similarities between a given text and a set of texts in a domain that provides explicit features [3,6,8].
Controversy detection is a difficult task because 1) controversies are latent, like ideology, meaning they are often not directly mentioned as controversial in text; 2) controversies occur across a vast range of topics with varying topic-specific vocabularies; and 3) controversies change over time, with some topics and actors becoming controversial whereas others cease to be so. Previous approaches lack the power to deal with such changes. Matching and explicit approaches are problematic when the source corpus (e.g. Wikipedia) lags behind real-world changes [5]. Furthermore, lexical methods trained on common (e.g. full-text) features are likely to memorize the controversial topics in the training set rather than the 'language of controversy'. Alleviating dependence on platform-specific features and reducing sensitivity to exact lexical representations is paramount to robust controversy detection. To this end, we focus only on full-text features and propose to leverage the semantic representations of word embeddings to reduce the vocabulary gap for unseen topics and exact lexical representations.
The majority of NLP-task-related neural architectures rely on word embeddings, popularized by Mikolov et al. [11], to represent texts. In essence, these embeddings are latent-vector representations that aim to capture the underlying meaning of words. Distances between such latent vectors are taken to express semantic relatedness, despite different surface forms. By using embeddings, neural architectures are also able to leverage features learned on other texts (e.g. pre-trained word embeddings) and create higher-level representations of the input (e.g. convolutional feature maps or hidden states). These properties suggest that neural approaches are better able to generalize to unseen examples that poorly match the training set. We use two often-applied network architectures adopting word embeddings to classify controversy, Recurrent Neural Networks [13] and Convolutional Neural Networks [9], to answer the following research question. RQ: Can we increase the robustness of controversy detection using neural methods?
Currently, there is no open large-scale controversy detection dataset that lends itself to testing cross-temporal and cross-topic stability. Thus, we generate a Wikipedia crawl-based dataset that includes general web pages and is sufficiently large to train and test high-capacity models such as neural networks.
METHODS
A proven approach in modelling text with neural networks is to use Recurrent Neural Networks (RNNs) which enjoy weight sharing capabilities to model words irrespective of their sequence location. A specific type, the Hierarchical Attention Network (HAN) proposed by [13] makes use of attention to build document representations in a hierarchical manner. It uses bi-directional Gated Recurrent Units (GRUs) [1] to selectively update representations of both words and sentences. This allows the network to both capture the hierarchy from words to sentences to documents and to explicitly weigh all parts of the document relevant during inference.
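For illustration, one attention-pooling step of this kind, which collapses GRU word states into a sentence vector (and, one level up, sentence states into a document vector), can be sketched as below; this is a simplified reading of [13], not the exact implementation.

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Additive attention: u_t = tanh(W h_t + b), alpha = softmax(u_t . u_ctx),
    output = sum_t alpha_t * h_t."""
    def __init__(self, hidden_dim=100):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, hidden_dim)
        self.context = nn.Parameter(torch.randn(hidden_dim))

    def forward(self, states):                  # (batch, steps, hidden_dim)
        u = torch.tanh(self.proj(states))
        alpha = torch.softmax(u @ self.context, dim=1).unsqueeze(-1)
        return (alpha * states).sum(dim=1)      # (batch, hidden_dim)
```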
Recently, Convolutional Neural Networks (CNNs) have enjoyed increasing success in text classification. One such network introduced by [9] looks at patterns in words within a window, such as "Scientology [...] brainwashes people". The occurrences of these patterns are then summarized to their 'strongest' observation (maxpooling) and used for classification. Since pooling is applied after each convolution, the output size of each convolutional operation itself is irrelevant. Therefore, filters of different sizes can be used, each capturing patterns in different sized word windows.
We explore the potential of RNNs and CNNs for controversy detection using both the HAN [13] and the CNN [9] model 1. Similar to [13], each bi-directional GRU cell is set to a dimension of 50, resulting in word/sentence representations of size 100 after concatenation. The word/sentence attention vectors similarly contain 100 dimensions, all randomly initialized. The word windows in the CNN model are set to sizes 2, 3 and 4, with 128 feature maps each. Each model is trained using mini-batches of size 64 and uses both dropout (0.5) and l2 regularization (1e-3) at the dense prediction layer. Both networks use pre-trained embeddings, trained on 100 billion words of a Google News corpus 2, which are further fine-tuned during training on the controversy dataset. The optimization algorithm used is Adam [10] (learning rate: 1e-3).
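As a concrete reference, the CNN configuration above can be sketched in PyTorch as follows (windows of size 2/3/4 with 128 feature maps each, max-pooling, dropout 0.5 before the dense layer, Adam with learning rate 1e-3); this is an illustrative Kim-style implementation rather than the exact training code, with the l2 penalty delegated to the optimizer's weight_decay.

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, embeddings, num_classes=2):   # embeddings: (vocab, 300) tensor
        super().__init__()
        self.embed = nn.Embedding.from_pretrained(embeddings, freeze=False)
        dim = embeddings.size(1)
        self.convs = nn.ModuleList(
            [nn.Conv1d(dim, 128, kernel_size=k) for k in (2, 3, 4)])
        self.dropout = nn.Dropout(0.5)
        self.fc = nn.Linear(3 * 128, num_classes)

    def forward(self, tokens):                       # tokens: (batch, seq_len)
        x = self.embed(tokens).transpose(1, 2)       # (batch, dim, seq_len)
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.fc(self.dropout(torch.cat(pooled, dim=1)))

model = TextCNN(pretrained_word2vec)                 # assumed word2vec tensor
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-3)
```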
EXPERIMENTAL SETUP
Datasets and evaluation
We use the Clueweb09-derived dataset of [4] for baseline comparison. For cross-temporal, cross-topic and cross-domain training & evaluation, we generate a new dataset based on Wikipedia crawl data 3. This dataset is gathered by using Wikipedia's 'List of controversial issues' overview page of 2018 (time of writing) and 2009 (for comparison with baselines) 4. Using this as a 'seed' set of controversial articles, we iteratively crawl the 'See also', 'References' and 'External links' hyperlinks up to two hops from the seed list.
The negative seed pages (i.e., non-controversial) are gathered by using the random article endpoint 5. The snowball-sample approach includes general, non-Wikipedia pages that are referred to from Wikipedia pages; the dataset thus extends beyond the encyclopedia genre of texts. Labels are assumed to propagate: a page linked from a controversial issue is assumed to be controversial. The resulting dataset statistics are summarized in Table 1. To be useful as a flagging mechanism for moderation, a controversy detection algorithm should satisfy both Precision and Recall criteria; F1 scores will therefore be used to evaluate this balance.
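A minimal sketch of the two-hop snowball crawl is given below; extract_links is an assumed helper returning the 'See also', 'References' and 'External links' targets of a page, and the released generation script (footnote 3) handles details omitted here.

```python
def snowball_crawl(seed_urls, extract_links, hops=2):
    """Breadth-first expansion from the seed list; because labels are assumed
    to propagate, every page reached within `hops` inherits the seed label."""
    frontier, visited = set(seed_urls), set(seed_urls)
    for _ in range(hops):
        next_frontier = set()
        for url in frontier:
            for link in extract_links(url):
                if link not in visited:
                    visited.add(link)
                    next_frontier.add(link)
        frontier = next_frontier
    return visited
```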
AUC values are used to measure classification performance on the unbalanced controversy datasets. The test-train split depends on the task investigated and is listed in the results section for the respective task. To test for significant results, all models were evaluated with a bootstrap approach: we draw 1000 resamples, each consisting of n documents drawn with replacement from the test set, where n equals the test-set size. The resulting percentile-based confidence intervals provide a measure of significance.
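The bootstrap test can be written compactly; the sketch below draws 1000 resamples of the test set (held as numpy arrays) and returns a percentile confidence interval for any metric function, mirroring the procedure described above.

```python
import numpy as np

def bootstrap_ci(y_true, y_pred, metric, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for `metric` (e.g. sklearn.metrics.f1_score)."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    scores = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)   # n documents drawn with replacement
        scores.append(metric(y_true[idx], y_pred[idx]))
    lower, upper = np.percentile(scores, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lower, upper
```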
Baseline models
To compare the results of neural approaches to prior work, we implemented the previous state-of-the-art controversy detection method: the language model from [8]. Together with an SVM baseline, they act as controversy detection alternatives using only full-text features, thus meeting the task requirement of platform independence. Note that the implementation of [8] additionally requires ranking methods to select a subset of the training data for each language model. A simplified version, excluding the ranking method but using the same dataset and lexicon to select documents as [8], is implemented and included in the baseline comparison (LM-DBPedia). We also include the same language model trained on full-text Wikipedia pages (LM-wiki). Similarly, for completeness' sake, we include both the state-of-the-art matching model, the TILE-Clique model from [6], and the sentiment analysis baseline (using the state-of-the-art Polyglot library for Python 6) from [3] in the comparison with previous work.
RESULTS
Comparison of results with previous work
Table 2 shows the relative performance of the neural models compared to previous controversy detection methods, evaluated on the Clueweb09-derived dataset of [3] and trained on Wikipedia data from the same time frame. The TILE-Clique matching model outperforms all other models on Precision, although this difference is not significant compared to the neural approaches. Similarly, the language model trained on the DBPedia dataset outperforms the other models on Recall but shows no significant difference compared to the CNN model. Notably, the neural approaches show results comparable to the TILE-Clique model in terms of F1, demonstrating a balanced performance in terms of Precision and Recall. Furthermore, the CNN model shows a significant improvement over the other non-neural baselines in terms of the AUC value (p < 0.05).
Robustness of the model across time
Controversy is expected to change over time. Some issues become controversial, others cease to be so. To investigate robustness of controversy detection models with respect to changes over time, we evaluate model performance in two variants: trained and tested on 2018, or trained on the 2009 Wikipedia data and tested on the 2018 Wikipedia data. Table 3 shows the results for each of the text-based detection models.
Within year, the hierarchical attention model (HAN) outperforms all other models on Recall, F1 and AUC, losing Precision to the CNN and SVM models. However, our main interest is the robustness when a model is trained on a different year (2009) than the test set (2018). These between-year experiments show a superior score for the HAN model compared to the non-neural models on Recall, and show significant improvements on F1 (p < 0.05) and AUC (p < 0.05), losing only to the SVM model on Precision (non-significantly). In terms of robustness, we can also take the percentage change between the within-year and between-year experiments into account (where smaller absolute changes are preferable), shown by the delta values. With regard to temporal sensitivity, the CNN shows the least change across all four metrics. In Figure 1, we show the pooled results for the lexical and neural models to illustrate the overall increase in robustness by neural approaches.
Interestingly, the SVM and HAN models show some unexpected improvement with regard to Precision when applied to unseen timeframes. For both models, this increase in Precision is offset by a greater loss in Recall, which seems to indicate that both models 'memorize' the controversial topics in a given timeframe instead of the controversial language. Overall, the neural approaches seem to compare favorably in terms of cross-temporal stability.
Robustness of the model across topics
To evaluate robustness towards unseen topics, 10-fold cross-validation was used on the top ten largest topics present in the Wikipedia dataset, in a leave-one-out fashion. The results are shown in Table 4. In line with previous results, the language model scores best on Recall, beating all other models with a significant difference (p < 0.01). However, in balancing Recall with Precision, the HAN model scores best, significantly outperforming both lexical models in F1 score (p < 0.05). Overall, when grouping together all neural and lexical results, the neural methods outperform the lexical models in Precision (p < 0.01), F1 (p < 0.05) and AUC (p < 0.01), with no significant difference in Recall.
Robustness of the model across domains
Most work on controversy has looked into using existing knowledge bases as a source of controversy information [3,6]. In this paper, we focus on text-based classification methods that do not aim to explicitly link general web pages to their knowledge-base counterparts. Therefore, we are interested in the ability of neural models to generalize beyond their training context. In addition to testing across time and topics, we also investigate robustness to changes in domain. By training only on Wikipedia data and evaluating only on general web pages, we look at the ability of the four methods to deal with out-of-domain documents. The hierarchical attention network shows significantly better results (p < 0.05) than all other models on F1. Both neural models also significantly outperform both language models on AUC (p < 0.05). Precision and Recall are more mixed, with the CNN and SVM outperforming the HAN on Precision and the language model, again, performing best in terms of Recall. Together, the neural methods seem to work best on three out of the four metrics.
Human agreement
Lastly, we examine model performance with respect to human annotation using the human annotated dataset of [3]. We assume that models that perform similarly to human annotators are preferable.
In Table 6, we present three Spearman correlation metrics to express model congruence with human annotations. Mean annotation expresses the correlation of model error rates with the controversy values attributed to a web page by human annotators, with positive values expressing greater error rates on controversial pages, and negative values expressing higher error rates on non-controversial pages. Here, the HAN shows the most unbiased (closest to zero) performance. Certainty is the distance of human annotations to the midpoint of the four-point controversy scale, i.e. a score between 0 and 2.5 that expresses how sure annotators are of document (non)controversy. Here, the HAN shows errors most strongly negatively correlated with the certainty of annotators. Finally, annotators disagree on the controversy of some documents, expressed as the standard deviation of their controversy annotations. Again, the HAN model seems preferable, as its errors are most strongly correlated with annotator disagreement. Overall, the neural methods have less biased performance in relation to (non)controversial documents, correlate more strongly with the certainty of human annotators, and are susceptible to errors in similar conditions as when annotators disagree.
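These congruence numbers can be reproduced generically with scipy as Spearman correlations between per-document model error and the three annotator statistics; the variable names below (pred_prob, true_label, annotations, midpoint) are illustrative assumptions about the data layout, not the authors' script.

```python
import numpy as np
from scipy.stats import spearmanr

error = np.abs(pred_prob - true_label)        # per-document model error

mean_annotation = annotations.mean(axis=1)    # annotations: (docs, annotators)
certainty = np.abs(mean_annotation - midpoint)
disagreement = annotations.std(axis=1)

for name, stat in [("mean annotation", mean_annotation),
                   ("certainty", certainty),
                   ("disagreement", disagreement)]:
    rho, _ = spearmanr(error, stat)
    print(name, round(rho, 3))
```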
CONCLUSION
Controversy detection is a hard task, as it forms a latent concept sensitive to vocabulary gaps between topics and vocabulary shifts over time. We analysed the performance of language model, SVM, CNN and HAN models on different tasks. First, we demonstrated that neural methods perform as state-of-the-art tools in controversy detection on the ClueWeb09-based [4] test set, even beating matching models. Second, we investigated temporal stability and demonstrated neural, and especially CNN, robustness in terms of Recall, F1 and AUC performance and stability with train and test sets that are 9 years apart. Thirdly, we showed that the CNN and HAN models outperform the SVM and LM baselines on Precision, F1 and AUC when tested on held-out topics. Fourthly, we showed that neural methods are better able to generalize from Wikipedia pages to unseen general web pages in terms of Precision, F1 and AUC. Lastly, neural methods seem better in line with human annotators with regard to certainty and disagreement.
Figure 1: Average F1 and AUC score of aggregated results for all lexical and neural models.
Table 1: Wikipedia-derived dataset statistics, including the percentages of controversial (i.e., positively labelled) and general (i.e., non-Wikipedia) web pages out of the total number of pages per dataset split.

Set         Seeds   Total    Controversial   General Web
Train       5600    23,703   7,233 (31%)     15,449 (65%)
Validation  200     988      651 (66%)       688 (70%)
Test        200     1,024    654 (64%)       723 (71%)
Table 3: Temporal stability experiment. Results obtained by evaluating on the Wikipedia-derived dataset from 2018 with models trained on Wikipedia data from either 2018 or 2009. Trained on data from the same time frame, the neural models show a slight advantage over the lexical models. Most noticeable, however, is the drop in performance of the lexical models when trained on older data in terms of Recall, and therefore also in terms of F1-score.

            Precision                Recall                   F1                       AUC
Train/Test: '18/'18 '09/'18  ∆       '18/'18 '09/'18  ∆       '18/'18 '09/'18  ∆       '18/'18 '09/'18  ∆
TfIdf-SVM   0.910   0.941   ▲3%      0.689   0.191   ▼72%     0.784   0.317   ▼60%     0.785   0.585   ▼25%
LM          0.651   0.609   ▼6%      0.811   0.550   ▼32%     0.723   0.578   ▼20%     0.600   0.452   ▼25%
CNN         0.930   0.913   ▼2%      0.663   0.564   ▼15%     0.775   0.696   ▼11%     0.888   0.846   ▼5%
HAN         0.871   0.912   ▲5%      0.818   0.561   ▼31%     0.844   0.695   ▼18%     0.889   0.845   ▼5%
Table 2: Comparison of results with previous work.

Model               Precision  Recall  F1     AUC
Sentiment_polyglot  0.448      0.392   0.418  0.612
TfIdf-SVM           0.581      0.208   0.306  0.740
TILE_Clique [6]     0.710      0.720   0.714  0.780
LM-DBPedia [8]      0.415      0.886   0.566  0.730
LM-wiki             0.359      0.808   0.497  0.579
CNN                 0.627      0.840   0.718  0.835
HAN                 0.632      0.745   0.684  0.823
Table 4: Cross-topic stability experiment. Metrics are averaged across 10 leave-one-out topic folds.

Model      Precision  Recall  F1     AUC
TfIdf-SVM  0.793      0.575   0.661  0.829
LM         0.512      0.816   0.629  0.633
CNN        0.840      0.569   0.670  0.842
HAN        0.799      0.716   0.753  0.840
Table 5: Cross-domain stability experiment. Metrics are based on models trained on Wikipedia data and tested on general web pages.

Model      Precision  Recall  F1     AUC
TfIdf-SVM  0.718      0.361   0.480  0.638
LM         0.392      0.826   0.532  0.573
CNN        0.743      0.394   0.514  0.755
HAN        0.700      0.604   0.645  0.789
Table 6: Spearman's correlations for estimated probability distance from the true label. Mean annotation: average annotator score; certainty: distance from the controversy annotation-scale midpoint; disagreement: standard deviation of annotations. Only pages with at least 3 annotations are included to ensure sensible agreement metrics (N=128); bolded scores are preferable.

Model       mean annotation  certainty  disagreement
TfIdf-SVM   -0.540           -0.238     0.144
LM-DBPedia  0.633            -0.172     -0.023
CNN         0.348            -0.314     0.138
HAN         0.277            -0.390     0.207
1 Code available at https://github.com/JasperLinmans/ControversPy
2 Available at https://code.google.com/p/word2vec/
3 Script to generate dataset available at: https://github.com/JasperLinmans/ControversPy
4 Available at https://en.wikipedia.org/wiki/Wikipedia:List_of_controversial_issues
5 Available at https://en.wikipedia.org/wiki/Special:Random
6 https://github.com/aboSamoor/polyglot
References
[1] Bahdanau, D.; Cho, K.; and Bengio, Y. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
[2] Das, S.; Lavoie, A.; and Magdon-Ismail, M. 2016. Manipulation among the arbiters of collective intelligence: How Wikipedia administrators mold public opinion. ACM Trans. Web 10(4):24:1-24:25.
[3] Dori-Hacohen, S., and Allan, J. 2013. Detecting controversy on the web. In Proceedings of the 22nd ACM CIKM, 1845-1848.
[4] Dori-Hacohen, S.; Yom-Tov, E.; and Allan, J. 2015. Navigating controversy as a complex search task. In CEUR Workshop Proceedings, vol. 1338.
[5] Graus, D.; Odijk, D.; and de Rijke, M. 2018. The birth of collective memories: Analyzing emerging entities in text streams. JASIST 69(9):773-786.
[6] Jang, M., and Allan, J. 2016. Improving automated controversy detection on the web. In Proceedings of the 39th ACM SIGIR, 865-868.
[7] Jang, M.; Dori-Hacohen, S.; and Allan, J. 2017. Modeling controversy within populations. In ICTIR, 141-149. ACM.
[8] Jang, M.; Foley, J.; Dori-Hacohen, S.; and Allan, J. 2016. Probabilistic approaches to controversy detection. In Proceedings of the 25th ACM CIKM, 2069-2072.
[9] Kim, Y. 2014. Convolutional neural networks for sentence classification. In Proceedings of EMNLP 2014, 1746-1751.
[10] Kingma, D. P., and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
[11] Mikolov, T.; Chen, K.; Corrado, G.; and Dean, J. 2013. Efficient estimation of word representations in vector space. CoRR abs/1301.3781.
[12] Popescu, A.-M., and Pennacchiotti, M. 2010. Detecting controversial events from Twitter. In Proceedings of the 19th ACM CIKM.
[13] Yang, Z.; Yang, D.; Dyer, C.; He, X.; Smola, A.; and Hovy, E. 2016. Hierarchical attention networks for document classification. In Proceedings of NAACL-HLT 2016, 1480-1489.
[14] Sumi, R., and Yasseri, T. 2012. Dynamics of conflicts in Wikipedia. PLOS ONE 7(6):1-12.
Memory Attentive Fusion: External Language Model Integration for Transformer-based Sequence-to-Sequence Model

Mana Ihori (mana.ihori.kx@hco.ntt.co.jp), Ryo Masumura, Naoki Makishima, Tomohiro Tanaka, Akihiko Takashima, Shota Orihashi
NTT Media Intelligence Laboratories, NTT Corporation, 1-1 Hikarinooka, Yokosuka-shi, Kanagawa 239-0847, Japan

Proceedings of The 13th International Conference on Natural Language Generation, Dublin, Ireland, December 2020.
This paper presents a novel fusion method for integrating an external language model (LM) into the Transformer based sequenceto-sequence (seq2seq) model. While paired data are basically required to train the seq2seq model, the external LM can be trained with only unpaired data. Thus, it is important to leverage memorized knowledge in the external LM for building the seq2seq model, since it is hard to prepare a large amount of paired data. However, the existing fusion methods assume that the LM is integrated with recurrent neural network-based seq2seq models instead of the Transformer. Therefore, this paper proposes a fusion method that can explicitly utilize network structures in the Transformer. The proposed method, called memory attentive fusion, leverages the Transformer-style attention mechanism that repeats source-target attention in a multi-hop manner for reading the memorized knowledge in the LM. Our experiments on two text-style conversion tasks demonstrate that the proposed method performs better than conventional fusion methods.
Introduction
In recent studies, the Transformer sequence-to-sequence (seq2seq) model (Vaswani et al., 2017) has performed successfully in various natural language generation tasks, such as machine translation (Barrault et al., 2019), image captioning (Li et al., 2019b; Yu et al., 2019; Li et al., 2019a), and automatic speech recognition (Dong et al., 2018; Karita et al., 2019; Salazar et al., 2019). Although Transformer training requires paired data, a large amount of paired data often cannot be prepared. Moreover, unpaired data cannot be used to train the Transformer even though such data can be collected on a large scale.
To utilize a large amount of unpaired data, methods of integrating an external language model (LM) trained with these data into seq2seq mod-els have been proposed (Kannan et al., 2018;Gulcehre et al., 2015;Sriram et al., 2018). These methods can improve the fluency of sentences that are generated by seq2seq models; however, they integrate the LM into recurrent neural network (RNN) based seq2seq models rather than the Transformer. In other words, LM fusion methods specific to the Transformer have not been considered yet.
Here, the Transformer employs a multi-hop attention mechanism (Sukhbaatar et al., 2015) that repeats the source-target attention mechanism in each Transformer decoder block. Thus, the source-target attention mechanism is expected to extract effective source information for the target task more precisely than RNN-based seq2seq models do. Therefore, we assume that the Transformer can utilize the memorized knowledge in the external LM more effectively by using the multi-hop attention mechanism for LM fusion.
In this paper, we propose a novel fusion method, called memory attentive fusion, to integrate an external LM into the Transformer. This fusion method utilizes a multi-hop source-target attention mechanism for combining the Transformer decoder with the external LM. We performed experiments with two text-style conversion tasks: spoken-to-written style conversion and dialect conversion. Our experiments demonstrate that the proposed method performs better than conventional fusion methods.
Related work
The simplest fusion method is to train the seq2seq model and the LM independently and then combine their outputs (Kannan et al., 2018;Chorowski and Jaitly, 2017;Sutskever et al., 2014). These methods are called shallow fusion.
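In its simplest form, this output combination interpolates log-probabilities only at decoding time; the following sketch shows one shallow-fusion scoring step, with lm_weight as an illustrative interpolation weight rather than a value from the literature.

```python
import torch

def shallow_fusion_step(seq2seq_logits, lm_logits, lm_weight=0.3):
    """Combine next-token log-probabilities of the seq2seq model and the
    external LM; both logit tensors have shape (batch, vocab)."""
    log_p_s2s = torch.log_softmax(seq2seq_logits, dim=-1)
    log_p_lm = torch.log_softmax(lm_logits, dim=-1)
    return log_p_s2s + lm_weight * log_p_lm  # scores used to rank beam candidates
```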
Moreover, methods that integrate an external LM into seq2seq models during training have been proposed: deep fusion (Gulcehre et al., 2015) and cold fusion (Sriram et al., 2018). These methods utilize the information of not only paired data but also unpaired data in training. Figure 1 shows a Transformer with cold fusion. These methods assume that the LM is integrated into RNN-based seq2seq models instead of the Transformer.
Memory attentive fusion
This section details memory attentive fusion for the Transformer seq2seq model. Memory attentive fusion is an extension of cold fusion (Sriram et al., 2018). While cold fusion uses the knowledge memorized in the LM only once, at the output layer, memory attentive fusion repeatedly uses this knowledge within the Transformer decoder blocks through a source-target attention mechanism.
We define an input sequence as $X = \{x_1, \cdots, x_M\}$ and an output sequence as $Y = \{y_1, \cdots, y_N\}$, where $x_m$ and $y_n$ are tokens in the input and output sequences. In text-style conversion, the model predicts the generation probabilities of the output sequence given the input sequence. The generation probability of $Y$ is defined as
$$P(Y|X; \Theta) = \prod_{n=1}^{N} P(y_n \mid y_{1:n-1}, X; \Theta), \qquad (1)$$
where $\Theta = \{\theta_{\mathrm{enc}}, \theta_{\mathrm{dec}}, \theta_{\mathrm{lm}}\}$ represents the model parameter sets. $\theta_{\mathrm{enc}}$ and $\theta_{\mathrm{dec}}$ are the trainable parameter sets of the encoder and decoder, respectively, and $\theta_{\mathrm{lm}}$ is the parameter set of the external LM. $P(y_n \mid y_{1:n-1}, X; \Theta)$ can be computed using an encoder and a decoder with memory attentive fusion in the Transformer. Figure 2 shows the Transformer with memory attentive fusion.
Encoder: The encoder converts an input sequence $X$ into the hidden representations $S^{(K)}$ using $K$ Transformer encoder blocks. First, the input hidden representation of the Transformer encoder block $S^{(0)} = \{s^{(0)}_{1:M}\}$ is produced by
$$s^{(0)}_m = \mathrm{Embedding}(x_m; \theta_{\mathrm{enc}}), \qquad (2)$$
where $\mathrm{Embedding}(\cdot)$ consists of positional encoding and a linear layer. Next, the $k$-th Transformer encoder block composes the $k$-th hidden representations $S^{(k)}$ from the lower inputs $S^{(k-1)}$ as
$$S^{(k)} = \mathrm{TransformerEncBlock}(S^{(k-1)}; \theta_{\mathrm{enc}}), \qquad (3)$$
where $\mathrm{TransformerEncBlock}(\cdot)$ is the Transformer encoder block that consists of a scaled dot-product multi-head self-attention layer and a position-wise feed-forward network (Vaswani et al., 2017).
Decoder with memory attentive fusion: The decoder with memory attentive fusion computes the generation probability of a token from the preceding tokens, the hidden representations of the input sequence, and the LM information. The predicted probabilities of the $n$-th token $y_n$ are calculated as
$$P(y_n \mid y_{1:n-1}, X) = \mathrm{softmax}(u^{(J)}_n; \theta_{\mathrm{dec}}), \qquad (4)$$
where $\mathrm{softmax}(\cdot)$ is a softmax layer with a linear transformation. The input hidden vector $u^{(J)}_n$ is computed from $S^{(K)}$ and $y_{1:n-1}$ using $J$ Transformer decoder blocks with an external LM. First, the input hidden representations of the Transformer decoder block $u^{(0)}_{n-1}$ and $h^{\mathrm{LM}}_{n-1}$ are produced by
$$u^{(0)}_{n-1} = \mathrm{Embedding}(y_{n-1}; \theta_{\mathrm{dec}}), \qquad (5)$$
$$l^{\mathrm{LM}}_{n-1} = \mathrm{LanguageModel}(y_{1:n-1}; \theta_{\mathrm{lm}}), \qquad (6)$$
$$h^{\mathrm{LM}}_{n-1} = \mathrm{linear}(l^{\mathrm{LM}}_{n-1}; \theta_{\mathrm{dec}}), \qquad (7)$$
where $\mathrm{LanguageModel}(\cdot)$ is the trained external LM and $l^{\mathrm{LM}}_{n-1}$ is its logit output. Next, we convert the hidden representations in the lower layer $u^{(j-1)}_{1:n-1}$ and the encoder output $S^{(K)}$ into a hidden vector $c^{(j)}_n$. The hidden vector is computed as
$$\bar{v}^{(j)}_n = \mathrm{SourceTarget}(u^{(j-1)}_{1:n-1}, u^{(j-1)}_{n-1}; \theta_{\mathrm{dec}}), \qquad (8)$$
$$v^{(j)}_n = \mathrm{LayerNorm}(u^{(j-1)}_{n-1} + \bar{v}^{(j)}_n), \qquad (9)$$
$$\bar{c}^{(j)}_n = \mathrm{SourceTarget}(S^{(K)}, v^{(j)}_n; \theta_{\mathrm{dec}}), \qquad (10)$$
$$c^{(j)}_n = \mathrm{LayerNorm}(v^{(j)}_n + \bar{c}^{(j)}_n), \qquad (11)$$
where $\mathrm{SourceTarget}(\cdot)$ is a scaled dot-product multi-head source-target attention layer and $\mathrm{LayerNorm}(\cdot)$ is layer normalization (Ba et al., 2016). In memory attentive fusion, we also convert the LM outputs $h^{\mathrm{LM}}_{1:n-1}$ and the hidden vector $v^{(j)}_n$ into a hidden vector $b^{(j)}_n$ with an attention mechanism. The hidden vector is computed as
$$\bar{b}^{(j)}_n = \mathrm{SourceTarget}(h^{\mathrm{LM}}_{1:n-1}, v^{(j)}_n; \theta_{\mathrm{dec}}), \qquad (12)$$
$$b^{(j)}_n = \mathrm{LayerNorm}(v^{(j)}_n + \bar{b}^{(j)}_n). \qquad (13)$$
This attention mechanism is repeated in each Transformer decoder block in a multi-hop manner; we therefore expect the decoder to read the knowledge memorized in the LM effectively. Next, we concatenate the hidden vector carrying target and source information with the hidden vector carrying target and LM information, using a gating mechanism:
$$g^{(j)}_n = \mathrm{sigmoid}([c^{(j)\top}_n, b^{(j)\top}_n]^\top; \theta_{\mathrm{dec}}), \qquad (14)$$
$$q^{(j)}_n = [c^{(j)\top}_n, (g^{(j)}_n \odot b^{(j)}_n)^\top]^\top, \qquad (15)$$
where $\mathrm{sigmoid}(\cdot)$ is a sigmoid layer with a linear transformation. Next, the hidden vector $q^{(j)}_n$ is converted into the $j$-th hidden representation $u^{(j)}_n$. The hidden representation is computed as
$$\bar{u}^{(j)}_n = \mathrm{FeedForward}(q^{(j)}_n; \theta_{\mathrm{dec}}), \qquad (16)$$
$$u^{(j)}_n = \mathrm{LayerNorm}(q^{(j)}_n + \bar{u}^{(j)}_n), \qquad (17)$$
where $\mathrm{FeedForward}(\cdot)$ is a position-wise feed-forward network.
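To make the block structure concrete, the following is a minimal PyTorch sketch of a single decoder block with memory attentive fusion (Eqs. (8)-(17)). This is an illustrative sketch rather than the authors' implementation: the class and variable names, the use of torch.nn.MultiheadAttention, and the final projection back to the model dimension (an assumption we add so that blocks can be stacked, since the concatenation in Eq. (15) doubles the width) are ours.

```python
import torch
import torch.nn as nn

class MemoryAttentiveFusionBlock(nn.Module):
    """One decoder block with memory attentive fusion (sketch of Eqs. 8-17)."""
    def __init__(self, d_model=256, n_heads=8, d_ff=256, dropout=0.2):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout)
        self.src_attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout)
        self.lm_attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout)
        self.gate = nn.Linear(2 * d_model, d_model)            # Eq. (14)
        self.ff = nn.Sequential(nn.Linear(2 * d_model, d_ff),  # Eq. (16)
                                nn.ReLU(),
                                nn.Linear(d_ff, 2 * d_model))
        self.out_proj = nn.Linear(2 * d_model, d_model)  # assumption: restore width
        self.norm_v = nn.LayerNorm(d_model)
        self.norm_c = nn.LayerNorm(d_model)
        self.norm_b = nn.LayerNorm(d_model)
        self.norm_u = nn.LayerNorm(2 * d_model)

    def forward(self, u, enc_out, lm_out, tgt_mask=None):
        # u: (tgt_len, batch, d) decoder states; enc_out: encoder output S^(K);
        # lm_out: LM hidden states h^LM after the linear layer of Eq. (7).
        v_bar, _ = self.self_attn(u, u, u, attn_mask=tgt_mask)   # Eq. (8)
        v = self.norm_v(u + v_bar)                               # Eq. (9)
        c_bar, _ = self.src_attn(v, enc_out, enc_out)            # Eq. (10)
        c = self.norm_c(v + c_bar)                               # Eq. (11)
        b_bar, _ = self.lm_attn(v, lm_out, lm_out)               # Eq. (12): read LM memory
        b = self.norm_b(v + b_bar)                               # Eq. (13)
        g = torch.sigmoid(self.gate(torch.cat([c, b], dim=-1)))  # Eq. (14)
        q = torch.cat([c, g * b], dim=-1)                        # Eq. (15): gated concat
        u_next = self.norm_u(q + self.ff(q))                     # Eqs. (16)-(17)
        return self.out_proj(u_next)
```

Stacking $J$ such blocks and applying the softmax output layer of Eq. (4) on top of the final block yields the full decoder.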
Training: In the Transformer, the model parameter set can be optimized on a training dataset $D = \{(X_1, Y_1), \cdots, (X_{|D|}, Y_{|D|})\}$. The objective function for optimizing the model parameters is defined as
$$\mathcal{L} = -\frac{1}{|D|} \sum_{d=1}^{|D|} \log P(Y_d \mid X_d; \Theta). \qquad (18)$$
Note that the parameters of the external LM, $\theta_{\mathrm{lm}}$, are kept frozen during training.
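In code, keeping $\theta_{\mathrm{lm}}$ frozen simply means disabling gradients for the external LM before constructing the optimizer. A minimal sketch follows, continuing the PyTorch setting above; the stand-in modules and names are ours, not the paper's code.

```python
import torch
import torch.nn as nn

external_lm = nn.GRU(256, 256)       # stand-in for the trained external LM
fusion_model = nn.Linear(256, 5640)  # stand-in for the fusion Transformer

# Freeze theta_lm: no gradients flow into the external LM.
for p in external_lm.parameters():
    p.requires_grad_(False)
external_lm.eval()  # keep the frozen LM in inference mode (e.g., no dropout)

# Only the remaining parameters (theta_enc, theta_dec) are optimized.
optimizer = torch.optim.Adam(
    p for p in fusion_model.parameters() if p.requires_grad)
```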
Experiments
We evaluated our method on text-style conversion tasks. In particular, we chose a spoken-to-written style conversion task and a dialect conversion task in Japanese. In the spoken-to-written style conversion task, spoken-style text produced by an automatic speech recognition system is converted into written-style text that has correct punctuation and no disfluency (Ihori et al., 2020). In the dialect conversion task, Japanese dialects are converted into standard Japanese.
Datasets
Spoken-to-written style conversion: We used the Corpus of Spontaneous Japanese (CSJ) (Maekawa et al., 2000) and the parallel corpus for Japanese spoken-to-written style conversion (CJSW) (Ihori et al., 2020). We divided the CSJ into a training set, validation set, and test set with 46,847, 13,510, and 3,949 sentences, respectively. The CJSW has four domains, and we divided it up following (Ihori et al., 2020). We used all of the training and validation sets for training and each test set (CJSW-1, 2, 3, 4) for evaluation. All of these datasets are paired data of spoken-style text (manual transcriptions of speech) and written-style text (created with crowd-sourcing).
Dialect conversion: We prepared three sets of paired data of dialects (Tohoku-ben, Osaka-ben, Kyushu-ben) and standard Japanese with crowd-sourcing. We divided these data into a training set, validation set, and test set for each dialect. We used all of the training and validation sets for training and the three test sets for evaluation. The training set, validation set, and test set have 15,506, 3,924, and 2,160 sentences, respectively. The test set consists of 700 Tohoku-ben, 862 Osaka-ben, and 598 Kyushu-ben sentences.
External text: We prepared large-scale Japanese web text as the unpaired written-style text data. The web text was downloaded from web pages on various topics using our home-made crawler, and the downloaded pages were filtered to exclude HTML tags, JavaScript code, and other parts that were not useful for these tasks. In the end, we prepared one million sentences for training the external LM and divided them into a training set of 800,000 sentences and a validation set of 200,000 sentences.
Setups
Transformer: We constructed the Transformer with shallow fusion (Kannan et al., 2018), cold fusion (Sriram et al., 2018), and memory attentive fusion. In addition, we constructed the Transformer without any fusion method as a baseline. We used the following configurations. The dimensions of the output continuous representations and of the inner outputs in the position-wise feed-forward network were set to 256, and the number of heads in the multi-head attention was set to 8. ReLU activation was used as the nonlinear transformation function. We stacked 4 Transformer encoder blocks and 2 Transformer decoder blocks. We set the output unit size (which corresponded to the number of tokens in the training set for the language model) to 5,640. To train these models, we used the Adam optimizer and label smoothing with a smoothing parameter of 0.1. Training was stopped based on early stopping using part of the training data. We set the mini-batch size to 64 sentences and the dropout rate in the Transformer blocks to 0.2. For mini-batch training, we truncated each sentence to 200 tokens. We used characters as tokens. All trainable parameters were randomly initialized. For decoding, we used a beam search algorithm with a beam size of 4.
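For reference, the configuration above can be collected into a single dictionary; this is merely a restatement of the listed hyperparameters (the key names are ours, not a released configuration file).

```python
# Transformer hyperparameters described in this subsection.
transformer_config = {
    "d_model": 256,            # output continuous representations
    "d_ff": 256,               # inner size of position-wise feed-forward net
    "n_heads": 8,
    "encoder_blocks": 4,
    "decoder_blocks": 2,
    "vocab_size": 5640,        # character tokens in the LM training set
    "activation": "relu",
    "dropout": 0.2,
    "label_smoothing": 0.1,
    "optimizer": "adam",
    "batch_size": 64,          # sentences per mini-batch
    "max_len": 200,            # sentence truncation length (tokens)
    "beam_size": 4,
}
```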
External LM: We utilized OpenAI GPT (Radford et al., 2019) for the LM fusion, although any LM can potentially be used. We used the following configurations. The number of heads in the multi-head attention was set to 4, and we stacked 8 Transformer blocks. Training was stopped based on early stopping using part of the training data. We set the dropout rate in the Transformer blocks to 0.1. The other settings were the same as for the Transformer. After training, the perplexity of this LM was 11.8. Note that this LM was used in both tasks, and neither the Transformer nor the external LM was pre-trained.

Results

Table 1 shows the experimental results for the spoken-to-written style conversion task, and Table 2 shows the experimental results for the dialect conversion task. We calculated automatic evaluation scores with three metrics: BLEU-3 (B-3) (Papineni et al., 2002), ROUGE-L (R-L) (Lin and Och, 2004), and METEOR (Banerjee and Lavie, 2005). "Baseline" in the tables denotes the Transformer without the external LM. Table 1 shows that shallow fusion and cold fusion performed worse than the baseline on the CSJ dataset, whereas memory attentive fusion outperformed the baseline. Moreover, memory attentive fusion outperformed the baseline and shallow fusion on the CJSW dataset, and cold fusion outperformed the baseline on CJSW-1, -3, and -4. As in the spoken-to-written style conversion task, Table 2 shows that memory attentive fusion outperformed the other methods in the dialect conversion task.
The above results show that shallow fusion is not suitable for the Transformer because it degraded performance in all cases. When the LM was integrated with cold fusion, performance was better than the baseline in some domains; thus, we consider that cold fusion can be used with the Transformer only in limited cases. In contrast, memory attentive fusion outperformed the other fusion methods in almost all domains and tasks. Therefore, we consider memory attentive fusion to be suitable for integrating the external LM into the Transformer. In addition, memory attentive fusion worked especially well in the dialect conversion task; we can thus infer that the fusion method for the Transformer is more effective when the training data are small.
Figure 3 shows a converted example of spoken-to-written style conversion on the CSJ dataset with each fusion method. The word " " (flesh) was output correctly with memory attentive fusion, but the other methods did not output the word correctly. The word " " was not included in the training data for the Transformer, but it was included in the training data for the external LM. This indicates that only memory attentive fusion succeeded in extracting knowledge from the external LM.
Conclusion
We proposed memory attentive fusion, a novel method to integrate an external LM into the Transformer. Conventional fusion methods assume that the LM is integrated into an RNN-based seq2seq model. In contrast, the proposed method employs a Transformer-specific fusion mechanism that repeats the attention over the LM many times. Experiments demonstrated that the proposed method outperformed the conventional methods on two tasks. We conclude that the proposed method is suitable for integrating an LM into the Transformer. In future work, we will apply the proposed method to other natural language generation tasks such as automatic speech recognition.
Figure 1: Transformer with cold fusion.

Figure 2: Transformer with memory attentive fusion.

Figure 3: Example of spoken-to-written style conversion on the CSJ dataset with each fusion method: a) Baseline, b) Shallow fusion, c) Cold fusion, d) Memory attentive fusion.
Table 1: Results on spoken-to-written style conversion tasks.

Table 2: Results on dialect conversion tasks.
Jun Yu, Jing Li, Zhou Yu, and Qingming Huang. 2019. Multimodal transformer with multi-view visual representation for image captioning. IEEE Transactions on Circuits and Systems for Video Technology, pages 1-1.

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450.

Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proc. ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72.

Loïc Barrault, Ondřej Bojar, Marta R. Costa-jussà, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias Müller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine translation (WMT19). In Proc. Conference on Machine Translation (WMT), pages 1-61.

Jan Chorowski and Navdeep Jaitly. 2017. Towards better decoding and language model integration in sequence to sequence models. In Proc. International Speech Communication Association (INTERSPEECH), pages 523-527.

Linhao Dong, Shuang Xu, and Bo Xu. 2018. Speech-Transformer: A no-recurrence sequence-to-sequence model for speech recognition. In Proc. International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5884-5888.

Caglar Gulcehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Loic Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On using monolingual corpora in neural machine translation. arXiv preprint arXiv:1503.03535.

Mana Ihori, Akihiko Takashima, and Ryo Masumura. 2020. Parallel corpus for Japanese spoken-to-written style conversion. In Proc. Language Resources and Evaluation Conference (LREC), pages 6346-6353.

Anjuli Kannan, Yonghui Wu, Patrick Nguyen, Tara N. Sainath, Zhijeng Chen, and Rohit Prabhavalkar. 2018. An analysis of incorporating an external language model into a sequence-to-sequence model. In Proc. International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5824-5828.

Shigeki Karita, Nanxin Chen, Tomoki Hayashi, Takaaki Hori, Hirofumi Inaguma, Ziyan Jiang, Masao Someki, Nelson Enrique Yalta Soplin, Ryuichi Yamamoto, Xiaofei Wang, et al. 2019. A comparative study on Transformer vs RNN in speech applications. In Proc. Automatic Speech Recognition and Understanding Workshop (ASRU), pages 449-456.

Guang Li, Linchao Zhu, Ping Liu, and Yi Yang. 2019a. Entangled transformer for image captioning. In Proc. International Conference on Computer Vision (ICCV), pages 8928-8937.

Jie Li, Xiaorui Wang, Yan Li, et al. 2019b. The SpeechTransformer for large-scale Mandarin Chinese speech recognition. In Proc. International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7095-7099.

Chin-Yew Lin and Franz Josef Och. 2004. Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics. In Proc. Annual Meeting of the Association for Computational Linguistics (ACL), pages 605-612.

Kikuo Maekawa, Hanae Koiso, Sadaoki Furui, and Hitoshi Isahara. 2000. Spontaneous speech corpus of Japanese. In Proc. International Conference on Language Resources and Evaluation (LREC), pages 947-952.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proc. Annual Meeting of the Association for Computational Linguistics (ACL), pages 311-318.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog, page 9.

Julian Salazar, Katrin Kirchhoff, and Zhiheng Huang. 2019. Self-attention networks for connectionist temporal classification in speech recognition. In Proc. International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7115-7119.

Anuroop Sriram, Heewoo Jun, Sanjeev Satheesh, and Adam Coates. 2018. Cold fusion: Training seq2seq models together with language models. In Proc. International Speech Communication Association (INTERSPEECH), pages 387-391.

Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In Proc. Advances in Neural Information Processing Systems (NIPS), pages 2440-2448.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proc. Advances in Neural Information Processing Systems (NIPS), pages 3104-3112.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proc. Advances in Neural Information Processing Systems (NIPS), pages 5998-6008.

Qiang Wang, Bei Li, Tong Xiao, Jingbo Zhu, Changliang Li, Derek F. Wong, and Lidia S. Chao. 2019. Learning deep transformer models for machine translation. In Proc. Association for Computational Linguistics (ACL), pages 1810-1822.
| [] |
[
"Understanding Points of Correspondence between Sentences for Abstractive Summarization",
"Understanding Points of Correspondence between Sentences for Abstractive Summarization"
] | [
"Logan Lebanoff loganlebanoff@knights.ucf.edu \nUniversity of Central Florida § Adobe Research\n\n",
"John Muchovej john.muchovej@knights.ucf.edu \nUniversity of Central Florida § Adobe Research\n\n",
"Franck Dernoncourt dernonco@adobe.com \nUniversity of Central Florida § Adobe Research\n\n",
"§ Doo \nUniversity of Central Florida § Adobe Research\n\n",
"Soon Kim dkim@adobe.com \nUniversity of Central Florida § Adobe Research\n\n",
"Lidan Wang lidwang@adobe.com \nUniversity of Central Florida § Adobe Research\n\n",
"Walter Chang wachang@adobe.com \nUniversity of Central Florida § Adobe Research\n\n",
"Fei Liu feiliu@cs.ucf.edu \nUniversity of Central Florida § Adobe Research\n\n"
] | [
"University of Central Florida § Adobe Research\n",
"University of Central Florida § Adobe Research\n",
"University of Central Florida § Adobe Research\n",
"University of Central Florida § Adobe Research\n",
"University of Central Florida § Adobe Research\n",
"University of Central Florida § Adobe Research\n",
"University of Central Florida § Adobe Research\n",
"University of Central Florida § Adobe Research\n"
] | [
"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop"
] | Fusing sentences containing disparate content is a remarkable human ability that helps create informative and succinct summaries. Such a simple task for humans has remained challenging for modern abstractive summarizers, substantially restricting their applicability in real-world scenarios. In this paper, we present an investigation into fusing sentences drawn from a document by introducing the notion of points of correspondence, which are cohesive devices that tie any two sentences together into a coherent text. The types of points of correspondence are delineated by text cohesion theory, covering pronominal and nominal referencing, repetition and beyond. We create a dataset containing the documents, source and fusion sentences, and human annotations of points of correspondence between sentences. Our dataset bridges the gap between coreference resolution and summarization. It is publicly shared to serve as a basis for future work to measure the success of sentence fusion systems. 1 | 10.18653/v1/2020.acl-srw.26 | [
"https://www.aclweb.org/anthology/2020.acl-srw.26.pdf"
] | 219,559,167 | 2006.05621 | 2a698bf8f1a019106b91ecd34e53cee104f48e7d |
Understanding Points of Correspondence between Sentences for Abstractive Summarization
Association for Computational Linguistics. Copyright 2020 Association for Computational Linguistics. Pages 191-198, July 5 - July 10, 2020.
Logan Lebanoff loganlebanoff@knights.ucf.edu
University of Central Florida § Adobe Research
John Muchovej john.muchovej@knights.ucf.edu
University of Central Florida § Adobe Research
Franck Dernoncourt dernonco@adobe.com
University of Central Florida § Adobe Research
Doo Soon Kim dkim@adobe.com
University of Central Florida § Adobe Research
University of Central Florida § Adobe Research
Lidan Wang lidwang@adobe.com
University of Central Florida § Adobe Research
Walter Chang wachang@adobe.com
University of Central Florida § Adobe Research
Fei Liu feiliu@cs.ucf.edu
University of Central Florida § Adobe Research
Understanding Points of Correspondence between Sentences for Abstractive Summarization
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 191-198, July 5 - July 10, 2020. Association for Computational Linguistics.
Fusing sentences containing disparate content is a remarkable human ability that helps create informative and succinct summaries. Such a simple task for humans has remained challenging for modern abstractive summarizers, substantially restricting their applicability in real-world scenarios. In this paper, we present an investigation into fusing sentences drawn from a document by introducing the notion of points of correspondence, which are cohesive devices that tie any two sentences together into a coherent text. The types of points of correspondence are delineated by text cohesion theory, covering pronominal and nominal referencing, repetition and beyond. We create a dataset containing the documents, source and fusion sentences, and human annotations of points of correspondence between sentences. Our dataset bridges the gap between coreference resolution and summarization. It is publicly shared to serve as a basis for future work to measure the success of sentence fusion systems. 1
Introduction
Stitching portions of text together into a sentence is a crucial first step in abstractive summarization. It involves choosing which sentences to fuse, what content from each of them to retain, and how best to present that information (Elsner and Santhanam, 2011). A major challenge in fusing sentences is to establish correspondence between sentences: if no correspondence exists, it would be difficult, if not impossible, to fuse them. In Table 1, we present example source and fusion sentences, where the summarizer attempts to merge two sentences into a summary sentence with improper use of points of correspondence. In this paper, we seek to uncover hidden correspondences between sentences, which has great potential for improving content selection and deep sentence fusion.
[Source Sentences]
Robert Downey Jr. is making headlines for walking out of an interview with a British journalist who dared to veer away from the superhero movie Downey was there to promote.
The journalist instead started asking personal questions about the actor's political beliefs and "dark periods" of addiction and jail time.
[Summary] Robert Downey Jr started asking personal questions about the actor's political beliefs.
[Source Sentences]
"Real Housewives of Beverly Hills" star and former child actress Kim Richards is accused of kicking a police officer after being arrested Thursday morning.
A police representative said Richards was asked to leave but refused and then entered a restroom and wouldn't come out.
[Summary] Kim Richards is accused of kicking a police officer who refused to leave.
[Source Sentences]
The kind of horror represented by the Blackwater case and others like it [...] may be largely absent from public memory in the West these days, but it is being used by the Islamic State in Iraq and Syria (ISIS) to support its sectarian narrative.
In its propaganda, ISIS has been using Abu Ghraib and other cases of Western abuse to legitimize its current actions [...]
[Summary] In its propaganda, ISIS is being used by the Islamic State in Iraq and Syria.

Table 1: Unfaithful summary sentences generated by neural abstractive summarizers, in-house and PG (See et al., 2017). They attempt to merge two sentences into one sentence with improper use of points of correspondence between sentences, yielding nonsensical output. Summaries are manually re-cased for readability.
Sentence fusion (or multi-sentence compression) plays a prominent role in automated summarization, and its importance has long been recognized (Barzilay et al., 1999). Early attempts to fuse sentences build a dependency graph from the sentences, decode a tree from the graph using integer linear programming, and finally linearize the tree to generate a summary sentence (Barzilay and McKeown, 2005; Filippova and Strube, 2008; Thadani and McKeown, 2013a). Despite valuable insights gained from these attempts, experiments are often performed on small datasets and systems are designed to merge sentences conveying similar information. Nonetheless, humans do not restrict themselves to combining similar sentences; they also fuse disparate sentences that contain fundamentally different but related content, which makes fusion sensible (Elsner and Santhanam, 2011). We focus specifically on analyzing the fusion of disparate sentences, which is a distinct problem from fusing a set of similar sentences.

PoC Type: Pronominal Referencing
[S1] The bodies showed signs of torture.
[S2] They were left on the side of a highway in Chilpancingo, about an hour north of the tourist resort of Acapulco in the state of Guerrero.
[Summary] The bodies of the men, which showed signs of torture, were left on the side of a highway in Chilpancingo.

PoC Type: Nominal Referencing
[S1] Bahamian R&B singer Johnny Kemp, best known for the 1988 party anthem "Just Got Paid," died this week in Jamaica.
[S2] The singer is believed to have drowned at a beach in Montego Bay on Thursday, the Jamaica Constabulatory Force said in a press release.
[Summary] Johnny Kemp is "believed to have drowned at a beach in Montego Bay," police say.

PoC Type: Common-Noun Referencing
[S1] A nurse confessed to killing five women and one man at hospital.
[S2] A former nurse in the Czech Republic murdered six of her elderly patients with massive doses of potassium in order to ease her workload.
[Summary] The nurse, who has been dubbed "nurse death" locally, has admitted killing the victims with massive doses of potassium.

PoC Type: Repetition
[S1] Stewart said that she and her husband, Joseph Naaman, booked Felix on their flight from the United Arab Emirates to New York on April 1.
[S2] The couple said they spent $1,200 to ship Felix on the 14-hour flight.
[Summary] Couple spends $1,200 to ship their cat, Felix, on a flight from the United Arab Emirates.

PoC Type: Event Triggers
[S1] Four employees of the store have been arrested, but its manager was still at large, said Goa police superintendent Kartik Kashyap.
[S2] If convicted, they could spend up to three years in jail, Kashyap said.
[Summary] The four store workers arrested could spend 3 years each in prison if convicted.
While fusing disparate sentences is a seemingly simple task for humans, it has remained challenging for modern abstractive summarizers (See et al., 2017; Celikyilmaz et al., 2018; Chen and Bansal, 2018; Liu and Lapata, 2019). These systems learn to perform content selection and generation through end-to-end learning. However, such a strategy is not consistently effective, and they struggle to reliably perform sentence fusion (Falke et al., 2019; Kryściński et al., 2019). For example, only 6% of summary sentences generated by pointer-generator networks (See et al., 2017) are fusion sentences, whereas the ratio for human abstracts is much higher (32%). Further, Lebanoff et al. (2019a) report that 38% of fusion sentences contain incorrect facts. There is thus a pressing need, to which this paper contributes, for broadening the understanding of points of correspondence used for sentence fusion.
We present the first attempt to construct a sizeable sentence fusion dataset, where an instance consists of a pair of input sentences, a fusion sentence, and human-annotated points of correspondence between sentences. Distinguishing our work from previous efforts (Geva et al., 2019), our input contains disparate sentences and the output is a fusion sentence containing important, though not equivalent, information from the input sentences. Our investigation is inspired by Halliday and Hasan's theory of text cohesion (1976), which covers a broad range of points of correspondence, including entity and event coreference (Ng, 2017; Lu and Ng, 2018), shared words/concepts between sentences, and more. Our contributions are as follows.
• We describe the first effort at establishing points of correspondence between disparate sentences. Without a clear understanding of points of correspondence, sentence fusion remains a daunting challenge that is only sparsely and sometimes incorrectly performed by abstractive summarizers.
• We present a sizable dataset for sentence fusion containing human-annotated corresponding regions between pairs of sentences. It can be used as a testbed for evaluating the ability of summarization models to perform sentence fusion. We report on the insights gained from annotations to suggest important future directions for sentence fusion. Our dataset is released publicly.
Annotating Points of Correspondence
We cast sentence fusion as a constrained summarization task where portions of text are selected from each source sentence and stitched together to form a fusion sentence; rephrasing and reordering are allowed in this process. We propose guidelines for annotating points of correspondence (PoC) between sentences based on Halliday and Hasan's theory of cohesion (1976).
We consider points of correspondence as cohesive phrases that tie sentences together into a coherent text. Guided by text cohesion theory, we categorize PoC into five types: pronominal referencing ("they"), nominal referencing ("Johnny Kemp"), common-noun referencing ("five women"), repetition, and event trigger words that are related in meaning ("died" and "drowned"). An illustration of PoC types is provided in Table 2. Our categorization emphasizes the lexical linking that holds a text together and gives it meaning.

Figure 1: An illustration of the annotation interface. A human annotator is asked to highlight text spans referring to the same entity, then choose one of the five pre-defined PoC types.
A human annotator is instructed to identify a text span from each of the source sentences and the summary sentence, thus establishing a point of correspondence between the source sentences, and between the source and summary sentences. As our goal is to understand the role of PoC in sentence fusion, we do not consider the case where a PoC is found only in the source sentences but not in the summary sentence, e.g., "Kashyap said" and "said Goa police superintendent Kartik Kashyap" in Table 2. If multiple PoC co-exist in an example, an annotator is expected to label them all; a separate PoC type is assigned to each PoC occurrence. We are particularly interested in annotating inter-sentence PoC. If entity mentions ("John" and "he") are found in the same sentence, we do not explicitly label them but assume such intra-sentence referencing can be captured by an existing coreference resolver. Instances of source sentences and summary sentences are obtained from the test and validation splits of the CNN/DailyMail corpus (See et al., 2017) following the procedure described by Lebanoff et al. (2019a). We take a human summary sentence as an anchor point and find the two document sentences that are most similar to it based on ROUGE; together they become an instance containing a pair of source sentences and their summary. This method allows us to identify a large quantity of candidate fusion instances.
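A sketch of this heuristic is given below: for each human summary sentence, we rank the document sentences by ROUGE similarity and take the top two as source sentences. We assume the rouge-score Python package here; the exact ROUGE variant and tie-breaking used by Lebanoff et al. (2019a) may differ.

```python
from rouge_score import rouge_scorer

_scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2"], use_stemmer=True)

def similarity(summary_sent, doc_sent):
    # Combine unigram and bigram F1 as a simple similarity score.
    scores = _scorer.score(summary_sent, doc_sent)
    return scores["rouge1"].fmeasure + scores["rouge2"].fmeasure

def candidate_fusion_instance(summary_sent, doc_sents):
    # Use the summary sentence as an anchor; return the two most similar
    # document sentences together with the anchor as a candidate instance.
    ranked = sorted(doc_sents,
                    key=lambda s: similarity(summary_sent, s),
                    reverse=True)
    return ranked[0], ranked[1], summary_sent
```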
Annotations are performed in two stages. Stage one removes all spurious pairs generated by the heuristic, i.e., pairs whose summary sentence is not a valid fusion of the corresponding two source sentences. Human annotators are given a pair of sentences and a summary sentence and are asked whether the latter represents a valid fusion; the pairs identified as valid fusions by a majority of annotators move on to stage two. Stage two identifies the corresponding regions in the sentences. As shown in Figure 1, annotators are given a pair of sentences and their summary and are tasked with highlighting the corresponding regions across the sentences. They must also choose one of the five PoC types (repetition, pronominal, nominal, common-noun referencing, and event triggers) for each set of corresponding regions.
We use Amazon Mechanical Turk, allowing only workers with a 95% approval rate and at least 5,000 accepted tasks. To ensure high quality annotations, we first run a qualification round of 10 tasks. Workers performing sufficiently well on these tasks were allowed to annotate the whole dataset. For task one, 2,200 instances were evaluated and 621 of them were filtered out. In total, we annotate points of correspondence for 1,599 instances, taken from 1,174 documents. Similar to (Hardy et al., 2019), we report Fleiss' Kappa judged on each word (highlighted or not), yielding substantial inter-annotator agreement (κ=0.58) for annotating points of correspondence. We include a reference to the original article that each instance was taken from, thus providing context for each instance. Figure 2 shows statistics of PoC occurrence frequencies and the distribution of different PoC types. A majority of sentence pairs have one or two points of correspondence. Only a small percentage (6.5%) do not share a PoC. A qualitative analysis shows that these sentences often have an implicit discourse relationship, e.g., "The two men speak. Scott then gets out of the car, again, and runs away." In this example, there is no clear portion of text that is shared between the sentences; rather, the connection lies in the fact that one event happens after the other. Most of the PoC are a flavor of coreference (pronominal, nominal, or common-noun). Few are exact repetition. Further, we find that only 38% of points of correspondence in the sentence pair share any words (lemmatized). This makes identifying them automatically challenging, requiring a deeper understanding of what connects the two sentences.

Resolving Coreference

Coreference resolution (Ng, 2017) is similar to the task of identifying points of correspondence. Thus, a natural question we ask is how well state-of-the-art coreference resolvers can be adapted to this task. If coreference resolvers can perform reasonably well on PoC identification, then these resolvers can be used to extract PoC annotations to potentially enhance sentence fusion. If they perform poorly, coreference performance results can indicate areas of improvement for future work on detecting points of correspondence. In this paper, we compare three coreference resolvers on our dataset, provided by open-source libraries: Stanford CoreNLP (Manning et al., 2014), SpaCy (Honnibal and Montani, 2017), and AllenNLP (Gardner et al., 2017). We base our evaluation on the standard metric used for coreference resolution, the B-CUBED algorithm (Bagga and Baldwin, 1998), with some modifications. Each resolver is run on an input pair of sentences to obtain multiple clusters, each representing an entity (e.g., Johnny Kemp) containing multiple mentions (e.g., Johnny Kemp; he; the singer) of that entity. More than one cluster can be detected by the coreference resolver, as additional entities may exist in the given sentence pair (e.g., Johnny Kemp and the police). Similarly, in Section 2, human annotators identified multiple PoC clusters, each representing a point of correspondence containing one mention from each sentence. We evaluate how well the resolver-detected clusters compare to the human-detected clusters (i.e., PoCs). If a resolver cluster overlaps both mentions of a gold-standard PoC, then this resolver cluster is classified as a hit; any resolver cluster that does not overlap both PoC mentions is a miss. Using this metric, we can calculate precision, recall, and F1 scores based on correctly/incorrectly identified tokens from the outputs of each resolver.
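The hit/miss criterion can be sketched as follows. Here a mention is a (start, end) token span, a resolver cluster is a list of mention spans, and a gold PoC is a pair of mentions (one per source sentence); the paper's token-level precision/recall accounting is simplified to cluster level in this sketch, and all function names are ours.

```python
def overlaps(a, b):
    # Two half-open token spans (start, end) overlap if they share a token.
    return a[0] < b[1] and b[0] < a[1]

def is_hit(cluster, poc):
    # A resolver cluster is a hit iff it overlaps BOTH gold PoC mentions.
    return all(any(overlaps(m, gold) for m in cluster) for gold in poc)

def poc_scores(clusters, gold_pocs):
    recall_hits = sum(any(is_hit(c, poc) for c in clusters)
                      for poc in gold_pocs)
    precision_hits = sum(any(is_hit(c, poc) for poc in gold_pocs)
                         for c in clusters)
    r = recall_hits / len(gold_pocs) if gold_pocs else 0.0
    p = precision_hits / len(clusters) if clusters else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```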
The results are presented in Table 3. The three resolvers exhibit similar performance, but their scores on identifying points of correspondence are less than satisfactory. The SpaCy resolver has the highest precision (59.2%) and Stanford CoreNLP achieves the highest F1-score (35.3%). We observe that existing coreference resolvers can struggle to perform the high-level reasoning that humans use to determine what connects two sentences. Next, we look deeper into which PoC types these resolvers struggle with, presenting the recall scores of the resolvers split by PoC correspondence type. Event coreference poses the most difficulty by far, which is understandable as coreference resolution focuses on entities rather than events; more work on detecting event coreference can bring significant improvements in PoC identification. Common-noun coreference also poses a challenge, in part because names and pronouns give strong clues as to the relationships between mentions, while common-noun relationships lack these clues and are thus more difficult to identify.
Sentence Fusion
Truly effective summarization will only be achievable when systems have the ability to fully recognize points of correspondence between sentences. It remains to be seen whether such knowledge can be acquired implicitly by neural abstractive systems through joint content selection and generation. We next conduct an initial study to assess neural abstractive summarizers on their ability to perform sentence fusion to merge two sentences into a summary sentence. The task represents an important, atomic unit of abstractive summarization, because a long summary is still generated one sentence at a time (Lebanoff et al., 2019b).
We compare two best-performing abstractive summarizers. Pointer-Generator uses an encoder-decoder architecture with attention and a copy mechanism (See et al., 2017). Transformer adopts a decoder-only Transformer architecture similar to that of (Radford et al., 2019), where a summary is decoded one word at a time conditioned on the source sentences and the previously generated summary words; we use the same number of heads, layers, and units per layer as BERT-base (Devlin et al., 2018). In both cases, the summarizer was trained on about 100k instances derived from the train split of CNN/DailyMail, using the same heuristic as described in Section 2, without PoC annotations. The summarizer is then tested on our dataset of 1,599 fusion instances and evaluated using standard metrics (Lin, 2004). We also report how often each summarizer actually draws content from both sentences (%Fuse), rather than taking content from only one sentence: a generated sentence counts as a fusion if it contains at least two non-stopword tokens from each sentence that are not already present in the other sentence. Additionally, we include a Concat-Baseline that creates a fusion sentence by simply concatenating the two source sentences.

Table 4: ROUGE scores of neural abstractive summarizers on the sentence fusion dataset. We also report the percentage of output sentences that are indeed fusion sentences (%Fuse).
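The %Fuse check can be sketched as follows; tokenization and the stopword list are our choices (NLTK), as the paper does not pin them down at this level of detail.

```python
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

STOP = set(stopwords.words("english"))

def content_tokens(sentence):
    return {t.lower() for t in word_tokenize(sentence)
            if t.isalnum() and t.lower() not in STOP}

def is_fusion(output_sent, src1, src2, k=2):
    # Fusion = at least k non-stopword tokens drawn from EACH source
    # sentence that do not already appear in the other source sentence.
    out = content_tokens(output_sent)
    t1, t2 = content_tokens(src1), content_tokens(src2)
    return len((t1 - t2) & out) >= k and len((t2 - t1) & out) >= k
```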
The results according to the ROUGE evaluation (Lin, 2004) are presented in Table 4. Sentence fusion appears to be a challenging task even for modern abstractive summarizers. Pointer-Generator has been shown to perform strongly on abstractive summarization, but it is less effective on sentence fusion and in other highly abstractive settings (Narayan et al., 2018). Transformer significantly outperforms the other methods, in line with previous findings. We qualitatively examine system outputs. Table 1 presents fusions generated by these models and exemplifies the need for infusing models with knowledge of points of correspondence. In the first example, Pointer-Generator incorrectly conflates Robert Downey Jr. with the journalist asking questions. Similarly, in the second example, Transformer states that the police officer refused to leave when it was actually Richards. Had the models explicitly recognized the points of correspondence in the sentences, namely that the journalist is a separate entity from Robert Downey Jr. and that Richards is separate from the police officer, a more accurate summary could have been generated.
Related Work
Uncovering hidden correspondences between sentences is essential for producing proper summary sentences. A number of recent efforts select important words and sentences from a given document, then let the summarizer attend to the selected content to generate a summary (Gehrmann et al., 2018; Hsu et al., 2018; Chen and Bansal, 2018; Putra et al., 2018; Lebanoff et al., 2018; Liu and Lapata, 2019). These systems are largely agnostic to sentence correspondences, which can have two undesirable consequences. If only a single sentence is selected, it can be impossible for the summarizer to produce a fusion sentence from it. Moreover, if non-fusible textual units are selected, the summarizer is forced to fuse them into a summary sentence, yielding output summaries that often fail to keep the original meaning intact. Therefore, in this paper we have investigated the correspondences between sentences to gain an understanding of sentence fusion.
Establishing correspondence between sentences goes beyond finding common words. Humans can fuse sentences sharing few or no common words if they can find other types of correspondence. Fusing such disparate sentences poses a serious challenge for automated fusion systems (Marsi and Krahmer, 2005; Filippova and Strube, 2008; McKeown et al., 2010; Elsner and Santhanam, 2011; Thadani and McKeown, 2013b; Mehdad et al., 2013; Nayeem et al., 2018). These systems rely on common words to derive a connected graph from input sentences or subject-verb-object triples (Moryossef et al., 2019); when there are no common words in the sentences, they tend to break down.
There has been a lack of annotated datasets and guidelines for sentence fusion. Few studies have investigated the types of correspondence between sentences such as entity and event coreference. Evaluating sentence fusion systems requires not only novel metrics (Zhao et al., 2019;Zhang et al., 2020;Durmus et al., 2020;Wang et al., 2020) but also high-quality ground-truth annotations. It is therefore necessary to conduct a first study to look into cues humans use to establish correspondence between disparate sentences.
We envision sentence correspondence to be related to text cohesion and coherence, which help establish correspondences between two pieces of text. Halliday and Hasan (1976) describe text cohesion as cohesive devices that tie two textual elements together, identifying five categories of cohesion: reference, lexical cohesion, ellipsis, substitution, and conjunction. In contrast, coherence is defined in terms of discourse relations between textual elements, such as elaboration, cause, or explanation. Whereas previous work studied discourse relations (Geva et al., 2019), this paper instead focuses on text cohesion, which plays a crucial role in generating proper fusion sentences. Our dataset contains pairs of source and fusion sentences collected from news editors in a natural environment. The work is particularly meaningful to text-to-text and data-to-text generation (Gatt and Krahmer, 2018), which demand robust modules to merge disparate content.

(McKeown et al., 2010)
[S1] Palin actually turned against the bridge project only after it became a national symbol of wasteful spending.
[S2] Ms. Palin supported the bridge project while running for governor, and abandoned it after it became a national scandal.
[Fusion] Palin turned against the bridge project after it became a national scandal.

DiscoFuse (Geva et al., 2019)
[S1] Melvyn Douglas originally was signed to play Sam Bailey.
[S2] The role ultimately went to Walter Pidgeon.
[Fusion] Melvyn Douglas originally was signed to play Sam Bailey, but the role ultimately went to Walter Pidgeon.

Points of Correspondence Dataset (Our Work)
[S1] The bodies showed signs of torture.
[S2] They were left on the side of a highway in Chilpancingo, about an hour north of the tourist resort of Acapulco in the state of Guerrero.
[Fusion] The bodies of the men, which showed signs of torture, were left on the side of a highway in Chilpancingo.
Conclusion
In this paper, we describe a first effort at annotating points of correspondence between disparate sentences. We present a benchmark dataset comprising the documents, source and fusion sentences, and human annotations of points of correspondence between sentences. The dataset fills a notable gap between coreference resolution and summarization research. Our findings shed light on the importance of modeling points of correspondence and suggest important future directions for sentence fusion.
Figure 2: Statistics of PoC occurrences and types.

Table 2: Types of sentence correspondences. Text cohesion can manifest itself in different forms.

Table 3: Results of various coreference resolvers on successfully identifying inter-sentence points of correspondence (PoC), and recall scores of these resolvers split by PoC correspondence type.

Table 5: Comparison of sentence fusion datasets.
1 https://github.com/ucfnlp/points-of-correspondence
Acknowledgments

We are grateful to the anonymous reviewers for their helpful comments and suggestions. This research was supported in part by the National Science Foundation grant IIS-1909603.
Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In The First International Conference on Language Resources and Evaluation Workshop on Linguistics Coreference, volume 1, pages 563-566. Granada.

Regina Barzilay and Kathleen R. McKeown. 2005. Sentence fusion for multidocument news summarization. Computational Linguistics, 31(3).

Regina Barzilay, Kathleen R. McKeown, and Michael Elhadad. 1999. Information fusion in the context of multi-document summarization. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL).

Asli Celikyilmaz, Antoine Bosselut, Xiaodong He, and Yejin Choi. 2018. Deep communicating agents for abstractive summarization. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL).

Yen-Chun Chen and Mohit Bansal. 2018. Fast abstractive summarization with reinforce-selected sentence rewriting. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL).

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805.

Esin Durmus, He He, and Mona Diab. 2020. FEQA: A question answering evaluation framework for faithfulness assessment in abstractive summarization. In Proceedings of the Annual Conference of the Association for Computational Linguistics (ACL).

Micha Elsner and Deepak Santhanam. 2011. Learning to fuse disparate sentences. In Proceedings of the ACL Workshop on Monolingual Text-To-Text Generation.

Tobias Falke, Leonardo F. R. Ribeiro, Prasetya Ajie Utama, Ido Dagan, and Iryna Gurevych. 2019. Ranking generated summaries by correctness: An interesting but challenging application for natural language inference. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL).

Katja Filippova and Michael Strube. 2008. Sentence fusion via dependency graph compression. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).

Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew Peters, Michael Schmitz, and Luke S. Zettlemoyer. 2017. AllenNLP: A deep semantic natural language processing platform.

Albert Gatt and Emiel Krahmer. 2018. Survey of the state of the art in natural language generation: Core tasks, applications and evaluation. Journal of Artificial Intelligence Research, 61:65-170.

Sebastian Gehrmann, Yuntian Deng, and Alexander M. Rush. 2018. Bottom-up abstractive summarization. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).

Mor Geva, Eric Malmi, Idan Szpektor, and Jonathan Berant. 2019. DISCOFUSE: A large-scale dataset for discourse-based sentence fusion. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL).

Michael A. K. Halliday and Ruqaiya Hasan. 1976. Cohesion in English. English Language Series. Longman Group Ltd.

Hardy Hardy, Shashi Narayan, and Andreas Vlachos. 2019. HighRES: Highlight-based reference-less evaluation of summarization. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3381-3392, Florence, Italy. Association for Computational Linguistics.

Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. To appear.

Wan-Ting Hsu, Chieh-Kai Lin, Ming-Ying Lee, Kerui Min, Jing Tang, and Min Sun. 2018. A unified model for extractive and abstractive summarization using inconsistency loss. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL).

Wojciech Kryściński, Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Neural text summarization: A critical evaluation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).
Analyzing sentence fusion in abstractive summarization. Logan Lebanoff, John Muchovej, Franck Dernoncourt, Soon Doo, Seokhwan Kim, Walter Kim, Fei Chang, Liu, Proceedings fo the EMNLP 2019 Workshop on New Frontiers in Summarization. fo the EMNLP 2019 Workshop on New Frontiers in SummarizationLogan Lebanoff, John Muchovej, Franck Dernoncourt, Doo Soon Kim, Seokhwan Kim, Walter Chang, and Fei Liu. 2019a. Analyzing sentence fusion in ab- stractive summarization. In Proceedings fo the EMNLP 2019 Workshop on New Frontiers in Sum- marization.
Scoring sentence singletons and pairs for abstractive summarization. Logan Lebanoff, Kaiqiang Song, Franck Dernoncourt, Soon Doo, Seokhwan Kim, Walter Kim, Fei Chang, Liu, Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). the Annual Meeting of the Association for Computational Linguistics (ACL)Logan Lebanoff, Kaiqiang Song, Franck Dernoncourt, Doo Soon Kim, Seokhwan Kim, Walter Chang, and Fei Liu. 2019b. Scoring sentence singletons and pairs for abstractive summarization. In Proceedings of the Annual Meeting of the Association for Compu- tational Linguistics (ACL).
Adapting the neural encoder-decoder framework from single to multi-document summarization. Logan Lebanoff, Kaiqiang Song, Fei Liu, Proceedings of the Conference on Empirical Methods in Natural Language Processing. the Conference on Empirical Methods in Natural Language ProcessingEMNLPLogan Lebanoff, Kaiqiang Song, and Fei Liu. 2018. Adapting the neural encoder-decoder framework from single to multi-document summarization. In Proceedings of the Conference on Empirical Meth- ods in Natural Language Processing (EMNLP).
ROUGE: a package for automatic evaluation of summaries. Chin-Yew Lin, Proceedings of ACL Workshop on Text Summarization Branches Out. ACL Workshop on Text Summarization Branches OutChin-Yew Lin. 2004. ROUGE: a package for au- tomatic evaluation of summaries. In Proceedings of ACL Workshop on Text Summarization Branches Out.
Generating wikipedia by summarizing long sequences. J Peter, Mohammad Liu, Etienne Saleh, Ben Pot, Ryan Goodrich, Łukasz Sepassi, Noam Kaiser, Shazeer, Sixth International Conference on Learning Representations. ICLRPeter J Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Łukasz Kaiser, and Noam Shazeer. 2018. Generating wikipedia by summariz- ing long sequences. In Sixth International Confer- ence on Learning Representations (ICLR).
Hierarchical transformers for multi-document summarization. Yang Liu, Mirella Lapata, Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). the Annual Meeting of the Association for Computational Linguistics (ACL)Yang Liu and Mirella Lapata. 2019. Hierarchical trans- formers for multi-document summarization. In Pro- ceedings of the Annual Meeting of the Association for Computational Linguistics (ACL).
Event coreference resolution: A survey of two decades of research. Jing Lu, Vincent Ng, Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI). the 27th International Joint Conference on Artificial Intelligence (IJCAI)Jing Lu and Vincent Ng. 2018. Event coreference reso- lution: A survey of two decades of research. In Pro- ceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI).
The Stanford CoreNLP natural language processing toolkit. Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J Bethard, David Mc-Closky, Association for Computational Linguistics (ACL) System Demonstrations. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David Mc- Closky. 2014. The Stanford CoreNLP natural lan- guage processing toolkit. In Association for Compu- tational Linguistics (ACL) System Demonstrations, pages 55-60.
Explorations in sentence fusion. Erwin Marsi, Emiel Krahmer, Proceedings of the ACL Workshop on Computational Approaches to Semitic Languages. the ACL Workshop on Computational Approaches to Semitic LanguagesErwin Marsi and Emiel Krahmer. 2005. Explorations in sentence fusion. In Proceedings of the ACL Work- shop on Computational Approaches to Semitic Lan- guages.
Time-efficient creation of an accurate sentence fusion corpus. Kathleen Mckeown, Sara Rosenthal, Kapil Thadani, Coleman Moore, Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL). the North American Chapter of the Association for Computational Linguistics (NAACL)Kathleen McKeown, Sara Rosenthal, Kapil Thadani, and Coleman Moore. 2010. Time-efficient creation of an accurate sentence fusion corpus. In Proceed- ings of the North American Chapter of the Associa- tion for Computational Linguistics (NAACL).
Abstractive meeting summarization with entailment and fusion. Yashar Mehdad, Giuseppe Carenini, Frank W Tompa, Raymond T Ng, Proceedings of the 14th European Workshop on Natural Language Generation. the 14th European Workshop on Natural Language GenerationYashar Mehdad, Giuseppe Carenini, Frank W. Tompa, and Raymond T. NG. 2013. Abstractive meeting summarization with entailment and fusion. In Pro- ceedings of the 14th European Workshop on Natural Language Generation.
Step-by-Step: Separating planning from realization in neural data-to-text generation. Amit Moryossef, Yoav Goldberg, Ido Dagan, Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL). the North American Chapter of the Association for Computational Linguistics (NAACL)Amit Moryossef, Yoav Goldberg, and Ido Dagan. 2019. Step-by-Step: Separating planning from realization in neural data-to-text generation. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL).
Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. Shashi Narayan, Shay B Cohen, Mirella Lapata, 10.18653/v1/D18-1206Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsShashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! topic-aware convolutional neural networks for ex- treme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1797-1807, Brussels, Bel- gium. Association for Computational Linguistics.
Abstractive unsupervised multidocument summarization using paraphrastic sentence fusion. Tanvir Mir Tafseer Nayeem, Yllias Ahmed Fuad, Chali, Proceedings of the International Conference on Computational Linguistics (COL-ING). the International Conference on Computational Linguistics (COL-ING)Mir Tafseer Nayeem, Tanvir Ahmed Fuad, and Yl- lias Chali. 2018. Abstractive unsupervised multi- document summarization using paraphrastic sen- tence fusion. In Proceedings of the International Conference on Computational Linguistics (COL- ING).
Machine learning for entity coreference resolution: A retrospective look at two decades of research. Vincent Ng, Proceedngs of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI). eedngs of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI)Vincent Ng. 2017. Machine learning for entity corefer- ence resolution: A retrospective look at two decades of research. In Proceedngs of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI).
Incorporating topic sentence on neural news headline generation. Jan Wira Gotama Putra, Hayato Kobayashi, Nobuyuki Shimizu, Jan Wira Gotama Putra, Hayato Kobayashi, and Nobuyuki Shimizu. 2018. Incorporating topic sen- tence on neural news headline generation.
Language models are unsupervised multitask learners. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Get to the point: Summarization with pointergenerator networks. Abigail See, J Peter, Christopher D Liu, Manning, Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). the Annual Meeting of the Association for Computational Linguistics (ACL)Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer- generator networks. In Proceedings of the Annual Meeting of the Association for Computational Lin- guistics (ACL).
Sentence compression with joint structural inference. Kapil Thadani, Kathleen Mckeown, Proceedings of CoNLL. CoNLLKapil Thadani and Kathleen McKeown. 2013a. Sen- tence compression with joint structural inference. In Proceedings of CoNLL.
Supervised sentence fusion with single-stage inference. Kapil Thadani, Kathleen Mckeown, Proceedings of the International Joint Conference on Natural Language Processing. the International Joint Conference on Natural Language ProcessingIJCNLPKapil Thadani and Kathleen McKeown. 2013b. Super- vised sentence fusion with single-stage inference. In Proceedings of the International Joint Conference on Natural Language Processing (IJCNLP).
Asking and answering questions to evaluate the factual consistency of summaries. Alex Wang, Kyunghyun Cho, Mike Lewis, Proceedings of the Annual Conference of the Association for Computational Linguistics (ACL). the Annual Conference of the Association for Computational Linguistics (ACL)Alex Wang, Kyunghyun Cho, and Mike Lewis. 2020. Asking and answering questions to evaluate the fac- tual consistency of summaries. In Proceedings of the Annual Conference of the Association for Com- putational Linguistics (ACL).
Bertscore: Evaluating text generation with bert. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, Yoav Artzi, International Conference on Learning Representations. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Eval- uating text generation with bert. In International Conference on Learning Representations.
MoverScore: Text generation evaluating with contextualized embeddings and earth mover distance. Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Christian M Meyer, Steffen Eger, 10.18653/v1/D19-1053Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsWei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Chris- tian M. Meyer, and Steffen Eger. 2019. MoverScore: Text generation evaluating with contextualized em- beddings and earth mover distance. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Interna- tional Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 563-578, Hong Kong, China. Association for Computational Lin- guistics.
| [
"https://github.com/ucfnlp/"
] |
[
"LocTex: Learning Data-Efficient Visual Representations from Localized Textual Supervision",
"LocTex: Learning Data-Efficient Visual Representations from Localized Textual Supervision"
] | [
"Zhijian Liu \nMIT\nToyota Research Institute\n\n",
"Simon Stent \nMIT\nToyota Research Institute\n\n",
"Jie Li \nMIT\nToyota Research Institute\n\n",
"John Gideon \nMIT\nToyota Research Institute\n\n",
"Song Han Mit \nMIT\nToyota Research Institute\n\n"
] | [
"MIT\nToyota Research Institute\n",
"MIT\nToyota Research Institute\n",
"MIT\nToyota Research Institute\n",
"MIT\nToyota Research Institute\n",
"MIT\nToyota Research Institute\n"
] | [] | Computer vision tasks such as object detection and semantic/instance segmentation rely on the painstaking annotation of large training datasets. In this paper, we propose LocTex that takes advantage of the low-cost localized textual annotations (i.e., captions and synchronized mouseover gestures) to reduce the annotation effort. We introduce a contrastive pre-training framework between images and captions, and propose to supervise the cross-modal attention map with rendered mouse traces to provide coarse localization signals. Our learned visual features capture rich semantics (from free-form captions) and accurate localization (from mouse traces), which are very effective when transferred to various downstream vision tasks. Compared with ImageNet supervised pre-training, LocTex can reduce the size of the pre-training dataset by 10× or the target dataset by 2× while achieving comparable or even improved performance on COCO instance segmentation. When provided with the same amount of annotations, LocTex achieves around 4% higher accuracy than the previous state-of-theart "vision+language" pre-training approach on the task of PASCAL VOC image classification. | 10.1109/iccv48922.2021.00217 | [
"https://arxiv.org/pdf/2108.11950v1.pdf"
] | 237,304,114 | 2108.11950 | a1e880ecb6bb2634ca77950a5396d385b2bbd9eb |
LocTex: Learning Data-Efficient Visual Representations from Localized Textual Supervision
Zhijian Liu
MIT
Toyota Research Institute
Simon Stent
MIT
Toyota Research Institute
Jie Li
MIT
Toyota Research Institute
John Gideon
MIT
Toyota Research Institute
Song Han
MIT
Toyota Research Institute
LocTex: Learning Data-Efficient Visual Representations from Localized Textual Supervision
Computer vision tasks such as object detection and semantic/instance segmentation rely on the painstaking annotation of large training datasets. In this paper, we propose LocTex that takes advantage of low-cost localized textual annotations (i.e., captions and synchronized mouse-over gestures) to reduce the annotation effort. We introduce a contrastive pre-training framework between images and captions, and propose to supervise the cross-modal attention map with rendered mouse traces to provide coarse localization signals. Our learned visual features capture rich semantics (from free-form captions) and accurate localization (from mouse traces), which are very effective when transferred to various downstream vision tasks. Compared with ImageNet supervised pre-training, LocTex can reduce the size of the pre-training dataset by 10× or the target dataset by 2× while achieving comparable or even improved performance on COCO instance segmentation. When provided with the same amount of annotations, LocTex achieves around 4% higher accuracy than the previous state-of-the-art "vision+language" pre-training approach on the task of PASCAL VOC image classification.
Introduction
The tremendous success of deep learning in computer vision can be credited in part to the existence of large annotated datasets, such as ImageNet [7,48]. However, acquiring high-quality annotations is usually very expensive and time-consuming, especially for dense, pixel-wise labeling tasks. For instance, segmenting instances in a single image from the COCO dataset takes more than 10 minutes on average [30]*.
*: 70k hours / 320k images = 0.22 hours/image = 13 minutes/image.

Pre-training plus fine-tuning is a widely-adopted solution to reduce the need for costly annotations. In the computer vision community, a convolutional neural network (CNN) backbone is first pre-trained to perform image classification on ImageNet. Then, the learned features can be transferred to other downstream tasks by fine-tuning on the target dataset. Over the past few years, this paradigm has enabled state-of-the-art performance on many computer vision tasks, including object detection [47], semantic segmentation [32] and instance segmentation [21].
Though effective, ImageNet pre-training has its caveats. (i) Its annotations (i.e., 1000-class labels) are very expensive to acquire. Annotating ImageNet is not as easy as it seems because differentiating among a fine-grained class taxonomy requires expert knowledge, which makes it hard to scale up or repeat. (ii) It is not as effective for those tasks that are more sensitive to localization than classification. As ImageNet pre-training only takes object existence into consideration, its learned visual representations are supposed to be invariant to different object locations. Some recent research [20] has demonstrated competitive performance on object detection and instance segmentation with models trained from scratch.
To solve (i), researchers have explored pre-training backbone networks with coarse, freely available labels, such as metadata and hashtags [24]. There has also been increased attention in self-supervised pre-training that learns visual representations from unlabeled images [19,5,17]. Some of them have been successfully scaled up to hundreds of millions or even billions of images [19]. However, (ii) remains unsolved as they usually rely on some low-level visual cues (e.g., color, texture) and lack semantic understanding. In addition to this, (iii) self-supervised pre-training methods tend to be trained with prohibitively long schedules to exploit their potential. For instance, the recent approach of BYOL [17] requires 170 TPU days for a single training run.
In this paper, we propose LocTex to learn data-efficient visual representations using localized textual supervision, which is composed of free-form captions associated with synchronized mouse traces (see Figure 1a). This form of annotation can be easily acquired from non-expert workers, leading to (i) lower cost and better scalability. Technically, we propose to bridge the vision and language modalities with contrastive learning and supervise the cross-modal attention map with rendered mouse traces, providing (ii) coarse localization information that improves the performance of localization-sensitive downstream tasks. Finally, our method requires (iii) a similar amount of training time as ImageNet pre-training: it can be trained in under a day with 8 GPUs.
After the pre-training, we transfer our learned feature representations to various downstream vision tasks, including image classification, object detection and instance segmentation. Compared with the ImageNet supervised pre-training, our proposed LocTex can reduce the size of the pre-training dataset by 10× or the target dataset by 2× while achieving comparable or better performance on the COCO instance segmentation. With the same amount of annotations, our LocTex achieves around 4% higher accuracy than the previous stateof-the-art "vision+language" pre-training approach [8] on the PASCAL VOC image classification.
Related Work
Supervised Pre-Training. Much of the recent success of computer vision can be attributed to the richness of image features learned via supervised training. ImageNet pre-training, in which image features are first learned through supervised image classification on ImageNet [7] before being used on downstream tasks, is a highly popular model initialization method [15,10]. However, this approach has limitations which have become increasingly evident as the variety of downstream tasks and the types of new annotated data have increased dramatically over the years [45,20,62,52]. Unsupervised Learning. To go beyond the scale of ImageNet in terms of supervised learning is expensive. Hence, it has become increasingly popular to seek methods for representation learning that can meet or exceed ImageNet supervised pre-training without the need for labelled data. An important direction in unsupervised learning is "self-supervised" learning, in which models are trained on pretext tasks where training labels can be obtained from the raw or augmented input. Common pretext tasks include predicting context [9], solving jigsaws [35], predicting rotation [14], colorization [59], and inpainting [38]. Generative models have also been widely used in representation learning to reconstruct the distribution of the input data, such as restricted Boltzmann machines (RBMs) [27], autoencoders [26] and generative adversarial networks (GANs) [12,11]. Recent explorations investigate intra-dataset patterns and feature discrimination, including clustering [3,4] and contrastive learning [19,17,5,56].
Vision & Language. Pre-training methods in natural language processing have witnessed tremendous improvement over the past few years [6,40,44,2]. Efforts trying to use the text in visual representation learning have never stopped. Early research tried to predict captions or text from associated images [42]. Srivastava et al. [53] applied Boltzmann machine to capture multi-modal features. Some works treat text or language as weak supervisory signals for vision and explore the trade-off between label quality and data scale. Li et al. [28] train visual models on YFCC-100M [55] using user-provided tags. JFT-300M [54] is also used for visual pre-training with automatic-generated web signals. More recent works like ICMLM [49], VirTex [8] and ConVIRT [60] try to leverage the novel pre-training approaches developed in NLP, such as masked language modeling and transformerbased modeling. A concurrent work of us [43] has explored contrastive learning between image and text at the web scale. In this work, we explore further in this direction with a focus on learning localization-aware features for spatially-sensitive tasks such as object detection and segmentation.
Annotation Efficiency. A key goal of our work is to learn powerful representations on data which can be acquired at a low annotation cost. Recent explorations focusing on efficient labeling [1,31] and active learning [50] approach this problem by placing the model in the loop in order to increase the information gain per unit of annotation effort. We approach it from an alternative angle by looking at widely available, natural sources of human supervision, which are already cheap or free to acquire. Our work centers around the Localized Narratives dataset [41], which complements verbal image descriptions with synchronized mouse-over gestures containing noisy spatial cues. We demonstrate that designing a system to leverage such multi-modal cues can provide significant performance benefits for visual representation learning with minimal annotation overhead.
Method
In this section, we introduce our approach of visual representation learning from localized textual supervision. We present an overview of our LocTex framework in Figure 2. We pre-train the visual backbone (as well as the textual backbone) using contrastive learning over positive and negative image-caption pairs. We propose to make use of the accompanying mouse trace annotations to provide coarse learning signals for localization. After pre-training, we transfer the learned visual backbone to other downstream vision tasks (e.g., classification, detection and segmentation).

Figure 2: Overview of our data-efficient visual representation learning framework (LocTex). We first use a pair of visual and textual backbones to extract the features from the image and caption. We then apply the contrastive loss to pull the features from positive pairs together and push those from negative pairs apart. Finally, we compute the cross-modal attention map between visual and textual features and provide supervision using the rendered attention from the associated mouse trace.
Annotations
In the computer vision community, ImageNet [7] was commonly used to pre-train visual backbone networks. However, annotating over 1000 fine-grained classes is very costly and cannot be easily scaled up [49]. In this paper, we propose to employ localized textual annotations (also known as localized narratives [41]), as this form of annotation is relatively cheap to acquire and offers semantically dense information. The annotation we use consists of a caption with a synchronized mouse trace:
Caption. A caption is a free-form annotation resulting from annotators being asked to describe the content of the image using natural language. As illustrated in Figure 1a, the information captured in the caption is semantically dense: i.e., the objects in the image as well as their attributes and relative spatial relationships. The underlying rich semantic information could potentially benefit a variety of downstream vision tasks. On the other hand, the cost of this form of annotation is much lower compared with other dense labeling [30], since it is a very natural task for humans to do and does not require the annotator to have extensive training or domain knowledge. Some recent datasets [41] adopt a two-stage data collection pipeline: they first ask the annotators to describe the image verbally and then apply either speech recognition or manual transcription to generate the final caption. From this collection protocol, the starting and ending timestamps of each token can also be obtained (which will be used to synchronize the mouse trace with the caption).

Synchronized Mouse Trace. Compared with drawing a sequence of bounding boxes or instance masks, logging the mouse gestures of the subject while they describe the image is an easier and more natural way for human annotators to specify object locations. It can be acquired almost for free in the caption annotation pipeline since the annotators only need to additionally hover their mouse over the region being described. Though the localization and semantic correspondence are too coarse for these annotations to be directly used for tasks like object detection, they do capture rich information about "what is where" at a high level.
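To make the synchronization concrete, the sketch below shows one way a token's mouse-trace segment could be turned into a coarse binary mask for later supervision. It is a minimal illustration: the function name, the assumption that traces come as timestamped points in normalized image coordinates, and the small thickening radius are our own choices, not part of the dataset's API.

```python
import numpy as np

def render_trace_mask(trace, t_start, t_end, R=7, radius=0.05):
    """Rasterize the mouse-trace segment synchronized with one token.

    trace: iterable of (t, x, y) with x, y in [0, 1] (normalized image
    coordinates); t_start / t_end: the token's timestamps. Returns an
    R x R boolean mask marking the grid cells covered by the trace,
    slightly thickened by `radius` to account for annotation jitter.
    """
    mask = np.zeros((R, R), dtype=bool)
    for t, x, y in trace:
        if t_start <= t <= t_end:
            x0 = max(0, int((x - radius) * R))
            x1 = min(R - 1, int((x + radius) * R))
            y0 = max(0, int((y - radius) * R))
            y1 = min(R - 1, int((y + radius) * R))
            mask[y0 : y1 + 1, x0 : x1 + 1] = True
    return mask
```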
Backbones
Given an image and its corresponding caption, we first apply two separate neural networks to extract their features.

Visual Backbone. The visual backbone takes the raw image as input and outputs a feature map that contains the semantic information. This is also the only component that we will transfer to other downstream vision tasks. Theoretically, we can choose any convolutional neural network as our visual backbone. Following recent representation learning papers [19,5,8], we adopt a standard ResNet-50 [22] as our feature extractor throughout this paper to facilitate fair comparison. We remove the last linear classification layer and the preceding global average pooling layer to keep the spatial dimension. Thus, the output feature map from the visual backbone will have size 2048 × R × R, where R is the output resolution (which is 1/32 of the input resolution).

Textual Backbone. The textual backbone encodes the input caption into a feature vector that captures the meaning of each word token. In this paper, we adopt a Transformer [57] architecture as our textual backbone. Specifically, we implement a 4-layer 1024-wide model with 16 self-attention heads. Similar to Desai et al. [8], we replace the activation function from ReLU to GELU [23] for its better empirical performance. We refer the readers to Vaswani et al. [57] for more architectural details. Before feeding the caption in, we first tokenize it into a lower-cased byte pair encoding (BPE) [51] with a vocabulary size of 10K. This results in almost no out-of-vocabulary unknown ([UNK]) tokens in our experiments. We also pad the input sequence with start of sequence ([SOS]) and end of sequence ([EOS]) tokens to mark the boundary. The output feature vector from the textual backbone has size 1024 × L, where L is the caption length after tokenization.
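The following PyTorch sketch shows one way the two backbones could be instantiated. The hyperparameters (4 layers, width 1024, 16 heads, GELU activation, 10K vocabulary, 60-token captions) follow the text, while the class names and the learned positional embedding are illustrative assumptions; it also assumes a recent PyTorch that supports `batch_first` in the Transformer modules.

```python
import torch
import torch.nn as nn
import torchvision

class VisualBackbone(nn.Module):
    """ResNet-50 with the global pooling and linear head removed,
    so the output keeps its spatial dimensions: 2048 x R x R."""
    def __init__(self):
        super().__init__()
        resnet = torchvision.models.resnet50()
        self.stem = nn.Sequential(*list(resnet.children())[:-2])

    def forward(self, image):            # image: (N, 3, H, W)
        return self.stem(image)          # (N, 2048, H/32, W/32)

class TextualBackbone(nn.Module):
    """4-layer, 1024-wide Transformer encoder with 16 heads and GELU."""
    def __init__(self, vocab_size=10_000, width=1024, max_len=60):
        super().__init__()
        self.token_embed = nn.Embedding(vocab_size, width)
        self.pos_embed = nn.Parameter(torch.zeros(max_len, width))
        layer = nn.TransformerEncoderLayer(
            d_model=width, nhead=16, activation="gelu", batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)

    def forward(self, tokens):           # tokens: (N, L) token ids
        x = self.token_embed(tokens) + self.pos_embed[: tokens.size(1)]
        return self.encoder(x)           # (N, L, 1024)
```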
Contrastive Loss
Given a batch of feature pairs extracted from visual and textual backbones, $\{(x_{V,k}, x_{T,k}) \mid 1 \le k \le n\}$ (where $n$ is the batch size), we transform each feature with a global average pooling and a single 1024-dimension fully-connected layer. The resulting visual and textual features are denoted $y_{V,k}$ and $y_{T,k}$ (both of size 1024). Now, a straightforward way to guide the pre-training is to match $y_{V,k}$ and $y_{T,k}$ in the feature space using a simple L1/L2 regression loss. However, this will lead to a collapsed solution where all features are projected to the same location in the feature space [17].
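Before turning to the contrastive objective, a minimal sketch of the projection just described (the module name is ours; `in_dim` would be 2048 for the visual branch and 1024 for the textual one):

```python
import torch.nn as nn

class ProjectionHead(nn.Module):
    """Global average pooling followed by one 1024-d linear layer."""
    def __init__(self, in_dim):
        super().__init__()
        self.fc = nn.Linear(in_dim, 1024)

    def forward(self, x):
        # Visual input: (N, C, R, R) -> pool over space;
        # textual input: (N, L, C)   -> pool over tokens.
        if x.dim() == 4:
            x = x.flatten(2).mean(dim=-1)
        else:
            x = x.mean(dim=1)
        return self.fc(x)
```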
Motivated by Chen et al. [5], we encourage the visual and textual backbones to not only project the features of matching image-caption pairs closer together but also push the features of non-matching pairs further apart. Concretely, there are $n^2$ image-caption pairs $\{(y_{V,i}, y_{T,j}) \mid 1 \le i, j \le n\}$ in total, among which only the $n$ pairs with $i = j$ are positive (as they correspond to the same data) while the remaining $(n^2 - n)$ pairs are negative. We use the InfoNCE loss [36] to pull the positive pairs together and push the negative pairs apart to guide the pre-training (see Figure 2b):
$$\mathcal{L}_C = -\sum_{i=1}^{n} \log \frac{\exp(\mathrm{sim}(y_{V,i}, y_{T,i})/\tau)}{\sum_{j \neq i} \exp(\mathrm{sim}(y_{V,i}, y_{T,j})/\tau)}, \qquad (1)$$

where $\mathrm{sim}(u, v) = u^\top v / (\|u\|_2 \|v\|_2)$ is the cosine similarity between two vectors, and $\tau$ denotes a temperature parameter (which is set to 0.1 in our experiments).
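A minimal PyTorch sketch of Eq. (1), assuming `y_v` and `y_t` are the pooled, projected features of a batch of matching pairs; for simplicity it uses the standard cross-entropy form of InfoNCE, whose denominator also includes the positive pair:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(y_v, y_t, tau=0.1):
    """InfoNCE over an (n, n) matrix of cosine similarities;
    the diagonal entries are the positive image-caption pairs."""
    y_v = F.normalize(y_v, dim=-1)   # unit-normalize so that the dot
    y_t = F.normalize(y_t, dim=-1)   # product equals cosine similarity
    logits = y_v @ y_t.t() / tau     # (n, n), scaled by temperature
    targets = torch.arange(y_v.size(0), device=y_v.device)
    return F.cross_entropy(logits, targets, reduction="sum")
```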
Discussions. Contrastive learning is not the only way to bridge the vision and language modalities. It is also possible to use one modality as input and the other as output to form a supervised learning problem: i.e., image captioning [8] (image to caption) and image synthesis [46] (caption to image). However, the supervised formulation has a worse empirical performance than the contrastive one (see our comparisons with VirTex [8] in Table 4). Similar observations have also been made in our concurrent work [43]. We conjecture that this is because the relationship between image and caption is not one-to-one (i.e., a single image may be described in a multitude of ways, and vice versa). In this case, the encoding process (many-to-one projection) might be much easier than the decoding process (one-to-many projection).
Localization Loss
Applying the contrastive loss over the global visual and textual features (after average pooling) provides the model with a holistic sense of what objects are in the image. However, the model might not be able to associate each instance with its spatial location. This greatly limits its effectiveness when transferred to localization-sensitive downstream tasks (e.g., object detection, instance segmentation). This is where the mouse trace can be helpful, since it provides coarse localization information about the instances: i.e., where the annotators position their mouse when describing an object.
We provide an overview of our localization loss in Figure 2c. We first transform the visual and textual features linearly using a 1024-dimension fully-connected layer. Note that we do not apply the global average pooling, as we need to keep the spatial dimension to learn localization. Thus, the transformed visual feature $z_{V,k}$ will have a size of 1024 × R × R, and the transformed textual feature $z_{T,k}$ will have a size of 1024 × L. We then compute the image-caption attention map as the normalized product between the two feature maps:
$$M_k = \mathrm{softmax}(z_{T,k}^\top \times z_{V,k}), \qquad (2)$$
which will then have the size of L × R × R. In $M_k$, each location $(i, x, y)$ corresponds to (the probability of) whether the object described by token $i$ is located in the region $(x, y)$. We observe that this may be supervised using the mouse trace. Given the fact that the mouse trace is synchronized with the caption, we first temporally crop the part of the mouse trace sequence that corresponds to each token in the caption. We then render the covered region of the cropped mouse trace into a binary mask with resolution R. Finally, we stack the rendered masks of all tokens together to generate the rendered attention $\hat{M}_k$. Since it has the same format and definition as the image-caption attention map $M_k$, we can use it to provide supervision on $M_k$ with a normalized L2 regression loss:
$$\mathcal{L}_L = \sum_{k=1}^{n} \left\| M_k / \|M_k\|_2 - \hat{M}_k / \|\hat{M}_k\|_2 \right\|_2^2. \qquad (3)$$
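The sketch below implements Eqs. (2)-(3) at a single scale in PyTorch. It assumes the softmax normalizes each token's attention over the R×R spatial locations and that `m_hat` holds the rendered mouse-trace attention; the small epsilon guarding the norms is our addition. The full model applies this loss at more than one scale, as discussed next.

```python
import torch
import torch.nn.functional as F

def localization_loss(z_t, z_v, m_hat, eps=1e-6):
    """Eq. (2): cross-modal attention; Eq. (3): normalized L2 loss.

    z_t:   (n, 1024, L)    projected textual features
    z_v:   (n, 1024, R, R) projected visual features
    m_hat: (n, L, R, R)    attention rendered from the mouse traces
    """
    n = z_v.size(0)
    attn = torch.einsum("ncl,ncxy->nlxy", z_t, z_v)       # (n, L, R, R)
    m = F.softmax(attn.flatten(2), dim=-1).view_as(attn)  # per-token softmax
    m = m / (m.flatten(1).norm(dim=-1).view(n, 1, 1, 1) + eps)
    m_hat = m_hat / (m_hat.flatten(1).norm(dim=-1).view(n, 1, 1, 1) + eps)
    return ((m - m_hat) ** 2).sum()
```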
Discussions. The feature map from the visual backbone usually has a low resolution (i.e., R = 7 if the input size is 224×224), which largely limits the precision of the provided localized supervision. Therefore, we additionally apply the localization loss to the second-to-last visual feature map (which has a 2× larger resolution) to provide supervision at a finer scale. The losses computed at different resolutions are added together with equal weights. We note that using even higher resolutions than this leads to worse performance (see Table 5). A likely reason for this is that the mouse trace annotations from the datasets we use, and mouse traces in general, are intrinsically noisy. In this case, downsampling to a lower resolution removes some of the spurious correlations that otherwise might be introduced, at the cost of weaker overall supervision.
Implementation Details
Pre-Training Dataset. We use Localized Narratives [41] as our pre-training dataset as it provides large-scale localized textual annotations: i.e., it annotates the whole COCO [30], Flickr30k [58], ADE20k [61], and part of the Open Images [25] datasets with high-quality captions and synchronized mouse traces. In this paper, we present two variants of our LocTex: (1) a smaller one trained only with COCO images (which contains 118K images) to have a fair comparison with other "vision+language" baselines, and (2) a larger one trained on both COCO and Open Images data (which contains 809K annotated images) to test the scalability of our method. To compensate for the resolution difference with COCO, we downsample the images from Open Images by 0.6×.

Data Augmentation. We apply standard data augmentations for images: i.e., random crop, random horizontal flip, color jittering and normalization. Following Desai et al. [8], we swap the 'left' and 'right' tokens in the caption when applying the horizontal flip. We limit the caption length to 60 tokens for computational efficiency: we pad the caption with zeros if its length is shorter than 60 or otherwise crop a random 60-token subsequence from the caption, which empirically helps to reduce overfitting.

Loss Functions. We assign $\mathcal{L}_C$ and $\mathcal{L}_L$ equal weights as they are roughly of the same magnitude. The contrastive loss is computed locally at each GPU to save communication bandwidth. This reduces the number of negative pairs, while empirically, the convergence rate is not affected.

Training Details. We pre-train the visual and textual backbones with a batch size of 1024 for 600 epochs. Optimization is carried out using stochastic gradient descent with a momentum of 0.9 and a weight decay of $10^{-4}$. We use a learning rate of 0.4 for the visual backbone, 0.002 for the textual backbone, and 0.4 for the linear transforms. We adopt the cosine learning rate decay schedule [33] with a linear warmup for the first 20 epochs. We distribute the training over 8 NVIDIA V100 GPUs with synchronized batch normalization [39] and automatic mixed-precision [34] (from PyTorch [37]). The total training time is around 18 hours.
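For concreteness, a sketch of the optimizer and schedule described above; the helper names and the per-epoch `LambdaLR` usage are our own simplification.

```python
import math
import torch

def build_optimizer(visual, textual, heads):
    # Per-module learning rates from the text: 0.4 for the visual
    # backbone and the linear transforms, 0.002 for the textual one.
    return torch.optim.SGD(
        [{"params": visual.parameters(), "lr": 0.4},
         {"params": textual.parameters(), "lr": 0.002},
         {"params": heads.parameters(), "lr": 0.4}],
        lr=0.4, momentum=0.9, weight_decay=1e-4)

def lr_scale(epoch, total=600, warmup=20):
    # Linear warmup for 20 epochs, then cosine decay to zero.
    if epoch < warmup:
        return (epoch + 1) / warmup
    progress = (epoch - warmup) / (total - warmup)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

# optimizer = build_optimizer(visual, textual, heads)
# scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_scale)
```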
Experiments
In this section, we evaluate the effectiveness of our pre-trained visual backbone in various downstream vision tasks, including image classification, object detection and instance segmentation. The textual backbone also learns useful representations and can be transferred to language-related tasks in principle, though exploration of this is left as future work.

Table 1: Results of linear classification on PASCAL VOC. Our LocTex outperforms supervised and self-supervised pre-training on ImageNet by 4-13% while using around 60% of the annotated images. It also achieves 4% higher accuracy than previous vision+language pre-training methods when trained with a similar amount of annotations.
Image Classification
Following the common protocol [19], we first evaluate our method by linear classification on frozen features: the pre-trained visual backbone is fixed and used to extract features.

Setup. We adopt the PASCAL VOC dataset [13] for our linear evaluation. We first resize all images to 224×224 and feed them into our pre-trained ResNet-50. We then apply global average pooling to extract 2048-dimensional image features. We train a separate SVM for each class on VOC07 trainval and report the mean AP (over 20 classes) on the test split. Following VirTex [8], we train multiple SVMs with different cost values from {0.01, 0.1, 1, 10} and select the best SVM based on a 3-fold cross-validation.

Baselines. We compare our method with three sets of baselines: (1) ImageNet pre-training (IN-Sup) that pre-trains the model on the large-scale ImageNet dataset to perform image classification, (2) self-supervised learning [19,29,4] that pre-trains the model with a large number of unlabeled images, and (3) vision+language pre-training [49,8] that pre-trains the model to perform image captioning on COCO.

Results. Training the classifier from scratch yields a rather poor performance because the size of PASCAL VOC is fairly small (with only 9K images). The widely-adopted ImageNet pre-training (IN-Sup) significantly boosts the accuracy; however, it requires massive annotations over a fine-grained class hierarchy. From Table 1, our LocTex achieves 1.6% higher accuracy than IN-Sup with only 10% of annotated images, or 5.8% higher with around 60% of annotated images. The superior performance comes from the use of cheap yet semantically dense localized caption annotations. Previous vision+language pre-training methods [49,8] were trained with five captions per image, which increases the annotation cost by 5×. To have a fair comparison, we compare our LocTex with the 1-caption VirTex [8]. With the same amount of pre-training images, our LocTex achieves more than 4% higher accuracy, which is contributed by the better optimization formulation and the additional localization supervision (see Table 4). We are also on par in terms of annotation cost, as the extra mouse trace annotations we use can be acquired almost for free during the caption annotation [41]. We further scale our method up with the additional Open Images data. With a similar amount of annotated images, our LocTex outperforms the full VirTex by 4% and ICMLM [49] by around 5%.
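The evaluation protocol above can be reproduced with scikit-learn along the following lines; a sketch assuming `features` holds the frozen 2048-d vectors and `labels` the binary presence labels for one of the 20 classes (VOC is multi-label, so one SVM is trained per class).

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def fit_class_svm(features, labels):
    """Pick the SVM cost by 3-fold cross-validation, then refit."""
    best_svm, best_score = None, -np.inf
    for cost in (0.01, 0.1, 1.0, 10.0):
        svm = LinearSVC(C=cost)
        score = cross_val_score(svm, features, labels, cv=3).mean()
        if score > best_score:
            best_svm, best_score = svm, score
    return best_svm.fit(features, labels)
```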
Object Detection
We then evaluate our method by transferring our learned visual backbone to object detection. Here, the entire backbone is fine-tuned along with the object detector.

Setup. We adopt the PASCAL VOC dataset [13] for our detection evaluation. Different from the linear evaluation setup, we also include VOC12 trainval into the training set. For the object detector, we use Faster-RCNN [47] with a ResNet-C4 backbone. Following He et al. [19], we add an extra batch normalization right after the visual backbone. We fine-tune all models for 24K iterations with linear warmup. The learning rate is initialized with 0.02 and decayed by 10× at the 18K and 22K iterations. We distribute the training across 8 GPUs with a total batch size of 16.

Baselines. Apart from the full ImageNet pre-training baseline, we also scale it down with fewer pre-training images (10%, 20%, 50%) to match the annotation cost of VirTex and ours. We follow the same training protocol as torchvision and keep the number of epochs the same; otherwise, these scaled-down baselines would not be trained under a comparable budget.
With a similar amount of pre-training images, our LocTex surpasses VirTex and IN-Sup by a large margin (1.5-2.2% and 4.8-11.3%, respectively). Remarkably, LocTex matches the full ImageNet pre-training performance with more than 10× fewer annotated images. The scaled-up version of Loc-Tex further pushes the AP to 56.9%, which is 2.6% higher than the full ImageNet pre-training performance despite using 1.6× fewer images.
Instance Segmentation
Finally, we evaluate our method on instance segmentation under the limited data setting. Similar to the detection setup, we train the visual backbone end-to-end with the model.

Setup. We use the COCO dataset [30] (with train2017 and val2017 split) for segmentation evaluation. We choose Mask R-CNN [21] with a ResNet-C4 backbone as our model. We add the extra batch normalization to the visual backbone. Following the 2× schedule, we train the model with 180K iterations. The learning rate is initialized with 0.02, multiplied by 0.01 at the 120K and 160K iterations. As we target the limited data setting, we sample a subset from COCO images (e.g., 10%, 20%, 50%, 100%) for fine-tuning, and shrink the training schedule proportionally to the dataset size.

Figure 3: LocTex learns visual representations in a data-efficient manner: on COCO instance segmentation, it is able to reduce the pre-training dataset by 10× without loss of accuracy or reduce the target dataset by 2× with 5% higher accuracy.
Results. From Table 2, our proposed LocTex consistently outperforms VirTex and IN-Sup under all data settings. We refer the readers to the appendix for detailed results under 50% and 100% data settings. In Figure 3, we further investigate our method from the data efficiency perspective:
-Pre-Training Data. Our LocTex can reduce the number of pre-training images by 10× without loss of accuracy.
With the same amount of pre-training data, it outperforms IN-Sup by more than 8% in terms of AP. This translates into 2.4× and 6.4× lower annotation cost compared to pre-training with classification and segmentation labels. We refer the readers to the appendix for more details.
-Fine-Tuning Data. The end goal of a good pre-training is to reduce the amount of costly annotation in the target task. Our LocTex reduces the target dataset by 2× while achieving more than 5% higher accuracy than training from scratch. Under extremely limited data settings (i.e., 5-10%), the improvement is even more significant: 2.7% and 7.2% AP boost compared with ImageNet pre-training and random initialization, with 2× data reduction.
Analysis
In this section, we provide some additional analysis of our model to understand how it works and might be improved.

Effectiveness of $\mathcal{L}_C$ and $\mathcal{L}_L$. The two major components of LocTex are the formulation of contrastive learning ($\mathcal{L}_C$) and the use of low-cost mouse trace annotations ($\mathcal{L}_L$). Thus, we present some ablation analysis by removing one or both from our framework. VirTex [8] can be seen as our model removing both $\mathcal{L}_C$ and $\mathcal{L}_L$ (and using a predictive loss instead). From Table 4, both components contribute positively to our final performance on downstream vision tasks. We also observe that the contrastive loss is particularly effective on image classification, while the localization loss is more useful on instance segmentation. This phenomenon is well aligned with our design, where $\mathcal{L}_C$ provides holistic semantic information and $\mathcal{L}_L$ offers detailed localization supervision.

Table 4: The formulation of contrastive learning ($\mathcal{L}_C$) and the use of low-cost mouse trace annotations ($\mathcal{L}_L$) are important to the effectiveness of our visual representation learning.
Learned Image-Caption Attention Map. Although we focus on transferring the learned visual backbone to different downstream tasks, it is still fairly important to understand what the model actually learns from the pre-training stage.
In Figure 4, we visualize the learned image-caption attention map. We refer the readers to the appendix for more examples.
Here, the visualized attention maps are predicted from the second-to-last visual feature map (with a resolution of 14×14). We resize the attention maps to 224×224 and then overlay them on the images. As shown in Figure 4, the learned attention maps have fairly accurate localization and are able to capture occluded and distant instances (e.g., cars and buildings in the third example). This explains why our model transfers well to detection and segmentation. As the model is trained with open-vocabulary textual annotations, it is able to learn rich visual concepts, some of which (e.g., helmets and goggles) are not even covered by the COCO categories. This shows great potential for fine-grained localization tasks (such as LVIS [18]), which is left as future work. Another interesting direction is to study the zero-shot transfer performance to detection/segmentation based on the learned attention maps.
Resolution of Mouse Trace Supervision. We explore different resolutions for mouse trace supervision. As shown in Table 5, applying supervision at both 1× and 2× resolutions works the best across different downstream tasks. 1× alone does not work well due to its low resolution (7×7) while 4× introduces too much noise from the mouse trace annotation.
Performance "Upper Bound". We further investigate the performance upper bound of our method given perfect mouse trace annotations. We synthesize the clean image-caption attention maps using ground-truth COCO segmentation masks. Specifically, we first match each token in the caption with the COCO category names (as well as their synonyms and parent classes). For each token with a match, we compute the intersection-over-union (IoU) between its corresponding mouse trace and every instance mask in the matching category. Finally, we aggregate these instance masks with high IoUs as our oracle image-caption attention maps. The IoU matching process helps to deal with the case where the token in the caption only refers to one of the multiple instances from the category. Note that we apply the oracle supervision still at 1× and 2× scale to mimic the coarse resolution of real mouse traces. In Table 5, our LocTex trained with oracle supervision further pushes the performance by 2% on the PASCAL VOC image classification and 1% on the COCO instance segmentation.
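To illustrate the IoU matching step, a sketch of how one token's oracle attention could be assembled; the 0.5 IoU threshold and the `name_to_cat` lookup (category names plus synonyms and parent classes) are illustrative assumptions, since the text does not fix these details.

```python
import numpy as np

def oracle_attention(token, trace_mask, instance_masks, name_to_cat,
                     iou_thresh=0.5):
    """Aggregate COCO instance masks that match both the token's
    category and its mouse-trace region (by IoU).

    trace_mask: binary mask rendered from the token's trace segment;
    instance_masks: list of (category_id, binary_mask) ground truths.
    """
    cat = name_to_cat.get(token)
    if cat is None:
        return None  # token does not name any COCO category
    attn = np.zeros_like(trace_mask, dtype=np.float32)
    for cat_id, mask in instance_masks:
        if cat_id != cat:
            continue
        inter = np.logical_and(mask, trace_mask).sum()
        union = np.logical_or(mask, trace_mask).sum()
        if union > 0 and inter / union >= iou_thresh:
            attn = np.maximum(attn, mask.astype(np.float32))
    return attn
```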
Training Efficiency. In addition to annotation efficiency, our LocTex pre-training is also very efficient in computation. Its training cost is comparable with ImageNet supervised pre-training. We refer the readers to the appendix for details.
Conclusion
In this paper, we introduce LocTex to reduce the practical costs of data annotation by taking advantage of low-cost, multi-modal labels including free-form captions and mouse-over gestures. We adopt a cross-modal contrastive pre-training approach using images and captions, and propose to supervise the image-caption attention map via rendered mouse traces to provide coarse localization information. Extensive experiments verify that the visual features learned through our approach can be effectively and efficiently transferred to downstream tasks including image classification, object detection, and instance segmentation. We hope that our approach will provide a simple but strong baseline and inspire future exploration into how to extract more value from rich yet noisy localized textual annotations.
A.1. Annotation Cost
Table A1: Comparison with different forms of annotations (columns: Annotations, Cost (hours), mAP; e.g., multi-label classification from multi-class labels costs 11.1K hours [30,8]).

We provide quantitative comparisons between various forms of annotations in Table A1. Here, all annotation costs are estimated on the 118K training images of COCO. Compared with classification and segmentation annotations, localized narratives are cheaper (lower cost) and offer richer information (higher accuracy). It is worth noting that the annotation cost of localized narratives is dominated by manual transcription. Thus, its cost can be further reduced by 3.6× with an accurate automatic speech recognition system. Annotating over larger sets of classes can be even more challenging, since memorizing and learning to distinguish over a large class hierarchy (e.g., 1000 classes for ImageNet) is very costly.
A.2. Training Efficiency
Most pre-training efforts focus on improving performance and data efficiency. However, some of them suffer from extremely long training time. We conduct a thorough analysis on the training efficiency across the following pre-training methods:
-LocTex. We include three variants of our LocTex. One of them is trained only with COCO images for 600 epochs. The remaining two are trained on both COCO and Open Images data, one for 300 epochs and the other one for 500 epochs.
-SwAV/SEER [4,16]. We include three variants of SwAV as well. Two of them are trained with ImageNet data for 200 and 800 epochs, respectively. The other one is trained with 1 billion uncurated Instagram images (for one epoch).
-MoCo [19]. We include the baseline MoCo self-supervised pre-training on ImageNet for 200 epochs.
-IN-Sup. We follow the standard ImageNet supervised pre-training (as in torchvision) for 90 epochs.
The training time is measured on 8 NVIDIA V100 GPUs for all pre-training methods except SwAV/SEER. For SwAV/SEER, we directly adopt the statistics from their official GitHub repository † and paper [16]: the two trained with ImageNet data use 32 NVIDIA V100 GPUs, and the scaled-up one uses 512 NVIDIA V100 GPUs.
Results. Our LocTex pre-training is more efficient than ImageNet supervised pre-training while achieving more than 1% higher accuracy. Compared with self-supervised pre-training, the improvement is more significant: i.e., we achieve the same linear classification accuracy (92.6) as the scaled-up SwAV with 227× less training time. In terms of data efficiency, the scaled-up SwAV requires 1 billion unlabeled images from Instagram while our LocTex makes use of 809K images with low-cost localized textual annotations. This suggests that supervised pre-training can be much more computationally efficient, and its annotation cost is also affordable if the form of annotation is chosen carefully (which is discussed in the main paper).

† https://github.com/facebookresearch/swav
A.3. Additional Results on COCO Instance Segmentation
In Table A2, we provide additional results of instance segmentation on COCO under 50% and 100% data settings. The experimental setup is exactly the same as in the main paper where we scale the training schedule linearly with the dataset size.
Results. The overall trend is the same as the one under 10% and 20% settings (which is presented in the main paper). With the same amount of labelled images, our LocTex always achieves the highest performance compared with ImageNet supervised pre-training and VirTex pre-training methods. It achieves more than 1% higher (box or mask) AP than the full ImageNet supervised pre-training baseline while using only half of the annotated images. Under the 100% data setting, our LocTex is able to push the instance segmentation performance from 40.2% to 41.4% in box AP and from 35.0% to 35.8% in mask AP.
A.4. Additional Visualizations of Learned Image-Caption Attention Maps
In Figure A1, we provide additional visualizations of learned image-caption attention maps on COCO. Note that these visualizations are picked randomly from COCO val2017. The only constraint we apply is to ensure that there are at least six entities in the image for visualization purposes. We observe that the learned attention map is able to localize the instances fairly accurately, even for some small instances (e.g., cap in the second example), which is especially useful for the downstream object detection and instance segmentation tasks.
Figure 1: LocTex pre-trains the visual CNN backbone with (a) localized textual annotations, which consist of free-form captions associated with synchronized mouse traces. With our contrastive and localization losses, the model learns (b) rich semantics and accurate localization. This is very useful when transferred to (c) downstream tasks that are sensitive to localization (e.g., object detection, instance segmentation).
Figure A1: Additional visualizations of learned image-caption attention maps (on COCO val2017).
10% Training Data:

Method | # Pretrain Images | AP^bbox | AP^bbox_50 | AP^bbox_75 | AP^mask | AP^mask_50 | AP^mask_75
Random Init | - | 16.0 | 29.6 | 15.3 | 15.1 | 27.3 | 15.0
IN-Sup (10%) | 128K | 16.4 | 31.7 | 15.3 | 15.7 | 29.1 | 15.4
VirTex [8] | 118K | 23.7 | 41.9 | 24.0 | 21.5 | 38.6 | 21.4
LocTex (Ours) | 118K | 25.0 | 43.2 | 25.7 | 22.4 | 39.8 | 22.4
IN-Sup (50%) | 640K | 23.4 | 41.9 | 23.5 | 21.6 | 38.5 | 21.6
VirTex [8] | 118K(×5) | 26.3 | 44.1 | 27.1 | 23.4 | 40.9 | 23.8
LocTex (Ours) | 809K | 27.3 | 45.8 | 28.2 | 24.2 | 42.1 | 24.9
IN-Sup (100%) | 1.28M | 25.0 | 43.8 | 25.2 | 22.8 | 40.1 | 23.0

20% Training Data:

Method | # Pretrain Images | AP^bbox | AP^bbox_50 | AP^bbox_75 | AP^mask | AP^mask_50 | AP^mask_75
Random Init | - | 17.8 | 31.7 | 17.8 | 16.7 | 29.6 | 17.0
IN-Sup (10%) | 128K | 22.3 | 39.2 | 22.5 | 20.8 | 36.5 | 21.2
VirTex [8] | 118K | 28.9 | 47.4 | 30.6 | 25.6 | 44.1 | 26.2
LocTex (Ours) | 118K | 29.8 | 48.9 | 31.1 | 26.4 | 45.2 | 27.2
IN-Sup (50%) | 640K | 28.5 | 47.3 | 29.8 | 25.5 | 43.9 | 26.5
VirTex [8] | 118K(×5) | 30.7 | 49.4 | 32.3 | 27.1 | 45.9 | 27.9
LocTex (Ours) | 809K | 31.8 | 50.9 | 33.8 | 27.8 | 47.3 | 28.9
IN-Sup (100%) | 1.28M | 30.3 | 49.9 | 31.6 | 27.0 | 46.1 | 27.9
Table 2: Results of instance segmentation on COCO. Our LocTex consistently outperforms VirTex and IN-Sup under 10% and 20% data settings. We refer the readers to the appendix for detailed results under 50% and 100% data settings.
Table 3: Results of object detection on PASCAL VOC.
Table 5: Analysis of mouse trace supervision. (1) Applying supervision at too low or too high a resolution does not work well. (2) With oracle supervision, the performance is further boosted by 2% on classification and 1% on segmentation.
(Figure: linear classification accuracy on PASCAL VOC (VOC mAP, %) versus pre-training time (GPU hours, log scale) for LocTex (Ours), SwAV/SEER, MoCo, and IN-Sup; LocTex matches the 92.6 mAP of the scaled-up SwAV with a 227× reduction in training time.)
50% Training Data:

Method | # Pretrain Images | AP^bbox | AP^bbox_50 | AP^bbox_75 | AP^mask | AP^mask_50 | AP^mask_75
Random Init | - | 28.4 | 46.1 | 29.9 | 25.7 | 43.2 | 26.8
IN-Sup (10%) | 128K | 31.2 | 50.0 | 32.7 | 27.9 | 46.8 | 29.2
VirTex [8] | 118K | 35.5 | 54.8 | 37.9 | 31.1 | 51.6 | 32.7
LocTex (Ours) | 118K | 36.1 | 55.7 | 38.6 | 31.6 | 52.4 | 33.1
IN-Sup (50%) | 640K | 34.9 | 54.4 | 37.0 | 30.8 | 51.0 | 32.5
VirTex [8] | 118K(×5) | 36.6 | 55.9 | 39.3 | 31.9 | 52.6 | 33.6
LocTex (Ours) | 809K | 37.5 | 57.2 | 40.4 | 32.7 | 54.0 | 34.7
IN-Sup (100%) | 1.28M | 36.3 | 56.1 | 38.8 | 31.9 | 52.7 | 33.7

100% Training Data:

Method | # Pretrain Images | AP^bbox | AP^bbox_50 | AP^bbox_75 | AP^mask | AP^mask_50 | AP^mask_75
Random Init | - | 36.1 | 55.0 | 38.9 | 31.8 | 52.0 | 33.9
IN-Sup (10%) | 128K | 37.7 | 56.9 | 40.6 | 33.0 | 53.4 | 35.3
VirTex [8] | 118K | 39.8 | 59.4 | 42.6 | 34.6 | 56.1 | 36.7
LocTex (Ours) | 118K | 40.6 | 60.6 | 44.1 | 35.2 | 57.0 | 37.4
IN-Sup (50%) | 640K | 39.7 | 59.4 | 43.3 | 34.6 | 56.2 | 36.8
VirTex [8] | 118K(×5) | 40.8 | 60.5 | 44.2 | 35.2 | 57.0 | 37.6
LocTex (Ours) | 809K | 41.4 | 61.3 | 44.9 | 35.8 | 57.7 | 38.4
IN-Sup (100%) | 1.28M | 40.2 | 60.0 | 43.5 | 35.0 | 56.4 | 37.4
Table A2 :
A2Additional results of instance segmentation on COCO under 50% and 100% data settings.
[1] David Acuna, Huan Ling, Amlan Kar, and Sanja Fidler. Efficient Interactive Annotation of Segmentation Datasets with Polygon-RNN++. In CVPR, 2018.
[2] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language Models are Few-Shot Learners. In NeurIPS, 2020.
[3] Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep Clustering for Unsupervised Learning of Visual Features. In ECCV, 2018.
[4] Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal, Piotr Bojanowski, and Armand Joulin. Unsupervised Learning of Visual Features by Contrasting Cluster Assignments. In NeurIPS, 2020.
[5] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A Simple Framework for Contrastive Learning of Visual Representations. In ICML, 2020.
[6] Andrew M. Dai and Quoc V. Le. Semi-supervised Sequence Learning. In NeurIPS, 2015.
[7] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR, 2009.
[8] Karan Desai and Justin Johnson. VirTex: Learning Visual Representations from Textual Annotations. In CVPR, 2021.
[9] Carl Doersch, Abhinav Gupta, and Alexei A. Efros. Unsupervised Visual Representation Learning by Context Prediction. In ICCV, 2015.
[10] Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition. In ICML, 2014.
[11] Jeff Donahue and Karen Simonyan. Large Scale Adversarial Representation Learning. In NeurIPS, 2019.
[12] Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Olivier Mastropietro, Alex Lamb, Martin Arjovsky, and Aaron Courville. Adversarially Learned Inference. In ICLR, 2017.
[13] Mark Everingham, S. M. Ali Eslami, Luc Van Gool, Christopher K. I. Williams, John Winn, and Andrew Zisserman. The PASCAL Visual Object Classes Challenge: A Retrospective. IJCV, 2015.
[14] Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised Representation Learning by Predicting Image Rotations. In ICLR, 2018.
[15] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In CVPR, 2014.
[16] Priya Goyal, Mathilde Caron, Benjamin Lefaudeux, Min Xu, Pengchao Wang, Vivek Pai, Mannat Singh, Vitaliy Liptchinsky, Ishan Misra, Armand Joulin, and Piotr Bojanowski. Self-supervised Pretraining of Visual Features in the Wild. arXiv, 2021.
[17] Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H. Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, Rémi Munos, and Michal Valko. Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning. In NeurIPS, 2020.
[18] Agrim Gupta, Piotr Dollár, and Ross Girshick. LVIS: A Dataset for Large Vocabulary Instance Segmentation. In CVPR, 2019.
[19] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum Contrast for Unsupervised Visual Representation Learning. In CVPR, 2020.
[20] Kaiming He, Ross Girshick, and Piotr Dollár. Rethinking ImageNet Pre-training. In ICCV, 2019.
[21] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In ICCV, 2017.
[22] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In CVPR, 2016.
[23] Dan Hendrycks and Kevin Gimpel. Gaussian Error Linear Units (GELUs). arXiv, 2016.
[24] Armand Joulin, Laurens van der Maaten, Allan Jabri, and Nicolas Vasilache. Learning Visual Features from Large Weakly Supervised Data. In ECCV, 2016.
[25] Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Alexander Kolesnikov, et al. The Open Images Dataset V4: Unified Image Classification, Object Detection, and Visual Relationship Detection at Scale. IJCV, 2020.
[26] Quoc V. Le. Building High-Level Features using Large Scale Unsupervised Learning. In ICASSP, 2013.
[27] Honglak Lee, Roger Grosse, Rajesh Ranganath, and Andrew Y. Ng. Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations. In ICML, 2009.
[28] Ang Li, Allan Jabri, Armand Joulin, and Laurens van der Maaten. Learning Visual N-Grams from Web Data. In ICCV, 2017.
[29] Junnan Li, Pan Zhou, Caiming Xiong, Richard Socher, and Steven C. H. Hoi. Prototypical Contrastive Learning of Unsupervised Representations. In ICLR, 2021.
[30] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common Objects in Context. In ECCV, 2014.
[31] Huan Ling, Jun Gao, Amlan Kar, Wenzheng Chen, and Sanja Fidler. Fast Interactive Object Annotation with Curve-GCN. In CVPR, 2019.
[32] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully Convolutional Networks for Semantic Segmentation. In CVPR, 2015.
[33] Ilya Loshchilov and Frank Hutter. SGDR: Stochastic Gradient Descent with Warm Restarts. In ICLR, 2017.
[34] Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, and Wu Hao. Mixed Precision Training. In ICLR, 2018.
[35] Mehdi Noroozi and Paolo Favaro. Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles. In ECCV, 2016.
[36] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation Learning with Contrastive Predictive Coding. arXiv, 2018.
[37] Adam Paszke, Sam Gross, Francisco Massa, et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In NeurIPS, 2019.
[38] Deepak Pathak, Philipp Krähenbühl, Jeff Donahue, Trevor Darrell, and Alexei A. Efros. Context Encoders: Feature Learning by Inpainting. In CVPR, 2016.
[39] Chao Peng, Tete Xiao, Zeming Li, Yuning Jiang, Xiangyu Zhang, Kai Jia, Gang Yu, and Jian Sun. MegDet: A Large Mini-Batch Object Detector. In CVPR, 2018.
[40] Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep Contextualized Word Representations. In NAACL, 2018.
[41] Jordi Pont-Tuset, Jasper Uijlings, Soravit Changpinyo, Radu Soricut, and Vittorio Ferrari. Connecting Vision and Language with Localized Narratives. In ECCV, 2020.
[42] Ariadna Quattoni, Michael Collins, and Trevor Darrell. Learning Visual Representations Using Images with Captions. In CVPR, 2007.
[43] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning Transferable Visual Models From Natural Language Supervision. arXiv, 2021.
[44] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. JMLR, 2020.
[45] Maithra Raghu, Chiyuan Zhang, Jon Kleinberg, and Samy Bengio. Transfusion: Understanding Transfer Learning for Medical Imaging. In NeurIPS, 2019.
[46] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-Shot Text-to-Image Generation. arXiv, 2021.
[47] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. In NeurIPS, 2015.
[48] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet Large Scale Visual Recognition Challenge. IJCV, 2015.
[49] Mert Bulent Sariyildiz, Julien Perez, and Diane Larlus. Learning Visual Representations with Caption Annotations. In ECCV, 2020.
[50] Ozan Sener and Silvio Savarese. Active Learning for Convolutional Neural Networks: A Core-Set Approach. In ICLR, 2018.
[51] Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural Machine Translation of Rare Words with Subword Units. In ACL, 2016.
[52] Zhiqiang Shen, Zhuang Liu, Jianguo Li, Yu-Gang Jiang, Yurong Chen, and Xiangyang Xue. Object Detection from Scratch with Deep Supervision. TPAMI, 2019.
[53] Nitish Srivastava and Ruslan Salakhutdinov. Multimodal Learning with Deep Boltzmann Machines. In NeurIPS, 2012.
[54] Chen Sun, Abhinav Shrivastava, Saurabh Singh, and Abhinav Gupta. Revisiting Unreasonable Effectiveness of Data in Deep Learning Era. In ICCV, 2017.
[55] Bart Thomee, David A. Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Douglas Poland, Damian Borth, and Li-Jia Li. YFCC100M: The New Data in Multimedia Research. CACM, 2016.
[56] Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive Multiview Coding. In ECCV, 2019.
[57] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention Is All You Need. In NeurIPS, 2017.
[58] Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. From Image Descriptions to Visual Denotations: New Similarity Metrics for Semantic Inference over Event Descriptions. TACL, 2014.
[59] Richard Zhang, Phillip Isola, and Alexei A. Efros. Colorful Image Colorization. In ECCV, 2016.
[60] Yuhao Zhang, Hang Jiang, Yasuhide Miura, Christopher D. Manning, and Curtis P. Langlotz. Contrastive Learning of Medical Visual Representations from Paired Images and Text. arXiv, 2020.
[61] Bolei Zhou, Hang Zhao, Xavier Puig, Tete Xiao, Sanja Fidler, Adela Barriuso, and Antonio Torralba. Semantic Understanding of Scenes through the ADE20K Dataset. IJCV, 2019.
[62] Barret Zoph, Golnaz Ghiasi, Tsung-Yi Lin, Yin Cui, Hanxiao Liu, Ekin D. Cubuk, and Quoc V. Le. Rethinking Pre-training and Self-training. In NeurIPS, 2020.
UIUC BioNLP at SemEval-2021 Task 11: A Cascade of Neural Models for Structuring Scholarly NLP Contributions

Haoyang Liu, Janina Sarol (mjsarol@illinois.edu), and Halil Kilicoglu
School of Information Sciences, University of Illinois at Urbana-Champaign

Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), Bangkok, Thailand, August 5-6, 2021. DOI: 10.18653/v1/2021.semeval-1.45 (https://www.aclanthology.org/2021.semeval-1.45.pdf)
We propose a cascade of neural models that performs sentence classification, phrase recognition, and triple extraction to automatically structure the scholarly contributions of NLP publications in English. To identify the most important contribution sentences in a paper, we used a BERT-based classifier with positional features (Subtask 1). A BERT-CRF model was used to recognize and characterize relevant phrases in contribution sentences (Subtask 2). We categorized the triples into several types based on whether and how their elements were expressed in text, and addressed each type using separate BERT-based classifiers as well as rules (Subtask 3). Our system was officially ranked second in Phase 1 evaluation and first in both parts of Phase 2 evaluation. After fixing a submission error in Phase 1, our approach yielded the best results overall. In this paper, in addition to a system description, we also provide further analysis of our results, highlighting its strengths and limitations. We make our code publicly available at https://github.com/Liu-Hy/nlp-contrib-graph.
Introduction
With the deluge of scientific publications in recent years, keeping pace with the literature and managing information overload have become increasingly challenging for researchers. There is a growing need for tools that can automatically extract and structure semantic information from scientific publications to facilitate advanced approaches to information access and knowledge curation (Shen et al., 2018).
The field of natural language processing (NLP) has witnessed enormous growth in recent years with advances in deep learning, and there are increasing efforts to develop methods that extract scholarly knowledge from NLP publications (QasemiZadeh and Schumann, 2016; D'Souza and Auer, 2020b). One such effort is NLPCONTRIBUTIONS, an annotation scheme for describing the scholarly contributions in NLP publications and a corpus annotated using this annotation scheme (D'Souza and Auer, 2020b). This corpus has been proposed for training and testing of machine reading models, whose output can be integrated with the Open Research Knowledge Graph framework (ORKG) (Jaradeh et al., 2019). ORKG formalizes the research contributions of a scholarly publication as a knowledge graph, which can further be linked to other publications via the graph. The goal of the NLPContributionGraph (NCG) shared task (D'Souza et al., 2021) is to facilitate the development of machine reading models that can extract ORKG-compatible scholarly contribution information from NLP publications. The shared task consists of three subtasks (see D'Souza et al. (2021) for a more detailed description):
• Subtask 1: Identification of contribution sentences from NLP publications
• Subtask 2: Recognition of scientific terms and relations in contribution sentences
• Subtask 3: Extraction and classification of triples that pair scientific terms with relations
In this paper, we describe our contribution to the NCG shared task. We built a cascade of neural classification and sequence labeling models based on BERT (Devlin et al., 2019). For subtask 3, we characterized triples based on whether and how their elements are expressed in text, and employed different models for each category. We also explored rule-based heuristics to improve model performance. Our models had the best overall performance in the shared task (57.27%, 46.41%, and 22.28% F1 score in subtasks 1, 2, and 3, respectively). The results are encouraging for extracting scholarly contributions from scientific publications, although there is much room for improvement.

Figure 1: End-to-end system diagram.
System Overview
In this section, we first describe our data preprocessing steps. Next, we discuss our models for each subtask, and the experimental setup for our end-to-end system (Phase 1). We provide an overview of the system in Figure 1 and give examples for illustration where necessary.
Data preprocessing
The participants of the shared task were provided three kinds of input: a) plain text files of the publications converted from PDF using Grobid 1 , b) sentences and tokens identified using Stanza (Qi et al., 2020), and c) triples and source texts organized by their information units (e.g., APPROACH) in JSON format.
Identifying headers and positional information
One major preprocessing step was to identify section headers in the publications and associate them with individual sentences. For sentence classification (subtask 1), we incorporated the topmost and innermost section headers associated with a sentence into its representation. The topmost header indicates the general role that a sentence plays in the article, while the innermost header provides more specific context for the sentence. For example, one topmost/innermost header pair is EXPERIMENT/DATA SET AND EXPERIMENT SETTINGS.
In the absence of explicit section information in the input, we used rule-based heuristics to extract these headers. With the first heuristic (Heuristic1), we simply identified the sentences following blank lines in plain text files as section headers. In Heuristic2, we first identified candidate headers as sentences that contain fewer than 10 words, have the first letter capitalized, do not end with certain stopwords (by, as, in, that, or and), and do not contain question marks in the middle or end with certain punctuation (comma, colon, or full stop). Next, we determined the case format used for headers in the publication by counting the occurrences of each case format type (e.g., all uppercase: EXPERIMENTAL SETUP). Among the headers that conform to the determined case format, we distinguished topmost headers as those that contain certain lexical cues (e.g., background, method) and are shorter than 5 words. Finally, we associated each sentence with the nearest preceding topmost and innermost header.
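The candidate-header filter of Heuristic2 can be sketched as follows. This is a minimal illustration of the rules above, not the authors' exact code; the cue list for topmost headers is our assumption beyond the two examples given.

```python
HEADER_STOPWORDS = ("by", "as", "in", "that", "and")
TOPMOST_CUES = ("background", "method", "introduction", "experiment",
                "result", "conclusion", "related work")  # illustrative cues

def is_candidate_header(sentence):
    """Heuristic2 filter: short, capitalized, no sentence-like punctuation."""
    words = sentence.split()
    if not words or len(words) >= 10 or not sentence[0].isupper():
        return False
    if words[-1].lower().rstrip(".") in HEADER_STOPWORDS:
        return False
    # no question mark in the middle, no trailing comma/colon/full stop
    if "?" in sentence[:-1] or sentence.endswith((",", ":", ".")):
        return False
    return True

def is_topmost_header(header):
    """Topmost headers contain a lexical cue and are shorter than 5 words."""
    return (len(header.split()) < 5 and
            any(cue in header.lower() for cue in TOPMOST_CUES))
```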
To incorporate headers into the sentence representation, we join the topmost and innermost header together with a colon between them and refer to it as the "title" of the sentence. In the case where a sentence is directly governed by a top-level header or it is a header itself, the title consists of the topmost header only.
We characterize the position of each sentence in the document with a combination of numeric features:
• The offset of the sentence in the entire paper.
• The offset of the sentence with respect to its topmost header.
• The offset of the sentence with respect to the header extracted using Heuristic1.
Each of these offset features is divided by the number of sentences in the corresponding discourse unit (the entire paper or the section) to obtain a proportional sentence position feature. Thus, for every sentence, a total of six positional features (three offsets, three proportional sentence positions) are computed.
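A sketch of the six positional features follows; the argument names are ours, not from the shared task code, and we assume sentence-index bookkeeping as described above.

```python
def positional_features(i, paper_len, topmost_start, topmost_len,
                        h1_start, h1_len):
    """i: index of the sentence in the paper; *_start/*_len describe the
    section that begins at the topmost header and at the Heuristic1 header."""
    offsets = [i, i - topmost_start, i - h1_start]
    lengths = [paper_len, topmost_len, h1_len]
    # each offset is also normalized by the length of its discourse unit
    proportions = [o / n if n else 0.0 for o, n in zip(offsets, lengths)]
    return offsets + proportions  # six features per sentence
```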
JSON Parsing
We created two additional models to assist with triple extraction: a) a multi-class sentence classifier that labels each sentence with a single information unit and b) a binary phrase classifier that labels phrases as scientific terms vs. predicates (described below). To train these models, we extracted additional information from JSON files. First, we matched the contribution sentences with the source text in the JSON files to get the information unit labels of the sentences. Second, we aligned the phrases with the triples in the same information unit, and determined whether each phrase is a predicate or term based on its location in the triple.
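A rough sketch of this derivation is shown below; the dictionary layout of info_units is an assumption that mirrors the described JSON structure, not the official format.

```python
def derive_labels(info_units):
    """Return sentence-level unit labels and phrase-level term/predicate labels.
    info_units: {unit_name: {"sentences": [...], "triples": [(s, p, o), ...]}}."""
    sent_labels, phrase_labels = {}, {}
    for unit, data in info_units.items():
        for sent in data["sentences"]:
            sent_labels[sent] = unit          # match sentence to its unit
        for subj, pred, obj in data["triples"]:
            phrase_labels[pred] = "predicate"  # middle slot of a triple
            phrase_labels[subj] = phrase_labels[obj] = "term"
    return sent_labels, phrase_labels
```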
Subtask 1: Contribution Sentence Classification
We built a binary classifier to determine whether each sentence describes a contribution of the publication. Our analysis revealed that this decision was not simply based on the semantics of the sentence, but also its position in the document. On one hand, the section header associated with the sentence provides important clues about the role of the sentence in the larger context. For example, the header "Related Work" indicates that sentences in this section are likely to discuss the contributions of prior research. On the other hand, some parts of the documents are naturally more salient than others (e.g. title, abstract, the first few lines of each section), where authors tend to summarize the most important information. To operationalize these insights, we designed a model that captures the information about the sentence, its topmost and innermost headers as well as its position in the document, as discussed above.
We used a BERT model to encode the sentence and its title (i.e., concatenated headers) separately and concatenated their textual representation together with the positional features to obtain a sentence representation. We then fed this representation into two dense layers, and used a final softmax layer for classification ( Figure 2).
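A minimal PyTorch sketch of this architecture is given below. It is our reconstruction from the description: the shared encoder, the hidden size of 256, and the use of the [CLS] token as the text embedding are assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class ContributionSentenceClassifier(nn.Module):
    """Encode the sentence and its title separately, concatenate both
    embeddings with the six positional features, then apply two dense
    layers and a softmax."""
    def __init__(self, model_name="allenai/scibert_scivocab_uncased",
                 n_pos_feats=6, hidden=256, n_classes=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        dim = self.encoder.config.hidden_size
        self.fc1 = nn.Linear(2 * dim + n_pos_feats, hidden)
        self.fc2 = nn.Linear(hidden, n_classes)

    def encode(self, ids, mask):
        # use the [CLS] token representation as the text embedding
        out = self.encoder(input_ids=ids, attention_mask=mask)
        return out.last_hidden_state[:, 0]

    def forward(self, sent_ids, sent_mask, title_ids, title_mask, pos_feats):
        x = torch.cat([self.encode(sent_ids, sent_mask),
                       self.encode(title_ids, title_mask),
                       pos_feats], dim=-1)
        return torch.log_softmax(self.fc2(torch.relu(self.fc1(x))), dim=-1)
```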
Subtask 2: Phrase Recognition
Subtask 2 is similar to a named entity recognition (NER) task, although the participating systems were only required to extract relevant text spans and not to categorize them. One major difficulty with this subtask is that phrases do not align neatly with sentence constituents (e.g., noun phrases) and they vary greatly in length and in what counts as their boundaries (e.g. best results and our best results are both valid phrases).
For this subtask, we used a BERT-CRF model for phrase extraction and type classification (Souza et al., 2019). The raw text of the sentence is taken as the model input. A BIO scheme that incorporates phrase types (scientific term vs. predicate) is used (e.g., B-Predicate, I-Term, O). The probabilities produced by the BERT model are fed into a Conditional Random Field (CRF) layer (Lafferty et al., 2001) for end-to-end training. We note that while phrase type classification is not necessary for subtask 2, we perform it since it is useful for our subtask 3 model, described next.
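A compact sketch of such a tagger is shown below, assuming the third-party pytorch-crf package rather than the authors' implementation; num_tags=5 covers {O, B-Term, I-Term, B-Predicate, I-Predicate}.

```python
import torch.nn as nn
from transformers import AutoModel
from torchcrf import CRF  # pip install pytorch-crf

class BertCrfTagger(nn.Module):
    def __init__(self, model_name="allenai/scibert_scivocab_uncased",
                 num_tags=5):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        self.emit = nn.Linear(self.bert.config.hidden_size, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        hidden = self.bert(input_ids, attention_mask=attention_mask)
        emissions = self.emit(hidden.last_hidden_state)
        mask = attention_mask.bool()
        if tags is not None:
            # training: negative log-likelihood from the CRF layer
            return -self.crf(emissions, tags, mask=mask)
        # inference: Viterbi-decoded best tag sequence per sentence
        return self.crf.decode(emissions, mask=mask)
```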
Subtask 3: Triple Extraction
Subtask 3 involves organizing phrases into triples. In information extraction, semantic triples are typically composed of subject, predicate, and object terms each corresponding to specific textual spans. This is not always the case in this subtask. While in most cases all three terms are extracted from a single sentence, a non-negligible number of triples consist of at least one phrase that does not come from the sentence (e.g. (TASKS, has, Coreference resolution), where the subject is an information unit and the predicate is not a sentence element).
To better understand triple characteristics, we categorized them into several types based on their composition, and created separate relation classification models for each type. The triple categorization is presented in Table 1. For each type, we list their functions in information organization, their proportion to all triples, along with some examples. We note that input to the training process for triple extraction varies by the type of the triple (described for each type in Section 2.4.2).
Information Unit Classification
To aid triple extraction, we modified the binary classification model that we trained for subtask 1 to further classify contribution sentences by their information units (multi-class classification). The process of labeling contribution sentences with information units was briefly described in Subsection 2.1.2.
In analyzing the information units, we identified two special pairs (MODEL vs. APPROACH and EXPERIMENTAL-SETUP vs. HYPERPARAMETERS). In the dataset, no document contains both units of a pair. The decision of which unit to choose is made at the document level. Therefore, we merged the labels of similar units before feeding the examples into the multi-class classification model.
After classification, we used lexical rules to split these units. Our rules were based on the following observations. First, the MODEL vs. APPROACH distinction seems related to how the authors mention their work in the abstract and section headers of the paper. Second, EXPERIMENTAL-SETUP is often used instead of HYPERPARAMETERS when the hardware or the framework used in the study is specified (e.g. V100 GPU, Pytorch).
We did not recognize CODE information units using this model, since we found that such sentences can be identified with a very high accuracy using a simple rule based on presence of a URL in the sentence.
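This rule reduces to a one-line check; the exact regular expression below is our assumption, while the underlying rule (URL presence signals a CODE sentence) is as described.

```python
import re

URL_PATTERN = re.compile(r"https?://\S+|www\.\S+")

def is_code_sentence(sentence):
    # a contribution sentence containing a URL is labeled as a CODE unit
    return URL_PATTERN.search(sentence) is not None
```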
Neural models for triple extraction
We extract triples of type A, B, C and D (Table 1) by formulating them as neural relation classification tasks. All the classifiers are vanilla BERT classifiers (one linear layer followed by softmax). For each type, we observed the patterns in the training data, and addressed the most common ones. Ignoring the less frequent patterns inevitably led to a lower recall ceiling in our models.
Type A This type, in which all triple elements are mentions in the sentence, represents the majority of the triples. The corresponding model classifies the triples as a whole ("triple classification"). To the best of our knowledge, little research has been done on relation classification among three phrases; however, the Transformer model at the core of BERT is versatile enough to succeed in a wide range of tasks. As our training examples, we take every combination of a predicate and two terms in a sentence as a candidate triple, and train a model that predicts whether the three phrases constitute a triple or not. We encode the relation between three phrases by marking their boundaries in the sentence, as shown in Example 1. We use angle brackets to enclose predicates, and square brackets to enclose terms.
(1) In this paper, we explore an alternate [[ semisupervised approach ]] which does << not require >> [[ additional labeled data ]].
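Candidate generation and boundary marking for the Type A classifier can be sketched as follows; the character-offset representation of phrases and the helper names are our assumptions, and phrase spans are assumed not to overlap.

```python
from itertools import combinations

def mark(sentence, pred, t1, t2):
    """Insert << >> around the predicate and [[ ]] around the two terms.
    Each phrase is given as a (start, end) character-offset pair."""
    spans = sorted([(pred, "<< ", " >>"), (t1, "[[ ", " ]]"),
                    (t2, "[[ ", " ]]")],
                   key=lambda x: x[0][0], reverse=True)
    for (s, e), left, right in spans:  # insert right-to-left to keep offsets
        sentence = sentence[:s] + left + sentence[s:e] + right + sentence[e:]
    return sentence

def candidate_triples(predicates, terms):
    # every combination of one predicate and two terms is a candidate
    for p in predicates:
        for a, b in combinations(terms, 2):
            yield p, a, b
```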
Type B To identify triples of type B (two terms from the sentence and the relation type one of has, name, or None), we classify the relation between each pair of terms in a sentence that are not related by a type A triple. We found that 96% of these triples preserve the order of the two terms in the sentence, so we also preserve the order for extraction.

Type C Type C triples involve an information unit name as the subject along with a predicate and object from the sentence. We found that 89% of these triples take the first predicate and the first term in a sentence as their predicate and object, respectively. Furthermore, in 98% of these sentences, the first predicate precedes the first term. Therefore, we classify each sentence whose first predicate precedes the first term, to predict whether a triple of this type can be extracted from the sentence. To train this classifier, we prepend the information unit name to the sentence text with a colon in between, as in Example 2 (Model is the information unit).

(2) ...

Type D Type D triples are similar to Type C, but instead of a predicate phrase from the sentence, they involve the non-sentence predicate has. We found that 95% of these triples in the training set take the first term in the sentence as their object, and the first predicate in the sentence, if one exists, almost always follows the first term. Therefore, we classify each sentence that conforms to this pattern, to predict whether the information unit name and the first term constitute a has relation. We prepend the info unit name to the sentence in the same way as in Type C.
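Input construction for the Type C and Type D classifiers is simple to sketch; the phrase representation below (start offset plus a kind tag) is our assumption.

```python
def prepend_unit(info_unit, sentence):
    # e.g. "Model: We use cross-entropy loss to train the network ..."
    return f"{info_unit}: {sentence}"

def first_pred_precedes_first_term(phrases):
    """phrases: list of (start_offset, kind) tuples, where kind is
    either "predicate" or "term". Used to filter Type C candidates."""
    preds = [s for s, k in phrases if k == "predicate"]
    terms = [s for s, k in phrases if k == "term"]
    return bool(preds) and bool(terms) and min(preds) < min(terms)
```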
Rule-based triple extraction
Triples of type E and F are extracted using heuristic rules. For type E, the subject is always CONTRIBUTION. The predicate can be has, in which case the object is the name of an information unit. If the related information unit is CODE or RESEARCH PROBLEM, the predicate is a fixed predicate (Code or has research problem, respectively) and the object is a phrase from the sentence. These rules use the phrases and information units identified in earlier steps (Sections 2.3 and 2.4.1, respectively). We developed the following rules to extract cross-sentence triples (type F):
1. If the first sentence has a single entity, and the second sentence has at least 2 entities, we assign the entity in sentence 1 as the subject and the first and second entities in sentence 2 as the predicate and object, respectively. We add this triple to the list only if both subject and predicate are noun phrases, which prevents many false positives. We also add the corresponding triple in the form of INFO-UNIT-has-subject (e.g. MODEL-has-Encoder). In many sentences that follow this rule, the first sentence is a section header.
2. If the two sentences each contain a single term and the sentence 1 term is a substring of the sentence 2 term, or if the sentence 1 term is an acronym of the sentence 2 term, we create the following triple: term 1-name-term 2. We extract a term's acronym by combining the initials of each token in the entity, as in the sketch after this list. An example of a term pair that follows this rule is (GLUE, General Language Understanding Evaluation).
These rules are applied to consecutive sentences only. In the training set, we found 812 triples that follow these rules, 649 (80%) of which could be identified correctly using these rules.
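Rule 2 can be sketched as follows; the function names are hypothetical, but the logic (substring or initial-letter acronym match) follows the rule as stated.

```python
def acronym(term):
    # combine the initials of each token, e.g.
    # "General Language Understanding Evaluation" -> "GLUE"
    return "".join(tok[0] for tok in term.split() if tok).upper()

def name_triple(term1, term2):
    """Emit (term1, name, term2) when term1 is a substring of term2
    or term1 equals the acronym of term2; otherwise return None."""
    if term1 in term2 or term1.upper() == acronym(term2):
        return (term1, "name", term2)
    return None
```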
Experimental Setup
We implemented our models using Simple Transformers 2. We used SciBERT (Beltagy et al., 2019) as the pre-trained language model. To train our models, we used a batch size of 16, and empirically found the best learning rate for each model between 1e-5 and 1e-4. One exception was the sentence classification model (subtask 1), where we used a fixed learning rate of 1e-5 to fine-tune BERT, and a larger learning rate between 5e-5 and 1e-3 for the dense layers. We used the AdamW optimizer (Loshchilov and Hutter, 2017) and a polynomial decay scheduler with a power of 0.5. We ran the experiments on a Google Cloud VM instance, using a Tesla V100 GPU.
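A minimal Simple Transformers configuration in this spirit might look as follows. The specific learning rate shown is illustrative (the paper tunes it per model), and only commonly used library arguments are set.

```python
from simpletransformers.classification import ClassificationModel

# Sketch of one classifier; hyperparameters follow the description above.
model = ClassificationModel(
    "bert", "allenai/scibert_scivocab_uncased",
    args={
        "train_batch_size": 16,
        "learning_rate": 4e-5,       # tuned per model between 1e-5 and 1e-4
        "num_train_epochs": 10,      # illustrative value
        "overwrite_output_dir": True,
    },
)
# model.train_model(train_df)  # train_df has columns ["text", "labels"]
```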
Results
All the subtasks were evaluated on F1 scores; among them, triple extraction is evaluated by the micro-average of F1 scores over the information units. In the end-to-end evaluation (Phase 1), the participants were provided with the raw input to perform all three subtasks sequentially. We were officially ranked second in Phase 1, due to a submission error that resulted in a phrase extraction F1 of zero. Our correct submission achieved an average F1 of 49.7%, the best score among all participating teams. Table 2 shows our performance in Phase 1, and the intra-annotator agreement (IAA) on each subtask (D'Souza and Auer, 2020a).

Table 2: End-to-end performance (Evaluation Phase 1). IAA: intra-annotator agreement.
We observe that, although the performance of our system on sentence classification is lower than human performance (57.27% vs. 67.44% F1), using its own sentence predictions, our system outperforms human annotators on phrase recognition (46.41% vs. 41.84% F1) and reaches comparable performance to human annotators on triple extraction. We also note that our system generally performs better in terms of recall than precision.
We were officially ranked first in both parts of Evaluation Phase 2. In Part 1, the participants were provided with the sentence labels to conduct phrase recognition and triple extraction sequentially; in Part 2, both the sentence labels and the phrase labels were provided to extract triples. We essentially followed our Phase 1 method for phrase recognition and triple extraction, but made several attempts to improve the performance, which we discuss in Section 4. Our results in both parts of the Phase 2 evaluation are shown in Table 3. Compared to the Phase 1 evaluation, we observe a significant improvement in phrase recognition (46.41% vs. 78.57% F1) in Part 1 and in triple extraction (22.28% to 43.44% and 61.29% F1) when ground truth contribution sentences and phrases are provided.
Performance Analysis
In this section, we analyze the performance of several components of our system and compare different schemes for entity representation and triple extraction. We also discuss some possible methods for improvement based on our shared task results and follow-up experiments.
Contribution Sentence Classification
We conducted ablation experiments to evaluate the effect of features for contribution sentence classification. Table 5 shows the model performance on the 10% validation set when using all features, using either the title or the position features together with the sentence, and using the sentence only. We observe that the title information significantly improves the performance, and the position features are also helpful, to a lesser extent. Combining the title and the position features gives the best performance on contribution sentence classification.
Information Unit Classification
In Evaluation Phase 2, the ground truth labels for contribution sentences increased the performance of our base model on information unit classification from 72.93% to 76.84% F1. To further improve our method, we ensembled 45 multi-class sentence classifiers by averaging their outputs (bagging), which increased the F1 score to 78.65%. Next, we improved our rules for distinguishing the special pairs (MODEL vs. APPROACH and EXPERIMENTAL-SETUP vs. HYPERPARAMETERS) by adjusting the lexical cues after more careful observation of the data, which resulted in our final performance (82.49% F1 in Table 3).
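The averaging step can be sketched as follows, assuming each classifier exposes a Simple Transformers-style predict method that returns (labels, raw_logits); the helper name is ours.

```python
import numpy as np
from scipy.special import softmax

def ensemble_predict(models, texts):
    """Bagging: average class probabilities over the ensemble, then argmax."""
    probs = np.mean(
        [softmax(np.asarray(m.predict(texts)[1]), axis=-1) for m in models],
        axis=0)
    return probs.argmax(axis=-1)
```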
For further analysis, we evaluated the classification performance on each information unit, as shown in Table 4. The related confusion matrix is shown in Fig. 3. We observe that severe confusion mainly occurs between MODEL vs. APPROACH and EXPERIMENTAL-SETUP vs. HYPERPARAMETERS, the pairs that we grouped together in neural classification. This shows that while our sentence classification model has good accuracy, there is still much room for improvement in the rule-based differentiation of similar units. The differentiation between MODEL and APPROACH is particularly challenging. While some papers aim at discussing an abstract idea and some focus on system implementation, most papers fall in the gray area between them. We also attempted neural classification on the abstracts to deal with this issue, but the results were not satisfactory.
Phrase extraction and classification
Specific BIO vs. simple BIO As an alternative to our method of using specific BIO tags to indicate phrase types (Subsection 2.3), we also used another scheme ("simple BIO"), in which we only used (B, I, O) tags to mark phrase boundaries.
With this scheme, we first trained a BERT-CRF model to extract the phrases, and then trained a binary BERT classifier to predict phrase types. The sentence, along with the phrase marked by special tokens, is fed into the BERT model for binary classification. The performance comparison of these schemes is shown in Table 6. While both schemes are effective, simple BIO outperforms specific BIO in phrase extraction by a small margin, so we used this scheme in Evaluation Phase 2.

Table 6: Phrase extraction and classification performance. We take predicate as the positive label to calculate the F1 score for phrase classification.
The difference may be due to noise in the phrase types. Specifically, there is a good number of gerund phrases, on which the predicate-term differentiation is challenging. Moreover, in some cases, a verb phrase is used as a term to form triples. Combined with the relatively low intra-annotator agreement, these observations suggest that uncertainty and noise in the data affect the performance of the models. Note that the specific BIO scheme eliminates the need for a separate phrase classification model, making it preferable when training and inference speed is a concern.
Error analysis and improvement We investigated the wrong predictions of our phrase extraction model, and found that most errors are due to boundary detection issues. For example, in one sentence, the model predicts all layers of representation as a phrase, while all layers, of, representations are annotated as three separate phrases. The opposite situation also occurs, when the model predicts a single unit as separate phrases. Another type of boundary error occurs when the model cannot predict correctly whether to include a non-core phrase element, like an adverb, in the phrase or not (e.g., it predicts see that whereas the annotated phrase is also see that). We believe that a relaxed boundary match evaluation can be considered for this task.
We attribute these errors to the uncertainty in semantic granularity, and attempted to alleviate the problem by ensembling. We draw 12 bootstrap samples from the training data, and on each sample, we train the model and save its snapshot after each epoch from the 3rd epoch to the 10th epoch, to get a total of 96 submodels. To aggregate their predictions, we extract a phrase in a sentence only if it is predicted by more than N submodels, where N is a hyperparameter around 48. We present the result in Table 6 for comparison. We observe that ensembling noticeably improved phrase extraction (from 77.13% to 78.57% F1).
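The span-level voting can be sketched as follows; representing each predicted phrase by its (start, end) offsets is our assumption.

```python
from collections import Counter

def ensemble_phrases(predictions_per_model, n_votes=48):
    """predictions_per_model: one set of (start, end) phrase spans per
    submodel, all for the same sentence. Keep a span only if more than
    n_votes submodels predicted it."""
    votes = Counter(span for preds in predictions_per_model for span in preds)
    return {span for span, count in votes.items() if count > n_votes}
```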
Triple extraction
Triple vs. pairwise classification In addition to the triple classification method (Subsection 2.4.2) for extracting type A triples, we also used pairwise classification for this task. In this scheme, we considered every (subject, predicate, object) triple as a composition of two (predicate, term) pairs, or "candidate pairs", and used a neural model to predict whether the two phrases in each pair are associated. After prediction, we reconstructed triples from the predicted pairs using rules. If a predicate is predicted to be associated with two terms, we combine them into a triple while preserving the order of the two terms in the sentence (subject first). If one predicate is associated with more than two terms, we only extract the triples in which the predicate is located between the two terms in the sentence. With only a few exceptions, we confirmed the effectiveness of these reconstruction rules; in other words, the performance of the pairwise scheme depends mainly on the classification accuracy on candidate pairs. We compared the performance of the two schemes for type A triple extraction on the 10% validation set. We also attempted to address the imbalance of class labels resulting from both schemes by downsampling and class weight adjustment. In the pairwise classification scheme (Table 7),
Type-specific performance We also evaluated our deep learning methods for the extraction of the four types of triples, as shown in Table 9 Whereas our models for Type A, C, and D perform generally well, our model for Type B is far less accurate. Type B is a little special among the four types in that it requires the prediction of relation types. The type has is more difficult to predict than name, because the sentence often lacks semantic clues about the belonging or inclusion relationship between the two terms. A plausible idea is to incorporate has into the input, but it is difficult to do so without breaking the grammatical integrity of the sentence. We leave this improvement for future work.
Coordination in triple extraction A common problem we observed in our triple extraction models is the failure to account for coordination between terms. Example 3 shows a sentence with the terms in bold, and the two type C triples associating them. Our model only extracts the first triple, and misses the second.
(3) The MoE consists of a number of experts, each a simple feed-forward neural network, and a trainable gating network which selects a sparse combination of the experts to process each input.
(APPROACH, consists of, number of experts)
(APPROACH, consists of, trainable gating network)
We attempted to address this issue in postprocessing, and used the Stanza dependency parser (Qi et al., 2020) to detect coordination of words in phrases. If one coordinated phrase is used in a triple, we generated a parallel triple by replacing that phrase with its coordinated counterpart. While this method improved recall (from 57.57% to 58.41%), it also led to precision errors (from 65.15% to 61.77%), so its overall effect was negative (from 61.13% to 60.04% F1). We plan to refine this approach in future work.
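Detecting the coordinated words with Stanza reduces to reading the conj dependency relation; the following sketch shows this step only (the triple-duplication logic is omitted).

```python
import stanza

nlp = stanza.Pipeline("en", processors="tokenize,pos,lemma,depparse")

def coordinated_pairs(sentence_text):
    """Return (head, conjunct) word pairs linked by the 'conj' relation."""
    doc = nlp(sentence_text)
    pairs = []
    for sent in doc.sentences:
        for word in sent.words:
            if word.deprel == "conj" and word.head > 0:
                head = sent.words[word.head - 1]  # heads are 1-indexed
                pairs.append((head.text, word.text))
    return pairs
```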
Conclusion
We developed a system to generate structured representations of research contributions described in NLP publications in a manner compatible with the ORKG framework, achieving the top performance in the NCG shared task. We combined a cascade of state-of-the-art BERT-based classification and sequence labeling models with rule-based methods. In particular, we proposed a novel approach for triple extraction, where we tackled triples with different characteristics using different relation classification methods. We also explored various alternatives to the components in our end-to-end system to analyze the contribution of individual components.
In future work, we plan to improve the differentiation of similar units (e.g., MODEL vs. APPROACH), improve the extraction of type B triples, and address coordinated triples more thoroughly. We did not attempt to extract the approximately 6% of triples that did not fit our classification (Table 1). These often involve nested information units, and we hope to explore them in more depth as well.
Figure 2: Sentence classification model architecture.

Figure 3: Confusion matrix.
| Type | Composition | Examples | Role | Pct. |
|---|---|---|---|---|
| A | Three phrases in a sentence | (Deep-ED, obtain, BLEU score) | Organize the semantics of a sentence. | 57% |
| B | Two terms in a sentence with an added predicate has or name | (ByteNet Decoder, has, 30 residual blocks) | Organize the semantics of a sentence. | 7% |
| C | Information unit (subject), and two phrases in a sentence (predicate and object) | (HYPERPARAMETERS, use, cross-entropy loss) | Link a sentence to its information unit. | 9% |
| D | Information unit (subject), has (predicate), and a term in the sentence (object) | (HYPERPARAMETERS, has, starting learning rate) | Link a sentence to its information unit. | 9% |
| E | CONTRIBUTION (subject), has (predicate), information unit (object); OR CONTRIBUTION (subject), fixed (predicate), and a phrase (object) for the information units RESEARCH PROBLEM and CODE | (CONTRIBUTION, has, RESULTS), (CONTRIBUTION, has research problem, neural machine translation) | Link the "Contribution" node of each paper to an information unit. | 9% |
| F | Cross-sentence triples | (Positional Encoding, inject, some information) | Structure the information across sentences. | 3% |

Table 1: Triple types, their roles, and frequency. Types A-D are addressed using neural models and Types E-F with rules. 6% of triples do not fit in these categories and are not shown.
Table 3: Performance in phrase and triple extraction (Evaluation Phase 2). Note that we focused only on triple extraction in Part 2; therefore, the information unit extraction performance remains the same.

| Unit name | Research problem | Approach | Model | Code | Dataset | Experimental setup | Hyperparameters | Baselines | Results | Tasks | Experiments | Ablation analysis |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| F1 | 94.64 | 24.14 | 86.22 | 87.50 | 80.00 | 58.29 | 72.61 | 91.45 | 94.65 | 90.48 | 83.16 | 90.68 |

Table 4: Information unit classification performance.

| Settings | F1 | P | R |
|---|---|---|---|
| Sentence + title + position | 65.11 | 63.96 | 66.30 |
| Sentence + title | 63.87 | 61.00 | 67.03 |
| Sentence + position | 52.28 | 46.38 | 59.89 |
| Sentence only | 51.39 | 49.00 | 54.03 |

Table 5: Results of ablation experiments on the contribution sentence classification task.
Table 7: Performance of the pairwise classification scheme.

Table 8: Performance of the triple classification scheme.
| Type | F1 | P | R |
|---|---|---|---|
| A | 87.54 | 85.93 | 89.22 |
| B | 55.56 | 88.24 | 40.54 |
| C | 83.33 | 77.96 | 89.51 |
| D | 75.86 | 78.11 | 73.74 |

Table 9: Performance of triple extraction on each type.
1 https://grobid.readthedocs.io/
2 https://github.com/ThilinaRajapakse/simpletransformers
References

Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3615-3620, Hong Kong, China. Association for Computational Linguistics.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Jennifer D'Souza, Sören Auer, and Ted Pedersen. 2021. SemEval-2021 task 11: NLPContributionGraph - structuring scholarly NLP contributions for a research knowledge graph. In Proceedings of the Fifteenth Workshop on Semantic Evaluation, Bangkok (online). Association for Computational Linguistics.

Jennifer D'Souza and Sören Auer. 2020a. Graphing Contributions in Natural Language Processing Research: Intra-Annotator Agreement on a Trial Dataset.

Jennifer D'Souza and Sören Auer. 2020b. NLPContributions: An Annotation Scheme for Machine Reading of Scholarly Contributions in Natural Language Processing Literature.

Mohamad Yaser Jaradeh, Allard Oelen, Kheir Eddine Farfar, Manuel Prinz, Jennifer D'Souza, Gábor Kismihók, Markus Stocker, and Sören Auer. 2019. Open Research Knowledge Graph: Next Generation Infrastructure for Semantic Scholarly Knowledge, pages 243-246. Association for Computing Machinery, New York, NY, USA.

John Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data.

Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101.

Behrang QasemiZadeh and Anne-Kathrin Schumann. 2016. The ACL RD-TEC 2.0: A language resource for evaluating term extraction and entity recognition methods. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1862-1868, Portorož, Slovenia. European Language Resources Association (ELRA).

Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python natural language processing toolkit for many human languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 101-108, Online. Association for Computational Linguistics.

Zhihong Shen, Hao Ma, and Kuansan Wang. 2018. A web-scale system for scientific knowledge exploration. In Proceedings of ACL 2018, System Demonstrations, pages 87-92, Melbourne, Australia. Association for Computational Linguistics.

Fábio Souza, Rodrigo Nogueira, and Roberto Lotufo. 2019. Portuguese named entity recognition using BERT-CRF. arXiv preprint arXiv:1909.10649.
Recurrent Neural Network based Part-of-Speech Tagger for Code-Mixed Social Media Text

Raj Nath Patel (KBCS, CDAC Mumbai, rajnathp@cdac.in), Prakash B. Pimpale (KBCS, CDAC Mumbai, prakash@cdac.in), and Sasikumar M. (KBCS, CDAC Mumbai, sasi@cdac.in)
This paper describes Centre for Development of Advanced Computing's (CDACM) submission to the shared task 'Tool Contest on POS tagging for Code-Mixed Indian Social Media (Facebook, Twitter, and Whatsapp) Text', collocated with ICON-2016. The shared task was to predict the Part of Speech (POS) tag at the word level for a given text. Code-mixed text is generated mostly on social media by multilingual users. The presence of multilingual words, transliterations, and spelling variations makes such content linguistically complex. In this paper, we propose an approach to POS tag code-mixed social media text using the Recurrent Neural Network Language Model (RNN-LM) architecture. We submitted results for Hindi-English (hi-en), Bengali-English (bn-en), and Telugu-English (te-en) code-mixed data.
Introduction
Code-Mixing and Code-Switching are observed in the text or speech produced by a multilingual user. Code-Mixing occurs when a user changes the language within a sentence, i.e., a clause, phrase, or word of one language is used within an utterance of another language. In contrast, the co-occurrence of speech extracts from two different grammatical systems is known as Code-Switching.

The language analysis of code-mixed text is a non-trivial task. Traditional approaches to POS tagging are not effective for this text, as it does not adhere to any grammatical structure in general. Many studies have shown that RNN based POS taggers produce comparable results and are also the state-of-the-art for some languages. However, to the best of our knowledge, no study has been done on RNN based POS tagging of code-mixed data.
In this paper, we have proposed a POS tagger using the RNN-LM architecture for code-mixed Indian social media text. Earlier, researchers adopted the RNN-LM architecture for Natural Language Understanding (NLU) (Yao et al., 2013; Yao et al., 2014) and Translation Quality Estimation (Patel and Sasikumar, 2016). RNN-LM models are similar to other vector-space language models (Bengio et al., 2003; Morin and Bengio, 2005; Schwenk, 2007; Mnih and Hinton, 2009), where we represent each word with a high-dimensional real-valued vector. We modified the RNN-LM architecture to predict the POS tag of a word, given the word and its context. Let's consider the following example:
Input: behen ki shaadi and m not there
Output: G_N G_PRP G_N CC G_V G_R G_R
In the above sentence, to predict the POS tag (G_N) for the word shaadi using an RNN-LM model with window size 3, the input will be ki shaadi and. In a standard RNN-LM model, by contrast, ki and and would be the input, with shaadi as the output. We discuss the details of the various models tried and their implementations in Section 3.
In this paper, we show that our approach achieves results close to state-of-the-art systems such as Stanford (Toutanova et al., 2003) and HunPos (Halácsy et al., 2007).

Related Work

POS tagging has been investigated for decades in the literature of Natural Language Processing (NLP). Different methods like Support Vector Machines (Màrquez and Giménez, 2004), Decision Trees (Schmid and Laws, 2008), Hidden Markov Models (HMM) (Kupiec, 1992), and Conditional Random Field Autoencoders (Ammar et al., 2014) have been tried for this task. Among these works, Neural Network (NN) based models are the most closely related to this paper. In the NN family, RNN is a widely used network for various NLP applications (Mikolov et al., 2010; Mikolov et al., 2013a; Mikolov et al., 2013b; Socher et al., 2013a; Socher et al., 2013b).
Recently, RNN based models have been used to POS tag formal text, but they have not yet been tried on code-mixed data. Wang et al. (2015) tried Bidirectional Long Short-Term Memory (LSTM) on the Penn Treebank WSJ test set, and reported state-of-the-art performance. Qin (2015) showed that RNN models outperform Majority Voting (MV) and HMM techniques for POS tagging of Chinese Buddhist text. Zennaki et al. (2015) used RNN for resource-poor languages and reported results comparable with state-of-the-art systems (Das and Petrov, 2011; Duong et al., 2013; Gouws and Søgaard, 2015).
Work on POS tagging code-mixed Indian social media text is at a very nascent stage to date. Vyas et al. (2014) and Jamatia et al. (2015) worked on data labeling and automatic POS tagging of such data using various machine learning techniques. Building further on that labeled data, Pimpale and Patel (2015) and Sarkar (2015) tried word embeddings as an additional feature for machine learning based classifiers for POS tagging.
Experimental Setup
RNN Models
There are many variants of RNN networks for different applications. For this task, we used Elman (Elman, 1990), Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997), Deep LSTM, and Gated Recurrent Unit (GRU) (Cho et al., 2014), which are widely used RNN models in the NLP literature.

In the following sub-sections, we give a brief description of each model with mathematical equations (1, 2, and 3). In the equations, $x_t$ and $y_t$ are the input and output vectors, respectively; $h_t$ and $h_{t-1}$ represent the current and previous hidden states, respectively; $W_*$ are the weight matrices and $b_*$ are the bias vectors; $\odot$ is the element-wise multiplication of vectors. We used $\mathrm{sigm}$, the logistic sigmoid, and $\tanh$, the hyperbolic tangent function, to add nonlinearity in the network, with a $\mathrm{softmax}$ function at the output layer.
ELMAN
Elman (Elman, 1990) and Jordan (Jordan, 1986) networks are the simplest networks in the RNN family and are known as Simple RNNs. The Elman network is defined by the following set of equations:
$h_t = \mathrm{sigm}(W_{xh} x_t + W_{hh} h_{t-1} + b_h) \qquad (1)$
$y_t = \mathrm{softmax}(W_{hy} h_t + b_y)$
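For illustration, a minimal NumPy sketch of one Elman step from Eq. (1) is given below; the parameter names and dictionary layout are our own illustrative choices, not the authors' released code.

```python
import numpy as np

def sigm(x):
    # logistic sigmoid
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def elman_step(x_t, h_prev, p):
    # h_t = sigm(W_xh x_t + W_hh h_{t-1} + b_h)
    h_t = sigm(p["W_xh"] @ x_t + p["W_hh"] @ h_prev + p["b_h"])
    # y_t = softmax(W_hy h_t + b_y)
    y_t = softmax(p["W_hy"] @ h_t + p["b_y"])
    return h_t, y_t
```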
LSTM
LSTM is found to be better at modeling long-range dependencies than Simple RNN, which also suffers from the problem of vanishing and exploding gradients (Bengio et al., 1994). LSTM and other complex RNN models tackle this problem by introducing a gating mechanism. Many variants of LSTM (Graves, 2013; Yao et al., 2014; Jozefowicz et al., 2015) have been tried in the literature for various tasks. We implemented the following version:
$i_t = \mathrm{sigm}(W_{xi} x_t + W_{hi} h_{t-1} + b_i) \qquad (2)$
$o_t = \mathrm{sigm}(W_{xo} x_t + W_{ho} h_{t-1} + b_o)$
$f_t = \mathrm{sigm}(W_{xf} x_t + W_{hf} h_{t-1} + b_f)$
$j_t = \tanh(W_{xj} x_t + W_{hj} h_{t-1} + b_j)$
$c_t = c_{t-1} \odot f_t + i_t \odot j_t$
$h_t = \tanh(c_t) \odot o_t$
$y_t = \mathrm{softmax}(W_{hy} h_t + b_y)$

where $i$, $o$, and $f$ are the input, output, and forget gates, respectively; $j$ is the new memory content and $c$ is the updated memory.
Deep LSTM
In this paper, we used a Deep LSTM with two layers. Deep LSTM is created by stacking multiple LSTMs on top of each other. The output of the lower LSTM forms the input to the upper LSTM. For example, if $h_t$ is the output of the lower LSTM, then we apply a matrix transform to form the input $x_t$ for the upper LSTM. This matrix transformation enables us to have two consecutive LSTM layers of different sizes.
GRU
GRU is quite a similar network to the LSTM, without any memory unit. GRU network also uses a different gating mechanism with reset (r) and update (z) gates. The following set of equations defines a GRU model:
$r_t = \mathrm{sigm}(W_{xr} x_t + W_{hr} h_{t-1} + b_r) \qquad (3)$
$z_t = \mathrm{sigm}(W_{xz} x_t + W_{hz} h_{t-1} + b_z)$
$\tilde{h}_t = \tanh(W_{xh} x_t + W_{hh} (r_t \odot h_{t-1}) + b_h)$
$h_t = z_t \odot h_{t-1} + (1 - z_t) \odot \tilde{h}_t$
$y_t = \mathrm{softmax}(W_{hy} h_t + b_y)$
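A corresponding sketch of one GRU step from Eq. (3), reusing the sigm and softmax helpers above; again, the parameter naming is ours and only illustrates the recurrence.

```python
import numpy as np

def gru_step(x_t, h_prev, p):
    r_t = sigm(p["W_xr"] @ x_t + p["W_hr"] @ h_prev + p["b_r"])  # reset gate
    z_t = sigm(p["W_xz"] @ x_t + p["W_hz"] @ h_prev + p["b_z"])  # update gate
    # candidate state uses the reset-gated previous hidden state
    h_cand = np.tanh(p["W_xh"] @ x_t + p["W_hh"] @ (r_t * h_prev) + p["b_h"])
    # interpolate between the old state and the candidate state
    h_t = z_t * h_prev + (1.0 - z_t) * h_cand
    y_t = softmax(p["W_hy"] @ h_t + p["b_y"])
    return h_t, y_t
```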
Implementation
All the models were implemented using the THEANO framework (Bergstra et al., 2010; Bastien et al., 2012). For all the models, the word embedding dimensionality was 100, the number of hidden units was 100, and the context word window size was 5 ($w_{i-2}\, w_{i-1}\, w_i\, w_{i+1}\, w_{i+2}$). We initialized all the square weight matrices as random orthogonal matrices. All the bias vectors were initialized to zero. Other weight matrices were sampled from a Gaussian distribution with mean 0 and variance 0.0001.
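A rough sketch of this initialization scheme (random orthogonal square matrices via a QR decomposition of a Gaussian matrix, zero biases, and Gaussian samples with variance 0.0001, i.e., standard deviation 0.01); the helper names, the seed, and the assumption that the non-square input matrix maps a 5-word concatenated window are ours.

```python
import numpy as np

rng = np.random.RandomState(42)            # seed is an arbitrary choice

def orthogonal(n):
    # QR decomposition of a random Gaussian matrix yields an orthogonal Q
    q, _ = np.linalg.qr(rng.randn(n, n))
    return q

def gaussian(shape, std=0.01):             # variance 0.0001 -> std 0.01
    return std * rng.randn(*shape)

d_emb, d_hid, n_tags = 100, 100, 40        # n_tags depends on the tag set
params = {
    "W_hh": orthogonal(d_hid),             # square matrices: orthogonal
    "W_xh": gaussian((d_hid, 5 * d_emb)),  # non-square matrices: Gaussian
    "W_hy": gaussian((n_tags, d_hid)),
    "b_h":  np.zeros(d_hid),               # biases: zero
    "b_y":  np.zeros(n_tags),
}
```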
We trained all the models using Truncated Back-Propagation-Through-Time (T-BPTT) (Werbos, 1990) with stochastic gradient descent. Standard values of hyperparameters were used for RNN model training, as suggested in the literature (Yao et al., 2014; Patel and Sasikumar, 2016). The depth of BPTT was fixed to 7 for all the models. We trained each model for 50 epochs and used Adadelta (Zeiler, 2012) to adapt the learning rate of each parameter automatically ($\epsilon = 10^{-6}$ and $\rho = 0.95$).
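For reference, a sketch of the Adadelta update of Zeiler (2012) with the stated hyperparameters; initializing the accumulators to zeros is assumed.

```python
import numpy as np

def adadelta_update(param, grad, state, rho=0.95, eps=1e-6):
    # running average of squared gradients
    state["Eg2"] = rho * state["Eg2"] + (1.0 - rho) * grad ** 2
    # parameter-wise adaptive step derived from past update magnitudes
    delta = -np.sqrt(state["Edx2"] + eps) / np.sqrt(state["Eg2"] + eps) * grad
    # running average of squared updates
    state["Edx2"] = rho * state["Edx2"] + (1.0 - rho) * delta ** 2
    return param + delta

# per parameter w: state = {"Eg2": np.zeros_like(w), "Edx2": np.zeros_like(w)}
```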
Data
We used the data shared by the contest organizers (Jamatia and Das, 2016). The code-mixed data of bn-en, hi-en and te-en was shared separately for the Facebook (fb), Twitter (twt) and Whatsapp (wa) posts and conversations with Coarse-Grained (CG) and Fine-Grained (FG) POS annotations. We combined the data from fb, twt, and wa for CG and FG annotation of each language pair. The data was divided into training, testing, and development sets. Testing and development sets were randomly sampled from the complete data. Table 1 details sizes of the different sets at the sentence and token level. Tag-set counts for CG and FG are also provided.
We preprocess the text for Mentions, Hashtags, Smilies, URLs, Numbers, and Punctuations. In the preprocessing, we mapped all the words of a group to a single new token, as they have the same POS tag. For example, all the Mentions like @dhoni, @bcci, and @iitb were mapped to @user; all the Hashtags like #dhoni, #bcci, and #iitb were mapped to #user.
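A minimal sketch of this token-level preprocessing; the placeholder tokens for URLs and numbers are assumptions, since the paper only names the @user/#user mappings explicitly.

```python
import re

def preprocess(tokens):
    out = []
    for tok in tokens:
        if tok.startswith("@"):                    # mentions -> @user
            out.append("@user")
        elif tok.startswith("#"):                  # hashtags -> #user
            out.append("#user")
        elif re.match(r"https?://", tok):          # assumed URL placeholder
            out.append("<url>")
        elif re.fullmatch(r"\d+([.,]\d+)*", tok):  # assumed number placeholder
            out.append("<num>")
        else:
            out.append(tok)
    return out

# preprocess(["@dhoni", "#iitb", "scored", "100"])
# -> ['@user', '#user', 'scored', '<num>']
```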
Methodology
The RNN-LM models use only the context words' embeddings as the input features. We experimented with three RNN model configurations. In the first setting (Simple RNN, LSTM, Deep LSTM, GRU), we learn the word representations from scratch along with the other model parameters. In the second configuration (GRU_Pre), we pre-trained word representations using the word2vec (Mikolov et al., 2013b) tool and fine-tuned them while training the other parameters of the network. Pre-training not only guides the learning towards minima with better generalization in non-convex optimization (Bengio, 2009; Erhan et al., 2010) but also improves the accuracy of the system (Kreutzer et al., 2015; Patel and Sasikumar, 2016). In the third setting (GRU_Pre_Lang), we also added the language of the words as an additional feature along with the context words. We learn the vector representation of languages similarly to that of words, from scratch.
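The input at a position i can then be built roughly as below: embeddings of the 5-word context window, with a language embedding appended per word in the GRU_Pre_Lang setting. Boundary handling (here, clamping to the sentence edges) and the lookup-table layout are our assumptions, not details stated in the paper.

```python
import numpy as np

def features(word_emb, lang_emb, words, langs, i, window=5, use_lang=True):
    half = window // 2
    parts = []
    for j in range(i - half, i + half + 1):
        j = min(max(j, 0), len(words) - 1)    # clamp at sentence borders (assumed)
        parts.append(word_emb[words[j]])      # 100-dim word vector
        if use_lang:
            parts.append(lang_emb[langs[j]])  # learned language vector
    return np.concatenate(parts)              # fed to the recurrent layer
```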
Results
We used F1-score to evaluate the experiments; the results are displayed in Table 2. We trained the models as described in Section 3.4. To compare our results, we also trained the Stanford and HunPos taggers on the same data; their accuracy is also given in Table 2.

From the table, it is evident that pre-training and language as an additional feature are helpful. Also, the accuracy of our best system (GRU_Pre_Lang) is comparable to that of Stanford and HunPos. GRU models outperform the other models (Simple RNN, LSTM, Deep LSTM) for this task as well, as reported by Chung et al. (2014).
Submission to the Shared Task
The contest had two types of submissions: first, constrained, restricted to using only the data shared by the organizers with the participants' implemented systems; second, unconstrained, where participants were allowed to use publicly available resources (training data, implemented systems, etc.). We submitted for all the language pairs (hi-en, bn-en, and te-en) and domains (fb, twt, and wa). For the constrained submission, the output of GRU_Pre_Lang was used. We trained the Stanford POS tagger with the same data for the unconstrained submission. Jamatia and Das (2016) evaluated all the submitted systems against another gold-test set and reported the results.
Analysis
We did a preliminary analysis of our systems and report a few points in this section.
Table 2: F1 scores for different experiments.

• The POS categories contributing most to the errors are G_X, G_V, G_N, and G_J for the coarse-grained systems, and V_VM, JJ, N_NN, and N_NNP for the fine-grained systems. We also did a confusion matrix analysis and found that these POS tags are mostly confused with each other. For instance, the G_J POS tag was wrongly assigned 28 times to other POS tags, of which 17 times it was G_N.

• RNN models require a huge corpus to train the model parameters. From the results, we can observe that for hi-en and te-en, with only approx. 2K training sentences, the results of the best RNN model (GRU_Pre_Lang) are comparable to Stanford and HunPos. For bn-en, the corpus was very small (only approx. 0.5K sentences) for RNN training, which resulted in poor performance compared to Stanford and HunPos. From this and the earlier work on RNN based POS tagging, we can expect that RNN models could achieve state-of-the-art accuracy given a sufficient amount of training data.

• In general, LSTM and Deep LSTM models perform better than Simple RNN. But here, Simple RNN outperforms both LSTM and Deep LSTM. The reason could be the small amount of data available for training such complex models.

• A few orthographically similar words of English and Hindi having different POS tags are given with examples in Table 3. The system gets confused in POS tagging such words. After adding language as an additional feature, we were able to tag these types of words correctly.

word   lang   example                    POS
are    hi     are shyaam kidhar ho?      PSP
are    en     they are going.            G_V
to     hi     tumane to dekha hi nhi.    G_PRT
to     en     they go to school.         CC
hi     hi     mummy to aisi hi hain.     G_V
hi     en     hi, how are you.           G_PRT

Table 3: Similar words in hi-en data.

Conclusion and Future Work

We developed a language-independent and generic POS tagger for social media text using RNN networks. We tried Simple RNN, LSTM, Deep LSTM, and GRU models. We showed that GRU outperforms the other models, and also benefits from pre-training and from language as an additional feature. Also, the accuracy of our approach is comparable to that of Stanford and HunPos.

In the future, we could try RNN models with more features like POS tags of context words, prefixes and suffixes, length, position, etc. Word characters have also been found to be a very useful feature in RNN based POS taggers.

1 Stanford (Maximum-Entropy based POS tagger): http://nlp.stanford.edu/software/tagger.shtml
2 HunPos (Hidden Markov Model based POS tagger): https://code.google.com/archive/p/hunpos/
3 Theano: http://deeplearning.net/software/theano/#download
References

Waleed Ammar, Chris Dyer, and Noah A. Smith. 2014. Conditional random field autoencoders for unsupervised structured prediction. In Advances in Neural Information Processing Systems, pages 3311-3319.

Frederic Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian Goodfellow, Arnaud Bergeron, Nicolas Bouchard, David Warde-Farley, and Yoshua Bengio. 2012. Theano: new features and speed improvements. In NIPS 2012 Deep Learning Workshop.

Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, pages 157-166.

Yoshua Bengio, Rejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, volume 3.

Yoshua Bengio. 2009. Learning Deep Architectures for AI. Foundations and Trends in Machine Learning, 2(1):1-127.

James Bergstra, Olivier Breuleux, Frederic Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. 2010. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), volume 4.

Kyunghyun Cho, Bart Van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar.

Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv:1412.3555 [cs.NE].

Dipanjan Das and Slav Petrov. 2011. Unsupervised part-of-speech tagging with bilingual graph-based projections. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Volume 1, pages 600-609. Association for Computational Linguistics.

Long Duong, Paul Cook, Steven Bird, and Pavel Pecina. 2013. Simpler unsupervised POS tagging with bilingual projections. In ACL (2), pages 634-639.

Jeffrey L. Elman. 1990. Finding Structure in Time. Cognitive Science, 14(2):179-211.

Dumitru Erhan, Yoshua Bengio, Aaron Courville, Pierre-Antoine Manzagol, Pascal Vincent, and Samy Bengio. 2010. Why Does Unsupervised Pre-training Help Deep Learning? Journal of Machine Learning Research, 11(Feb):625-660.

Stephan Gouws and Anders Søgaard. 2015. Simple task-specific bilingual word embeddings. In Proceedings of NAACL-HLT, pages 1386-1390.

Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv:1308.0850.

Péter Halácsy, András Kornai, and Csaba Oravecz. 2007. HunPos: An open source trigram tagger. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, pages 209-212. Association for Computational Linguistics.

Sepp Hochreiter and Jurgen Schmidhuber. 1997. Long short-term memory. Neural Computation, pages 1735-1780.

Anupam Jamatia and Amitava Das. 2016. Task Report: Tool Contest on POS Tagging for Code-Mixed Indian Social Media (Facebook, Twitter, and Whatsapp) Text @ ICON 2016. In Proceedings of ICON 2016.

Anupam Jamatia, Björn Gambäck, and Amitava Das. 2015. Part-of-speech tagging for code-mixed English-Hindi Twitter and Facebook chat messages. In Recent Advances in Natural Language Processing, page 239.

Michael I. Jordan. 1986. Attractor Dynamics and Parallellism in a Connectionist Sequential Machine. In Proceedings of the 1986 Cognitive Science Conference, pages 531-546.

Rafal Jozefowicz, Wojciech Zaremba, and Ilya Sutskever. 2015. An empirical exploration of recurrent network architectures. In Proceedings of the 32nd International Conference on Machine Learning, pages 2342-2350.

Julia Kreutzer, Shigehiko Schamoni, and Stefan Riezler. 2015. QUality Estimation from ScraTCH (QUETCH): Deep Learning for Word-level Translation Quality Estimation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 316-322, Lisboa, Portugal.

Julian Kupiec. 1992. Robust part-of-speech tagging using a hidden markov model. Computer Speech & Language, 6:225-242.

L. Màrquez and J. Giménez. 2004. A general POS tagger generator based on support vector machines. Journal of Machine Learning Research.

Tomas Mikolov, Martin Karafiat, Lukas Burget, Jan Cernocky, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Proceedings of Interspeech, volume 2, Makuhari, Chiba, Japan.

Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013a. Exploiting Similarities among Languages for Machine Translation. In CoRR, pages 1-10.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111-3119.

Andriy Mnih and Geoffrey E. Hinton. 2009. A scalable hierarchical distributed language model. In Advances in Neural Information Processing Systems, pages 1081-1088.

Frederic Morin and Yoshua Bengio. 2005. Hierarchical Probabilistic Neural Network Language Model. In AISTATS, volume 5, pages 246-252.

Raj Nath Patel and M. Sasikumar. 2016. Translation Quality Estimation using Recurrent Neural Network. In Proceedings of the First Conference on Machine Translation, volume 2, pages 819-824, Berlin, Germany. Association for Computational Linguistics.

Prakash B. Pimpale and Raj Nath Patel. 2015. Experiments with POS Tagging Code-mixed Indian Social Media Text. ICON.

Longlu Qin. 2015. POS tagging of Chinese Buddhist texts using Recurrent Neural Networks. Technical report, Stanford University.

Kamal Sarkar. 2015. Part-of-Speech Tagging for Code-mixed Indian Social Media Text at ICON 2015. ICON.

Helmut Schmid and Florian Laws. 2008. Estimation of conditional probabilities with decision trees and an application to fine-grained POS tagging. In Proceedings of the 22nd International Conference on Computational Linguistics, Volume 1, pages 777-784. Association for Computational Linguistics.

Holger Schwenk. 2007. Continuous space language models. Computer Speech and Language, volume 21, pages 492-518.

Richard Socher, John Bauer, Christopher D. Manning, and Andrew Y. Ng. 2013a. Parsing With Compositional Vector Grammars. In Proceedings of ACL 2013, pages 455-465.

Richard Socher, Alex Perelygin, and J. Y. Wu. 2013b. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of EMNLP, pages 1631-1642.

Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-Rich Part-of-Speech Tagging with a Cyclic Dependency Network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Volume 1, pages 173-180. Association for Computational Linguistics.

Yogarshi Vyas, Spandana Gella, Jatin Sharma, Kalika Bali, and Monojit Choudhury. 2014. POS tagging of English-Hindi code-mixed social media content. In EMNLP, volume 14, pages 974-979.

Peilu Wang, Yao Qian, Frank K. Soong, Lei He, and Hai Zhao. 2015. Part-of-speech tagging with bidirectional long short-term memory recurrent neural network. arXiv preprint arXiv:1510.06168.

Paul J. Werbos. 1990. Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, volume 78, pages 1550-1560.

Kaisheng Yao, Geoffrey Zweig, Mei-Yuh Hwang, Yangyang Shi, and Dong Yu. 2013. Recurrent neural networks for language understanding. In INTERSPEECH, pages 2524-2528.

Kaisheng Yao, Baolin Peng, Yu Zhang, Dong Yu, Geoffrey Zweig, and Yangyang Shi. 2014. Spoken language understanding using long short-term memory neural networks. In IEEE Spoken Language Technology Workshop (SLT), pages 189-194.

Matthew D. Zeiler. 2012. ADADELTA: an adaptive learning rate method. arXiv:1212.5701 [cs.LG].

Othman Zennaki, Nasredine Semmar, and Laurent Besacier. 2015. Unsupervised and Lightly Supervised Part-of-Speech Tagging Using Recurrent Neural Networks. In Proceedings of the 29th Pacific Asia Conference on Language, Information and Computation, Shanghai, China.
Can Unsupervised Knowledge Transfer from Social Discussions Help Argument Mining?

Subhabrata Dutta (Jadavpur University), Jeevesh Juneja (Delhi Technological University), Dipankar Das (Jadavpur University, dipankar.dipnil@gmail.com), and Tanmoy Chakraborty (IIIT-Delhi, tanmoy@iiitd.ac.in)

Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, May 22-27, 2022.
Identifying argument components from unstructured texts and predicting the relationships expressed among them are two primary steps of argument mining. The intrinsic complexity of these tasks demands powerful learning models. While pretrained Transformer-based Language Models (LM) have been shown to provide state-of-the-art results over different NLP tasks, the scarcity of manually annotated data and the highly domain-dependent nature of argumentation restrict the capabilities of such models. In this work, we propose a novel transfer learning strategy to overcome these challenges. We utilize argumentation-rich social discussions from the ChangeMyView subreddit as a source of unsupervised, argumentative discourse-aware knowledge by finetuning pretrained LMs on a selectively masked language modeling task. Furthermore, we introduce a novel prompt-based strategy for inter-component relation prediction that complements our proposed finetuning method while leveraging the discourse context. Exhaustive experiments show the generalization capability of our method on these two tasks over within-domain as well as out-of-domain datasets, outperforming several existing and employed strong baselines. 1
Introduction
Computational argument mining from texts is the fine-grained process of understanding opinion dynamics. In the most fundamental sense, argument understanding requires the identification of the opinions posed and justifications provided to support or falsify them. Generally, automated argument mining is a multi-stage pipeline identified with three general steps (Lippi and Torroni, 2015; Stab and Gurevych, 2017) - separating argumentative spans from non-argumentative ones, classifying argument components, and inducing a structure among them (support, attack, etc.). While different argumentation models define different taxonomies for argument components, popular approaches broadly categorize them as 'claims' and 'premises' (Stab and Gurevych, 2017; Egawa et al., 2019; Mayer et al., 2020). As these components are not necessarily aligned to sentence-level segments and can be reflected within clausal levels, the task of argument component identification requires a token-level boundary detection of components and component type classification.

* Equal contribution.
1 We release all code, models and data used at https://github.com/Jeevesh8/arg_mining
Context of argumentation in online discussions. Online discussions originating from back-and-forth posts from users reflect a rich interaction of opinion dynamics on a large scale. In Figure 1, we show a sample argument component annotation of consecutive posts from two users. The token-level granularity of components ensures that a single sentence may contain multiple components of the same (in the 1st post) or different kinds (in the 2nd and 4th posts). Moreover, two adjacent spans of text, even with the same argumentative role, can be defined as two separate components (see the 4th post for example). It is trivial to say that the meaning of any post (as well as its argumentative role) is dependent on the context. To be specific, the third post can be identified as argumentative (a premise in this case) only when its predecessor post and its components are taken as the context. Similarly, a certain span of the first post is quoted in the second one, signaling a concrete manifestation of dialogic continuity. One may even observe user-specific argumentation styles: the 1st user (author of the first and third posts) usually keeps claims and premises in separate sentences, while the 2nd user prefers multi-component, complex sentences. Existing studies on argumentation formalism recognize such continuity and define inter-post component relations (Ghosh et al., 2014; Hidey et al., 2017). However, previous approaches for automated extraction, classification, and relating of argumentative components work on individual posts only and define the inter-post discourse in the later stages of relation prediction. This is trivially counter-intuitive for two major reasons: (i) if we consider two text spans from separate comments to be linked by some argumentative relation, then there exists a continuity of discourse between these spans, and a model is likely to benefit if it decides the boundaries and types of these two components conditioned on that continuous information; (ii) users carry their style of argumentation (simple consecutive sentences vs. long complex ones, usage of particular markers like 'I think that' etc.), and if the model is informed about these while observing the complete conversation with back-and-forth posts, it is more likely to extract correct components easily.
Scarcity of labeled data. Irrespective of the domain, argument annotation is a resource-intensive process. A few previous studies (Habernal and Gurevych, 2015; Al-Khatib et al., 2016) attempted to exploit a large amount of unlabeled data in a semi-supervised fashion. However, such methods require the components to be defined at the sentence level (and thereby add redundant spans into the predictions), as they perform some sentence-similarity matching to generate pseudo-labels. Pretrained language models like BERT (Devlin et al., 2019) provide a workaround to handle the scarcity of task-specific annotated data. A parameter-intensive model is initially trained in a self-supervised manner on a large bulk of text; this pretraining enables the model to learn a general language representation, which is then finetuned on task-specific labeled data. However, the amount of the latter still determines the expressive power of such models (Wang et al., 2020).
Present work. Considering these challenges, we formulate a novel transfer learning method using Transformer-based language models. We use a large amount of unlabelled discussion threads from Reddit's r/ChangeMyView (CMV) community as the source of argumentative knowledge. Pretrained, Transformer-based language models are finetuned on this dataset using a Masked Language Modelling task. Instead of randomly masking tokens to predict, we select several markers in the text that have been shown to signal argumentative discourse in previous works (Chakrabarty et al., 2019; Eckle-Kohler et al., 2015). The language models are then made to predict these markers in the MLM task, thereby learning to relate different components of text according to their role in the argumentation presented. We call this novel finetuning method Selective Masked Language Modeling (sMLM). Furthermore, to explore the role of context in argument mining, we use sMLM to finetune a post-level language model based on BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) and a thread-level language model based on Longformer (Beltagy et al., 2020). We present efficient incorporation of several Reddit-specific structural cues into the Longformer architecture. These finetuned language models are then used for two fundamental components of argument mining: token-level argument component identification (ACI) and inter-component relation type prediction (RTP). To further utilize the sMLM-based training of the language models, we propose a novel prompt-based approach to predict relations among argument components. We perform exhaustive experiments to explore the efficacy of our methods for argument mining on both in-domain and out-of-domain benchmark datasets: manually annotated Reddit discussions and scientific papers. Our experiments show clear improvements achieved by our methods (0.59 and 0.69 F1 for ACI and RTP, respectively) over several state-of-the-art baselines. 2
Related Work
A general overview of argument mining can be found in the survey articles by Lytos et al. (2019) and Lawrence and Reed (2019). In the current scope, we look into three major areas of research in argument mining.
Argument component detection and classification. Previous studies have sought to address argument boundary detection and component type prediction either as separate, successive tasks in the pipeline (Stab and Gurevych, 2017) or jointly in a single computational pass (Eger et al., 2017). Studies also explored classical machine learning frameworks like SVM-HMM (Habernal and Gurevych, 2017), CRF (Stab and Gurevych, 2017), etc. with rich manual feature engineering. With the development of neural network-based algorithms, BiLSTM-CNN-CRF models emerged as a popular choice (Schulz et al., 2018; Eger et al., 2017; Chernodub et al., 2019). Very recently, large pretrained language models like BERT have also been utilized (Mayer et al., 2020; Chakrabarty et al., 2019).
Discourse markers for learning language representation. Similar to our sMLM finetuning strategy, Nie et al. (2019) proposed an unsupervised sentence representation learning strategy where a neural model is trained to predict the appropriate discourse marker connecting two input sentences. Using a set of 15 markers, they showed that such a finetuning can help models in downstream NLI tasks. Chakrabarty et al. (2019) used a distant supervision approach using a single marker In my honest opinion to finetune BERT on a large collection of ChangeMyView threads and then performed argument component classification. However, they did not deal with the component identification task and performed classification of already identified components at sentence-level. Opitz and Frank (2019) suggested that while identifying the relation between two components, these models often rely more on the context and not the content of the components; discourse markers present within the context provide strong signals for the relation prediction task.
Argument mining over Reddit. A few recent studies explored argumentation over Reddit. Hidey et al. (2017) proposed a two-tier annotation scheme of claim-premise components and their relations, defining five different semantic roles of premises, using ChangeMyView discussion data. Egawa et al. (2019) also analyzed semantic roles of argument components over ChangeMyView threads; however, their primary focus remained on the dynamics of persuasion, similar to Dutta et al. (2020).
Selective MLM finetuning of Pretrained Language Models
Though pretrained language models are developed to overcome the problem of small annotated data on different language processing tasks, they still require task-specific finetuning for better results (Wang et al., 2020). In the specific domain of argument mining, annotated data is scarce, and attempting to finetune a massive language model with very small training data comes with the risk of overfitting. Moreover, different datasets follow different strategies for annotation. We seek to devise a novel transfer learning strategy where a given Transformer-based pretrained language model is directed to focus on argumentative discourse using large-scale, unlabelled data. We choose the ChangeMyView (CMV) community as the source of this transfer for two specific reasons: (i) it provides us with a large, readily available resource of interactions strictly focused on debates around versatile topics, and (ii) discussions in CMV contain a mixture of dialogic continuity over successive turns along with elaborate argumentation presented in a single turn. We hypothesize that such a versatile combination of discourse can make the language model more generalizable over dialogic as well as monologic argument mining tasks.
Discourse structure of CMV
Discussion forums like Reddit facilitate users to begin a discussion with an initial post (submissions, in the case of Reddit) and then comment under that post to instantiate a discussion. Users may post a comment in reply to the submission as well as to already posted comments. A typical discussion over Reddit forms a tree-like structure rooted at the submission. Any path from the root to a leaf comment can be perceived as an independent dialogic discourse among two or multiple users; henceforth, we will call such paths threads. Formally, a thread $T$ is an ordered sequence $\{(u_i, P_j)\,|\,i, j \in \mathbb{N},\ u_i \in U_T\}$, where $P_j$ is a text object (a submission when $j = 1$ and a comment otherwise), $u_i$ is the author of $P_j$, and $U_T$ is the set of all unique users engaged in the thread $T$. For brevity, we refer to $P_j$ as a post in general.
The dialogic nature of discussions naturally assumes this context to be the whole thread $T$. However, if we consider any two successive posts $P_j$ and $P_{j+1}$ in $T$, they manifest the interests and styles of two separate participants along with the discourse continuity of the overall thread, which must be distinguished within the definition of the context. To take into account the complete dialogic context of the thread, we represent a thread as a single contiguous sequence of tokens with each post $P_j$ from user $u_i$ being preceded by a special token [USER-i], with $i \in \{0, \cdots, |U_T| - 1\}$, to encode which post is written by which user.
Reddit also offers users a quoting facility: users can quote a segment from the previous post (one to which they are replying) within their posts and emphasize that their opinions are specifically focused on that segment. We delimit such quoted segments with special tokens [STARTQ] and [ENDQ] in the quoting post to demarcate the dialogic discourse. Chakrabarty et al. (2019) also used quoting as signals for following premises. Additionally, we replace URLs with the special token [URL] to inform the presence of external references that often act as justifications of subjective opinions.
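A sketch of how a thread could be flattened into the single token sequence described in this section is shown below; the function and variable names are ours, and the availability of quoted-span strings from the Reddit markup is an assumption.

```python
import re

def mark_quotes_and_urls(text, quoted_spans=()):
    text = re.sub(r"https?://\S+", "[URL]", text)         # external references
    for span in quoted_spans:                             # spans quoted from the parent post
        text = text.replace(span, f"[STARTQ] {span} [ENDQ]")
    return text

def flatten_thread(thread):
    """thread: ordered list of (user, text, quoted_spans) tuples."""
    user_ids, pieces = {}, []
    for user, text, quoted in thread:
        uid = user_ids.setdefault(user, len(user_ids))    # [USER-0], [USER-1], ...
        pieces.append(f"[USER-{uid}]")
        pieces.append(mark_quotes_and_urls(text, quoted))
    return " ".join(pieces)
```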
Selective MLM finetuning
Masked Language Modeling is a common strategy of training large language models; a certain fraction of the input tokens are masked and the model is trained to predict them, consequently learning a generalized language representation. Instead of randomly selecting tokens to mask, we select specific markers that might signal argumentative discourse. While the model is trained to predict these markers, it learns the roles and relationships of the text spans preceding and following them. Following the work by Eckle-Kohler et al. (2015), we select multiple markers signaling Opinion, Causation, Rebuttal, Fact presentation, Assumption, Summary, and some additional words, which serve multiple purposes depending on the context. As shown in Figure 2, to predict the marker I think in the first post, the model needs to learn that the following text span "that most Jewish people ..." expresses the user's opinion on the topic. Similarly, in the second post, for the input segment "span$_0$ So span$_1$ if span$_2$", to correctly predict the masked markers as So and if, a language model needs to learn the fact that the truth value of the statement expressed in span$_1$ is conditioned upon span$_2$, and this dependence is inferred from span$_0$.
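A simplified sketch of the sMLM objective: only tokens belonging to a fixed marker set are masked, and only those positions contribute to the loss. The marker set below is a tiny illustrative subset of ours (the real set includes multi-word phrases, which would be matched as spans), and -100 follows the common convention of loss-ignored positions.

```python
MARKERS = {"so", "if", "because", "however", "think"}  # illustrative subset only

def selective_mask(tokens, mask_token="[MASK]", ignore_index=-100):
    inputs, targets = [], []
    for tok in tokens:
        if tok.lower() in MARKERS:
            inputs.append(mask_token)     # hide the discourse marker ...
            targets.append(tok)           # ... and make it the MLM target
        else:
            inputs.append(tok)
            targets.append(ignore_index)  # position ignored by the MLM loss
    return inputs, targets
```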
Effect of context sizes. CMV threads provide a natural segmentation of the discourse context into comment/post-level vs. thread-level. We seek to explore the effect of the context size at different modules of argument mining (i.e., argument component detection and relation type prediction). For this, we use our proposed selective MLM approach to finetune a pretrained RoBERTa/BERT-base model in the comment/post-level regime, and train Longformer models in the thread-level regime. Longformer uses sparse, global attention (i.e., some tokens attend to all the tokens in the input sequence) to capture the long-range dependencies. We use the special tokens indicating the users (cf. Section 3.1) as the globally attending tokens for Longformer.
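With HuggingFace Transformers, the user tokens could be made globally attending roughly as follows; the checkpoint name, the cap of 8 users, and the use of flattened_thread from the sketch above are our assumptions.

```python
import torch
from transformers import LongformerModel, LongformerTokenizerFast

tok = LongformerTokenizerFast.from_pretrained("allenai/longformer-base-4096")
tok.add_tokens([f"[USER-{i}]" for i in range(8)])       # assumed cap on users
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")
model.resize_token_embeddings(len(tok))                 # account for new tokens

enc = tok(flattened_thread, return_tensors="pt", truncation=True)
user_ids = tok.convert_tokens_to_ids([f"[USER-{i}]" for i in range(8)])
# 1 marks globally attending positions; everything else keeps local attention
global_mask = torch.isin(enc["input_ids"], torch.tensor(user_ids)).long()
out = model(**enc, global_attention_mask=global_mask)
```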
Argument component identification
After finetuning the language model on the selective MLM task, we proceed to our first task of identifying argument components in threads. Since the detection is done at the token level, we use the standard BIO tagging scheme: for a component class type, the beginning and the continuation of that component are marked as B-type and I-type, respectively, while any non-component token is labeled as O. Therefore, if one uses the usual claim-premise model of argumentation, the label set becomes {B-claim, I-claim, B-premise, I-premise, O}.
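For example, annotated component spans can be converted to token-level BIO labels as in the following sketch (representing spans as token offsets with exclusive ends is our assumption):

```python
def bio_labels(n_tokens, components):
    """components: list of (start, end, ctype) token spans, end exclusive."""
    labels = ["O"] * n_tokens
    for start, end, ctype in components:
        labels[start] = f"B-{ctype}"          # first token of the component
        for i in range(start + 1, end):
            labels[i] = f"I-{ctype}"          # continuation tokens
    return labels

# bio_labels(6, [(1, 3, "claim"), (3, 6, "premise")])
# -> ['O', 'B-claim', 'I-claim', 'B-premise', 'I-premise', 'I-premise']
```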
Inter-component relation prediction
Figure 3: An example thread for which we seek to classify the relation between the claims posed by USER-1 and USER-2 (highlighted in red and green, respectively, in the original figure); the thread is converted to the prompt input by appending the prompt template. The language model then converts this prompt token sequence into fixed-dimensional vectors, from which the vectors corresponding to the positions of the mask tokens are used for relation classification.

While identifying the relation between two given related argument components, it is important to understand the role of those text segments within the context of the discourse. Furthermore, we seek
to utilize the knowledge acquired by the language model in the sMLM finetuning step as well. Keeping these two factors in mind, we propose a novel, prompt-based identification of the relation between argument components. This approach is inspired by the recent popularity of prompt-based fine-tuning methods in the community (Liu et al., 2021). At their core, these methods involve directly prompting the model for the required knowledge, rather than fine-tuning [CLS] or mean-pooled embeddings. For example, to directly use a model to summarise a text, we can append "TL;DR:" to the text (Radford et al., 2019), and let the model generate tokens following it; we expect the next few tokens to constitute a summary of all the previous text.
Since the underlying Transformer LMs have previously been trained using some Cloze task (i.e., filling in blanks from the context), it is more natural for them to predict a token given a context. However, there are two challenges: (i) one needs to design a suitable prompt, and (ii) in the case of classification tasks like RTP, it is challenging to perform Answer Mapping, i.e., to map all the possible tokens to some particular relation class. To tackle these challenges, we design our proposed relation prediction method in the following manner (see Figure 3).
For each pair of related components, say, component-1 and component-2, said by user-i and user-j, respectively, where component-2 refers to component-1, we append to the thread a prompt with the template: "[USER-i] said <component-1> [MASK] [MASK] [MASK] [USER-j] said <component-2>" (we use three mask tokens since that is the upper bound of the marker size used for sMLM). We expect that the words predicted at the masked positions, such as "because", "in spite of what", etc., would be indicative of the relation between the two components. For the example thread shown in Figure 3, in a zero-shot prediction, sMLM-finetuned Longformer predicts "I", "disagree", "I" at the three masked positions. This "disagree" clearly corresponds to the undercutter relation between the two components. In fact, the base Longformer without sMLM finetuning predicts a space, a full stop and another space at the three masked positions. This further demonstrates the efficacy of the sMLM finetuning.
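A minimal sketch of assembling such a prompt; the helper function is hypothetical, but the template follows the one described above.

```python
# Build the relation-prediction prompt appended to the thread.
def build_relation_prompt(thread_text, user_i, user_j, comp1, comp2, n_masks=3):
    masks = " ".join(["[MASK]"] * n_masks)  # 3 = upper bound of sMLM marker size
    prompt = f"[USER-{user_i}] said {comp1} {masks} [USER-{user_j}] said {comp2}"
    return thread_text + " " + prompt

print(build_relation_prompt("<thread token sequence>", 1, 2,
                            "<component-1>", "<component-2>"))
```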
Instead of engineering a token-to-relation-type mapping, the predicted token embeddings at the masked positions are concatenated and fed into a linear layer to predict probabilities over the set of relation types. This way, we allow the model to learn the mapping from the token space to the relation type space.
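A minimal PyTorch sketch of this answer-mapping head; the hidden size, mask count and number of relation types are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MaskRelationHead(nn.Module):
    """Concatenate the encoder states at the masked positions and project
    them to relation-type logits."""
    def __init__(self, hidden_size=768, n_masks=3, n_relations=5):
        super().__init__()
        self.proj = nn.Linear(n_masks * hidden_size, n_relations)

    def forward(self, hidden_states, mask_positions):
        # hidden_states: (batch, seq_len, hidden); mask_positions: (batch, n_masks)
        batch_idx = torch.arange(hidden_states.size(0)).unsqueeze(1)
        mask_vecs = hidden_states[batch_idx, mask_positions]  # (batch, n_masks, hidden)
        return self.proj(mask_vecs.flatten(start_dim=1))      # (batch, n_relations)
```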
Experiment Setup
Dataset
For the sMLM finetuning, we use the subset of the Winning Args (ChangeMyView, CMV) dataset (Tan et al., 2016) provided in ConvoKit (Chang et al., 2020). We use 99% of this data for training and reserve 1% for checking accuracy on the sMLM task. The entire data consists of 3,051 submissions and 293,297 comments posted in the ChangeMyView subreddit by 34,911 unique users. We extract the threads from these posts following the reply structure and end up with 120,031 threads in total.
To train and evaluate all the models for ACI and RTP, we use the manually annotated Reddit discussion threads provided by Hidey et al. (2017) and further extended by Chakrabarty et al. (2019). The extended version of this dataset contains 113 CMV discussion threads manually annotated with argument components following the standard claim-premise model.
Additionally, we use the argument-annotated Dr. Inventor Corpus (Lauscher et al., 2018), which consists of 40 scientific publications from the field of computer graphics. There are three types of argumentative components here: Background Claims (BC), consisting of claims from previous works cited in the paper; Own Claims (OC), consisting of the new claims made by the authors of the paper; and Data, which mainly consists of citations, references to figures, etc. This dataset has three relation types, viz., support, contradicts and semantically same. Additional dataset details are provided in Appendix A.
Baseline methods
For ACI, we consider two state-of-the-art token-level argument identification models: • LSTM-MTL. Eger et al. (2017) proposed an end-to-end argument mining architecture that uses a BiLSTM-CNN-CRF sequence tagger to jointly learn the component detection, classification, and relation parsing tasks. • LSTM-MData. Schulz et al. (2018) proposed a BiLSTM-CNN-CRF based model that aims to generalize argument mining using multi-domain training data in an MTL setting. We augment our data with their original set of 6 datasets.
For RTP, as no prior work exists to the best of our knowledge, we consider our own baselines. First, we consider • Context-less RoBERTa, a pretrained RoBERTa model which takes the two components with a [SEP] token between them and predicts the relation using the [CLS] token's embedding. It is context-less as only the two components, without the surrounding context, are used to predict the label. Second, we consider • Context-less QR-BERT. This uses the same fine-tuning methodology as Context-less RoBERTa and is initialized from the pre-trained Quote-Response relation prediction model of Chakrabarty et al. (2019).
For RTP, we also try a more traditional strategy, instead of prompting, for our models: • Mean Pooling. The mean pooling approach first finds an embedding of each of the two related components by averaging the Transformer embeddings at all token positions within a component. These embeddings are concatenated and passed into a linear layer for predicting the type of relation between the two related components.
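A minimal PyTorch sketch of this baseline; the span boundaries over the encoder's hidden states are assumed to be given (end indices exclusive).

```python
import torch
import torch.nn as nn

class MeanPoolRelation(nn.Module):
    """Average the token states inside each component span, concatenate,
    and classify the relation type."""
    def __init__(self, hidden_size=768, n_relations=5):
        super().__init__()
        self.out = nn.Linear(2 * hidden_size, n_relations)

    def forward(self, hidden_states, span1, span2):
        # hidden_states: (seq_len, hidden) for one thread
        v1 = hidden_states[span1[0]:span1[1]].mean(dim=0)  # component-1 vector
        v2 = hidden_states[span2[0]:span2[1]].mean(dim=0)  # component-2 vector
        return self.out(torch.cat([v1, v2], dim=-1))       # relation logits
```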
To further evaluate the efficacy of our sMLM training strategy, we finetune a pretrained Longformer on the Winning Args corpus with the usual MLM, i.e., masking 15% of tokens at random instead of selective masking. We call this the domain-adapted Longformer, DA-LF.
Implementation details
We use the pretrained base version of Longformer (12 layers, hidden size 768). The size of the local attention window was set to the default 512. The maximum sequence length was fixed at 4096.
Following the suggestions in Reimers and Gurevych (2017), we repeat our experiments on 5 different data splits. The scores reported in the tables for the various models correspond to the average value of the mean of the 5 runs, over the last 5 epochs, for that particular metric. We provide additional implementation details in Appendix B.
Evaluation
We evaluate the models based on precision, recall, and F1 scores for predicting claims and premises. For a more rigorous setting, we use exact match of the whole span between gold and predicted labels, i.e., if the gold label is [O, B-claim, I-claim, I-claim, I-claim, O], then only the predictions [O, B-claim, I-claim, I-claim, I-claim, O] or [O, I-claim, I-claim, I-claim, I-claim, O] can be considered as true positives. We use the popular SeqEval (Nakayama, 2018) framework.
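A minimal sketch of this exact-match evaluation with the seqeval package; inputs are lists of per-token label sequences under the BIO scheme.

```python
from seqeval.metrics import classification_report, f1_score

gold = [["O", "B-claim", "I-claim", "I-claim", "I-claim", "O"]]
pred = [["O", "B-claim", "I-claim", "I-claim", "I-claim", "O"]]

print(f1_score(gold, pred))              # 1.0, since the whole span matches
print(classification_report(gold, pred))
```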
Argument component identification

Table 1 shows the results for argument component identification on the CMV Modes dataset. We compare models based on their micro-averaged F1 over the two component types (claims, premises), and on token-level accuracy. Firstly, we observe a large difference in token-level accuracy as we move from the existing best performing LSTM-based methods, with an accuracy of 0.54, to BERT, with an accuracy of 0.62. Such a difference is expected since pretrained language models like BERT provide a head start on small datasets like CMV Modes. Though the token-level accuracy increases, the micro-averaged F1 for exact component match does not increase much until we start using RoBERTa. Since pretrained Longformer was originally trained from the RoBERTa checkpoint (Beltagy et al., 2020), we can conclude that RoBERTa provides a significant performance gain over BERT, owing to its larger training data and protocol. Longformer trained with our proposed sMLM finetuning clearly outperforms the rest of the models in terms of overall F1 score for component identification. However, the effect of selective MLM is more prominent with thread-level context (i.e., Longformer) compared to comment-level context (i.e., RoBERTa). We can observe that context plays different roles for different component types: while sMLM-finetuned Longformer and RoBERTa perform comparably for claim detection, in the case of premises, access to the complete context helps the Longformer perform better. We can observe a similar trend in the ACI task on the Dr. Inventor dataset (see Table 2). While base Longformer performs comparably to its sMLM counterpart in detecting Background and Own Claims, sMLM provides a 4-point improvement in F1 score for the Data class, which plays a role similar to that of premises towards claims. Intuitively, textual segments expressing claims contain independent signals of opinion that are less dependent on the context; pretrained language models might be able to decipher their roles without additional information, either from the thread-level context (in the case of CMV Modes, specifically) or from the enhanced relation-awareness induced by the sMLM finetuning. However, identifying segments that serve the role of premises to a claim intrinsically depends on the claims as well as on the discourse expressed in a larger context.
Relation type prediction
In Table 3, we present the results for relation type identification on the CMV Modes dataset. We again compare models based on their micro-averaged F1 over all relation types. Firstly, we consider the traditional mean pooling approach. Within this approach, we observe a 3-point improvement for the sMLM pre-trained Longformer on the 80-20 split, while the performance on the 50-50 split remains the same. Furthermore, the prompt-based methods consistently outperform the mean pooling one, irrespective of whether we use the base Longformer or the sMLM pretrained one. Within the prompting approach, we also observe a consistent improvement in performance due to sMLM pretraining on both the 80-20 and 50-50 splits. The gap in micro-F1 scores between sMLM and base Longformer for the 80-20 split increases from 3 points with mean pooling to 5 points with prompting (from 0 to 7 points for the 50-50 split). As we can observe in Figure 4, sMLM-finetuned Longformer admits a very narrow margin of variation on random splits, compared to the base Longformer. Furthermore, sMLM finetuning consistently outperforms domain-adapted finetuning (DA-LF), indicating the unique knowledge transfer achieved by the former.
We hypothesise that this approach works better because this regime models our final RTP task as a task that is more natural (in a sense similar to the (τ, B)-natural tasks of Saunshi et al. (2021)) for a Longformer model pre-trained with sMLM. Intuitively, the model learns to predict discourse markers at masked positions during sMLM pre-training, and during fine-tuning on downstream tasks too, the model will naturally try to predict discourse markers at the masked positions. The discourse markers occurring at the masked positions are directly related to the relation between the two components. For instance, when there is a "but" between two components, we know that the two components more or less present opposing views. Here again, we observe that sMLM does not hurt the base performance under domain shift (Table 4).
We observe that the RoBERTa model performs worse than Base-LF-prompt, which incorporates the entire context of the thread. The effect worsens with a reduced training set size: the RoBERTa model performs worse by 7 points in terms of micro-F1 for the 50-50 split. Furthermore, we observe that the mean pooling strategy, even though it uses context, performs worse (by 4 points on the 80-20 split) than the context-less RoBERTa. Our sMLM pretrained model, though, manages to perform at par with the context-less RoBERTa under the mean pooling strategy. This means that using the right fine-tuning method is essential: the extra context can be fully utilised in Longformer only when the pre-training and fine-tuning tasks are nicely aligned.

Table 5: Performance of base Longformer and sMLM Longformer for predicting segments having some markers in "near" boundaries (5 tokens on either side), and the rest of the segments ("far").
Dependence on the presence of markers
Following the analyses presented by Opitz and Frank (2019), we investigate whether the presence/absence of the markers used in the sMLM step within the vicinity of the components plays any role in the ACI or RTP performance. Since the relation type among component pairs that reside far from each other is less likely to be inferred from the presence of markers in the context, we analyse the percentage of wrong predictions as we vary the distance between two related components in Figure 5. While the error rate does vary proportionally with the distance, we observe that sMLM-LF consistently yields a lower percentage of wrong predictions than the base Longformer as we vary the distance between the related components. This clearly indicates the superior capability, induced by the sMLM finetuning, to decipher the relationship among components not linked by direct context (i.e., not within a sentence or a single comment).
For the ACI task, however, we observe that the absence of markers in the vicinity of the components actually enables better identification, both for the sMLM-finetuned and the base pretrained Longformer (see Table 5).
Conclusion
We presented results for two important tasks in the argument mining pipeline, viz., ACI and RTP. The experiments clearly elucidated the importance of alignment between the downstream and pre-training tasks, and the effect of various ways of modelling the tasks. They also made clear the importance of the entire thread's context in discussion forums, as well as how to incorporate it fruitfully into Transformer-based models.
A Dataset Details
Statistics for the CMV Modes dataset are provided in Table 6. These statistics are obtained after truncating threads to a 4096-token sequence length. During data analysis, we observed that several threads share the same initial post (submission). Hence, we make sure that all threads with the same initial post lie entirely in either the train split or the test split.
For both CMV Modes and the Dr. Inventor Corpus, we only consider contiguous spans of text as single components, as opposed to the labelling in the dataset. Discontiguous spans are re-labelled as separate components, and the model is trained and tested with these new labels instead.
For the CMV Modes dataset, we add an extra "continue" class of relations to denote the relation between two discontiguous spans of the same argumentative component annotated in the data. We group the various relation types annotated in the CMV Modes data into 5 broad classes as follows: support ("continue" and "support" classes), agreement ("agreement" and "understand" classes), direct attack ("attack", "rebuttal attack", "rebuttal" and "disagreement" classes), undercutter attack ("undercutter" and "undercutter attack" classes), and partial ("partial agreement", "partial attack" and "partial disagreement" classes). These groupings are based on the broad annotation guidelines provided for the annotation of the CMV Modes data.
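A minimal sketch of this grouping as a lookup table; the raw label strings follow the list above.

```python
# Map raw CMV Modes relation labels to the 5 broad classes used in this work.
RELATION_GROUPS = {
    "support": ["continue", "support"],
    "agreement": ["agreement", "understand"],
    "direct attack": ["attack", "rebuttal attack", "rebuttal", "disagreement"],
    "undercutter attack": ["undercutter", "undercutter attack"],
    "partial": ["partial agreement", "partial attack", "partial disagreement"],
}
RAW_TO_GROUP = {raw: grp for grp, raws in RELATION_GROUPS.items() for raw in raws}

assert RAW_TO_GROUP["rebuttal"] == "direct attack"
```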
For the Dr. Inventor Corpus, due to the low number of semantically same relations (44) compared to support (4535) and contradicts (564) in the original dataset, we add the label "parts-of-same", which indicates that two discontiguous spans belong to the same argumentative component, to the semantically same category. We also merge sections of papers together to efficiently utilise the 4096-token length of the Longformer model. The detailed statistics after truncation to 4096 sequence length are presented in Table 7.
B Implementation Details
We use the pretrained base version of Longformer (12 layers, hidden size 768). The size of the local attention window was set to the default 512. The maximum sequence length was fixed at 4096. We added the special tokens that we used to the pretrained Longformer tokenizer. For ACI, our models use a CRF layer (we use the implementation of AllenNLP (Gardner et al., 2018)). sMLM training for Longformer-based models was done at the thread level, and for BERT and RoBERTa based models at the comment level. We used mini-batch learning; input threads of approximately similar length were batched together, keeping the total number of tokens per batch fixed to 8,194 for Longformer and 1,024 for BERT and RoBERTa models, and we accumulated gradients over 3 batches. We trained our models for a total of 10 epochs on the sMLM task, saving checkpoints after each epoch. We used the Adam optimizer with a learning rate of 1e-6. For all downstream tasks, we train our models for 30 epochs, again with the Adam optimizer, with a learning rate of 2e-5 as suggested by Mosbach et al. (2021). We use the same batch sizes as in sMLM training and accumulate gradients over 4 batches. We observe that for prompting RTP on CMV-Modes, not making the [USER-i] tokens global leads to better performance; hence we report results for that setting.
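A minimal sketch, under stated assumptions (threads as token-id lists, roughly sorted by length), of the fixed token-budget batching and gradient accumulation described above; the training loop is illustrative.

```python
# Group threads so each batch holds at most `max_tokens` tokens in total.
def token_budget_batches(threads, max_tokens=8194):
    batch, batch_tokens = [], 0
    for t in threads:
        if batch and batch_tokens + len(t) > max_tokens:
            yield batch
            batch, batch_tokens = [], 0
        batch.append(t)
        batch_tokens += len(t)
    if batch:
        yield batch

# Accumulating gradients over 3 batches before each optimizer step:
# for step, batch in enumerate(token_budget_batches(threads)):
#     loss = compute_loss(model, batch) / 3   # compute_loss is hypothetical
#     loss.backward()
#     if (step + 1) % 3 == 0:
#         optimizer.step()
#         optimizer.zero_grad()
```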
We find that sMLM training for 4 epochs is most beneficial for performance on the downstream tasks. Hence, we report results for that checkpoint. Following the suggestions in Reimers and Gurevych (2017), we repeat our experiments on 5 different data splits and present the distributions in the Appendix. For the results at any epoch, the score plotted corresponds to the mean over the 5 runs, and the error regions correspond to the Bessel-corrected standard deviation. The scores reported in the tables for the various models correspond to the average value of the mean of the 5 runs, over the last 5 epochs, for that particular metric.
Figure 1: Token-level claim (red) and premise (blue) annotation of a discussion thread formed by consecutive posts from two users. The second post quotes a span from the first (shown in italics). Highlighted regions signify component boundaries (to demarcate consecutive components of the same kind, as in the fourth post).

Figure 2: Example of selective masking in a sample CMV thread; sMLM finetuning requires a pretrained language model to predict the masked (highlighted in red) tokens (or all the subwords constituting them) based on the context.

Figure 3: "<thread token sequence> [USER-1] said <component-1> [MASK] [MASK] [MASK] [USER-2] said <component-2>". Example outline of prompt-based relation prediction, where we seek to classify the relation between the claims posed by USER-1 and USER-2 (in the example thread, USER-1 argues that compliments on skill are almost meaningless since skill is determined by experience, and USER-2 replies that a compliment would then acknowledge the time and effort put in, implying it is meaningful), highlighted in red and green, respectively; the thread is converted to the prompt input by appending the prompt template. The language model then converts this prompt token sequence into fixed-dimensional vectors, from which the vector corresponding to the position of the masking token is used for relation classification.

Figure 4: Micro-F1 scores for predicting relation types among argument components by Base and sMLM-finetuned Longformer models over the course of training using (a) 50-50 split and (b) 80-20 split. We use 5 different runs on random splits for each model to report the mean (solid lines) and variance.

Figure 5: Percentage of erroneous classifications for RTP for Base-LF-prompt and LF-sMLM-prompt on component pairs at different distances.

Figure 6: On CMV Modes data, sMLM-LF-mp's mean F1 converges to 0.59, compared to 0.56 for Base-LF-mp, in the 80-20 split (a), and to 0.56 in the 50-50 split (b).
Figure 7: Change in sMLM-LF performance on CMV Modes RTP, (a) 80-20 and (b) 50-50 split, when the number of mask tokens in the prompt is changed from 3 to 2. The model with 2 masked tokens converges to 0.70 (0.66) and the mean for 3 masked tokens converges to 0.67 (0.69).

Figure 8: Contextless RoBERTa's mean F1 converges to around 0.599, compared to 0.62 of Base Longformer on RTP.

Figure 9: Contextless RoBERTa's mean F1 converges to around 0.55, compared to 0.617 of Base Longformer on RTP.

Figure 10: QR-BERT converges to an F1 score of 0.59 compared to 0.60 for RoBERTa on the 80-20 split of CMV-Modes for RTP.

Figure 11: QR-BERT converges to an F1 score of 0.54 compared to 0.55 for RoBERTa on the 50-50 split of CMV-Modes for RTP.

Figure 12: Both Base LF and our sMLM pretrained Longformer converge to an F1 of 0.85 with prompt-based RTP on the Dr. Inventor corpus.

Figure 13: The Domain Adapted LF converges to around 0.66 compared to 0.69 for sMLM-LF, on the 50-50 split on CMV-Modes.

Figure 14: The Domain Adapted LF converges to around 0.61 compared to 0.67 for sMLM-LF, on the 80-20 split on CMV-Modes.
Table 2: Results on the Dr. Inventor dataset for argument component identification using sMLM-finetuned and base Longformer models.
Table 3: Relation-type-wise Precision (P), Recall (R) and F1 score on the CMV Modes dataset for various models, spanning the Support, Agreement, Direct Attack, Undercutter, Partial and Overall columns (P, R and F1 each). The highest scores in every column are in bold. The suffixes "mp" and "prompt" indicate that the model was trained using the Mean Pooling and Prompting strategies, respectively. The F1 in the last column is the micro/weighted F1 over all the prediction classes.

Relation type        Base-LF-prompt        sMLM-LF-prompt
                     P     R     F1        P     R     F1
Support              0.91  0.90  0.91      0.89  0.92  0.91
Contradict           0.60  0.60  0.60      0.65  0.55  0.60
Semantically same    0.74  0.77  0.75      0.77  0.75  0.77

Table 4: Relation-type-wise Precision (P), Recall (R) and F1 score on the Dr. Inventor Corpus for prompt-based relation prediction using sMLM and base Longformer models.
Table 6: Statistics for the CMV-Modes dataset.

Component Type    # Tokens
O                 153429
B-BC              3215
I-BC              39574
B-OC              5300
I-OC              74239
B-D               3994
I-D               19058

Relation Types       # of relations
Support              4535
Contradicts          564
Semantically Same    1049

Table 7: Statistics for the Dr. Inventor dataset.
Type         Markers
Opinion      i agree, i disagree, i think, in my opinion, imo, imho
Causation    because, since, as, therefore, if, so, according to, hence, thus, consequently
Rebuttal     in contrast, yet, though, in spite of, but regardless of, however, on the contrary
Factual      moreover, in addition, further to this, in fact, also, firstly, secondly, lastly
Assumption   in the event of, as long as, so long as, provided that, assuming that, given that
Summary      tldr
Misc.        why, where, what, how, when, while

Table 8: Types and examples of different discourse markers used for selective MLM finetuning.
Table 8 provides examples of markers of various kinds that are masked during the sMLM training.
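A minimal sketch, simplifying multi-word markers and subword tokenization, of how such a marker lexicon could drive the selective masking; the lexicon here is an abridged sample from Table 8.

```python
# Mask only tokens that appear in the marker lexicon.
MARKERS = {"because", "since", "therefore", "thus", "but", "however", "yet",
           "moreover", "also", "tldr", "why", "how", "when", "while"}

def selective_mask(tokens, mask_token="[MASK]"):
    return [mask_token if t.lower() in MARKERS else t for t in tokens]

print(selective_mask("I agree because hard work determines skill".split()))
# ['I', 'agree', '[MASK]', 'hard', 'work', 'determines', 'skill']
```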
The source codes and datasets have been submitted separately.
Acknowledgements

The authors would like to thank Chris Hidey and Smaranda Muresan for providing clarifications regarding their work. T. Chakraborty would like to acknowledge the support of the Ramanujan Fellowship, CAI, IIIT-Delhi, and the ihub-Anubhuti-iiitd Foundation set up under the NM-ICPS scheme of the Department of Science and Technology, India.
References

Khalid Al-Khatib, Henning Wachsmuth, Matthias Hagen, Jonas Köhler, and Benno Stein. 2016. Cross-domain mining of argumentative text through distant supervision. In Proceedings of NAACL-HLT 2016, pages 1395-1404, San Diego, California. Association for Computational Linguistics.

Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. CoRR, abs/2004.05150v2.

Tuhin Chakrabarty, Christopher Hidey, Smaranda Muresan, Kathy McKeown, and Alyssa Hwang. 2019. AMPERSAND: Argument mining for PERSuAsive oNline discussions. In Proceedings of EMNLP-IJCNLP 2019, pages 2933-2943, Hong Kong, China. Association for Computational Linguistics.

Jonathan P. Chang, Caleb Chiam, Liye Fu, Andrew Z. Wang, Justine Zhang, and Cristian Danescu-Niculescu-Mizil. 2020. ConvoKit: A toolkit for the analysis of conversations.

Artem Chernodub, Oleksiy Oliynyk, Philipp Heidenreich, Alexander Bondarenko, Matthias Hagen, Chris Biemann, and Alexander Panchenko. 2019. TARGER: Neural argument mining at your fingertips. In Proceedings of ACL 2019: System Demonstrations, pages 195-200, Florence, Italy. Association for Computational Linguistics.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT 2019, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Subhabrata Dutta, Dipankar Das, and Tanmoy Chakraborty. 2020. Changing views: Persuasion modeling and argument extraction from online discussions. Information Processing & Management, 57(2):102085.

Judith Eckle-Kohler, Roland Kluge, and Iryna Gurevych. 2015. On the role of discourse markers for discriminating claims and premises in argumentative discourse. In Proceedings of EMNLP 2015, pages 2236-2242, Lisbon, Portugal. Association for Computational Linguistics.

Ryo Egawa, Gaku Morio, and Katsuhide Fujita. 2019. Annotating and analyzing semantic role of elementary units and relations in online persuasive arguments. In Proceedings of ACL 2019: Student Research Workshop, pages 422-428, Florence, Italy. Association for Computational Linguistics.

Steffen Eger, Johannes Daxenberger, and Iryna Gurevych. 2017. Neural end-to-end learning for computational argumentation mining. In Proceedings of ACL 2017 (Volume 1: Long Papers), pages 11-22, Vancouver, Canada. Association for Computational Linguistics.

Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson Liu, Matthew Peters, Michael Schmitz, and Luke Zettlemoyer. 2018. AllenNLP: A deep semantic natural language processing platform.

Debanjan Ghosh, Smaranda Muresan, Nina Wacholder, Mark Aakhus, and Matthew Mitsui. 2014. Analyzing argumentative discourse units in online interactions. In Proceedings of the First Workshop on Argumentation Mining, pages 39-48, Baltimore, Maryland. Association for Computational Linguistics.

Ivan Habernal and Iryna Gurevych. 2015. Exploiting debate portals for semi-supervised argumentation mining in user-generated web discourse. In Proceedings of EMNLP 2015, pages 2127-2137, Lisbon, Portugal. Association for Computational Linguistics.

Ivan Habernal and Iryna Gurevych. 2017. Argumentation mining in user-generated web discourse. Computational Linguistics, 43(1):125-179.

Christopher Hidey, Elena Musi, Alyssa Hwang, Smaranda Muresan, and Kathy McKeown. 2017. Analyzing the semantic types of claims and premises in an online persuasive forum. In Proceedings of the 4th Workshop on Argument Mining, pages 11-21, Copenhagen, Denmark. Association for Computational Linguistics.

Anne Lauscher, Goran Glavaš, and Simone Paolo Ponzetto. 2018. An argument-annotated corpus of scientific publications. In Proceedings of the 5th Workshop on Argument Mining, pages 40-46, Brussels, Belgium. Association for Computational Linguistics.

John Lawrence and Chris Reed. 2019. Argument mining: A survey. Computational Linguistics, 45(4):765-818.

Marco Lippi and Paolo Torroni. 2015. Argument mining: A machine learning perspective. In Theory and Applications of Formal Argumentation, pages 163-176, Cham. Springer International Publishing.

Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach.

Anastasios Lytos, Thomas Lagkas, Panagiotis Sarigiannidis, and Kalina Bontcheva. 2019. The evolution of argumentation mining: From models to social media and emerging tools. Information Processing & Management, 56(6):102055.

Tobias Mayer, Elena Cabrio, and Serena Villata. 2020. Transformer-based argument mining for healthcare applications. In ECAI 2020 - 24th European Conference on Artificial Intelligence, Santiago de Compostela, Spain, pages 2108-2115. IOS Press.

Marius Mosbach, Maksym Andriushchenko, and Dietrich Klakow. 2021. On the stability of fine-tuning BERT: Misconceptions, explanations, and strong baselines.

Hiroki Nakayama. 2018. seqeval: A Python framework for sequence labeling evaluation. Software available from https://github.com/chakki-works/seqeval.

Allen Nie, Erin Bennett, and Noah Goodman. 2019. DisSent: Learning sentence representations from explicit discourse relations. In Proceedings of ACL 2019, pages 4497-4510, Florence, Italy. Association for Computational Linguistics.

Juri Opitz and Anette Frank. 2019. Dissecting content and context in argumentative relation analysis. In Proceedings of the 6th Workshop on Argument Mining, pages 25-34, Florence, Italy. Association for Computational Linguistics.

Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.

Nils Reimers and Iryna Gurevych. 2017. Reporting score distributions makes a difference: Performance study of LSTM-networks for sequence tagging.

Nikunj Saunshi, Sadhika Malladi, and Sanjeev Arora. 2021. A mathematical exploration of why language models help solve downstream tasks.

Claudia Schulz, Steffen Eger, Johannes Daxenberger, Tobias Kahse, and Iryna Gurevych. 2018. Multi-task learning for argumentation mining in low-resource settings. In Proceedings of NAACL-HLT 2018, Volume 2 (Short Papers), pages 35-41, New Orleans, Louisiana. Association for Computational Linguistics.

Christian Stab and Iryna Gurevych. 2017. Parsing argumentation structures in persuasive essays. Computational Linguistics, 43(3):619-659.

Chenhao Tan, Vlad Niculae, Cristian Danescu-Niculescu-Mizil, and Lillian Lee. 2016. Winning arguments: Interaction dynamics and persuasion strategies in good-faith online discussions. In Proceedings of the 25th International Conference on World Wide Web, WWW '16, pages 613-624, Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee.

Yaqing Wang, Subhabrata Mukherjee, Haoda Chu, Yuancheng Tu, Ming Wu, Jing Gao, and Ahmed Hassan Awadallah. 2020. Adaptive self-training for few-shot neural sequence labeling. CoRR, abs/2010.03680v2.
UNSUPERVISED LEARNING OF SENTENCE REPRESENTATIONS USING SEQUENCE CONSISTENCY

Siddhartha Brahma
brahma@us.ibm.com
IBM Research, Almaden, USA

29 Sep 2018
Submitted as a conference paper to ICLR 2019
Computing universal distributed representations of sentences is a fundamental task in natural language processing. We propose a simple, yet surprisingly powerful unsupervised method to learn such representations by enforcing consistency constraints on sequences of tokens. We consider two classes of such constraints: sequences that form a sentence, and pairs of sequences that form a sentence when merged. We learn a sentence encoder by training it to distinguish between consistent and inconsistent examples. Extensive evaluation on several transfer learning and linguistic probing tasks shows improved performance over strong unsupervised and supervised baselines, substantially surpassing them in several cases.
INTRODUCTION
In natural language processing, the use of distributed representations has become standard through the effective use of word embeddings. In a wide range of NLP tasks, it is beneficial to initialize the word embeddings with ones learnt from large text corpora, like word2vec (Mikolov et al., 2013) or GloVe (Pennington et al., 2014), and tune them as part of a target task, e.g. text classification. It is therefore natural to ask whether such standardized representations of whole sentences, which can be widely used in downstream tasks, are possible.
There are two classes of approaches to this problem. Taking a cue from word2vec, an unsupervised learning approach is taken by SkipThought (Kiros et al., 2015), FastSent (Hill et al., 2016) and QuickThoughts (Logeswaran & Lee, 2018), exploiting the closeness of adjacent sentences in a text corpus. More recently, the work of Conneau et al. (2017) takes a supervised learning approach. They train a sentence encoder on large scale natural language inference datasets (Bowman et al., 2015; Williams et al., 2018) and show that the learned encoding transfers well to a set of transfer tasks. This is reminiscent of the approach taken by ImageNet (Deng et al., 2009) in the computer vision community. In Subramanian et al. (2018), the authors train a sentence encoder on multiple tasks to get improved performance.
In this paper, we take a slightly different unsupervised approach to learning sentence representations. We define a sequence of tokens to be consistent if they form a valid sentence. While defining consistency precisely is difficult, we generate approximately inconsistent sequences of tokens by slightly perturbing consistent ones. We extend the notion of consistency to pairs of sequences: two sequences are defined to be consistent if they can be merged to form a consistent sequence. We then train a sentence representation encoder to discriminate between consistent and inconsistent sequences or pairs of sequences. Note that external supervised labels are not required for training, as we generate our own labels using the notion of consistency.
RELATED WORK
Learning general sentence encoders is a fundamental problem in NLP, and there is a long list of works that address this problem. To begin with, an untrained BiLSTM model with max pooling of the intermediate states performs fairly well on several transfer and linguistic probing tasks. The next set of simple models consists of a bag-of-words approach using learned word embeddings like GloVe (Pennington et al., 2014) or FastText (Joulin et al., 2017), where a simple average of the word embeddings in a sentence defines the sentence embedding. Although very fast, these approaches are limited by the dimensions of the word embeddings and do not take into account the order of the words.
Due to the availability of practically unlimited textual data, learning sentence encoders using unsupervised learning is an attractive proposition. The SkipThought model of Kiros et al. (2015) learns sentence encoders using an encoder-decoder architecture. Exploiting the relatedness inherent in adjacent sentences, the model is trained by using the encoder to encode a particular sentence and then using the decoder to decode words in adjacent sentences. This approach is directly inspired by a similar objective for learning word embeddings, as in word2vec (Mikolov et al., 2013). The bag-of-words approach is developed further in the FastSent model (Hill et al., 2016), which uses a bag-of-words representation of a sentence to predict words in adjacent sentences. In work by Arora et al. (2017), it is shown that a simple post-processing of the averaged word embeddings can perform comparably or better than SkipThought-like objectives. In more recent work, Logeswaran & Lee (2018) propose QuickThoughts, which uses a form of discriminative training on encodings of sentences, biasing the encodings of adjacent sentences to be closer to each other than those of non-adjacent sentences, where closeness is defined by the dot product. The work of Pagliardini et al. (2018), where the authors use n-gram features for unsupervised learning, is also relevant. The other notable unsupervised approach is that of using a denoising autoencoder (Hill et al., 2016).
More recently, supervised learning has been used to learn better sentence encoders. Conneau et al. (2017) use the SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2018) corpora to train a sentence encoder on the natural language inference task that performs well on several transfer tasks. Another domain where large datasets for supervised training are available is machine translation, and the work of McCann et al. (2017) exploits this by training sentence encoders on the machine translation task. Finally, Subramanian et al. (2018) combine several supervised and unsupervised objectives in a multi-task framework to obtain some of the best results on learning general sentence representations.
Our approach is based on automatically generating (possibly noisy) training data by perturbing sentences. Such an approach was used by Wagner et al. (2009) to train a classifier to judge the grammaticality of sentences. The ungrammatical sentences were generated by, among other things, dropping and inserting words. Recent work by Warstadt et al. (2018) extends this approach by using neural network classifiers. Finally, in parallel to our work, a recent report (Ranjan et al., 2018) uses word dropping and word permutation to generate fake sentences and learn sentence encoders. Our work is substantially more general and exhaustive.
CONSISTENT SEQUENCES AND DISCRIMINATIVE TRAINING
Consider S = {w_1, w_2, ..., w_n}, an ordered sequence of n tokens. We define this sequence to be consistent if the tokens form a valid sentence. Let E be an encoder that encodes any sequence of tokens into a fixed length distributed representation. Our goal is to train E to discriminate between consistent sequences and inconsistent ones. We argue that such an encoder will have to encapsulate many structural properties of sentences, thereby resulting in a good sentence representation.
Whether a sequence of tokens S is consistent is a notoriously hard problem to solve. We take a slightly different approach. We start from a consistent sequence (e.g. from some corpus) and introduce perturbations to generate sequences S ′ that are not consistent. We take inspiration from the standard operations in string edit distance and consider the following variations for generating S ′ .
• ConsSent-D(k): We pick k random tokens in S and delete them.
• ConsSent-P(k): We pick k random tokens in S and permute them randomly (avoiding the identity permutation).
• ConsSent-I(k): We pick k random tokens not from S and insert them at random positions in S.
• ConsSent-R(k): We pick k random tokens in S and replace them with other random tokens not in S.

Table 1: Toy positive and negative training examples for each method. For the pair methods (ConsSent-C, ConsSent-N), each cell shows the two sequences of the pair separated by "/".

Model           Positive Example                        Negative Example
ConsSent-D(1)   Maya goes to school .                   Maya goes to .
ConsSent-P(2)   Maya goes to school .                   Maya to goes school .
ConsSent-I(1)   Maya goes to school .                   Maya goes are to school .
ConsSent-R(1)   Maya goes to school .                   Maya doesn't to school .
ConsSent-C(2)   Maya goes to school . / She loves it .  Maya it . / She loves goes to school .
ConsSent-N(2)   Maya goes to school . / She loves it .  Maya it school . / She loves goes to .
It is important to note that in some cases S′ may itself form a valid sentence and hence violate the definition of consistency. We do not address this issue and assume that such cases will be relatively few and will not influence the encoder substantially. Also, with larger values of k, the chance of this happening goes down. We train E to distinguish between S and S′ using a binary classifier.
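A minimal sketch, not the authors' code, of the four single-sequence perturbations; the vocabulary used for insertions and replacements is an assumption.

```python
import random

def delete_k(tokens, k):                      # ConsSent-D(k)
    drop = set(random.sample(range(len(tokens)), k))
    return [t for i, t in enumerate(tokens) if i not in drop]

def permute_k(tokens, k):                     # ConsSent-P(k), assumes k >= 2
    idx = random.sample(range(len(tokens)), k)
    shuffled = idx[:]
    while shuffled == idx:                    # avoid the identity permutation
        random.shuffle(shuffled)
    out = tokens[:]
    for src, dst in zip(idx, shuffled):
        out[dst] = tokens[src]
    return out

def insert_k(tokens, k, vocab):               # ConsSent-I(k)
    candidates = [w for w in vocab if w not in tokens]
    out = tokens[:]
    for _ in range(k):
        out.insert(random.randrange(len(out) + 1), random.choice(candidates))
    return out

def replace_k(tokens, k, vocab):              # ConsSent-R(k)
    candidates = [w for w in vocab if w not in tokens]
    out = tokens[:]
    for i in random.sample(range(len(tokens)), k):
        out[i] = random.choice(candidates)
    return out

print(delete_k("Maya goes to school .".split(), 1))
```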
We extend the definition of consistency to pairs of sequences. Given two sequences, we define them to be consistent if they can be merged into a consistent sequence without changing the order of the tokens. Similar to the above definitions, we generate consistent and inconsistent pairs by starting from a consistent sequence and splitting it into two parts. We consider the following variations.
• ConsSent-N(k): If n is the number of tokens in a sequence S_1, let S^1_1 be a random subsequence of S_1, and let S^2_1 = S_1 \ S^1_1 be the complementary subsequence. For a consistent sequence S_1, S^1_1 and S^2_1 form a consistent pair of sequences. Let S^1_2 and S^2_2 be a partition of a different consistent sequence S_2, such that S^2_1 ≠ S^2_2. Then S^1_1 and S^2_2 form an inconsistent pair of sequences, by virtue of the fact that they belong to two different consistent sequences. We can vary the complexity of the encoder E by training it to discriminate between a consistent pair (S^1_1, S^2_1) and k − 1 other inconsistent pairs (S^1_1, S^2_2), (S^1_1, S^2_3), ..., (S^1_1, S^2_k) for different values of k.
It is possible to pose the task of discriminating the consistent pair (S^1_1, S^2_1) from the k − 1 inconsistent pairs as a classification problem, with a classification layer applied to encodings of the pairs. But this introduces additional parameters, which is avoidable for sentence pairs. Instead, we train E by enforcing the constraint that

E(S^1_1) · E(S^2_1) ≥ E(S^1_1) · E(S^2_j)   for all j ∈ {2, ..., k}

In other words, we train the encoder to place the representations of consistent pairs of sequences closer to each other, in terms of dot product, than those of inconsistent pairs (one way to implement this is shown in the sketch after this list). A similar procedure was also used for training sentence representations in Logeswaran & Lee (2018), but with whole sentences appearing adjacent to each other in a larger body of text. Note that this kind of training is not possible for classifying single sequences.
• ConsSent-C(k): We generate S^1_1 and S^2_1 from S_1 by partitioning it at a random point. Thus both S^1_1 and S^2_1 are contiguous subsequences of S_1. If S_1 is consistent, then the two partitioned sequences form a consistent pair. We generate inconsistent pairs by pairing S^1_1 with k − 1 other S^2_j originating from the partitions of different consistent sequences S_j.
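A minimal PyTorch sketch of the ConsSent-N style random partition and one way to enforce the dot-product constraint above, using the other pairs in a minibatch as negatives and a cross-entropy loss over the dot-product scores; the encoder interface is an assumption.

```python
import random
import torch
import torch.nn.functional as F

def random_partition(tokens):
    """ConsSent-N style split: each token goes to part 1 with probability 0.5;
    the rest form the complementary subsequence (order preserved)."""
    mask = [random.random() < 0.5 for _ in tokens]
    part1 = [t for t, m in zip(tokens, mask) if m]
    part2 = [t for t, m in zip(tokens, mask) if not m]
    return part1, part2

def pair_loss(enc1, enc2):
    """enc1, enc2: (batch, dim) encodings of first/second parts, where row i of
    both tensors comes from the same source sentence. Scoring every first part
    against every second part by dot product and taking cross-entropy with the
    diagonal as target pushes consistent pairs above the in-batch negatives."""
    scores = enc1 @ enc2.t()                # (batch, batch) dot products
    targets = torch.arange(scores.size(0))  # diagonal entries are consistent
    return F.cross_entropy(scores, targets)
```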
In Table 1 we show toy positive and negative training examples for each of these methods. The choice of the encoder E is important to generate good sentence representations. Following Conneau et al. (2017), we use a bidirectional LSTM to process a sequence of tokens and take a max-pool of the intermediate hidden states to compute a distributed representation.
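A minimal PyTorch sketch of this BiLSTM-Max encoder; the embedding setup is an illustrative assumption, and padding masks are omitted for brevity.

```python
import torch
import torch.nn as nn

class BiLSTMMaxEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=2048):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) -> states: (batch, seq_len, 2 * hidden)
        states, _ = self.lstm(self.emb(token_ids))
        # Max-pooling over time gives a fixed-length (batch, 4096) representation.
        return states.max(dim=1).values

enc = BiLSTMMaxEncoder(vocab_size=50000)
reps = enc(torch.randint(0, 50000, (8, 20)))  # (8, 4096)
```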
EXPERIMENTS
We use the Billionword corpus (Chelba et al., 2014) to train our models. We use the first 50 shards of the training set (approximately 15 million sentences) for training and 50000 sentences from the validation set for validation. For ConsSent-D(k), ConsSent-P(k), ConsSent-I(k) and ConsSent-R(k), for each sentence we delete, permute, insert or replace k tokens with a probability of 0.5. This produces roughly an equal number of consistent and inconsistent sequences. For ConsSent-{D,I,R}(k) we sweep over k ∈ {1, 2, 3, 4, 5}, and for ConsSent-P(k) we sweep over k ∈ {2, 3, 4, 5, 6}, training a total of 20 encoders.
In the case of ConsSent-N(k), for each consistent sequence S_1, we partition it into two subsequences by randomly picking each token to be in the first part S^1_1 with probability 0.5. The remaining tokens go into S^2_1. We pick k − 1 other random consistent sequences S_2, ..., S_k and do the same. For ConsSent-C(k), for each consistent sequence S_1, we pick i ∈ {2, ..., n − 1} uniformly at random and partition the sequence at i to produce S^1_1 and S^2_1. The remainder of the training procedure is the same as for ConsSent-N(k). For both these models, we sweep over k ∈ {2, 3, 4, 5, 6} to train a total of 10 encoders. Overall, we train 30 encoders for the six methods.
We train the BiLSTM-Max encoder E with a hidden dimension of 2048, resulting in 4096 dimensional sentence representations. For ConsSent-D(k), ConsSent-P(k), ConsSent-I(k) and ConsSent-R(k), the sentence representations are passed through two linear layers of size 512 before the classification Softmax.
For ConsSent-N(k) and ConsSent-C(k), we pair S^1_1 with k − 1 random S^2_j from within the same minibatch to generate the inconsistent pairs. For optimization, we use SGD with an initial learning rate of 0.1, which is decayed by 0.99 after every epoch, or by 0.2 if there is a drop in the validation accuracy. Gradients are clipped to a maximum norm of 5.0 and we train for a maximum of 20 epochs.
We evaluate the sentence encodings using the SentEval benchmark Conneau & Kiela (2018). This benchmark consists of two sets of tasks related to transfer learning and predicting linguistic properties of sentences. In the first set, there are 6 text classification tasks (MR, CR, SUBJ, MPQA, SST, TREC), one task on paraphrase detection (MRPC) and one on entailment classification (SICK-E). All these 8 tasks have accuracy as their performance measure. There are two other tasks on estimating the semantic relatedness of two sentences (SICK-R and STSB) for which the performance measure is Pearson correlation (expressed as percentage) between the estimated similarity scores and ground truth scores. For each of these datasets, the learned ConsSent sequence encoders are used to produce representations of sentences. These representations are then used for classification or score estimation using a logistic regression layer. We also use a L2 regularizer on the weights of the logistic layer whose coefficient is tuned using the validation sets. The goal of testing ConsSent on these tasks is to evaluate the quality of the encoders as general sentence representation generators which can be used in a wide variety of downstream tasks with limited training data.
The second set of tasks probes for 10 different linguistic properties of sentences. These include tasks like predicting which of a set of target words appears in a sentence (WordContent), the number of the subject in the main clause, i.e., whether the subject is singular or plural (SubjNum), the depth of the syntactic tree (TreeDepth), and the length of the sentence quantized into a few bins (SentLen). Some of these properties are syntactic in nature, while some require a deeper understanding of the semantics of a sentence. The goal of testing ConsSent on these tasks is to evaluate how much linguistic information is captured by the encoders. For each of the tasks, the representations produced by the ConsSent encoders are input to a classifier with a linear layer, followed by a Sigmoid, followed by a classification layer. We tune the classifier on the validation sets by varying the dimension of the linear layer in {50, 100, 200} and the dropout before the classification layer in {0, 0.1, 0.2}. For more details on these tasks, please refer to Chelba et al. (2014).
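A minimal sketch, assuming precomputed sentence encodings, of the transfer evaluation protocol described earlier: an L2-regularized logistic regression on frozen representations, with the regularization coefficient tuned on the validation set.

```python
from sklearn.linear_model import LogisticRegression

def evaluate_transfer(train_X, train_y, val_X, val_y, test_X, test_y):
    best_acc, best_clf = -1.0, None
    for c in [2.0 ** p for p in range(-5, 6)]:  # C is the inverse L2 strength
        clf = LogisticRegression(C=c, max_iter=1000).fit(train_X, train_y)
        acc = clf.score(val_X, val_y)
        if acc > best_acc:
            best_acc, best_clf = acc, clf
    return best_clf.score(test_X, test_y)
```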
RESULTS ON TRANSFER TASKS
In Table 2 we present results on each of the transfer tasks in SentEval. In addition to the accuracy and correlation figures, we also report an average of all the 10 scores in the last column. We only show the two best performing models (out of 5) for each of the six methods. Certain trends can be established from these numbers. ConsSent-N(3), which discriminates between pairs of sequences, performs the best on average. Among the methods that classify single sequences, ConsSent-R(2) and ConsSent-R(3) perform the best, second only to ConsSent-N. ConsSent-C is dominated by ConsSent-N in most cases, while ConsSent-D and ConsSent-P perform the worst. Notably, all the methods perform better than SkipThought-LN (Kiros et al., 2015) on average and on most individual tasks. Note that the different methods use sentence representations of varying length, which may be smaller than our 4096-dimensional representations.

Table 2: Performance of ConsSent on the transfer tasks in the SentEval benchmark. SkipThought is described in Kiros et al. (2015), QuickThoughts in Logeswaran & Lee (2018), MultiTask in Subramanian et al. (2018) and InferSent in Conneau et al. (2017). SK-R and SK-E stand for SICK-R and SICK-E, respectively. AVG is a simple average over all the tasks. Bold indicates the best result among our models and underline indicates the best overall for unsupervised tasks.
Looking at the individual tasks, both ConsSent-D(2) and ConsSent-I(2) achieve an accuracy of 90.3% on MPQA, which is better than other strongly performing unsupervised representation learning algorithms, including QuickThoughts (Logeswaran & Lee (2018)), which uses an order of magnitude more data to train. Similarly, ConsSent-N(3) achieves an accuracy of 77.3% on the MRPC dataset, substantially better than QuickThoughts. This model also performs very well on STSB, achieving the best score of 75.8% among unsupervised methods.
We compare the relative performance of the encoders with varying values of k for three of the transfer tasks, TREC, MRPC and STSB, in Fig. 1. For TREC and MRPC, which are classification tasks, there is roughly an inverted-V-shaped trend, with some intermediate value of k giving the best results for ConsSent-D, P, I and R. Note that for smaller values of k, the encoders are exposed to negative examples that are relatively similar to the positive ones, and hence the discriminative training can be noisy. On the other hand, as k increases, the encoders may latch onto superficial patterns and hence not generalize well. For ConsSent-C and N the trends are less clear for TREC but are closer to an inverted V for MRPC. For the semantic scoring task of STSB, the trend lines show no clear pattern.
RESULTS ON LINGUISTIC PROBING TASKS
We present results on the 10 linguistic probing tasks in Table 3, where we also compare with other unsupervised methods like a sequence autoencoder and SkipThought. All the encoders perform surprisingly well on most of the tasks, with the best ones, ConsSent-D(5) and ConsSent-P(3), attaining an average score of 80.5%, which is 7.5% more than the score achieved by SkipThought. For these tasks, encoders trained on single sequences perform better than the ones trained using pairs of sequences. Notably, the performance is significantly better than even the supervised baseline results trained on machine translation and natural language entailment in Conneau et al. (2018). The performance of a third method, Seq2Tree, using a gated convolutional network (GCN) is however significantly better than the ConsSent encoders (except on the WordContent task). We have not experimented with a GCN encoder and it is possible that such an encoder may give better results.
COMPARISON ACROSS TRANSFER AND LINGUISTIC PROBING TASKS
In this section, we compare the ConsSent models across the two sets of tasks. In Fig. 3 we plot the average performance of all the models on the transfer tasks and in Fig. 4 we plot the average performance of all the models on the linguistic probing tasks. From these two plots, it is clear that ConsSent-C and ConsSent-N are significantly better at transfer tasks than at encoding linguistic knowledge. On the other hand, ConsSent-D and ConsSent-P are better at encoding linguistic information than at doing well on transfer tasks. The models that perform best for all the tasks on average come from ConsSent-I and ConsSent-R. In fact, if we take an average over all 20 tasks, the best model is ConsSent-R(2). We show the performance of the best model on the transfer tasks ConsSent-N(3), the best model on the linguistic tasks ConsSent-D(5) and the best model over all the tasks ConsSent-R(2) in Fig. 5.
In Table 1 we show toy positive and negative training examples for each of these methods. The choice of the encoder E is important to generate good sentence representations. Following Conneau et al. (2017), we use a bidirectional LSTM to process a sequence of tokens and take a max-pool of the intermediate hidden states to compute a distributed representation.
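A sketch of such an encoder in PyTorch (the vocabulary size and embedding dimension are placeholders; a hidden size of 2048 per direction yields the 4096-dimensional representations mentioned above):

    import torch
    import torch.nn as nn

    class MaxPoolBiLSTMEncoder(nn.Module):
        """Bidirectional LSTM over token embeddings, max-pooled over time."""
        def __init__(self, vocab_size=50000, emb_dim=300, hidden_dim=2048):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.lstm = nn.LSTM(emb_dim, hidden_dim,
                                bidirectional=True, batch_first=True)

        def forward(self, tokens):
            # tokens: (batch, seq_len) integer ids
            states, _ = self.lstm(self.embed(tokens))   # (batch, seq_len, 2*hidden)
            return states.max(dim=1).values             # max-pool -> (batch, 4096)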
Figure 1: Performance of ConsSent models on the TREC, MRPC and STSB tasks. The y-values are in percentages. The x-values are k for ConsSent-{D,I,R}(k) and k − 1 for ConsSent-{C,N,P}(k).
Figure 2: Performance of ConsSent models on the WordContent, BigramShift and SubjNum tasks. The y-values are in percentages. The x-values are k for ConsSent-{D,I,R}(k) and k − 1 for ConsSent-{C,N,P}(k).
Figure 3: Average performance of all models for the transfer tasks. The bars in a group represent increasing values of k.

Figure 4: Average performance of all models for the linguistic probing tasks. The bars in a group represent increasing values of k.
Figure 5: Performance of the best models on transfer tasks (ConsSent-N(3)), linguistic probing tasks (ConsSent-D(5)) and overall (ConsSent-R(2)).
Table 1: Example sequences for the six ConsSent models.
Table 3: Performance of ConsSent on the linguistic probing tasks in the SentEval benchmark. The other results have been taken from Conneau et al. (2018).
Sanjeev Arora, Yingyu Liang, and Tengyu Ma. A simple but tough-to-beat baseline for sentence embeddings. In ICLR, 2017. URL http://arxiv.org/abs/1607.03474.

Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. A large annotated corpus for learning natural language inference. In EMNLP, 2015. URL https://arxiv.org/abs/1508.05326.

Siddhartha Brahma. Suffix bidirectional long short-term memory. CoRR, abs/1805.07340, 2018.

Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, and Phillipp Koehn. One billion word benchmark for measuring progress in statistical language modeling. In INTERSPEECH, 2014.

Alexis Conneau and Douwe Kiela. SentEval: An evaluation toolkit for universal sentence representations. CoRR, abs/1803.05449, 2018.

Alexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, and Antoine Bordes. Supervised learning of universal sentence representations from natural language inference data. In EMNLP, 2017. URL http://aclweb.org/anthology/D17-1070.

Alexis Conneau, Germán Kruszewski, Guillaume Lample, Loïc Barrault, and Marco Baroni. What you can cram into a single vector: Probing sentence embeddings for linguistic properties. In ACL, 2018.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Fei-Fei Li. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.

Felix Hill, Kyunghyun Cho, and Anna Korhonen. Learning distributed representations of sentences from unlabelled data. In HLT-NAACL, 2016. URL http://www.aclweb.org/anthology/N16-1162.

Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. Bag of tricks for efficient text classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pp. 427-431, 2017.

Ryan Kiros, Yukun Zhu, Ruslan R. Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Skip-thought vectors. In NIPS, 2015. URL https://arxiv.org/abs/1506.06726.

Lajanugen Logeswaran and Honglak Lee. An efficient framework for learning sentence representations. In ICLR, 2018. URL http://arxiv.org/abs/1803.02893.

Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. Learned in translation: Contextualized word vectors. In NIPS, 2017. URL http://arxiv.org/abs/1708.00107.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. In NIPS, 2013.

Allen Nie, Erin D. Bennett, and Noah D. Goodman. DisSent: Sentence representation learning from explicit discourse relations. 2018. URL http://arxiv.org/abs/1710.04334.

Matteo Pagliardini, Prakhar Gupta, and Martin Jaggi. Unsupervised learning of sentence embeddings using compositional n-gram features. In NAACL, 2018. URL https://arxiv.org/abs/1703.02507.

Jeffrey Pennington, Richard Socher, and Christopher D. Manning. GloVe: Global vectors for word representation. In EMNLP, 2014. URL https://www.aclweb.org/anthology/D14-1162.

Christian S. Perone, Roberto Silveira, and Thomas S. Paula. Evaluation of sentence embeddings in downstream and linguistic probing tasks. CoRR, abs/1806.06259, 2018.

Viresh Ranjan, Heeyoung Kwon, Niranjan Balasubramanian, and Minh Hoai. Fake sentence detection as a training task for sentence encoding. arXiv preprint arXiv:1808.03840, 2018.

Sandeep Subramanian, Adam Trischler, Yoshua Bengio, and Christopher J. Pal. Learning general purpose distributed sentence representations via large scale multi-task learning. In ICLR, 2018. URL http://arxiv.org/abs/1804.00079.

Joachim Wagner, Jennifer Foster, and Josef van Genabith. Judging grammaticality: Experiments in sentence classification. CALICO Journal, 26(3):474-490, 2009.

Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. Neural network acceptability judgments. CoRR, abs/1805.12471, 2018.

Adina Williams, Nikita Nangia, and Samuel R. Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In NAACL, 2018. URL http://arxiv.org/abs/1704.05426.
TALK THE WALK: NAVIGATING GRIDS IN NEW YORK CITY THROUGH GROUNDED DIALOGUE
Harm De Vries (MILA, Université de Montréal; devries@iro.umontreal.ca), Kurt Shuster (Facebook AI Research), Dhruv Batra (Georgia Institute of Technology; Facebook AI Research), Devi Parikh (Georgia Institute of Technology; Facebook AI Research), Jason Weston (Facebook AI Research), Douwe Kiela (Facebook AI Research; dkiela@fb.com)

arXiv:1807.03367

ABSTRACT
We introduce "Talk The Walk", the first large-scale dialogue dataset grounded in action and perception. The task involves two agents (a "guide" and a "tourist") that communicate via natural language in order to achieve a common goal: having the tourist navigate to a given target location. The task and dataset, which are described in detail, are challenging and their full solution is an open problem that we pose to the community. We (i) focus on the task of tourist localization and develop the novel Masked Attention for Spatial Convolutions (MASC) mechanism that allows for grounding tourist utterances into the guide's map, (ii) show it yields significant improvements for both emergent and natural language communication, and (iii) using this method, we establish non-trivial baselines on the full task.
Figure 1: Example of the Talk The Walk task: two agents, a "tourist" and a "guide", interact with each other via natural language in order to have the tourist navigate towards the correct location. The guide has access to a map and knows the target location but not the tourist location, while the tourist does not have a map and is tasked with navigating a 360-degree street view environment.

...neighborhoods in New York City (NYC). As the main focus of our task is on interactive dialogue, we limit the difficulty of the control problem by having the tourist navigate a 2D grid via discrete actions (turning left, turning right and moving forward). Our street view environment was integrated into ParlAI (Miller et al., 2017) and used to collect a large-scale dataset on Mechanical Turk involving human perception, action and communication.
We argue that for artificial agents to solve this challenging problem, some fundamental architecture designs are missing, and our hope is that this task motivates their innovation. To that end, we focus on the task of localization and develop the novel Masked Attention for Spatial Convolutions (MASC) mechanism. To model the interaction between language and action, this architecture repeatedly conditions the spatial dimensions of a convolution on the communicated message sequence.
This work makes the following contributions: 1) We present the first large-scale dialogue dataset grounded in action and perception; 2) We introduce the MASC architecture for localization and show it yields improvements for both emergent and natural language; 3) Using localization models, we establish initial baselines on the full task; 4) We show that our best model exceeds human performance under the assumption of "perfect perception" and with a learned emergent communication protocol, and sets a non-trivial baseline with natural language.
TALK THE WALK
We create a perceptual environment by manually capturing several neighborhoods of New York City (NYC) with a 360-degree camera. Most parts of the city are grid-like and uniform, which makes them well-suited for obtaining a 2D grid. For Talk The Walk, we capture parts of Hell's Kitchen, East Village, the Financial District, Williamsburg and the Upper East Side; see Figure 5 in Appendix 13 for their respective locations within NYC. For each neighborhood, we choose an approximately 5x5 grid and capture a 360-degree view on all four corners of each intersection, leading to a grid size of roughly 10x10 per neighborhood.
The tourist's location is given as a tuple (x, y, o), where x, y are the coordinates and o signifies the orientation (north, east, south or west). The tourist can take three actions: turn left, turn right and go forward. For moving forward, we add (0, 1), (1, 0), (0, −1), (−1, 0) to the x, y coordinates for the respective orientations. Upon a turning action, the orientation is updated by o = (o + d) mod 4 where d = −1 for left and d = 1 for right. If the tourist moves outside the grid, we issue a warning that they cannot go in that direction and do not update the location. Moreover, tourists are shown different types of transitions: a short transition for actions that bring the tourist to a different corner of the same intersection; and a longer transition for actions that bring them to a new intersection.
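These dynamics can be written down directly (a sketch; the string action names, the 0-3 encoding of orientations and the grid bounds of a 4x4 task grid are our own conventions):

    # Orientation encoding: 0 = north, 1 = east, 2 = south, 3 = west.
    FORWARD_DELTA = {0: (0, 1), 1: (1, 0), 2: (0, -1), 3: (-1, 0)}

    def step(x, y, o, action, x_min=0, y_min=0, x_max=3, y_max=3):
        """Apply one tourist action and return the new (x, y, o) state."""
        if action == "left":
            return x, y, (o - 1) % 4        # o = (o + d) mod 4 with d = -1
        if action == "right":
            return x, y, (o + 1) % 4        # d = +1
        dx, dy = FORWARD_DELTA[o]           # "forward" moves along the heading
        nx, ny = x + dx, y + dy
        if not (x_min <= nx <= x_max and y_min <= ny <= y_max):
            return x, y, o                  # outside the grid: warn, do not move
        return nx, ny, o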
The guide observes a map that corresponds to the tourist's environment. We exploit the fact that urban areas like NYC are full of local businesses, and overlay the map with these landmarks as localization points for our task. Specifically, we manually annotate each corner of the intersection with a set of landmarks Λ_{x,y} = {l_0, . . . , l_K}, each coming from one of the following categories:

• Bar • Playfield • Bank • Hotel • Shop • Subway • Coffee Shop • Restaurant • Theater

Table 1: Talk The Walk grounds human-generated dialogue in (real-life) perception and action.

Project | Perception | Language | Size | Acts
Visual Dialog (Das et al., 2016) | Real | Human | 120k dialogues | 20
GuessWhat (de Vries et al., 2016) | Real | Human | 131k dialogues | 10
VNL (Anderson et al., 2017) | Real | Human | 23k instructions | -
Embodied QA (Das et al., 2017a) | Simulated | Scripted | 5k questions | -
Talk The Walk | Real | Human | 10k dialogues | 62
The right side of Figure 1 illustrates how the map is presented. Note that within-intersection transitions have a smaller grid distance than transitions to new intersections. To ensure that the localization task is not too easy, we do not include street names in the overhead map and keep the landmark categories coarse. That is, the dialogue is driven by uncertainty in the tourist's current location and the properties of the target location: if the exact location and orientation of the tourist were known, it would suffice to communicate a sequence of actions.
TASK
For the Talk The Walk task, we randomly choose one of the five neighborhoods, and subsample a 4x4 grid (one block with four complete intersections) from the entire grid. We specify the boundaries of the grid by the top-left and bottom-right corners (x_min, y_min, x_max, y_max). Next, we construct the overhead map of the environment, i.e. {Λ_{x,y}} with x_min ≤ x ≤ x_max and y_min ≤ y ≤ y_max. We subsequently sample a start location and orientation (x, y, o) and a target location (x, y)_tgt at random.
The shared goal of the two agents is to navigate the tourist to the target location (x, y)_tgt, which is only known to the guide. The tourist perceives a "street view" planar projection S_{x,y,o} of the 360-degree image at location (x, y) and can simultaneously chat with the guide and navigate through the environment. The guide's role consists of reading the tourist's description of the environment, building a "mental map" of their current position and providing instructions for navigating towards the target location. Whenever the guide believes that the tourist has reached the target location, they instruct the system to evaluate the tourist's location. The task ends when the evaluation is successful, i.e., when (x, y) = (x, y)_tgt, or otherwise continues until a total of three failed attempts. The additional attempts are meant to ease the task for humans, as we found that they otherwise often fail at the task but still end up close to the target location, e.g., at the wrong corner of the correct intersection.
DATA COLLECTION
We crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk). We use the MTurk interface of ParlAI (Miller et al., 2017) to render 360-degree images via WebGL and dynamically display neighborhood maps with an HTML5 canvas. Detailed task instructions, which were also given to our workers before they started their task, are shown in Appendix 14. We paired Turkers at random and let them alternate between the tourist and guide role across different HITs.
DATASET STATISTICS
The Talk The Walk dataset consists of over 10k successful dialogues; see Table 11 in the appendix for the dataset statistics split by neighborhood. Turkers successfully completed 76.74% of all finished tasks (we use this statistic as the human success rate). More than six hundred participants successfully completed at least one Talk The Walk HIT. Although the Visual Dialog (Das et al., 2016) and GuessWhat (de Vries et al., 2016) datasets are larger, the collected Talk The Walk dialogues are significantly longer. On average, Turkers needed more than 62 acts (i.e. utterances and actions) before they successfully completed the task, whereas Visual Dialog requires 20 acts. The majority of acts comprise the tourist's actions, with on average more than 44 actions per dialogue. The guide produces roughly 9 utterances per dialogue, slightly more than the tourist's 8 utterances.
Turkers use diverse discourse, with a vocabulary size of more than 10K (calculated over all successful dialogues). An example from the dataset is shown in Appendix 13. The dataset is available at https://github.com/facebookresearch/talkthewalk.
EXPERIMENTS
We investigate the difficulty of the proposed task by establishing initial baselines. The final Talk The Walk task is challenging and encompasses several important sub-tasks, ranging from landmark recognition to tourist localization and natural language instruction-giving. Arguably the most important sub-task is localization: without such capabilities the guide can not tell whether the tourist reached the target location. In this work, we establish a minimal baseline for Talk The Walk by utilizing agents trained for localization. Specifically, we let trained tourist models undertake random walks, using the following protocol: at each step, the tourist communicates its observations and actions to the guide, who predicts the tourist's location. If the guide predicts that the tourist is at target, we evaluate its location. If successful, the task ends, otherwise we continue until there have been three wrong evaluations. The protocol is given as pseudo-code in Appendix 11.
TOURIST LOCALIZATION
The designed navigation protocol relies on a trained localization model that predicts the tourist's location from a communicated message. Before we formalize this localization sub-task in Section 3.1.1, we further introduce two simplifying assumptions, perfect perception and orientation-agnosticism, so as to overcome some of the difficulties we encountered in preliminary experiments.
Perfect Perception
Early experiments revealed that perceptual grounding of landmarks is difficult: we set up a landmark classification problem, on which models with extracted CNN (He et al., 2016) or text recognition features barely outperform a random baseline; see Appendix 12 for full details. This finding implies that localization models from image input are limited by their ability to recognize landmarks, and, as a result, would not generalize to unseen environments. To ensure that perception is not the limiting factor when investigating the landmark-grounding and action-grounding capabilities of localization models, we assume "perfect perception": in lieu of the 360-degree image view, the tourist is given the landmarks at its current location. More formally, each state observation S_{x,y,o} now equals the set of landmarks at the (x, y)-location, i.e. S_{x,y,o} = Λ_{x,y}. If the (x, y)-location does not have any visible landmarks, we return a single "empty corner" symbol. We stress that our findings, including a novel architecture for grounding actions into an overhead map (see Section 4.2.1), should carry over to settings without the perfect perception assumption.
Orientation-agnosticism We opt to ignore the tourist's orientation, which simplifies the set of actions to [Left, Right, Up, Down], corresponding to adding [(-1, 0), (1, 0), (0, 1), (0, -1)] to the current (x, y) coordinates, respectively. Note that actions are now coupled to an orientation on the map-e.g. up is equal to going north-and this implicitly assumes that the tourist has access to a compass. This also affects perception, since the tourist now has access to views from all orientations: in conjunction with "perfect perception", implying that only landmarks at the current corner are given, whereas landmarks from different corners (e.g. across the street) are not visible.
Even with these simplifications, the localization-based baseline comes with its own set of challenges. As we show in Section 5.1, the task requires communication about a short (random) path, i.e., not only a sequence of observations but also actions, in order to achieve high localization accuracy. This means that the guide needs to decode observations from multiple time steps, as well as understand their 2D spatial arrangement as communicated via the sequence of actions. Thus, in order to get to a good understanding of the task, we thoroughly examine whether the agents can learn a communication protocol that simultaneously grounds observations and actions into the guide's map.
In doing so, we thoroughly study the role of the communication channel in the localization task, by investigating increasingly constrained forms of communication: from differentiable continuous vectors to emergent discrete symbols to the full complexity of natural language.
FORMALIZATION
The full navigation baseline hinges on a localization model from random trajectories. While we can sample random actions in the emergent communication setup, this is not possible for the natural language setup because the messages are coupled to the trajectories of the human annotators. This leads to slightly different problem setups, as described below.
Emergent language. A tourist, starting from a random location, takes T ≥ 0 random actions A = {α_0, . . . , α_{T−1}} to reach target location (x_tgt, y_tgt). Every location in the environment has a corresponding set of landmarks Λ_{x,y} = {l_0, . . . , l_K} for each of the (x, y) coordinates. As the tourist navigates, the agent perceives T + 1 state-observations Z = {ζ_0, . . . , ζ_T} where each observation ζ_t consists of a set of K landmark symbols {l^t_0, . . . , l^t_K}. Given the observations Z and actions A, the tourist generates a message M which is communicated to the other agent. The objective of the guide is to predict the location (x_tgt, y_tgt) from the tourist's message M.
Natural language. In contrast to our emergent communication experiments, we do not take random actions but instead extract actions, observations, and messages from the dataset. Specifically, we consider each tourist utterance (i.e. at any point in the dialogue), obtain the current tourist location as target location (x, y)_tgt, the utterance itself as message M, and the sequence of observations and actions that took place between the current and previous tourist utterance as Z and A, respectively. Similar to the emergent language setting, the guide's objective is to predict the target location (x, y)_tgt from the tourist message M. We conduct experiments with M taken from the dataset and with M generated from the extracted observations Z and actions A.
MODEL
This section outlines the tourist and guide architectures. We first describe how the tourist produces messages for the various communication channels across which the messages are sent. We subsequently describe how these messages are processed by the guide, and introduce the novel Masked Attention for Spatial Convolutions (MASC) mechanism that allows for grounding into the 2D overhead map in order to predict the tourist's location.
THE TOURIST
For each of the communication channels, we outline the procedure for generating a message M. Given a set of state observations {ζ_0, . . . , ζ_T}, we represent each observation by summing the L-dimensional embeddings of the observed landmarks, i.e. we compute {o_0, . . . , o_T} with o_t = Σ_{l∈ζ_t} E_Λ(l), where E_Λ is the landmark embedding look-up table. In addition, we embed action α_t into an L-dimensional embedding a_t via a look-up table E_A. We experiment with three types of communication channel.
Continuous vectors
The tourist has access to observations of several time steps, whose order is important for accurate localization. Because summing embeddings is order-invariant, we introduce a sum over positionally-gated embeddings, which, conditioned on time step t, pushes embedding information into the appropriate dimensions. More specifically, we generate an observation message m^obs = Σ_{t=0}^{T} sigmoid(g_t) ⊙ o_t, where g_t is a learned gating vector for time step t. In a similar fashion, we produce an action message m^act and send the concatenated vectors m = [m^obs; m^act] as the message to the guide. We can interpret continuous vector communication as a single, monolithic model because its architecture is end-to-end differentiable, enabling gradient-based optimization for training.
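A sketch of this gated sum in PyTorch (the module and parameter names are ours):

    import torch
    import torch.nn as nn

    class GatedSum(nn.Module):
        """m = sum_t sigmoid(g_t) * o_t with one learned gate vector per step."""
        def __init__(self, max_steps, dim):
            super().__init__()
            self.gates = nn.Parameter(torch.zeros(max_steps, dim))  # g_0 ... g_max

        def forward(self, embs):
            # embs: (T+1, batch, dim) observation (or action) embeddings
            g = torch.sigmoid(self.gates[: embs.size(0)]).unsqueeze(1)
            return (g * embs).sum(dim=0)    # (batch, dim)

    # The final message concatenates the two channels:
    # m = torch.cat([obs_gate(obs_embs), act_gate(act_embs)], dim=-1)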
Discrete symbols. Like the continuous vector communication model, with discrete communication the tourist also uses separate channels for observations and actions, as well as a sum over positionally gated embeddings to generate the observation embedding h^obs. We pass this embedding through a sigmoid and generate a message m^obs by sampling from the resulting Bernoulli distributions:

h^obs = Σ_{t=0}^{T} sigmoid(g_t) ⊙ o_t;  m^obs_i ∼ Bernoulli(sigmoid(h^obs_i))

The action message m^act is produced in the same way, and we obtain the final tourist message m = [m^obs; m^act] by concatenating the messages.
The communication channel's sampling operation makes the model non-differentiable, so we use policy gradients (Sutton & Barto, 1998; Williams, 1992) to train the parameters θ of the tourist model. That is, we estimate the gradient by

∇_θ E_{m∼p(h)}[r(m)] = E_m[∇_θ log p(m)(r(m) − b)],

where the reward function r(m) = log p((x, y)_tgt | m, Λ) is the negative of the guide's loss (see Section 4.2) and b is a state-value baseline to reduce variance. We use a linear transformation over the concatenated embeddings as the baseline prediction, i.e. b = W^base[h^obs; h^act] + b^base, and train it with a mean squared error loss.
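A sketch of the resulting surrogate objective in PyTorch (function and argument names are ours; the baseline is assumed to come from the linear layer described above):

    import torch
    import torch.nn.functional as F

    def reinforce_loss(logits, messages, reward, baseline):
        """Surrogate loss whose gradient is E[grad log p(m) (r(m) - b)], plus the
        mean squared error for training the baseline.

        logits:   (batch, bits) Bernoulli logits of the message distribution
        messages: (batch, bits) sampled 0/1 message, as floats
        reward:   (batch,) reward r(m), treated as a constant
        baseline: (batch,) state-value baseline b (output of the linear layer)
        """
        # BCE-with-logits equals -log p per bit, so negating the sum gives log p(m).
        log_p = -F.binary_cross_entropy_with_logits(
            logits, messages, reduction="none").sum(dim=1)
        advantage = (reward - baseline).detach()
        policy_loss = -(log_p * advantage).mean()
        baseline_loss = F.mse_loss(baseline, reward.detach())
        return policy_loss + baseline_loss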
Natural Language. Because observations and actions are of variable length, we use an LSTM encoder over the sequence of observation embeddings [o_t]_{t=0}^{T}, and extract its last hidden state h^obs. We use a separate LSTM encoder for the action embeddings [a_t]_{t=0}^{T}, and concatenate both h^obs and h^act to the input of the LSTM decoder at each time step:

i_k = [E_dec(w_{k−1}); h^obs; h^act]
h^dec_k = f_LSTM(i_k, h^dec_{k−1})
p(w_k | w_{<k}, A, Z) = softmax(W_out h^dec_k + b_out)_k,    (1)

where E_dec is a look-up table, taking input tokens w_k. We train with teacher-forcing, i.e. we optimize the cross-entropy loss −Σ_k log p(w_k | w_{<k}, A, Z). At test time, we explore the following decoding strategies: greedy, sampling and beam search. We also fine-tune a trained tourist model (starting from a pre-trained model) with policy gradients in order to minimize the guide's prediction loss.
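A compact sketch of this decoder with teacher forcing (PyTorch; the module name, the zero initial state and the summed per-step cross-entropy are our own choices):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TouristDecoder(nn.Module):
        """LSTM decoder that sees [E(w_{k-1}); h_obs; h_act] at every step (Eq. 1)."""
        def __init__(self, vocab_size, dim):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, dim)
            self.cell = nn.LSTMCell(3 * dim, dim)
            self.out = nn.Linear(dim, vocab_size)

        def forward(self, words, h_obs, h_act):
            # words: (batch, K) gold tokens; teacher forcing feeds w_{k-1} at step k.
            h = h_obs.new_zeros(h_obs.size(0), h_obs.size(1))
            c = h_obs.new_zeros(h_obs.size(0), h_obs.size(1))
            loss = 0.0
            for k in range(1, words.size(1)):
                i_k = torch.cat([self.embed(words[:, k - 1]), h_obs, h_act], dim=-1)
                h, c = self.cell(i_k, (h, c))
                loss = loss + F.cross_entropy(self.out(h), words[:, k])
            return loss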
THE GUIDE
Given a tourist message M describing their observations and actions, the objective of the guide is to predict the tourist's location on the map. First, we outline the procedure for extracting the observation embedding e and action embeddings a_t from the message M for each of the types of communication. Next, we discuss the MASC mechanism that takes the observations and actions and grounds them on the guide's map in order to predict the tourist's location.
Continuous. For the continuous communication model, we assign the observation message to the observation embedding, i.e. e = m^obs. To extract the action embedding for time step t, we apply a linear layer to the action message, i.e. a_t = W^act_t m^act + b^act_t.

Discrete. For discrete communication, we obtain the observation embedding e by applying a linear layer to the observation message, i.e. e = W^obs m^obs + b^obs. Similar to the continuous communication model, we use a linear layer over the action message m^act to obtain the action embedding a_t for time step t.
Natural Language. The message M contains information about observations and actions, so we use a recurrent neural network with an attention mechanism to extract the relevant observation and action embeddings. Specifically, we encode the message M, consisting of K tokens w_k taken from vocabulary V, with a bidirectional LSTM:

→h_k = f_LSTM(→h_{k−1}, E_W(w_k));  ←h_k = f_LSTM(←h_{k+1}, E_W(w_k));  h_k = [→h_k; ←h_k]    (2)
where E_W is the word embedding look-up table. We obtain observation embedding e_t through an attention mechanism over the hidden states h:

s_k = h_k · c_t;  e_t = Σ_k softmax(s)_k h_k,    (3)

where c_0 is a learned control embedding, which is updated through a linear transformation of the previous control and observation embedding: c_{t+1} = W^ctrl[c_t; e_t] + b^ctrl. We use the same mechanism to extract the action embedding a_t from the hidden states. For the observation embedding, we obtain the final representation by summing positionally gated embeddings, i.e., e = Σ_{t=0}^{T} sigmoid(g_t) ⊙ e_t.
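The attention loop can be sketched as follows for a single (unbatched) message (PyTorch; the function signature and explicit parameter passing are simplifications of ours):

    import torch

    def extract_embeddings(h, c0, w_ctrl, b_ctrl, steps):
        """Attend over BiLSTM states h (K, dim) with a recurrent control (Eq. 3).

        c0 is the learned initial control; w_ctrl (dim, 2*dim) and b_ctrl (dim,)
        implement c_{t+1} = W_ctrl [c_t; e_t] + b_ctrl.
        """
        c, extracted = c0, []
        for _ in range(steps):
            scores = h @ c                                  # s_k = h_k . c_t
            weights = torch.softmax(scores, dim=0)          # attention over tokens
            e = (weights.unsqueeze(1) * h).sum(dim=0)       # e_t
            extracted.append(e)
            c = w_ctrl @ torch.cat([c, e]) + b_ctrl
        return torch.stack(extracted)                       # (steps, dim)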
MASKED ATTENTION FOR SPATIAL CONVOLUTIONS (MASC)
We represent the guide's map as U ∈ R^{G_1×G_2×L}, where in this case G_1 = G_2 = 4, and where each L-dimensional (x, y) location embedding u_{x,y} is computed as the sum of the guide's landmark embeddings for that location.
Motivation. While the guide's map representation contains only local landmark information, the tourist communicates a trajectory of the map (i.e. actions and observations from multiple locations), implying that directly comparing the tourist's message with the individual landmark embeddings is probably suboptimal. Instead, we want to aggregate landmark information from surrounding locations by imputing trajectories over the map to predict locations. We propose a mechanism for translating landmark embeddings according to state transitions (left, right, up, down), which can be expressed as a 2D convolution over the map embeddings. For simplicity, let us assume that the map embedding U is 1-dimensional; a left action can then be realized through application of the following 3x3 kernel:

0 0 0
1 0 0
0 0 0

which effectively shifts all values of U one position to the left. We propose to learn such state-transitions from the tourist message through a differentiable attention-mask over the spatial dimensions of a 3x3 convolution.
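The shift interpretation is easy to verify numerically; note that PyTorch's conv2d computes a cross-correlation, so the left shift of the (true-convolution) kernel above is obtained with its horizontally mirrored version:

    import torch
    import torch.nn.functional as F

    # A 4x4 map with a single 1-dimensional feature at location (row=1, col=2).
    U = torch.zeros(1, 1, 4, 4)
    U[0, 0, 1, 2] = 1.0

    # Cross-correlation kernel realizing a left shift: out[i, j] = U[i, j + 1].
    left = torch.tensor([[0., 0., 0.],
                         [0., 0., 1.],
                         [0., 0., 0.]]).view(1, 1, 3, 3)

    shifted = F.conv2d(U, left, padding=1)
    assert shifted[0, 0, 1, 1] == 1.0  # the feature moved one column to the left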
MASC. We linearly project each predicted action embedding a_t to a 9-dimensional vector z_t, normalize it with a softmax, and subsequently reshape the vector into a 3x3 mask Φ_t:

z_t = W^act a_t + b^act,  φ_t = softmax(z_t),  Φ_t = [[φ^0_t, φ^1_t, φ^2_t], [φ^3_t, φ^4_t, φ^5_t], [φ^6_t, φ^7_t, φ^8_t]].    (4)

We learn a 3x3 convolutional kernel W ∈ R^{3×3×N×N}, with N features, and apply the mask Φ_t to the spatial dimensions of the convolution by first broadcasting its values along the feature dimensions, i.e. Φ̂_{x,y,i,j} = Φ_{x,y}, and subsequently taking the Hadamard product: W_t = Φ̂_t ⊙ W. For each action step t, we then apply a 2D convolution with masked weight W_t to obtain a new map embedding U_{t+1} = U_t ∗ W_t, where we zero-pad the input to maintain identical spatial dimensions.
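A sketch of the full MASC step in PyTorch (module and variable names are ours; the per-example loop is a simple, if inefficient, way to apply a different masked kernel to each batch element):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MASC(nn.Module):
        """One masked 3x3 convolution step: U_{t+1} = U_t * (mask(a_t) . W)."""
        def __init__(self, n_features, action_dim):
            super().__init__()
            self.kernel = nn.Parameter(0.01 * torch.randn(n_features, n_features, 3, 3))
            self.to_mask = nn.Linear(action_dim, 9)          # z_t = W_act a_t + b_act

        def forward(self, U, a_t):
            # U: (batch, N, G1, G2) map embedding; a_t: (batch, action_dim)
            mask = F.softmax(self.to_mask(a_t), dim=-1).view(-1, 1, 1, 3, 3)
            W_t = mask * self.kernel.unsqueeze(0)            # broadcast over features
            # Each example has its own masked kernel; zero-padding keeps G1 x G2.
            out = [F.conv2d(U[b:b + 1], W_t[b], padding=1) for b in range(U.size(0))]
            return torch.cat(out, dim=0)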
Prediction model. We repeat the MASC operation T times (i.e. once for each action), and then aggregate the map embeddings by a sum over positionally-gated embeddings: u_{x,y} = Σ_{t=0}^{T} sigmoid(g_t) ⊙ u^t_{x,y}. We score locations by taking the dot-product of the observation embedding e, which contains information about the sequence of landmarks observed by the tourist, and the map. We compute a distribution over the locations of the map p(x, y | M, Λ) by taking a softmax over the computed scores:

s_{x,y} = e · u_{x,y},  p(x, y | M, Λ) = exp(s_{x,y}) / Σ_{x′,y′} exp(s_{x′,y′}).    (5)
Predicting T. While the emergent communication models use a fixed-length trajectory T, natural language messages may differ in the number of communicated observations and actions. Hence, we predict T from the communicated message. Specifically, we use a softmax regression layer over the last hidden state h_K of the RNN, and subsequently sample T̂ from the resulting multinomial distribution:

z = softmax(W^tm h_K + b^tm);  T̂ ∼ Multinomial(z).    (6)

We jointly train the T̂-prediction model via REINFORCE, with the guide's loss as reward function and a mean-reward baseline.
COMPARISONS
To better analyze the performance of the models incorporating MASC, we compare against a no-MASC baseline in our experiments, as well as a prediction upper bound.
No MASC. We compare the proposed MASC model with a model that does not include this mechanism. Whereas MASC predicts a convolution mask from the tourist message, the "No MASC" model uses W, the ordinary convolutional kernel, to convolve the map embedding U_t to obtain U_{t+1}. We also share the weights of this convolution at each time step.
Prediction upper-bound. Because we have access to the class-conditional likelihood p(Z, A|x, y), we are able to compute the Bayes error rate (or irreducible error). No model (no matter how expressive), trained on any amount of data, can ever obtain better localization accuracy than this bound, as there are multiple locations consistent with the observations and actions.
RESULTS AND DISCUSSION
In this section, we describe the findings of various experiments. First, we analyze how much information needs to be communicated for accurate localization in the Talk The Walk environment, and find that a short random path (including actions) is necessary. Next, for emergent language, we show that the MASC architecture can achieve very high localization accuracy, significantly outperforming the baseline that does not include this mechanism. We then turn our attention to the natural language experiments, and find that localization from human utterances is much harder, reaching an accuracy level that is below communicating a single landmark observation. We show that generated utterances from a conditional language model leads to significantly better localization performance, by successfully grounding the utterance on a single landmark observation (but not yet on multiple observations and actions). Finally, we show performance of the localization baseline on the full task, which can be used for future comparisons to this work.
ANALYSIS OF LOCALIZATION TASK
Task is not too easy. The upper bound on localization performance in Table 2 suggests that communicating a single landmark observation is not sufficient for accurate localization of the tourist (~35% accuracy). This is an important result for the full navigation task because the need for two-way communication disappears if localization is too easy; if the guide knows the exact location of the tourist, it suffices to communicate a list of instructions, which is then executed by the tourist. The uncertainty in the tourist's location is what drives the dialogue between the two agents.
Importance of actions
We observe that the upper bound for only communicating observations plateaus around 57% (even for T = 3 actions), whereas it exceeds 90% when we also take actions into account. This implies that, at least for random walks, it is essential to communicate a trajectory, including observations and actions, in order to achieve high localization accuracy.
EMERGENT LANGUAGE LOCALIZATION
We first report the results for tourist localization with emergent language in Table 2.
MASC improves performance. The MASC architecture significantly improves performance compared to models that do not include this mechanism. For instance, for T = 1 action, MASC already achieves 56.09% on the test set and this further increases to 69.85% for T = 3. On the other hand, no-MASC models hit a plateau at 43%. In Appendix 10, we analyze learned MASC values, and show that communicated actions are often mapped to corresponding state-transitions.
Continuous vs discrete
We observe similar performance for continuous and discrete emergent communication models, implying that a discrete communication channel is not a limiting factor for localization performance.
NATURAL LANGUAGE LOCALIZATION
We report the results of tourist localization with natural language in Table 3. We compare accuracy of the guide model (with MASC) trained on utterances from (i) humans, (ii) a supervised model with various decoding strategies, and (iii) a policy gradient model optimized with respect to the loss of a frozen, pre-trained guide model on human utterances.
Human utterances. Compared to emergent language, localization from human utterances is much harder, achieving only 16.17% on the test set. Here, we report localization from a single utterance, but in Appendix 9.2 we show that including up to five dialogue utterances only improves performance to 20.33%. We also show that MASC outperforms no-MASC models for natural language communication.

Table 5: Samples from the tourist models communicating in natural language.

greedy: bar from bar from bar and rigth rigth bulding bulding
sampling: which bar from bar from bar and bar rigth bulding bulding..

Contrary to the human-generated utterance, the supervised model with greedy and beam-search decoding produces an utterance containing the current state observation (bar). The reinforcement learning model also mentions the current observation but has lost linguistic structure. The fact that these localization models are better grounded in observations than human utterances explains why they obtain higher localization accuracy.
Generated utterances
We also investigate generated tourist utterances from conditional language models. Interestingly, we observe that the supervised model (with greedy and beam-search decoding) as well as the policy gradient model leads to an improvement of more than 10 accuracy points over the human utterances. However, their level of accuracy is slightly below the baseline of communicating a single observation, indicating that these models only learn to ground utterances in a single landmark observation.
Better grounding of generated utterances. We analyze natural language samples in Table 5, and confirm that, unlike human utterances, the generated utterances are talking about the observed landmarks. This observation explains why the generated utterances obtain higher localization accuracy. The current language models are most successful when conditioned on a single landmark observation; we show in Appendix 9.1.1 that performance quickly deteriorates when the model is conditioned on more observations, suggesting that it cannot produce natural language utterances about multiple time steps.

LOCALIZATION-BASED BASELINE

Table 4 shows results for the best localization models on the full task, evaluated via the random walk protocol defined in Algorithm 1.
Comparison with human annotators. Interestingly, our best localization model (continuous communication, with MASC, and T = 3) achieves 88.33% on the test set and thus exceeds human performance of 76.74% on the full task. While emergent models appear to be stronger localizers, humans might cope with their localization uncertainty through other mechanisms (e.g. better guidance, bias towards taking particular paths, etc). The simplifying assumption of perfect perception also helps.
Number of actions Unsurprisingly, humans take fewer steps (roughly 15) than our best random walk model (roughly 34). Our human annotators likely used some form of guidance to navigate faster to the target.
CONCLUSION
We introduced the Talk The Walk task and dataset, which consists of crowd-sourced dialogues in which two human annotators collaborate to navigate to target locations in the virtual streets of NYC. For the important localization sub-task, we proposed MASC, a novel grounding mechanism that learns state-transitions from the tourist's message, and showed that it improves localization performance for emergent and natural language. We use the localization model to provide baseline numbers on the Talk The Walk task, in order to facilitate future research.
RELATED WORK
The Talk The Walk task and dataset facilitate future research on various important subfields of artificial intelligence, including grounded language learning, goal-oriented dialogue research and situated navigation. Here, we describe related previous work in these areas.
Related tasks. There has been a long line of work involving related tasks. Early work on task-oriented dialogue dates back to the early 90s with the introduction of the Map Task (Anderson et al., 1991) and Maze Game (Garrod & Anderson, 1987) corpora. Recent efforts have led to larger-scale goal-oriented dialogue datasets, for instance to aid research on visually-grounded dialogue (Das et al., 2016; de Vries et al., 2016), knowledge-base-grounded discourse (He et al., 2017) or negotiation tasks (Lewis et al., 2017). At the same time, there has been a big push to develop environments for embodied AI, many of which involve agents following natural language instructions with respect to an environment (Artzi & Zettlemoyer, 2013; Yu et al., 2017; Hermann et al., 2017; Mei et al., 2016; Chaplot et al., 2018b;a), following up on early work in this area (MacMahon et al., 2006; Chen & Mooney, 2011). An early example of navigation using neural networks is (Hadsell et al., 2007), who propose an online learning approach for robot navigation. Recently, there has been increased interest in using end-to-end trainable neural networks for learning to navigate indoor scenes (Gupta et al., 2017b;a) or large cities (Brahmbhatt & Hays, 2017; Mirowski et al., 2018), but, unlike our work, without multi-agent communication. The task of localization (without multi-agent communication) has also recently been studied (Chaplot et al., 2018a; Vo et al., 2017).
Grounded language learning. Grounded language learning is motivated by the observation that humans learn language embodied (grounded) in sensorimotor experience of the physical world (Barsalou, 2008; Smith & Gasser, 2005). On the one hand, work in multi-modal semantics has shown that grounding can lead to practical improvements on various natural language understanding tasks (see Baroni, 2016; Kiela, 2017, and references therein). In robotics, researchers dissatisfied with purely symbolic accounts of meaning attempted to build robotic systems with the aim of grounding meaning in physical experience of the world (Roy, 2005; Steels & Hild, 2012). Recently, grounding has also been applied to the learning of sentence representations, image captioning (Lin et al., 2014; Xu et al., 2015), visual question answering (Antol et al., 2015), visual reasoning (Perez et al., 2018), and grounded machine translation (Riezler et al., 2014; Elliott et al., 2016). Grounding also plays a crucial role in the emerging research on multi-agent communication, where agents communicate (in natural language or otherwise) in order to solve a task with respect to their shared environment (Lazaridou et al., 2016; Das et al., 2017b; Mordatch & Abbeel, 2017; Evtimova et al., 2017; Lewis et al., 2017; Kottur et al., 2017).
IMPLEMENTATION DETAILS
For the emergent communication models, we use an embedding size L = 500. The natural language experiments use 128-dimensional word embeddings and a bidirectional RNN with 256 units. In all experiments, we train the guide with a cross entropy loss using the ADAM optimizer with default hyper-parameters (Kingma & Ba, 2014). We perform early stopping on the validation accuracy, and report the corresponding train, valid and test accuracy. We optimize the localization models with continuous, discrete and natural language communication channels for 200, 200, and 25 epochs, respectively. To facilitate further research on Talk The Walk, we make our code base for reproducing experiments publicly available at https://github.com/facebookresearch/talkthewalk.
ADDITIONAL NATURAL LANGUAGE EXPERIMENTS
First, we investigate the sensitivity of tourist generation models to the trajectory length, finding that the model conditioned on a single observation (i.e. T = 0) achieves best performance. In the next subsection, we further analyze localization models from human utterances by investigating MASC and no-MASC models with increasing dialogue context. After training the supervised tourist model (conditioned on observations and actions from human expert trajectories), there are two ways to train an accompanying guide model. We can optimize a location prediction model on either (i) extracted human trajectories (as in the localization setup from human utterances) or (ii) all random paths of length T (as in the full task evaluation). Here, we investigate the impact of (1) using either human or random trajectories for training the guide model, and (2) varying the path length T during the full-task evaluation. For random trajectories, guide training uses the same path length T as is used during evaluation. We use a pretrained tourist model with greedy decoding for generating the tourist utterances. Table 6 summarizes the results.
Human vs random trajectories We only observe small improvements for training on random trajectories. Human trajectories are thus diverse enough to generalize to random trajectories.
Effect of path length. There is a strong negative correlation between task success and the conditioned trajectory length. We observe that the full task performance quickly deteriorates for both human and random trajectories. This suggests that the tourist generation model cannot produce natural language utterances that describe multiple observations and actions. Although it is possible that the guide model cannot process such utterances, this is not very likely because the MASC architecture handles such messages successfully for emergent communication.
EFFECT OF BEAM-SIZE
We report localization performance of tourist utterances generated by beam search decoding of varying beam size in Table 7. We find that performance decreases from 29.05% to 20.87% accuracy on the test set when we increase the beam-size from one to eight.
LOCALIZATION FROM HUMAN UTTERANCES
We conduct an ablation study for MASC on natural language with varying dialogue context. Specifically, we compare localization accuracy of MASC and no-MASC models trained on the last [1, 3, 5] utterances of the dialogue (including guide utterances). We report these results in Table 8. In all cases, MASC outperforms the no-MASC models by several accuracy points. We also observe that the mean predicted T̂ (over the test set) increases from 1 to 2 when more dialogue context is included.
10 VISUALIZING MASC PREDICTIONS

Figure 2 shows the MASC values for a learned model with emergent discrete communication and T = 3 actions. Specifically, we look at the predicted MASC values for different action sequences taken by the tourist. We observe that the first action is always mapped to the correct state-transition, but that the second and third MASC values do not always correspond to the right state-transitions.
Figure 2: MASC values of two action sequences (top: Right, Left, Up; bottom: Up, Right, Down) for tourist localization via discrete communication with T = 3 actions. In general, we observe that the first action always corresponds to the correct state-transition, whereas the second and third are sometimes mixed up. For instance, in the top example, the first two actions are correctly predicted but the third action is not (as the MASC corresponds to a "no action"). In the bottom example, the second action appears as the third MASC.
EVALUATION ON FULL SETUP
We provide pseudo-code for the evaluation of localization models on the full task in Algorithm 1, as well as results for all emergent communication models in Table 9.

Table 9: Accuracy of localization models on the full task, using the evaluation protocol defined in Algorithm 1. We report the average over 3 runs.
Algorithm 1: Performance evaluation of a location prediction model on the full Talk The Walk setup

procedure EVALUATE(tourist, guide, T, x_tgt, y_tgt, maxsteps)
    x, y ← randint(0, 3), randint(0, 3)                  ▷ initialize with a random location
    features, actions ← array(), array()                 ▷ create T-sized feature buffer
    features[0] ← features at location (x, y)
    for t = 0; t < T; t++ do
        action ← uniform sample from action set
        x, y ← update location given action
        features[t + 1] ← features at location (x, y)
        actions[t] ← action
    for i = 0; i < maxsteps; i++ do
        M ← tourist(features, actions)
        p(x, y | ·) ← guide(M)
        x_pred, y_pred ← sample from p(x, y | ·)
        if (x_pred, y_pred) == (x_tgt, y_tgt) then       ▷ guide believes the tourist is at the target
            evaluate the true location: return success if (x, y) == (x_tgt, y_tgt)
            return failure after three failed evaluations
        action ← uniform sample from action set          ▷ otherwise, continue the random walk
        x, y ← update location given action
        shift features and actions; append the new observation and action
LANDMARK CLASSIFICATION
While the guide has access to the landmark labels, the tourist needs to recognize these landmarks from raw perceptual information. In this section, we study landmark classification as a supervised learning problem to investigate the difficulty of perceptual grounding in Talk The Walk.
The Talk The Walk dataset contains a total of 307 different landmarks divided among nine classes; see Figure 4 for how they are distributed. The class distribution is fairly imbalanced, with shops and restaurants as the most frequent landmarks and relatively few play fields and theaters. We treat landmark recognition as a multi-label classification problem as there can be multiple landmarks on a corner.
For the task of landmark classification, we extract the relevant views of the 360-degree image from which a landmark is visible. Because landmarks are labeled to be on a specific corner of an intersection, we assume that they are visible from one of the orientations facing away from the intersection. For example, for a landmark on the northwest corner of an intersection, we extract views from both the north and west direction. The orientation-specific views are obtained by a planar projection of the full 360-degree image with a small field of view (60 degrees) to limit distortions. To cover the full field of view, we extract two images per orientation, with their horizontal focus point 30 degrees apart. Hence, we obtain eight images per 360-degree image with corresponding orientation υ ∈ {N1, N2, E1, E2, S1, S2, W1, W2}.
We run the following pre-trained feature extractors over the extracted images:
ResNet. We resize the extracted view to a 224x224 image and pass it through a ResNet-152 network (He et al., 2016) to obtain a 2048-dimensional feature vector S^resnet_{x,y,υ} ∈ R^2048 from the penultimate layer.
Text Recognition. We use a pre-trained text-recognition model to extract a set of text messages S^text_{x,y,υ} = {R^text_β}_{β=0}^{B} from the images. Local businesses often advertise their wares through key phrases on their storefront, and understanding this text might be a good indicator of the type of landmark. In Figure 3, we show the results of running the text recognition module on a few extracted images.
For the text recognition model, we use a learned look-up table E^text to embed the extracted text features, e^β_{x,y,υ} = E^text(R^text_β), and fuse all embeddings of the four images through a bag of embeddings, i.e., e^fused = Σ_{υ∈relevant views} Σ_β e^β_{x,y,υ}. We use a linear layer followed by a sigmoid to predict the probability for each class, i.e. sigmoid(W e^fused + b). We also experiment with replacing the look-up embeddings with pre-trained FastText embeddings (Bojanowski et al., 2016). For the ResNet model, we use a bag of embeddings over the four ResNet features, i.e. e^fused = Σ_{υ∈relevant views} S^resnet_{x,y,υ}, before we pass it through a linear layer to predict the class probabilities: sigmoid(W e^fused + b). We also conduct experiments where we first apply PCA to the extracted ResNet and FastText features before feeding them to the model.
To account for class imbalance, we train all described models with a binary cross entropy loss weighted by the inverted class frequency. We create a 80-20 class-conditional split of the dataset into a training and validation set. We train for 100 epochs and perform early stopping on the validation loss.
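One plausible reading of this weighting scheme in PyTorch is sketched below; the exact normalization of the weights is an assumption.

import torch
import torch.nn as nn

def inverse_frequency_weights(labels):
    """labels: (num_examples, num_classes) binary multi-label targets."""
    freq = labels.float().mean(dim=0).clamp(min=1e-6)
    return 1.0 / freq                                 # rarer classes weigh more

def weighted_bce(probs, targets, class_weights):
    per_element = nn.functional.binary_cross_entropy(
        probs, targets, reduction="none")             # (batch, num_classes)
    return (per_element * class_weights).mean()

labels = torch.randint(0, 2, (1000, 9)).float()       # toy targets, nine classes
w = inverse_frequency_weights(labels)
loss = weighted_bce(torch.rand(32, 9), torch.randint(0, 2, (32, 9)).float(), w)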
The F1 scores for the described methods are reported in Table 10. We compare to an "all positive" baseline that always predicts that the landmark class is visible and observe that all presented models struggle to outperform this baseline. Although 256-dimensional ResNet features achieve slightly better precision on the validation set, they result in much worse recall and a lower F1 score. Our results indicate that perceptual grounding is a difficult task, which easily merits a paper in its own right, and so we leave further improvements (e.g., better text recognizers) for future work.

Dataset split We split the full dataset by assigning entire 4x4 grids (independent of the target location) to the train, valid, or test set. Specifically, we design the split such that, for grids in the valid set, at least one intersection (out of four) is not part of the train set. For the test set, all four intersections are novel. See our source code, available at URLANONYMIZED, for more details on how this split is realized.
DATASET DETAILS
Example
Guide: ok. turn so that the theater is on your right.
Guide: then go straight
Tourist: That would be going back the way I came
Guide: yeah. I was looking at the wrong bank
Tourist: I'll notify when I am back at the brooks brothers, and the bank.
Figure 3: Result of running the text recognizer of Gupta et al. (2016) on four examples of the Hell's Kitchen neighborhood. Top row: two positive examples. Bottom row: example of a false negative (left) and many false positives (right).
Figure 4: Frequency of landmark classes.
Tourist: ACTION:TURNRIGHT ACTION:TURNRIGHT
Guide: Hello, what are you near?
Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNLEFT
Tourist: Hello, in front of me is a Brooks Brothers
Tourist: ACTION:TURNLEFT ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNLEFT
Guide: Is that a shop or restaurant?
Tourist: ACTION:TURNLEFT
Tourist: It is a clothing shop.
Tourist: ACTION:TURNLEFT
Guide: You need to go to the intersection in the northwest corner of the map
Tourist: ACTION:TURNLEFT
Tourist: There appears to be a bank behind me.
Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNRIGHT ACTION:TURNRIGHT
Guide: Ok, turn left then go straight up that road
Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNLEFT ACTION:FORWARD ACTION:TURNRIGHT ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNLEFT
Guide: There should be shops on two of the corners but you need to go to the corner without a shop.
Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNLEFT
Guide: let me know when you get there.
Tourist: on my left is Radio city Music hall
Tourist: ACTION:TURNLEFT ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNRIGHT ACTION:TURNRIGHT
Tourist: I can't go straight any further.
Tourist: ACTION:TURNRIGHT
Guide: make a right when the bank is on your left
Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNRIGHT
Tourist: Making the right at the bank.
Tourist: ACTION:FORWARD ACTION:FORWARD
Tourist: I can't go that way.
Tourist: ACTION:TURNLEFT
Tourist: Bank is ahead of me on the right
Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT
Guide: turn around on that intersection
Tourist: I can only go to the left or back the way I just came.
Tourist: ACTION:TURNLEFT
Guide: you're in the right place. do you see shops on the corners?
Guide: If you're on the corner with the bank, cross the street
Tourist: I'm back where I started by the shop and the bank.
Tourist: ACTION:TURNRIGHT
Guide: on the same side of the street?
Tourist: crossing the street now
Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT
Tourist: there is an I love new york shop across the street on the left from me now
Tourist: ACTION:TURNRIGHT ACTION:FORWARD
Guide: ok. I'll see if it's right.
Guide: EVALUATE_LOCATION
Guide: It's not right.
Tourist: What should I be on the look for?
Tourist: ACTION:TURNRIGHT ACTION:TURNRIGHT ACTION:TURNRIGHT
Guide: There should be shops on two corners but you need to be on one of the corners without the shop.
Guide: Try the other corner.
Tourist: this intersection has 2 shop corners and a bank corner
Guide: yes. that's what I see on the map.
Tourist: should I go to the bank corner? or one of the shop corners? or the blank corner (perhaps a hotel)
Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNRIGHT ACTION:TURNRIGHT
Guide: Go to the one near the hotel. The map says the hotel is a little further down but it might be a little off.
Tourist: It's a big hotel it's possible.
Tourist: ACTION:FORWARD ACTION:TURNLEFT ACTION:FORWARD ACTION:TURNRIGHT
Tourist: I'm on the hotel

Figure 6: Set of instructions presented to turkers before starting their first task.
Figure 7: Set of instructions presented to turkers before starting their first task (cont.).
Table 2: Accuracy results for tourist localization with emergent language, showing continuous (Cont.) and discrete (Disc.) communication, along with the prediction upper bound. T denotes the length of the path, and a ✓ in the "MASC" column indicates that the model is conditioned on the communicated actions (translating landmark embeddings according to state transitions).
Table 3: Localization accuracy of tourist communicating in natural language.

Method       Train   Valid   Test    #steps
Random       18.75   18.75   18.75   -
Human        76.74   76.74   76.74   15.05
Best Cont.   89.44   86.35   88.33   34.47
Best Disc.   86.23   82.81   87.08   34.83
Best NL      39.65   39.68   50.00   39.14

Table 4: Full task evaluation of localization models using the protocol of Appendix 11.

Observations: (Bar)    Actions: -
Method         Decoding      Utterance
Human          -             a field of some type
Supervised     greedy        at a bar
               sampling      sec just hard to tell which is a restaurant ?
               beam search   im at a bar
Policy Grad.   -             -
Table 6: Full task performance of localization models trained on human and random trajectories. There are small benefits for training on random trajectories, but the most important hyperparameter is to condition the tourist utterance on a single observation (i.e., trajectories of size T = 0) at evaluation time.
Beam size   Train   Valid   Test
Random      6.25    6.25    6.25
1           34.14   29.90   29.05
2           26.24   23.65   25.10
4           23.59   22.87   21.80
8           20.31   19.24   20.87

Table 7: Localization performance using pre-trained tourist (via imitation learning) with beam search decoding of varying beam size. Locations and observations extracted from human trajectories. Larger beam sizes lead to worse localization performance.
#utterances   MASC   Train   Valid   Test    E[T]
Random        -      6.25    6.25    6.25    -
1                    23.95   13.91   13.89   0.99
1             ✓      23.46   15.56   16.17   1.00
3                    26.92   16.28   16.62   1.00
3             ✓      20.88   17.50   18.80   1.79
5                    25.75   16.11   16.88   1.98
5             ✓      30.45   18.41   20.33   1.99

Table 8: Localization given last {1, 3, 5} dialogue utterances (including the guide). We observe that (1) performance increases when more utterances are included; (2) MASC outperforms no-MASC in all cases; and (3) mean T increases when more dialogue context is included.
9.1 TOURIST GENERATION MODELS
9.1.1 PATH LENGTH
Features             Train loss   Valid loss   Train F1   Valid F1   Valid prec.   Valid recall
All positive         -            -            -          0.39313    0.26128       1
Random (0.5)         -            -            -          0.32013    0.24132       0.25773
Textrecog            0.01462      0.01837      0.31205    0.31684    0.2635        0.50515
Fasttext             0.00992      0.00994      0.24019    0.31548    0.26133       0.47423
Fasttext (100 dim)   0.00721      0.00863      0.32651    0.28672    0.24964       0.4433
ResNet               0.00735      0.00751      0.17085    0.20159    0.13114       0.58763
ResNet (256 dim)     0.0051       0.00748      0.60911    0.31953    0.27733       0.50515

Table 10: Results for landmark classification.
Figure 5: Map of New York City with red rectangles indicating the captured neighborhoods of the Talk The Walk dataset.

Neighborhood         #success   #failed   #disconnects
Hell's Kitchen       2075       762       867
Williamsburg         2077       683       780
East Village         2035       713       624
Financial District   2042       607       497
Upper East           2081       359       576
Total                10310      3124      3344

Table 11: Dataset statistics split by neighborhood and dialogue status.
1 We avoided using existing street view resources due to licensing issues.
2 A 360fly 4K camera.
3 Note that we do not include the orientation in the target, as we found in early experiments that this led to an unnatural task for humans. Similarly, we explored bigger grid sizes but found these to be too difficult for most annotators.
4 This is different from A2C, which uses a state-value baseline that is trained by the Bellman residual.
5 Strictly speaking, this is more general than a multi-label setup because a corner might contain multiple landmarks of the same class.
Anne H. Anderson, Miles Bader, Ellen Gurman Bard, Elizabeth Boyle, Gwyneth Doherty, Simon Garrod, Stephen Isard, Jacqueline Kowtko, Jan McAllister, Jim Miller, Catherine Sotillo, Henry S. Thompson, and Regina Weinert. The HCRC map task corpus. Language and Speech, 34(4):351-366, 1991.

Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian D. Reid, Stephen Gould, and Anton van den Hengel. Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. CoRR, abs/1711.07280, 2017.

Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. VQA: Visual question answering. In Proc. of ICCV, 2015.

Yoav Artzi and Luke Zettlemoyer. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics, 1:49-62, 2013.

Marco Baroni. Grounding distributional semantics in the visual world. Language and Linguistics Compass, 10(1):3-13, 2016.

Lawrence W. Barsalou. Grounded cognition. Annual Review of Psychology, 59(1):617-645, 2008.

Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. Enriching word vectors with subword information. arXiv preprint arXiv:1607.04606, 2016.

Samarth Brahmbhatt and James Hays. DeepNav: Learning to navigate large cities. CoRR, abs/1701.09135, 2017. URL http://arxiv.org/abs/1701.09135.

Devendra Singh Chaplot, Emilio Parisotto, and Ruslan Salakhutdinov. Active neural localization. arXiv preprint arXiv:1801.08214, 2018a.

Devendra Singh Chaplot, Kanthashree Mysore Sathyendra, Rama Kumar Pasumarthi, Dheeraj Rajagopal, and Ruslan Salakhutdinov. Gated-attention architectures for task-oriented language grounding. AAAI, 2018b.

David L. Chen and Raymond J. Mooney. Learning to interpret natural language navigation instructions from observations. In Proceedings of the 25th AAAI Conference on Artificial Intelligence (AAAI-2011), San Francisco, CA, USA, August 2011.

Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, and Dhruv Batra. Visual dialog. arXiv preprint arXiv:1611.08669, 2016.

Abhishek Das, Samyak Datta, Georgia Gkioxari, Stefan Lee, Devi Parikh, and Dhruv Batra. Embodied question answering. CoRR, abs/1711.11543, 2017a.

Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, and Dhruv Batra. Learning cooperative visual dialog agents with deep reinforcement learning. arXiv preprint arXiv:1703.06585, 2017b.

Harm de Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle, and Aaron C. Courville. GuessWhat?! Visual object discovery through multi-modal dialogue. arXiv preprint arXiv:1611.08481, 2016.

Harm de Vries, Florian Strub, Jeremie Mary, Hugo Larochelle, Olivier Pietquin, and Aaron C. Courville. Modulating early visual processing by language. In Proc. of NIPS, 2017.

Desmond Elliott, Stella Frank, Khalil Sima'an, and Lucia Specia. Multi30K: Multilingual English-German image descriptions. arXiv preprint arXiv:1605.00459, 2016.

Katrina Evtimova, Andrew Drozdov, Douwe Kiela, and Kyunghyun Cho. Emergent language in a multi-modal, multi-step referential game. arXiv preprint arXiv:1705.10369, 2017.

Simon Garrod and Anthony Anderson. Saying what you mean in dialogue: A study in conceptual and semantic co-ordination. Cognition, 27(2):181-218, 1987.

A. Gupta, A. Vedaldi, and A. Zisserman. Synthetic data for text localisation in natural images. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.

Saurabh Gupta, James Davidson, Sergey Levine, Rahul Sukthankar, and Jitendra Malik. Cognitive mapping and planning for visual navigation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017a.

Saurabh Gupta, David Fouhey, Sergey Levine, and Jitendra Malik. Unifying map and landmark based representations for visual navigation. arXiv preprint arXiv:1712.08125, 2017b.

Raia Hadsell, Pierre Sermanet, Jeff Han, Beat Flepp, Urs Muller, and Yann LeCun. Online learning for offroad robots: Using spatial label propagation to learn long-range traversability. In Proc. of Robotics: Science and Systems (RSS), volume 11, 2007.

He He, Anusha Balakrishnan, Mihail Eric, and Percy Liang. Learning symmetric collaborative dialogue agents with dynamic knowledge graph embeddings. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1766-1776, Vancouver, Canada, July 2017. Association for Computational Linguistics. URL http://aclweb.org/anthology/P17-1162.

K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, 2016.

Karl Moritz Hermann, Felix Hill, Simon Green, Fumin Wang, Ryan Faulkner, Hubert Soyer, David Szepesvari, Wojtek Czarnecki, Max Jaderberg, Denis Teplyashin, et al. Grounded language learning in a simulated 3D world. arXiv preprint arXiv:1706.06551, 2017.

Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C. Lawrence Zitnick, and Ross Girshick. CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. In Proc. of CVPR, 2017.

Douwe Kiela. Deep embodiment: grounding semantics in perceptual modalities (PhD thesis). Technical Report UCAM-CL-TR-899, University of Cambridge, Computer Laboratory, February 2017.

Douwe Kiela, Luana Bulat, Anita L. Vero, and Stephen Clark. Virtual embodiment: A scalable long-term strategy for artificial intelligence research. arXiv preprint arXiv:1610.07432, 2016.

Douwe Kiela, Alexis Conneau, Allan Jabri, and Maximilian Nickel. Learning visually grounded sentence representations. arXiv preprint arXiv:1707.06320, 2017.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014.

Satwik Kottur, José M. F. Moura, Stefan Lee, and Dhruv Batra. Natural language does not emerge 'naturally' in multi-agent dialog. CoRR, abs/1706.08502, 2017.

Angeliki Lazaridou, Alexander Peysakhovich, and Marco Baroni. Multi-agent cooperation and the emergence of (natural) language. arXiv preprint arXiv:1612.07182, 2016.

Mike Lewis, Denis Yarats, Yann N. Dauphin, Devi Parikh, and Dhruv Batra. Deal or no deal? End-to-end learning for negotiation dialogues. arXiv preprint arXiv:1706.05125, 2017.

Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. In Proc. of ECCV, 2014.

Matt MacMahon, Brian Stankiewicz, and Benjamin Kuipers. Walk the talk: Connecting language, knowledge, and action in route instructions. In Proceedings of the 21st National Conference on Artificial Intelligence (AAAI-2006), Boston, MA, USA, July 2006.

Hongyuan Mei, Mohit Bansal, and Matthew R. Walter. Listen, attend, and walk: Neural mapping of navigational instructions to action sequences. In Proceedings of AAAI, 2016.

Alexander H. Miller, Will Feng, Adam Fisch, Jiasen Lu, Dhruv Batra, Antoine Bordes, Devi Parikh, and Jason Weston. ParlAI: A dialog research software platform. arXiv preprint arXiv:1705.06476, 2017.

Piotr Mirowski, Matthew Koichi Grimes, Mateusz Malinowski, Karl Moritz Hermann, Keith Anderson, Denis Teplyashin, Karen Simonyan, Koray Kavukcuoglu, Andrew Zisserman, and Raia Hadsell. Learning to navigate in cities without a map. CoRR, abs/1804.00168, 2018. URL http://arxiv.org/abs/1804.00168.

Igor Mordatch and Pieter Abbeel. Emergence of grounded compositional language in multi-agent populations. arXiv preprint arXiv:1703.04908, 2017.

Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, and Aaron Courville. FiLM: Visual reasoning with a general conditioning layer. In Proc. of AAAI, 2018.

Stefan Riezler, Patrick Simianer, and Carolin Haas. Response-based learning for grounded machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 881-891, 2014.

Deb Roy. Grounding words in perception and action: computational insights. Trends in Cognitive Sciences, 9(8):389-396, 2005.

Linda Smith and Michael Gasser. The development of embodied cognition: Six lessons from babies. Artificial Life, 11(1-2):13-29, 2005.

Luc Steels and Manfred Hild. Language grounding in robots. Springer Science & Business Media, 2012.

Florian Strub, Harm de Vries, Jeremie Mary, Bilal Piot, Aaron Courville, and Olivier Pietquin. End-to-end optimization of goal-driven and visually grounded dialogue systems. arXiv preprint arXiv:1703.05423, 2017.
| [
"https://github.com/facebookresearch/talkthewalk.",
"https://github.com/facebookresearch/"
] |
Transformers with convolutional context for ASR

Abdelrahman Mohamed, Dmytro Okhonko, Luke Zettlemoyer
Facebook AI Research

arXiv:1904.11660 (https://arxiv.org/pdf/1904.11660v2.pdf)

Index Terms: speech recognition, Transformer network

Abstract: The recent success of transformer networks for neural machine translation and other NLP tasks has led to a surge in research work trying to apply them to speech recognition. Recent efforts have studied key research questions around ways of combining positional embeddings with speech features, and the stability of optimization for large-scale learning of transformer networks. In this paper, we propose replacing the sinusoidal positional embedding for transformers with convolutionally learned input representations. These contextual representations provide subsequent transformer blocks with the relative positional information needed for discovering long-range relationships between local concepts. The proposed system has favorable optimization characteristics: our reported results are produced with a fixed learning rate of 1.0 and no warmup steps. The proposed model reduces the word error rate (WER) by 12% and 16% relative to previously published work on the Librispeech "dev other" and "test other" subsets respectively, when no extra LM text is provided. Full code to reproduce our results will be available online at the time of publication.
Introduction
Speech Recognition systems have experienced many advances over the past decade, with neural acoustic and language models leading to impressive new levels of performance across many challenging tasks [1,2]. Advances in alignment-free sequence-level loss functions like CTC and ASG [3,4] enabled easier training with letters as output units [5,6]. The success of sequence-to-sequence models in neural machine translation systems [7,8] offered further simplification to ASR systems by integrating the acoustic and the language models into a single encoder-decoder architecture that is jointly optimized [9,10]. The encoder focuses on building acoustic representations that the decoder, through different attention mechanisms, can use to generate the target units.
Recently, transformer networks have been shown to perform well for neural machine translation [11] and many other NLP tasks [12]. A Transformer layer distinguishes itself from a regular recurrent network by entirely relying on a key-value "self"-attention mechanism for learning relationships between distant concepts, rather than relying on recurrent connections and memory cells to preserve information, as in LSTMs, that can fade over time steps. Transformer layers can be seen as bagof-concept layers because they don't preserve location information in the weighted sum self-attention operation. To model word order, sinusoidal positional embeddings are used [11].
There has been recent research interest in using transformer networks for end-to-end ASR both with CTC loss [13] and in an encoder-decoder framework [14,15] with modest performance compared to baseline systems. For a standard hybrid ASR system, [16] introduced a time-constrained key-value self-attention layer to be used in tandem with other TDNN and recurrent layers. Using time-restricted self-attention context enabled the authors to model input positions as 1-hot vectors, however, they didn't show a conclusive evidence for the impact of the self-attention context size. One interesting research question in all previous work was: how to best introduce positional information for input speech features. Answers range from dropping it altogether, adding it to input features/embedding, and concatenating it with input features leaving it to the neural network to decide how to combine them.
In this paper, we take an alternative approach. We propose replacing sinusoidal positional embedding with contextually augmented inputs learned by 2-D convolutional layers over input speech features in the encoder, and by 1-D convolutional layers over previously generated outputs in the decoder. Lower layers build atomic concepts, both in encoders and decoders, by learning local relationships between time steps. Long-range sequential structure modeling is left to subsequent layers. Although the transformer's flexible inductive bias is able to mimic convolution filters in its lower layers, we argue that this comes at the expense of brittle optimization. We believe that adding early convolutional layers allows the model to learn implicit relative positional encodings which enable subsequent transformer layers to recover the right order of the output sequence.
Using convolutional layers as input processors before recurrent layers in acoustic encoders has been previously proposed for computational reasons with minimal impact on performance [17]. So, we focus our experiments on understanding the impact of the convolutional context size consumed by the decoder 1-D convolutional layers. Our best model configuration, with a fixed learning rate of 1.0, no hyperparameter or decoder optimization, achieves 12% and 16% relative reduction in WER compared to previously published results on the acoustically challenging Librispeech [18] "dev other" and "test other" subsets, when no extra LM text data is used during decoding.
Transformers with convolutional context
We propose dividing the modeling task into two subcomponents: learning local relationships within a small context with convolutional layers, and learning the global sequential structure of the input with transformer layers. This division simplifies transformer optimization, leading to more stable training and better results, because we don't need to force lower transformer layers to learn local dependencies.
Transformer layer
Transformer layers [11] have the ability to learn long-range relationships for many sequential classification tasks [12]. Multi-head self-attention is the core component of transformer layers. Let $d_{input}$ be the input dimension to a transformer layer. Each time step in the input is projected into $d_k$-, $d_k$-, and $d_v$-dimensional vectors representing the queries (Q), keys (K), and values (V) for attention, where similarities between keys and queries determine the combination weights of values for each time step,
$$\mathrm{Attention}(Q, K, V) = \mathrm{Softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V \qquad (1)$$
The dot product between keys and queries is scaled by the inverse square root of the key dimension. This self-attention operation is done h times in parallel, for the case of h attention heads, with different projection matrices from $d_{input}$ to $d_k$, $d_k$, and $d_v$. The final output is a concatenation of h vectors, each with dimension $d_v$, which is in turn linearly projected to the desired output dimension of the self-attention layer.
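Equation (1) for a single attention head can be sketched as follows:

import math
import torch

def scaled_dot_product_attention(Q, K, V):
    """Eq. (1): Softmax(Q K^T / sqrt(d_k)) V.

    Q: (T, d_k), K: (T, d_k), V: (T, d_v) for one attention head.
    """
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)  # (T, T) similarities
    weights = torch.softmax(scores, dim=-1)            # combination weights
    return weights @ V                                 # (T, d_v)

T, d_k, d_v = 5, 32, 32
out = scaled_dot_product_attention(torch.randn(T, d_k),
                                   torch.randn(T, d_k),
                                   torch.randn(T, d_v))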
On top of the self-attention component, transformer layers apply multiple operations to each time step: dropout, a residual connection, layer norm, two fully connected layers with a ReLU in between, and another residual connection and layer norm. Figure (1)-left shows the details of one transformer layer as proposed by [11].
Adding context to transformer
Our convolutional layers are added below the transformer layers, and we do not make any use of positional encodings. The model learns an acoustic language model over the bag of discovered acoustic units as it goes deeper in the encoder. The experimental results show that using a relatively deep encoder is critical for getting good performance. For the encoder, we use 2-D convolutional blocks with layer norms and ReLU after each convolutional layer. Each convolutional block contains K convolutional layers followed by a 2-D max-pooling layer, as shown in Figure (2)-right. For the decoder, we follow a similar approach using 1-D convolutions over embeddings of previously predicted words (shown in Figure (2)-left, with N 1-D convolutional layers in each decoder convolutional block). Each block in the model is repeated multiple times (shown on the top right corner of each block). On the decoder side, we use a separate multi-head attention layer to aggregate encoder context for each decoder transformer block. We found that having more than one attention layer improves the overall system recognition performance. The decoder 1-D convolution only looks at historical predictions, with its end point at the current time step. Similarly, the transformer layers have future target steps masked, so that decoder self-attention only runs over current and previous time steps to respect left-to-right output generation. We are not investigating online/streaming decoding conditions in this paper, so the encoder self-attention is allowed to operate over the entire input utterance.
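A minimal sketch of one encoder-side 2-D convolutional block, assuming 'same' padding and a channel-wise layer norm; the paper does not specify these details.

import torch
import torch.nn as nn

class Conv2dBlock(nn.Module):
    """K conv layers, each followed by a norm and ReLU, then 2-D max pooling."""
    def __init__(self, in_channels, out_channels, K=2, kernel=3, pool=2):
        super().__init__()
        layers = []
        for i in range(K):
            layers.append(nn.Conv2d(in_channels if i == 0 else out_channels,
                                    out_channels, kernel, padding=kernel // 2))
            # GroupNorm(1, C) normalizes over channels and positions per sample;
            # the paper's exact layer-norm placement is an assumption.
            layers.append(nn.GroupNorm(1, out_channels))
            layers.append(nn.ReLU())
        layers.append(nn.MaxPool2d(pool))
        self.block = nn.Sequential(*layers)

    def forward(self, x):                  # x: (batch, channels, time, freq)
        return self.block(x)

x = torch.randn(2, 1, 100, 83)             # e.g., 80 mel + 3 pitch features
y = Conv2dBlock(1, 64)(x)                  # -> (2, 64, 50, 41) after pooling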
Full end-to-end model architecture

Figure (1)-right shows the full end-to-end system architecture.
Experimental results
Experimental Setup
We evaluate performance on the Librispeech dataset [18], containing 1000h of training data with development and test sets split into simple ("clean") and harder ("other") subsets.¹ We use 5k "unigram" subword target units learned by the SentencePiece package [20] with full coverage of all training text data. Input speech is represented as 80-D log mel-filterbank coefficients plus three fundamental frequency features computed every 10ms with a 25ms window.
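A sketch of the subword-unit setup with the SentencePiece Python package; the input file name and options shown here are illustrative, not the paper's exact configuration.

import sentencepiece as spm

# Train a 5k "unigram" subword model on the training transcripts.
spm.SentencePieceTrainer.train(
    input="train_transcripts.txt",
    model_prefix="librispeech_unigram5k",
    vocab_size=5000,
    model_type="unigram",
    character_coverage=1.0,   # full coverage of all training text
)

sp = spm.SentencePieceProcessor(model_file="librispeech_unigram5k.model")
print(sp.encode("speech recognition", out_type=str))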
We did not tune experiments to the best possible performance via training hyperparameters or decoder optimization. We don't use scheduled sampling or label smoothing. For regularization, we use a single dropout rate of 0.15 across all blocks as part of our default configuration. For model optimization, we use the AdaDelta algorithm [21] with a fixed learning rate of 1.0 and gradient clipping at 10.0. We run all configurations for 80 epochs, then report results on an average model computed over the last 30 checkpoints. Averaging the last few checkpoints brings the model weights closer to the nearest local minimum. We could have stopped training models much earlier than 80 epochs, with a different early-stopping point for different runs, but decided to stick to a generic training recipe to simplify reproducing our results. It is important to mention that we aren't using a learning-rate warmup schedule, and yet the model converges to the reported WER results in a stable way. This general fixed training recipe wasn't optimized on any part of Librispeech.
The standard convolutional transformer model used in most experiments has the following configuration: (1) two 2-D convolutional blocks, each with two conv. layers with kernel size=3 and max-pooling kernel=2, where the first block has 64 feature maps and the second has 128; (2) 10 encoder transformer blocks, all with transformer dim=1024, 16 heads, and intermediate ReLU layer size=2048; (3) decoder input word embedding dim=512; (4) three 1-D conv. layers, each with kernel size=3 (no max pooling is used for the decoder-side 1-D convolution); and (5) 10 decoder transformer blocks, each with encoder-side multi-head attention but otherwise identical in configuration to the encoder transformer block. This canonical model has about 223M parameters, and it takes about 24 hours to perform all 80 epochs on 2 machines, each with 8 GPUs with 16GB of memory. All results are reported without any external language model trained on extra text data. Our focus is to study the contextual transformer decoder's ability to model the statistical properties of the spoken training data. We use a beam size of 5 during inference for all experiments unless mentioned otherwise.
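Checkpoint averaging of this kind can be sketched as below, assuming each checkpoint file stores a plain state dict and one checkpoint is saved per epoch (the file naming is illustrative).

import torch

def average_checkpoints(paths):
    """Average parameters over checkpoints (here: the last 30 of 80 epochs)."""
    avg = None
    for path in paths:
        state = torch.load(path, map_location="cpu")
        if avg is None:
            avg = {k: v.clone().float() for k, v in state.items()}
        else:
            for k in avg:
                avg[k] += state[k].float()
    return {k: v / len(paths) for k, v in avg.items()}

paths = [f"checkpoint{epoch}.pt" for epoch in range(51, 81)]  # last 30 of 80
# model.load_state_dict(average_checkpoints(paths))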
Model Comparisons
We first studied the performance of our approach against alternative architectures and positional encoding schemes. In Table (1) we show the WER of the proposed transformer encoder-decoder model with convolutional context, using the canonical configuration, in the first row. Replacing the 1-D convolutional context in the decoder with sinusoidal positional embedding, as proposed in the baseline machine translation transformers [11] and adopted in [13, 15], shows inferior WER performance. By combining sinusoidal and convolutional position embedding (rows 1+2), we don't observe any gains. This supports our intuition that the relative convolutional positional information provides sufficient signal for the transformer layers to recreate more global word order. We also found that having multiple encoder-side attention layers is critical for achieving the best WER. Increasing the size of the intermediate ReLU layer in each encoder and decoder layer was found to greatly improve the overall WER across different sets; however, increasing the number of attention heads, while keeping the attention dimension the same, deteriorates the performance.

To better understand these results, we also studied the effects of different hyperparameter settings. Table (2) shows the effect of different decoder convolutional context sizes spread over different depths. All configurations in Table (2) share the same canonical configuration, and the number of 1-D conv. feature maps was chosen to ensure the total number of parameters is fixed between all configurations. The best performance comes from using the same parameter budget over a wider context that is built over multiple convolutional layers. However, the decoder is able to get reasonable WER even with a context of just 3 words as input to the transformer layers. Using a deep transformer encoder captures long-range structure of the data, as an acoustic LM built on top of learned concepts from the convolutional layers. A deeper encoder also helps marginalize out global, utterance-specific speaker and environment characteristics while focusing on the content. A deeper decoder, although not as critical, showed better overall performance. Table (3) shows WER for different depth configurations. We wanted to understand the effect of fixing the encoder depth while changing the decoder's and vice versa, of fixing the total sum of encoder and decoder depths, and of using the same depth on both sides all the way up to 14 transformer layers.
Conclusion and future work
We presented a transformer seq2seq ASR system with learned convolutional context, both for the encoder and the decoder. Input convolutional layers capture relative positional information which enables subsequent transformer blocks to learn long range relationships between local concepts in the encoder, and recover the target sequence in the decoder. Using a deep transformer encoder was important to reach best performance, as we demonstrated empirically. Our best configuration achieves 12% and 16% relative reduction in WER compared to previously published systems on Librispeech "dev other" and "test other" subsets respectively, when no extra LM text is provided. Future work includes extending our approach to other NLP tasks, and testing its impact on larger scale benchmarks.
Figure 1: Left: components of one transformer block. Right: block diagram of the full end-to-end model.

Figure 2: Left: one decoder-side 1-D convolutional block. Right: one encoder-side 2-D convolutional block.
Table 2: WER for different decoder convolution architectures.
Setup                Enc/Dec depth   dev clean   dev other   test clean   test other
Same Enc/Dec depth   6/6             5.6         14.5        5.7          15.3
                     8/8             5.3         14.0        5.3          13.9
                     10/10           5.2         13.7        5.3          14.0
                     12/12           5.0         13.0        5.0          13.3
                     14/14           5.0         12.9        5.0          13.4
Same total depth     2/10            7.5         18.4        7.8          18.7
                     4/8             6.2         15.5        6.0          16.0
                     6/6             5.6         14.5        5.7          15.3
                     8/4             5.4         13.9        5.3          14.3
                     10/2            5.2         14.1        5.4          14.6
Same Enc depth       10/2            5.2         14.1        5.4          14.6
                     10/4            5.1         13.7        5.2          14.2
                     10/6            5.0         13.3        5.1          13.7
                     10/8            5.2         13.6        5.2          14.3
                     10/10           5.2         13.7        5.3          14.0
Same Dec depth       2/10            7.5         18.4        7.8          18.7
                     4/10            6.4         15.4        6.3          16.4
                     6/10            5.7         14.5        5.7          14.6
                     8/10            5.2         14.0        5.4          14.3
                     10/10           5.2         13.7        5.3          14.0

Table 3: WER for different transformer depths in the encoder and decoder.

Final Results

Based on these experimental findings, we combined the best-performing configurations into one model that is similar to the canonical model except that: (1) we use 4k ReLU layers in all transformer blocks in the encoder and the decoder; (2) we use 16 encoder transformer blocks; and (3) we only use 6 decoder transformer blocks. The results of the best model are shown in Table (4). For decoding of the best model we used a beam size of 20. Table (4) compares this model to other previously published results on the Librispeech dataset. For completeness, we added models that use externally trained LMs on extra text data, although their results aren't comparable to ours. Compared to models with no external LM, our model brings 12% to 16% relative WER reduction on the acoustically challenging "dev other" and "test other" subsets of Librispeech. This suggests that the convolutional transformer indeed learns long-range acoustic characteristics of speech data, e.g., speaker and environment characteristics; the model doesn't bring much improvement on the "dev clean" and "test clean" subsets, which need external text data for improvement. The results confirm our belief that the improvements found in this paper are orthogonal to further potential improvements to the WER from an LM trained on a much larger text corpus.
Table 4: WER comparison with other previously published work on Librispeech.
1 We decided to concentrate on Librispeech rather than smaller datasets (e.g., TIMIT, WSJ): with current model capacities, research findings on smaller datasets can't reliably generalize to new scenarios and don't provide universal modeling trends. Early CTC experiments showed no gains on WSJ [19], while CTC was later proved to be one of the current best large-scale loss functions [6].
[1] G. Hinton, L. Deng, D. Yu, G. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. Sainath, and B. Kingsbury. Deep neural networks for acoustic modeling in speech recognition. Signal Processing Magazine, 2012.
[2] T. Mikolov, M. Karafiát, L. Burget, J. Cernocký, and S. Khudanpur. Recurrent neural network based language model. In INTERSPEECH, 2010.
[3] A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber. Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd International Conference on Machine Learning (ICML), 2006.
[4] R. Collobert, C. Puhrsch, and G. Synnaeve. Wav2letter: An end-to-end convnet-based speech recognition system. 2016. http://arxiv.org/abs/1609.03193
[5] A. Graves and N. Jaitly. Towards end-to-end speech recognition with recurrent neural networks. In ICML, 2014.
[6] A. Y. Hannun, C. Case, J. Casper, B. Catanzaro, G. Diamos, E. Elsen, R. Prenger, S. Satheesh, S. Sengupta, A. Coates, and A. Y. Ng. Deep speech: Scaling up end-to-end speech recognition. 2014. http://arxiv.org/abs/1412.5567
[7] K. Cho, B. van Merrienboer, Ç. Gülçehre, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. 2014. http://arxiv.org/abs/1406.1078
[8] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. 2014. http://arxiv.org/abs/1409.3215
[9] W. Chan, N. Jaitly, Q. V. Le, and O. Vinyals. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In ICASSP, 2016. http://williamchan.ca/papers/wchan-icassp-2016.pdf
[10] J. Chorowski, D. Bahdanau, D. Serdyuk, K. Cho, and Y. Bengio. Attention-based models for speech recognition. 2015.
[11] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need. 2017. http://arxiv.org/abs/1706.03762
[12] J. Devlin, M. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. 2018. http://arxiv.org/abs/1810.04805
[13] J. Salazar, K. Kirchhoff, and Z. Huang. Self-attention networks for connectionist temporal classification in speech recognition. 2019. https://arxiv.org/abs/1901.10055
[14] M. Sperber, J. Niehues, G. Neubig, S. Stüker, and A. Waibel. Self-attentional acoustic models. 2018. http://arxiv.org/abs/1803.09519
[15] S. Zhou, L. Dong, S. Xu, and B. Xu. Syllable-based sequence-to-sequence speech recognition with the transformer in Mandarin Chinese. In Interspeech, 2018. https://arxiv.org/abs/1804.10752
[16] D. Povey, H. Hadian, P. Ghahremani, K. Li, and S. Khudanpur. A time-restricted self-attention layer for ASR. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018.
[17] S. Watanabe, T. Hori, S. Karita, T. Hayashi, J. Nishitoba, Y. Unno, N. E. Y. Soplin, J. Heymann, M. Wiesner, N. Chen, A. Renduchintala, and T. Ochiai. ESPnet: End-to-end speech processing toolkit. In Interspeech, 2018.
[18] V. Panayotov, G. Chen, D. Povey, and S. Khudanpur. Librispeech: An ASR corpus based on public domain audio books. 2015.
[19] A. Graves, N. Jaitly, and A. Mohamed. Hybrid speech recognition with deep bidirectional LSTM. In IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), 2013.
[20] T. Kudo and J. Richardson. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In the 2018 Conference on Empirical Methods in Natural Language Processing, 2018.
[21] M. D. Zeiler. ADADELTA: An adaptive learning rate method. 2012. http://arxiv.org/abs/1212.5701
[22] K. J. Han, A. Chandrashekaran, J. Kim, and I. R. Lane. The CAPIO 2017 conversational speech recognition system. 2018. http://arxiv.org/abs/1801.00059
[23] A. Zeyer, K. Irie, R. Schlüter, and H. Ney. Improved training of end-to-end attention models for speech recognition. 2018. http://arxiv.org/abs/1805.03294
[24] V. Liptchinsky, G. Synnaeve, and R. Collobert. Letter-based speech recognition with gated convnets. CoRR, abs/1712.09444, 2017. http://arxiv.org/abs/1712.09444
[25] A. Hannun, A. Lee, Q. Xu, and R. Collobert. Sequence-to-sequence speech recognition with time-depth separable convolutions. 2019.
[26] N. Zeghidour, Q. Xu, V. Liptchinsky, N. Usunier, G. Synnaeve, and R. Collobert. Fully convolutional speech recognition. CoRR, abs/1812.06864, 2018.
Falsesum: Generating Document-level NLI Examples for Recognizing Factual Inconsistency in Summarization

Prasetya Ajie Utama (UKP Lab, Technical University of Darmstadt, Germany), Joshua Bambrick (Bloomberg, London, United Kingdom; jbambrick7@bloomberg.net), Nafise Sadat Moosavi (UKP Lab, Technical University of Darmstadt, Germany; Department of Computer Science, The University of Sheffield), Iryna Gurevych (UKP Lab, Technical University of Darmstadt, Germany)

Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, July 10-15, 2022. DOI: 10.18653/v1/2022.naacl-main.199. arXiv:2205.06009. PDF: https://www.aclanthology.org/2022.naacl-main.199.pdf

Abstract: Neural abstractive summarization models are prone to generate summaries which are factually inconsistent with their source documents. Previous work has introduced the task of recognizing such factual inconsistency as a downstream application of natural language inference (NLI). However, state-of-the-art NLI models perform poorly in this context due to their inability to generalize to the target task. In this work, we show that NLI models can be effective for this task when the training data is augmented with high-quality task-oriented examples. We introduce Falsesum, a data generation pipeline leveraging a controllable text generation model to perturb human-annotated summaries, introducing varying types of factual inconsistencies. Unlike previously introduced document-level NLI datasets, our generated dataset contains examples that are diverse and inconsistent yet plausible. We show that models trained on a Falsesum-augmented NLI dataset improve the state-of-the-art performance across four benchmarks for detecting factual inconsistency in summarization.
Introduction
Recent advances in conditional text generation and the availability of large-scale datasets have given rise to models which generate highly fluent abstractive summaries (Zhang et al., 2019). However, studies indicate that such models are susceptible to generating factually inconsistent outputs, i.e., where the content of the summary is not semantically entailed by the input document (Kryscinski et al., 2019; Goodrich et al., 2019). This motivates a new line of research for recognizing factual inconsistency in generated summaries (Kryscinski et al., 2020; Pagnoni et al., 2021; Wang et al., 2020; Fabbri et al., 2021).
This factual consistency problem is closely related to the task of natural language inference (NLI) whereby a hypothesis sentence is classified as either entailed, neutral, or contradicted by a given premise sentence (Condoravdi et al., 2003;Dagan et al., 2006;Bowman et al., 2015). Using an input document as the premise and a corresponding generated summary as the hypothesis, earlier solutions have adopted out-of-the-box NLI models to detect factual inconsistency, albeit with limited success (Falke et al., 2019;Kryscinski et al., 2020).
This poor performance largely stems from the fact that most NLI datasets are not designed to reflect the input characteristics of downstream tasks (Khot et al., 2018). Such datasets may not always capture the kinds of entailment phenomena which naturally arise from neural abstractive summarization. More importantly, there is also a discrepancy in terms of the input granularity, i.e., the premises in this consistency classification task consist of multi-sentence documents while common NLI datasets use single-sentence premises.
In this work, we introduce Falsesum, a data generation pipeline that produces NLI examples consisting of documents paired with gold summaries as positive examples and automatically generated inconsistent summaries as negative examples. We propose a novel strategy to train a text generation model to render false summaries of a given document using only supervision from an existing summarization dataset (Nallapati et al., 2016). In addition, our generator supports switchable input control codes to determine the type of factual error exhibited in the generated output. This design allows Falsesum to compose diverse and naturalistic outputs which more closely resemble the inconsistent summaries generated by summarization models (Maynez et al., 2020). This contrasts with previous solutions (e.g., Kryscinski et al., 2020;Yin et al., 2021), which synthesize NLI examples using rule-based transformations or language model-based replacements, limiting their diversity and ability to reflect realistic factual errors in summarization. Overall, our contributions in this paper are the following:
First, we present a novel training pipeline to create a text generation model which takes as input a pair of a document and a corresponding gold summary. It then perturbs the summary such that it is no longer factually consistent with the original document. Our strategy obviates the need for explicit examples of inconsistent summaries, using only an existing summarization dataset. We use this model to generate a large-scale NLI dataset for the task of recognizing factually inconsistent summaries. The resultant dataset consists of pairs with documents as the premise and naturalistic summaries as the hypotheses, each labeled as either entailment or non-entailment.
Second, we demonstrate the utility of our generated data for augmenting existing NLI datasets. We show that on four benchmark datasets, NLI models trained on Falsesum-augmented data outperform those trained on previous document-level NLI datasets. We conduct an analysis to show that Falsesum-generated summaries are plausible and hard to distinguish from human-written summaries. Lastly, we show that the improvement over the benchmarks is largely attributable to the diversity of factual errors that Falsesum introduces.
Related Work
This work is related to the growing body of research into factual consistency and hallucination in text generation models, particularly for summarization (Cao et al., 2018). Research has found that around 30% of summaries generated by abstractive summarization models contain information which is inconsistent with the source document (Kryscinski et al., 2019). This motivates the development of automatic approaches to assess factual consistency in generated summaries, as well as benchmark datasets to measure progress on this task (Falke et al., 2019; Kryscinski et al., 2020; Pagnoni et al., 2021; Fabbri et al., 2021).
Earlier work by Goodrich et al. (2019) proposes to use an information extraction model to extract relation tuples from the ground-truth summary text and the generated summary and then count the overlap as the measure of factuality. Eyal et al. (2019); Durmus et al. (2020); Wang et al. (2020) use a question-answering model to detect factual inconsistency by matching the predicted answers using the document and the summary as the context. Concurrently, researchers have drawn a connection between factual consistency and natural language inference (NLI), observing that all information in a summary should be entailed by the source document. While this approach enables the summary to be directly evaluated without first extracting its intermediate semantic structure, earlier attempts were largely unsuccessful. Falke et al. (2019) use the probabilities assigned to the entailment label by NLI models to re-rank the summary candidates given by beam search, but found that this did not reduce consistency errors. Kryscinski et al. (2020) evaluate out-of-the-box NLI models on the task of inconsistency detection in a binary classification setting and show that their performance is only slightly better than majority voting.
In the same paper, Kryscinski et al. (2020) propose FactCC, a synthetic NLI data generation process which applies a set of transformation rules to obtain examples of inconsistent summaries (e.g., sentence negation, entity swapping). They demonstrate that the resulting NLI model performs well on realistic test cases which are obtained by manually annotating the output of several summarization models. This highlights the importance of NLI examples beyond sentence-level granularity which more closely resemble the input characteristics of the downstream tasks (Mishra et al., 2021). 2 While the FactCC model is moderately effective for detecting factual inconsistency, subsequent work indicates that it only performs well on easier test cases, where highly extractive summaries (i.e., those with high lexical overlap between a summary and the source document) tend to be factually consistent and more abstractive summaries are likely to be inconsistent (Zhang et al., 2020). Furthermore, Goyal and Durrett (2021) show that the synthetic and rule-based nature of FactCC leads to a lack of diversity in consistency error types, which aligns poorly with the error distribution found in more abstractive summaries.
Falsesum addresses these limitations using controlled natural language generation to construct an NLI dataset which better targets the summarization domain. Inspired by the recent work on controllable generation (Keskar et al., 2019;Ross et al., 2021), we employ a generation model conditioned on an input code which controls the type of consistency errors induced. We further use the generated document-level NLI examples for augmentation and show that NLI models can benefit from the additional data without hurting their existing inference ability (Min et al., 2020).
Falsesum Approach
Design Overview
Falsesum takes as input a source document D and a corresponding reference summary S+. The framework then preprocesses and formats D and S+ and feeds them into a generation model G which outputs a factually inconsistent summary S−. For each summarization example, we then have both a positive (entailment) and a negative (non-entailment) NLI tuple, (D, S+, Y = 1) and (D, S−, Y = 0), each consisting of a document-level premise, a summary sentence, and the consistency label (1 indicates entailment).
Falsesum aims to produce a naturalistic S− which is contrastive with respect to its corresponding S+. This means that S+ and S− should be indistinguishable in their surface characteristics (e.g., style, length, vocabulary) and differ only in their factual consistency with respect to D. This ensures that the resulting NLI model learns the correct notion of factual consistency rather than discriminating based on surface features (McCoy et al., 2019). In addition to naturalness, we consider the diversity of the consistency error types exhibited by S−. We follow the consistency error typology introduced by Maynez et al. (2020), which categorizes consistency errors as either intrinsic, i.e., errors due to incorrect consolidation of information from the source document, or extrinsic, i.e., errors due to assuming new information not directly inferable from the contents of the source document.
As illustrated in Figure 1, a generation model G is trained to imitate the consistency mistakes of summarization models. Specifically, it generates perturbed summaries by either (1) incorrectly inserting pieces of information from the source document into random spans of the original summary; or (2) amending pieces of information in the summary by hallucinating new "facts" not present in the source document.
To this end, the framework identifies (i) what information or "facts" in the source document are available to the generator; and (ii) where the incorrect information can be inserted into the gold summary, which is indicated by span masking. We obtain both by subsequently performing input preprocessing and formatting steps (§3.2 and §3.3).
Next, we define the following seq2seq task to train the model G: "Given (i) a list of shuffled and formatted pieces of information extracted from the source document and the gold summary and (ii) a partially masked gold summary, fill in the blanks and generate the original gold summary." Note that using gold summaries means that we can apply the existing summarization corpus to train G to generate more coherent and plausible sentences.
Input Preprocessing
Following Goodrich et al. (2019), "facts" in the source document and the gold summary are defined as open information extraction (OpenIE) tuples, which represent the predicate and argument structures found in a sentence. We denote each relation tuple as (arg_0, pred, ..., arg_n), where the predicate pred describes the event (what happened) and its complementing semantic arguments arg represent the who, to whom, where, or how of the event. Predicates are usually the main verb of a clause. Both predicates and their arguments consist of spans of tokens (Fader et al., 2011).
We use an OpenIE implementation of PredPatt (White et al., 2016; Zhang et al., 2017), a pattern-based framework for predicate-argument extraction. 3 As illustrated in the top half of Figure 2, we extract the relation tuples from each source document and its corresponding reference summaries. To minimize the risk of G inadvertently generating consistent summaries, we corrupt each extracted "fact" by removing one randomly chosen argument from each tuple. For instance, OpenIE may extract the following tuple from a sentence:
(Jo [ARG0], plans to give [PRED], Alex [ARG1], apples [ARG2])

We then randomly choose apples [ARG2] to be removed from the tuple. We additionally lemmatize the dependency root word of each argument and predicate span, e.g., plans to give ⇒ plan to give. This forces the model to learn to correct for grammaticality by inflecting the spans when inserting them into the masked spans. Once all such spans are extracted and processed, they are grouped and shuffled into two lists (predicates and arguments).
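As a concrete illustration of this corruption step, consider the minimal sketch below. It is not the authors' released code: it assumes tuples have already been extracted by an OpenIE system such as PredPatt and are represented as a simple dict, and it uses spaCy (an assumption) only to lemmatize the dependency root of each span.

```python
import random
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed lemmatizer; any root lemmatizer works

def lemmatize_root(span_text):
    """Lemmatize only the dependency root of a span,
    e.g. 'plans to give' -> 'plan to give'."""
    doc = nlp(span_text)
    root = next(tok for tok in doc if tok.head == tok)  # the dependency root
    return span_text.replace(root.text, root.lemma_, 1)

def corrupt_tuple(openie_tuple):
    """Remove one randomly chosen argument from an OpenIE tuple and
    lemmatize the roots of the remaining predicate/argument spans."""
    pred = lemmatize_root(openie_tuple["pred"])
    args = list(openie_tuple["args"])
    args.pop(random.randrange(len(args)))  # drop a random argument
    return pred, [lemmatize_root(a) for a in args]

# e.g. corrupt_tuple({"pred": "plans to give", "args": ["Jo", "Alex", "apples"]})
# may return ('plan to give', ['Jo', 'apples'])
```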
Input Formatting
Let P = (PRED_1, ..., PRED_n) and A = (ARG_1, ..., ARG_m) be the unordered lists of extracted predicates and arguments from a source document D and the summary sentence S+. Additionally, we assume a masked summary sentence M (described later), derived from S+, and a control code variable c ∈ {intrinsic, extrinsic}. Generator G is trained to compute p(S+ | P, A, M, c). As illustrated in the bottom half of Figure 2, we encode all the conditional variables into the following format:
Predicates: P; Arguments: A; Code: c; Summary: M
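A minimal sketch of this serialization step is shown below; the exact delimiters are an assumption made for illustration, following the style of the examples in Tables 1 and 8.

```python
def format_input(predicates, arguments, code, masked_summary):
    """Serialize the shuffled predicate/argument lists, the control code
    (intrinsic or extrinsic), and the masked gold summary into a single
    seq2seq input string."""
    return (
        f"Predicates: {', '.join(predicates)}; "
        f"Arguments: {', '.join(arguments)}; "
        f"Code: {code}; "
        f"Summary: {masked_summary}"
    )
```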
In the following, we describe the key steps in the input formatting process. (We note that the quality of the OpenIE extractions may impact the overall quality of our data generation framework.)

Figure 2: Input format design of Falsesum. The framework first extracts the predicate and argument spans from the source document and the gold summary. The spans are then corrupted, lemmatized, and shuffled before being inserted into the input template.
Step 1: Span Removal. Initially, P and A include predicate and argument spans from the original summary which may be used to reconstruct S+. However, at test time we remove these "gold" spans from the two lists to force G to make consistency mistakes. The removal is also applied when training with the extrinsic control code, to train G to predict plausible unseen spans. 4 We summarize the different input formats in Table 1.
Step 2: Span Reduction To encourage G to generate fine-grained errors (Pagnoni et al., 2021;Goyal and Durrett, 2021), we also train it to hallucinate incorrect modifiers into spans from P and A. To this end, we randomly drop adjectives and adverbs from 10% of the gold predicate and argument spans. For instance, an argument span "recently elected prime minister" will be reduced to "minister". This teaches the model to generate the remaining part of the span given only the context provided in the formatted input.
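A sketch of this reduction step, assuming spaCy part-of-speech tags to locate modifiers (the paper does not specify the tagger), could look as follows; the 10% rate follows the text above.

```python
import random
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed POS tagger

def reduce_span(span_text, p=0.1):
    """With probability p, drop adjective and adverb modifiers from a gold
    span so the model must regenerate them from context, e.g.
    'recently elected prime minister' -> 'elected minister' (approximately;
    the paper's example reduces it all the way to 'minister')."""
    if random.random() >= p:
        return span_text
    kept = [t.text for t in nlp(span_text) if t.pos_ not in ("ADJ", "ADV")]
    return " ".join(kept) if kept else span_text
```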
Step 3: Control Code. To control the type of consistency error generated by G, we append the string "code:" followed by either "intrinsic" or "extrinsic" to the input tokens. The code is chosen randomly with equal probability of 0.5. Once the code is chosen, we perform the remaining formatting steps accordingly (see Table 1).
Step 4: Summary Masking. We derive the masked summary M by replacing the spans of randomly selected predicates and arguments with a special token <span_i>, where i = 0 is reserved for the predicate and i > 0 for its arguments. These tokens control where the incorrect information should be inserted into the original summary by the generator model (see Table 1).
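The masking step can be sketched as follows; exactly which spans get masked is sampled, and the coin-flip policy here is an illustrative assumption rather than the authors' actual sampling scheme.

```python
import random

def mask_summary(summary, pred_span, arg_spans):
    """Replace selected predicate/argument spans in the gold summary with
    <span_i> placeholders: i = 0 for the predicate, i > 0 for arguments."""
    masked = summary
    if random.random() < 0.5:  # maybe mask the predicate
        masked = masked.replace(pred_span, "<span_0>", 1)
    for i, arg in enumerate(arg_spans, start=1):  # maybe mask each argument
        if random.random() < 0.5:
            masked = masked.replace(arg, f"<span_{i}>", 1)
    return masked
```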
Training Falsesum
We run the Falsesum data generation pipeline on the train split of the CNN/DailyMail corpus (Hermann et al., 2015), originally collected for question answering, but subsequently reformulated for summarization by Nallapati et al. (2016). This dataset contains English news documents paired with human-written summaries, each consisting of multiple sentences. We break the summaries down such that each Falsesum example consists of the document text and a single sentence summary. We then run the preprocessing and formatting steps on each document-summary pair. The resulting pairs of formatted input and target output are subsequently split into train and test sets which consist of 394,774 and 262,692 instances, respectively.
We use the T5-base model (Raffel et al., 2020) as generator G and fine-tune it on the seq2seq task described in §3.1. The NLI examples are produced by running the fine-tuned generator on the preprocessed and formatted test split. 5 This renders an equal number of positive and negative examples. In our experiments, we randomly sample 100,000 Falsesum examples to augment the NLI dataset.
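For concreteness, a sketch of the fine-tuning setup using the Hugging Face transformers library (an assumption; the paper does not name its framework) might prepare training pairs as below, with the length limits taken from Appendix A.

```python
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def make_training_example(formatted_input, gold_summary):
    """One seq2seq pair: the formatted input (Section 3.3) as the source
    and the original gold summary as the target."""
    enc = tokenizer(formatted_input, max_length=256, truncation=True,
                    padding="max_length", return_tensors="pt")
    labels = tokenizer(gold_summary, max_length=42, truncation=True,
                       padding="max_length", return_tensors="pt").input_ids
    labels[labels == tokenizer.pad_token_id] = -100  # ignore pad in the loss
    return {**enc, "labels": labels}
```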
Experimental Settings
Our experiments aim to demonstrate the effectiveness of Falsesum-generated document-level examples for NLI dataset augmentation. We evaluate the downstream performance of the NLI models by testing them against several benchmarks for determining the factual inconsistency of generated summaries. In this section, we describe the training setup of the NLI models, including the model and both the sentence-and document-level datasets.
Training
NLI models We train several NLI models by fine-tuning RoBERTa-base on either the original or the augmented MNLI dataset (Williams et al., 2018). The MNLI dataset consists of 392,702 train instances, each labeled as either "entailment", "neutral", or "contradiction". To enable the application of NLI data to this factual consistency task, we use a binary formulation of NLI, where the "neutral" and "contradiction" labels are combined into "non-entailment". The document-level inputs are formatted similarly to sentence-level examples, i.e., the document premise D and the summary hypothesis (S+ or S−) are concatenated and a special classification token ([CLS]) is used (Devlin et al., 2019).
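A sketch of this binary reformulation and input encoding, again assuming the Hugging Face tokenizer API, is given below; the choice to truncate only the long document side is our assumption.

```python
from transformers import RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")

# Binary NLI: merge "neutral" and "contradiction" into "non-entailment".
LABEL_MAP = {"entailment": 1, "neutral": 0, "contradiction": 0}

def encode_pair(document_premise, summary_hypothesis, max_length=512):
    """Concatenate the document premise and summary hypothesis like a
    sentence-level NLI pair; the tokenizer inserts the special
    classification and separator tokens."""
    return tokenizer(document_premise, summary_hypothesis,
                     truncation="only_first",  # truncate the long document
                     max_length=max_length, return_tensors="pt")
```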
Document-level NLI datasets
We conduct augmentation comparisons with several multi-sentence NLI datasets which obtain examples from news or summarization domains. We consider the following datasets: ANLI (Nie et al., 2020), a paragraph-level NLI dataset collected via an iterative and adversarial human-in-the-loop annotation protocol. It consists of mostly Wiki data but also includes a small portion of news text; DocNLI (Yin et al., 2021), a document-level NLI dataset containing multi-sentence premise and hypothesis sentences, collected by converting QA examples to NLI instances (Demszky et al., 2018) and replacing words and sentences in news summaries using a language model; FactCC (Kryscinski et al., 2020), a large-scale dataset specifically generated for training summary factual correctness classification models. The positive examples in FactCC are obtained by backtranslating a random sentence from a CNN/DailyMail news story, while negative examples are obtained by perturbing the sentence using predefined rules, e.g., entity swapping. For a fair comparison, we sample 100,000 examples from each augmentation dataset in our experiments.
Benchmark Datasets
We evaluate these NLI models on four benchmark datasets to classify the factual consistency of abstractive summaries. These datasets differ in terms of the annotation protocol, the granularity of the summaries (single- or multi-sentence), the summarization corpus used, and the models used to generate the summaries that are annotated. The tasks are formulated as binary classification with the labels "consistent" and "inconsistent". We evaluate NLI models on these tasks by mapping the predicted label "entailment" to "consistent" and "non-entailment" to "inconsistent". The benchmark datasets are detailed in the following:
FactCC In addition to introducing a synthetic training dataset for the task, Kryscinski et al. (2020) introduce a manually annotated test set. It contains 1,431 document and single-sentence summary pairs generated by various neural abstractive summarization models trained on the CNN/DailyMail corpus. 6
Ranksum Falke et al. (2019) formulate the factual consistency problem in summarization as a ranking task. They introduce a dataset consisting of 107 documents, each paired with a set of five ranked summary candidates obtained from the beam search of a summarization model. Given the manually annotated consistency label on summary candidates, the task is to re-rank the list such that the top-1 summary is factually consistent.
SummEval Fabbri et al. (2021) introduce a comprehensive benchmark for factual consistency detection in summarization. It includes summaries generated by seven extractive models and sixteen abstractive models, which are judged by three annotators using a 5-point Likert scale. 7

QAGS The dataset collected by Wang et al. (2020) consists of 239 test set instances from XSUM (Narayan et al., 2018) and 714 instances from CNN/DailyMail. 8 Each instance consists of a pair of a source document and a single-sentence summary, which is labeled via majority voting over three annotators' labels.
Results and Discussion
Main Results
Performance on FactCC, QAGS, and SummEval is measured using balanced accuracy, which is suitable for class-imbalanced settings, since the factually consistent label is the majority in some benchmark datasets. It is defined as the average recall of the two classes, such that majority label voting obtains only a 50% score. To measure ranking performance in Ranksum, we calculate the average Precision@1, which computes the fraction of times a factually consistent summary is ranked highest on each test instance. We perform five training runs for each setup using different random seeds and take the mean to address performance instability (Reimers and Gurevych, 2017).

From the results in Table 2, we observe the following: (1) Models trained on sentence-level MNLI datasets perform poorly when evaluated directly on document-level benchmarks, even after we increase the maximum input token length from 128 to 512 (the average context word count is only 22 in MNLI but 546 in FactCC); (2) This limitation can be alleviated by the sentence-wise prediction strategy ([split-doc]MNLI-128; see details in Appendix B), which achieves 66.63. Note, however, that this improvement comes at the expense of compute cost, which is multiplied by a significant factor; (3) DocNLI and ANLI perform poorly even though they contain longer premise sentences, indicating that the length mismatch may not be the primary issue; (4) Falsesum obtains a substantial improvement over the previous state-of-the-art FactCC, despite being derived from the same summarization dataset (CNN/DailyMail). This indicates that Falsesum provides higher quality examples and includes more types of entailment phenomena that occur naturally in this task.
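To make both evaluation metrics concrete, the following sketch computes them with scikit-learn and NumPy (our choice of tooling, not necessarily the authors').

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score

# Balanced accuracy is the mean per-class recall, so always predicting the
# majority class scores exactly 50%.
y_true = [1, 1, 1, 0]   # toy labels: consistent = 1, inconsistent = 0
y_pred = [1, 1, 1, 1]   # majority-vote baseline
assert balanced_accuracy_score(y_true, y_pred) == 0.5

def precision_at_1(ranked_label_lists):
    """Fraction of instances whose top-ranked candidate (by the model's
    entailment score) is factually consistent, as used for Ranksum."""
    return float(np.mean([labels[0] == 1 for labels in ranked_label_lists]))
```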
Ablation Analysis on Falsesum Data
We perform an ablation analysis to study how each component of our data generation pipeline contributes to the final performance. We first remove the contrastive property of the Falsesum data by randomly including only either the positive (D, S+, Y = 1) or the negative (D, S−, Y = 0) NLI example obtained from a single (D, S+) pair. Next, we filter out the negative NLI instances that are generated using either the intrinsic or the extrinsic code. We refer to the three ablated datasets as −contrastive, −intrinsic, and −extrinsic, respectively. We set the sampled training size to 100,000 for the three ablation setups and aggregate the results from five training runs.

Table 3 shows the performance of the ablated models. We observe that removing contrastive pairs in the augmented training data results in a 1.06% drop in the overall benchmark score. We also see that removing intrinsic error examples results in the highest performance loss, −5.03%, compared to −2.22% for −extrinsic. This is explained by the fact that intrinsic consistency errors are more dominant on benchmarks that are built on the CNN/DailyMail corpus (Goyal and Durrett, 2021). We conclude that all the above properties are important for the overall improvements obtained by Falsesum.
Fine-grained Evaluation
Previous work has shown that NLI models are prone to relying on fallible heuristics which associate lexical overlap with entailment labels (McCoy et al., 2019). In the factual consistency task, this corresponds to models associating highly extractive summaries with the "consistent" label. This raises a question about whether Falsesum data alleviates this tendency in the resulting NLI models.
To answer this question, we partition the FactCC annotated test examples into five ordered subsets based on the lexical overlap between their summary hypothesis and the source document premise. We define an overlap score using the normalized coverage and density summary extractiveness scores introduced by Grusky et al. (2018). Both measures have the range [0.0, 1.0], where density = 1.0 indicates that all words in a summary are also present in the source document and normalized coverage = 1.0 indicates that the summary is obtained by copying a continuous fragment of the source document. We then define overlap = normalized coverage × density. Figure 3 shows the comparison of FactCC and Falsesum augmentation performance across varying lexical overlap scores. We see that Falsesum performs better on all subsets of the FactCC test set, with the greatest performance gap appearing on the 0.9 overlap subset. Upon closer inspection, we see that the FactCC model makes mostly false positive classification errors on this subset, i.e., it tends to predict highly extractive summaries as "consistent", leading to near majority voting performance of 50%. Falsesum, on the other hand, better discriminates the factual consistency of examples without over-relying on lexical overlap.
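For reference, a simplified sketch of the overlap computation is shown below. The greedy fragment matching approximates the extractive-fragment procedure of Grusky et al. (2018), and the normalization of density by summary length is our assumption for bringing it into [0.0, 1.0].

```python
def fragment_lengths(summary_toks, doc_toks):
    """Greedily match shared fragments between summary and document;
    returns the length of the longest document match at each summary
    position (a simplification of Grusky et al.'s algorithm)."""
    lengths, i = [], 0
    while i < len(summary_toks):
        best = 0
        for j in range(len(doc_toks)):
            k = 0
            while (i + k < len(summary_toks) and j + k < len(doc_toks)
                   and summary_toks[i + k] == doc_toks[j + k]):
                k += 1
            best = max(best, k)
        if best > 0:
            lengths.append(best)
        i += max(best, 1)
    return lengths

def overlap_score(summary_toks, doc_toks):
    """overlap = normalized coverage * density, both scaled to [0, 1]."""
    frags = fragment_lengths(summary_toks, doc_toks)
    n = len(summary_toks)
    coverage = sum(frags) / n                  # fraction of copied tokens
    density = sum(f * f for f in frags) / n    # Grusky et al.'s density
    return coverage * (density / n)            # rescale density to [0, 1]
```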
Data Quality Analysis
We conduct both manual and automatic quality evaluation of the Falsesum-generated dataset. First, we sample 200 generated negative examples and manually verify whether (i) the perturbed summary S− is indeed factually inconsistent; (ii) the type of consistency error follows the specified control code; (iii) the incorrect "fact" is inserted at the specified missing span. Following Kryscinski et al. (2020), the authors perform this annotation to avoid high disagreement by crowd annotators in this task (Falke et al., 2019). The results in Table 4 show that about 86% of intrinsic and 81% of extrinsic generated error examples are indeed factually inconsistent. The consistent remainder arises for several reasons, e.g., the generator model chooses a span from the list that is similar to the original span, or it correctly guesses the original missing span. This further suggests that pre-trained language models such as RoBERTa-base can be robust against the induced label noise and can still learn a performant classifier. While G almost always inserts the incorrect "fact" at the specified positions, we observe that it often fails to follow the specified extrinsic code correctly. We suspect that this is because the model prefers the easier task of copying the input over generating novel phrases. 11

Following Gururangan et al. (2018), we also evaluate the naturalness of the generated dataset. We train an NLI model using positive examples from CNN/DailyMail and Falsesum-generated negative examples. The model receives no premise, so it must distinguish between entailed and non-entailed hypotheses using semantic plausibility or spurious surface features, e.g., grammatical mistakes or fluency errors. The relatively low accuracy of these models on Falsesum data (shown in Table 5) suggests that, compared to FactCC and DocNLI, Falsesum-generated summaries are relatively hard to distinguish from the gold ones.
Conclusion
NLI models present a promising solution for automatic assessment of factual consistency in summarization. However, the application of existing models for this task is hindered by several challenges, such as the mismatch of characteristics between their training dataset and the target task data. This mismatch includes the difference in terms of the input granularity (sentence vs. document level premises) and the types of (non-)entailment phenomena that must be recognized.
In this work, we present Falsesum, a data generation pipeline which renders large-scale documentlevel NLI datasets without manual annotation. Using our training strategy, we demonstrate that it is possible to learn to generate diverse and naturalistic factually inconsistent (non-entailed) summaries using only existing (entailed) consistent summaries for training. We show that the resultant data is effective for augmenting NLI datasets to improve the state-of-the-art performance across four summary factual inconsistency benchmarks.
A Hyperparameters
Generator model We train a T5-base model for three epochs with a batch size of 24 using the AdamW optimizer. We set the maximum source token length to 256 and the target token length to 42. We use a learning rate of 3e-5 and fix the random seed to 11. For decoding, we set the minimum and maximum sequence lengths to 10 and 60, respectively. We sample using beam search with a beam of size two. We additionally set the repetition penalty to 2.5 and the length penalty to 1.0.
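Assuming the Hugging Face generate API and the fine-tuned model and tokenizer from the earlier sketch, decoding with these settings would look roughly like this:

```python
# input_ids: tokenized formatted input (Section 3.3) for one example
generated = model.generate(
    input_ids,
    num_beams=2,            # beam search with a beam of size two
    min_length=10,
    max_length=60,
    repetition_penalty=2.5,
    length_penalty=1.0,
)
false_summary = tokenizer.decode(generated[0], skip_special_tokens=True)
```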
Classification model We train RoBERTa-base models on the augmented and original MNLI datasets for three epochs with a batch size of 32. The learning rate is set to 1e-5, while the maximum input token length is set to either 128 or 512. We use the following random seeds for the five training runs: 11, 12, 13, 14, and 15.
B Aggregating Predictions
We follow Falke et al. (2019) to adapt out-of-the-box MNLI models to document-level input by performing sentence-wise predictions before aggregating the output. Given a document D consisting of sentences d_1, ..., d_n, and a multi-sentence summary S consisting of s_1, ..., s_m, we aggregate the probability scores given by the classifier model F on each (d_i, s_j) pair. The aggregated consistency score σ(D, S) is given by:
$\sigma(D, S) = \frac{1}{m} \sum_{j=1}^{m} \max_{d \in D} F(d, s_j)$
This means that it is sufficient for a summary sentence to be factually consistent given only a single entailing sentence in the source document. We then take the average scores across the summary sentences since each of them needs to be entailed by the source document. We use a similar aggregation method to evaluate augmented MNLI models on multi-sentence summaries from the Summeval and Ranksum benchmarks.
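A direct implementation of this aggregation, assuming a binary NLI classifier whose second logit corresponds to "entailment" (an assumption about the label order), is sketched below.

```python
import torch

def consistency_score(doc_sentences, summary_sentences, model, tokenizer):
    """sigma(D, S): max entailment probability over document sentences for
    each summary sentence, averaged over all summary sentences."""
    per_sentence = []
    for s in summary_sentences:
        probs = []
        for d in doc_sentences:
            enc = tokenizer(d, s, truncation=True, return_tensors="pt")
            with torch.no_grad():
                logits = model(**enc).logits
            probs.append(torch.softmax(logits, dim=-1)[0, 1].item())
        per_sentence.append(max(probs))
    return sum(per_sentence) / len(per_sentence)
```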
C Falsesum Details
In the preprocessing steps, we only perform the predicate and argument span extraction on the first 15 sentences for computational efficiency. For training, this is not an issue since the gold spans from the reference summary are included in the input. Additionally, we may extract multiple OpenIE relation tuples from each sentence. To avoid having overlapping spans from a single input, we randomly select two tuples from each sentence.
D Falsesum Examples
We include more examples of generated NLI instances in Table 6. We also include cases where Falsesum inadvertently generates factually consistent summaries in Table 7. Lastly, we show several examples of the formatted input and the generated output at test time in Table 8. entailment Swedish international takes to social media to express love for Arsenal. (intrinsic) non-entailment Swedish international has been on loan at Chelsea since last season.
A teenager who was struck down with an agonising bowel condition says dancing has helped him to overcome his debilitating illness. Macaulay Selwood, 17, was diagnosed with Crohn's two years ago and was so unwell that he was often left in agony on the floor unable to move. But his determination to continue his promising dancing career gave him the spur he needed to battle through. Lord of the Dance: Macaulay at his practice studio. He was diagnosed with Crohn's in September 2010 after collapsing in agony during a dance class . Recovery: 'Dancing has helped me overcome it (Crohn's). It kept me motivated' Now the teenager from Bristol has made it to the finals of the Irish dancing world championships in Boston, USA, and is hotly-tipped for glory. He will then have a trial at the famous performing arts school, ArtsEd, in London. At shows he has been compared with Riverdance star Michael Flatley while others have taken to calling him Billy Elliot, after the film character who overcomes the odd to becoming a dancing star. Macaulay did ballet at college before focusing on Irish dancing for the world championships and works at Tesco to fund his passion. . . . entailment Macaulay Selwood, 17, first starting suffering from Crohn's disease in 2010. (extrinsic) non-entailment The 22-year-old, who was diagnosed with Crohn's in 2010, has been recovering since 2010.
When Matthew Briggs, 32, from Huntington in North Yorkshire noticed that his father had posted a photo of them together on Facebook, he was initially pleased. But when he opened the photo and saw the image, Mr Briggs was left horrified by the sight of his 31st frame. Now, two years on, he has shed an astonishing 17st and, in November, will complete the New York marathon in memory of his mother Susan who died from multiple sclerosis when he was just 18. Pounding the pavements: Matthew Briggs, 32, has lost an impressive 17st in just two years of slimming . 'In March of 2000, she lost her battle with Multiple Sclerosis,' he says. 'She has always been my inspiration. I am the man I am today because of the woman she was.' Money raised by Mr Briggs' 26-mile run will be donated to the Multiple Sclerosis Society, a charity dedicated to beating the disease as well as supporting sufferers and their families. Mr Briggs, who has dropped from 31st to just under 14st, had piled on the pounds thanks to a diet of ready meals, takeaways and daily two litre bottles of Coca-Cola. But, after seeing the photo posted on Facebook and spurred on by a bet with his father, Mr Briggs joined his local Slimming World group and went on to shed more than 17st over two years. . . . entailment She died in 2000 of multiple sclerosis and funds raised will go to charity. (extrinsic) non-entailment She died in 2000 of multiple sclerosis and every penny she saves will go to charity.

Table 6: Examples of NLI pairs generated by Falsesum. We show both the entailment and non-entailment hypotheses obtained from each source document. Green-highlighted spans indicate the information used consistently in the summary. Red-highlighted spans indicate information used or inserted by the model to generate an inconsistent summary.

Table 7: Falsesum-generated summaries that are unintentionally consistent with the source document. Green-highlighted spans indicate information which is consistent with the document.
Predicates : is being offer for, were steal from, sell, Both as a solo artist and leader of the Heartbreakers, is one of , according to, where were rehearse for, contribute to, was induct into in; Arguments : the Heartbreakers, The band, Denise Quan, five guitars, the Recording Industry Association of America, more than 57 million albums, Petty, A 7,500 reward, a soundstage, the Rock & Roll Hall of Fame; Code : intrinsic; Summary :<span_1> <span_0> the 1960s.
gold Three of them were vintage guitars from the 1960s. (intrinsic) generated The band was inducted into the Rock & Roll Hall of Fame in the 1960s.
Predicates : : is only the second time in, How could have do with, was lace with, struggle against at, have score, expect to match, had settle into, ignite, has lost, Just as was walk into, were already circulate on, begin to filter, watch on in; Arguments : his chair, Anfield, clips, the stands, symbolism, 13 Premier League goals, Brendan Rodgers, through, Liverpool, the 100-plus strikes of last season, 13 games against Hull, everything, one; Code : intrinsic; Summary :Luis Suarez took three minutes to <span_0> <span_1>.
gold Luis Suarez took three minutes to get his first assist for Barcelona. (intrinsic) generated Luis Suarez took three minutes to ignite symbolism.
Predicates : allegedly know, supposedly write, in ' was underway, is investigate, file against in by, file in, forbid, was toss by in, wait for, fire at, accuse of, decide to fire based on, new information state, told, allegedly sent to, was complicate by, Even though was toss, allegedly made, hold no more, expose to; Arguments : the case, new information states, his sexual abuse, more recent damages, people, the blog posts, 2011, him, This week, her, allowing at one of his Los Angeles stores to post naked photos of Morales on a blog that was meant to appear as though it belonged to Morales, American Apparel, The Post, a settlement, The clothing company, Charney, new information saying he allowed an employee to impersonate and post naked photos online of an alleged victim of his sexual abuse who filed a case against him in 2011, a settlement 'in the low six-digits' was underway, the company title, employee, 2012, The $260 million lawsuit, a report from March 25, 2011 that said Morales allegedly sent nude photos of herself to Charney after she stopped working at the store, nude photos of herself, Morales; Code : extrinsic; Summary :Women in the video <span_0> <span_1>.
gold Women in the video have been identified as current or former American Apparel workers. (extrinsic) generated Women in the video were allegedly sexually assaulted by Morales.

Table 8: Examples of the formatted input at test time and the real output of the Falsesum generation model. Blue-highlighted spans show the formatted input predicates. Green-highlighted spans show the formatted input arguments. Yellow-highlighted spans show the formatted input control code. Gray-highlighted spans show the formatted input masked gold summary. Red-highlighted spans show the information inserted by the model to render inconsistent summaries.
Figure 1: Overview of the Falsesum generation framework. Falsesum preprocesses and formats the source document (A) and a gold summary (B) before feeding it to a fine-tuned generator model. The model produces a factually inconsistent summary, which can then be used to obtain (A, D) or (A, E) as the negative (non-entailment) NLI premise-hypothesis example pair. We also use the original (A, B) as a positive NLI example (entailment).
Figure 3: Comparison between NLI models augmented with Falsesum and FactCC across different measures of summary extractiveness. The x-axis shows the median overlap score of each test subset.
Table 1: Examples of input formatting on two different summarization instances for both intrinsic and extrinsic error types during training and testing. Gold input spans (boldface in the original), which are extracted from the gold summary, are only visible to the model during intrinsic training; they are removed from the input in all other settings (struck through in the original).

Mode: train, intrinsic
Input: Predicates: caught, plead guilty to, ..., appear before, face; Arguments: the corruption scandal, Two Pennsylvania judges, ..., many children, the U.S.; Code: intrinsic; Summary: <span_1> <span_0> federal fraud charges.
Expected Output: Two Pennsylvania judges plead guilty to federal fraud charges.
Description: Model learns to combine listed spans to produce the most plausible summary.

Mode: test, intrinsic
Input: as above, with the gold spans removed from the lists.
Expected Output: Many of the children face federal fraud charges.
Description: Model consolidates incorrect information.

Mode: train, extrinsic
Input: Predicates: is pressing for, limit, ..., is being erode, is fight; Arguments: panelist, action, ..., sea level, Arctic melt, at the climate change conference; Code: extrinsic; Summary: The Alliance <span_0> <span_1> <span_2>.
Expected Output: The Alliance is pressing for action at the climate change conference.
Description: Model learns to hallucinate new unsupported information.

Mode: test, extrinsic
Input: as above.
Expected Output: The Alliance is planning to impose limits on emissions.
Description: Model hallucinates new unsupported information.
Table 2: Performance of MNLI models with different augmentation data across benchmarks to classify the factual consistency of summaries. MNLI-128 and MNLI-512 are RoBERTa-base models trained using maximum token lengths of 128 and 512, respectively.

Dataset | Augmentation | FactCC | Ranksum | QAGS | SummEval | Overall
Majority voting | - | 50.00 | 50.46 | 50.00 | 50.00 | 50.11
MNLI-128 | - | 57.39 | 57.01 | 59.72 | 54.11 | 57.06
[split-doc] MNLI-128 | - | 72.07 | 68.03 | 71.08 | 55.32 | 66.63
MNLI-512 | - | 57.93 | 51.40 | 52.73 | 48.75 | 51.43
MNLI-512 | ANLI | 53.91 | 55.76 | 53.54 | 49.56 | 53.19
MNLI-512 | DocNLI | 58.13 | 53.58 | 57.10 | 52.59 | 55.35
MNLI-512 | FactCC | 73.87 | 67.29 | 73.50 | 60.04 | 69.02
MNLI-512 | Falsesum (ours) | 83.52 | 72.90 | 75.05 | 65.18 | 74.17

Table 3: Model performance when trained on the ablated Falsesum dataset. Excluding the contrastive, extrinsic, and intrinsic examples results in lower overall performance, indicating each property is beneficial.

Training Dataset | Overall | Δ
MNLI+Falsesum | 74.17 |
MNLI+Falsesum −Contrastive | 73.11 | −1.06
MNLI+Falsesum −Extrinsic | 71.95 | −2.22
MNLI+Falsesum −Intrinsic | 69.14 | −5.03
Table 4: Manual verification of Falsesum-generated NLI examples. Label, type, and span indicate the percentage of generated summaries with the correct inconsistency label, error type, and error span, respectively.

Table 5: Hypothesis-only model performance (accuracy) to measure the presence of artifacts and the naturalness of the Falsesum dataset (lower is better).

Model | FactCC | DocNLI | Falsesum
Majority voting | 50.84 | 53.55 | 50.00
CBOW-GloVe | 60.36 | 70.38 | 56.13
BiLSTM-GloVe | 68.26 | 73.04 | 57.62
RoBERTa-base | 82.15 | 78.46 | 69.38
Cleo Condoravdi, Dick Crouch, Valeria de Paiva, Reinhard Stolle, and Daniel G. Bobrow. 2003. Entailment, intensionality and text understanding. In Proceedings of the HLT-NAACL 2003 Workshop on Text Meaning, pages 38-45.

Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL recognising textual entailment challenge. In Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Tectual Entailment, pages 177-190, Berlin, Heidelberg. Springer Berlin Heidelberg.

Dorottya Demszky, Kelvin Guu, and Percy Liang. 2018. Transforming question answering datasets into natural language inference datasets. CoRR, abs/1809.02922.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Esin Durmus, He He, and Mona Diab. 2020. FEQA: A question answering evaluation framework for faithfulness assessment in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5055-5070, Online. Association for Computational Linguistics.

Matan Eyal, Tal Baumel, and Michael Elhadad. 2019. Question answering as an automatic evaluation metric for news article summarization. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3938-3948, Minneapolis, Minnesota. Association for Computational Linguistics.

Alexander R. Fabbri, Wojciech Kryściński, Bryan McCann, Caiming Xiong, Richard Socher, and Dragomir Radev. 2021. SummEval: Re-evaluating summarization evaluation. Transactions of the Association for Computational Linguistics, 9:391-409.

Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1535-1545, Edinburgh, Scotland, UK. Association for Computational Linguistics.

Tobias Falke, Leonardo F. R. Ribeiro, Prasetya Ajie Utama, Ido Dagan, and Iryna Gurevych. 2019. Ranking generated summaries by correctness: An interesting but challenging application for natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2214-2220, Florence, Italy. Association for Computational Linguistics.

Ben Goodrich, Vinay Rao, Peter J. Liu, and Mohammad Saleh. 2019. Assessing the factual accuracy of generated text. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '19, pages 166-175, New York, NY, USA. Association for Computing Machinery.

Tanya Goyal and Greg Durrett. 2021. Annotating and modeling fine-grained factuality in summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1449-1462, Online. Association for Computational Linguistics.

Max Grusky, Mor Naaman, and Yoav Artzi. 2018. Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 708-719, New Orleans, Louisiana. Association for Computational Linguistics.

Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation artifacts in natural language inference data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 107-112, New Orleans, Louisiana. Association for Computational Linguistics.

Karl Moritz Hermann, Tomás Kociský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 1693-1701.

Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, Caiming Xiong, and Richard Socher. 2019. CTRL: A conditional transformer language model for controllable generation. CoRR, abs/1909.05858.

Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018. SciTail: A textual entailment dataset from science question answering. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), pages 5189-5197, New Orleans, Louisiana, USA. AAAI Press.
The code to obtain the dataset is available online at https://github.com/joshbambrick/Falsesum
Contemporaneous work by Laban et al. (2022) attempts to improve the application of sentence-level NLI models to detect document-level factual inconsistencies using a learnable aggregation of sentence-level predictions. Our work is orthogonal since they can benefit from better quality training examples to train their aggregation weights.
It is possible that some spans from the source document are duplicates of gold ones. For instance, the document may mention "The Queen of England", while the gold span from the summary is "The Queen". We use a simple heuristic to remove such duplicates by searching for other spans whose (lemmatized) dependency root token is the same.
See Appendix A for the hyperparameter details.
We merge the test and validation sets into a single test set.
7 We aggregate the label as "consistent" if all annotators rated the summary as a 5 and "inconsistent" otherwise.
8 This is the number of instances after we split multi-sentence summaries into separate single-sentence summary test instances, where an individual factuality judgement is available.
We include more examples of generated NLI instances as well as the inadvertently consistent output in Appendix D.
AcknowledgmentsWe would like to thank Marco Ponza, Marco Fiscato, Umut Topkara and other colleagues from Bloomberg AI for the thoughtful discussion and feedback throughout this project. We also thank Leonardo Ribeiro for comments on the earlier version of this work and the anonymous reviewers for their constructive feedback. The authors affiliated with UKP were supported by the German Research Foundation through the research training group "Adaptive Preparation of Information from Heterogeneous Sources" (AIPHES, GRK 1994/1) and by the German Federal Ministry of Education and Research and the Hessian State Ministry for Higher Education, Research and the Arts within their joint support of the National Research Center for Applied Cybersecurity ATHENE.
Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 632-642, Lisbon, Portugal. Association for Computational Linguistics.

Ziqiang Cao, Furu Wei, Wenjie Li, and Sujian Li. 2018. Faithful to the original: Fact aware neural abstractive summarization. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), pages 4784-4791, New Orleans, Louisiana, USA. AAAI Press.

Wojciech Kryscinski, Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Neural text summarization: A critical evaluation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 540-551, Hong Kong, China. Association for Computational Linguistics.

Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9332-9346, Online. Association for Computational Linguistics.

Philippe Laban, Tobias Schnabel, Paul N. Bennett, and Marti A. Hearst. 2022. SummaC: Re-visiting NLI-based models for inconsistency detection in summarization. Transactions of the Association for Computational Linguistics, 10:163-177.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. CoRR, abs/1910.13461.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692.

Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factuality in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1906-1919, Online. Association for Computational Linguistics.

Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428-3448, Florence, Italy. Association for Computational Linguistics.

Junghyun Min, R. Thomas McCoy, Dipanjan Das, Emily Pitler, and Tal Linzen. 2020. Syntactic data augmentation increases robustness to inference heuristics. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2339-2352, Online. Association for Computational Linguistics.

Anshuman Mishra, Dhruvesh Patel, Aparna Vijayakumar, Xiang Lorraine Li, Pavan Kapanipathi, and Kartik Talamadupula. 2021. Looking beyond sentence-level natural language inference for question answering and text summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1322-1336, Online. Association for Computational Linguistics.

Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Çaglar Gulcehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 280-290, Berlin, Germany. Association for Computational Linguistics.

Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797-1807, Brussels, Belgium. Association for Computational Linguistics.

Yuyang Nie, Yuanhe Tian, Yan Song, Xiang Ao, and Xiang Wan. 2020. Improving named entity recognition with attentive ensemble of syntactic information. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4231-4245, Online. Association for Computational Linguistics.
Understanding factuality in abstractive summarization with FRANK: A benchmark for factuality metrics. Artidoro Pagnoni, Vidhisha Balachandran, Yulia Tsvetkov, 10.18653/v1/2021.naacl-main.383Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesOnline. Association for Computational LinguisticsArtidoro Pagnoni, Vidhisha Balachandran, and Yulia Tsvetkov. 2021. Understanding factuality in abstrac- tive summarization with FRANK: A benchmark for factuality metrics. In Proceedings of the 2021 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 4812-4829, Online. As- sociation for Computational Linguistics.
Exploring the limits of transfer learning with a unified text-to-text transformer. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, J. Mach. Learn. Res. 2167Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text trans- former. J. Mach. Learn. Res., 21:140:1-140:67.
Reporting score distributions makes a difference: Performance study of LSTM-networks for sequence tagging. Nils Reimers, Iryna Gurevych, 10.18653/v1/D17-1035Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. the 2017 Conference on Empirical Methods in Natural Language ProcessingCopenhagen, DenmarkAssociation for Computational LinguisticsNils Reimers and Iryna Gurevych. 2017. Reporting score distributions makes a difference: Performance study of LSTM-networks for sequence tagging. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 338-348, Copenhagen, Denmark. Association for Computational Linguistics.
2021. Tailor: Generating and perturbing text with semantic controls. Alexis Ross, Tongshuang Wu, Hao Peng, Matthew E Peters, Matt Gardner, abs/2107.07150CoRRAlexis Ross, Tongshuang Wu, Hao Peng, Matthew E. Peters, and Matt Gardner. 2021. Tailor: Generating and perturbing text with semantic controls. CoRR, abs/2107.07150.
Asking and answering questions to evaluate the factual consistency of summaries. Alex Wang, Kyunghyun Cho, Mike Lewis, 10.18653/v1/2020.acl-main.450Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnline. Association for Computational LinguisticsAlex Wang, Kyunghyun Cho, and Mike Lewis. 2020. Asking and answering questions to evaluate the fac- tual consistency of summaries. In Proceedings of the 58th Annual Meeting of the Association for Com- putational Linguistics, pages 5008-5020, Online. Association for Computational Linguistics.
Universal decompositional semantics on Universal Dependencies. Aaron Steven White, Drew Reisinger, Keisuke Sakaguchi, Tim Vieira, Sheng Zhang, Rachel Rudinger, Kyle Rawlins, Benjamin Van Durme, 10.18653/v1/D16-1177Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. the 2016 Conference on Empirical Methods in Natural Language ProcessingAustin, TexasAssociation for Computational LinguisticsAaron Steven White, Drew Reisinger, Keisuke Sak- aguchi, Tim Vieira, Sheng Zhang, Rachel Rudinger, Kyle Rawlins, and Benjamin Van Durme. 2016. Uni- versal decompositional semantics on Universal De- pendencies. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Process- ing, pages 1713-1723, Austin, Texas. Association for Computational Linguistics.
A broad-coverage challenge corpus for sentence understanding through inference. Adina Williams, Nikita Nangia, Samuel Bowman, 10.18653/v1/N18-1101Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesNew Orleans, LouisianaLong Papers1Association for Computational LinguisticsAdina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112-1122, New Orleans, Louisiana. Association for Computational Linguis- tics.
DocNLI: A large-scale dataset for documentlevel natural language inference. Wenpeng Yin, Dragomir Radev, Caiming Xiong, 10.18653/v1/2021.findings-acl.435Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Online. Association for Computational LinguisticsWenpeng Yin, Dragomir Radev, and Caiming Xiong. 2021. DocNLI: A large-scale dataset for document- level natural language inference. In Findings of the Association for Computational Linguistics: ACL- IJCNLP 2021, pages 4913-4922, Online. Associa- tion for Computational Linguistics.
PEGASUS: pre-training with extracted gap-sentences for abstractive summarization. Jingqing Zhang, Yao Zhao, Mohammad Saleh, Peter J Liu, Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Pe- ter J. Liu. 2019. PEGASUS: pre-training with ex- tracted gap-sentences for abstractive summarization.
. Corr, abs/1912.08777CoRR, abs/1912.08777.
An evaluation of PredPatt and open IE via stage 1 semantic role labeling. Sheng Zhang, Rachel Rudinger, Benjamin Van Durme, IWCS 2017 -12th International Conference on Computational Semantics -Short papers. Sheng Zhang, Rachel Rudinger, and Benjamin Van Durme. 2017. An evaluation of PredPatt and open IE via stage 1 semantic role labeling. In IWCS 2017 -12th International Conference on Computa- tional Semantics -Short papers.
A close examination of factual correctness evaluation in abstractive summarization. Yuhui Zhang, Yuhao Zhang, Christopher D Manning, Yuhui Zhang, Yuhao Zhang, and Christopher D. Man- ning. 2020. A close examination of factual correct- ness evaluation in abstractive summarization.
The Mojito, a Cuban mix of white rum, sugar, lime, mint and soda water, is the most popular cocktail in Britain according to a report. Sales of cocktails have risen by more than 10 per cent in the past two years. More than one in five of Britain's pubs and bars now serve cocktails and the Mojito - a Cuban mix of white rum, sugar, lime, mint and soda water - is the most popular, according to a report. Pina Coladas (rum, coconut and pineapple juice) and Woo Woos (vodka, peach schnapps and cranberry juice) were also popular. The Mixed Drinks Report, by consultancy firm CGA Strategy, found more women than men choose cocktails, as 54 per cent of cocktail drinkers are female. Bomb and pitcher serves remain popular, with 74 per cent of 18 to 24-year-olds admitting to have bought a bomb drink, while nine in 10 in the same age range say they drink pitchers. Cocktails are enjoyed by the core 18 to 35-year-old demographic 'in all on-trade occasions' including throughout the night, as opposed to just the start. . .
From Yellowstone National Park to the Everglades, America's 391 national parks are in need of repair - and thanks to the economic stimulus signed into law, help is now underway. President Obama and his family visit the Grand Canyon in Arizona, a national park. President Obama's $787 billion economic stimulus plan passed in February and designated $750 million dollars to the national parks. But not all of the stimulus money is being used - and the parks are facing a $9 billion backlog in maintenance projects. So far, nearly 10 percent is in the pipeline. "We are picking away at it as much as we can and we've been fortunate to have the recovery act money," said Jeffrey Olson of the National Park Service. Olson said half of the $9 billion is slated to go for road repairs. "Half of that [$9 billion] is roads and about $2 billion of that are the most pressing needs - those we get some help from the stimulus. The president's budget proposal is calling for more maintenance and construction money," Olsen said. Dan Wenk, the acting director of the National Park Service says most of those pressing needs include, "camp grounds, camp sites, it's amphitheaters for evening programs. It's the bathrooms. . .
gold Park Service is dealing with a $9 billion backlog of maintenance needs.
| [
"https://github.com/joshbambrick/Falsesum"
] |
[
"Deconstruct to Reconstruct a Configurable Evaluation Metric for Open-Domain Dialogue Systems",
"Deconstruct to Reconstruct a Configurable Evaluation Metric for Open-Domain Dialogue Systems"
] | [
"Vitou Phy \nThe University of Tokyo\nJapan\n",
"Yang Zhao yangzhao@ibm.com \nIBM-Research Tokyo\nJapan\n",
"Akiko Aizawa aizawa@nii.ac.jp \nThe University of Tokyo\nJapan\n\nNational Institute of Informatics\nJapan\n"
] | [
"The University of Tokyo\nJapan",
"IBM-Research Tokyo\nJapan",
"The University of Tokyo\nJapan",
"National Institute of Informatics\nJapan"
] | [
"Proceedings of the 28th International Conference on Computational Linguistics"
] | Many automatic evaluation metrics have been proposed to score the overall quality of a response in open-domain dialogue. Generally, the overall quality is comprised of various aspects, such as relevancy, specificity, and empathy, and the importance of each aspect differs according to the task. For instance, specificity is mandatory in a food-ordering dialogue task, whereas fluency is preferred in a language-teaching dialogue system. However, existing metrics are not designed to cope with such flexibility. For example, BLEU score fundamentally relies only on word overlapping, whereas BERTScore relies on semantic similarity between reference and candidate response. Thus, they are not guaranteed to capture the required aspects, i.e., specificity. To design a metric that is flexible to a task, we first propose making these qualities manageable by grouping them into three groups: understandability, sensibleness, and likability, where likability is a combination of qualities that are essential for a task. We also propose a simple method to composite metrics of each aspect to obtain a single metric called USL-H, which stands for Understandability, Sensibleness, and Likability in Hierarchy 1 . We demonstrated that USL-H score achieves good correlations with human judgment and maintains its configurability towards different aspects and metrics. | 10.18653/v1/2020.coling-main.368 | [
"https://www.aclweb.org/anthology/2020.coling-main.368.pdf"
] | 226,227,390 | 2011.00483 | d33064d24f83536d648267f62b8982cab4bffee7 |
Deconstruct to Reconstruct a Configurable Evaluation Metric for Open-Domain Dialogue Systems
Online, December 8-13, 2020
Vitou Phy
The University of Tokyo
Japan
Yang Zhao yangzhao@ibm.com
IBM-Research Tokyo
Japan
Akiko Aizawa aizawa@nii.ac.jp
The University of Tokyo
Japan
National Institute of Informatics
Japan
Deconstruct to Reconstruct a Configurable Evaluation Metric for Open-Domain Dialogue Systems
Proceedings of the 28th International Conference on Computational Linguistics, Barcelona, Spain (Online), December 8-13, 2020, page 4164
Many automatic evaluation metrics have been proposed to score the overall quality of a response in open-domain dialogue. Generally, the overall quality is comprised of various aspects, such as relevancy, specificity, and empathy, and the importance of each aspect differs according to the task. For instance, specificity is mandatory in a food-ordering dialogue task, whereas fluency is preferred in a language-teaching dialogue system. However, existing metrics are not designed to cope with such flexibility. For example, BLEU score fundamentally relies only on word overlapping, whereas BERTScore relies on semantic similarity between reference and candidate response. Thus, they are not guaranteed to capture the required aspects, i.e., specificity. To design a metric that is flexible to a task, we first propose making these qualities manageable by grouping them into three groups: understandability, sensibleness, and likability, where likability is a combination of qualities that are essential for a task. We also propose a simple method to composite metrics of each aspect to obtain a single metric called USL-H, which stands for Understandability, Sensibleness, and Likability in Hierarchy 1 . We demonstrated that USL-H score achieves good correlations with human judgment and maintains its configurability towards different aspects and metrics.
Introduction
Evaluating a dialogue response is crucial for the development of open-domain dialogue systems. It allows comparison between different systems, which is similar to how the machine translation community uses BLEU (Papineni et al., 2002) to evaluate the overall quality of the translation and determines whether a system is the state-of-the-art (Bahdanau et al., 2015;Sennrich et al., 2016;Aharoni et al., 2019). Without automatic evaluation metrics, many studies tend to rely on either expert or crowdsourced platform to score the responses, which are both time-consuming and cost-ineffective (Zhang et al., 2018;Zhan et al., 2019;Adiwardana et al., 2020). To cope with this, various metrics have been proposed to score the overall quality of a dialogue response.
Word overlap-based metrics, which were adopted from the MT community to measure the overlapping words between reference and candidate sentences, have been used to evaluate dialogue responses (Sordoni et al., 2015; Zhang et al., 2018). However, Liu et al. (2016) showed that these metrics, i.e., BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), or ROUGE score (Lin, 2004), do not correlate well with human judgments, because there are many possible ways to reply to a given context. Recently, learning-based metrics, which aim to predict the overall quality of a response, have shown a better correlation with human judgment compared with word overlap-based metrics. Various training settings have also been explored. For example, ADEM (Lowe et al., 2017a) is trained to predict the score by learning to regress on human judgments, whereas PONE is trained on a next utterance prediction task with sophisticated sampling.
However, these metrics are not configurable and may suffer from several limitations. First, they may not capture a certain quality that is essential for a particular task. As shown in Table 1, BERTScore and BERT-RUBER tend to assign a relatively high score to the unspecific response. This might be due to the complexity of the overall score. Generally, a single overall score is comprised of different qualities, such as readability, specificity, and empathy, and the importance of each aspect differs according to the task. For example, specificity is preferred in food-ordering chatbots, whereas fluency is preferred in language-teaching chatbots. However, the existing metrics are not flexible to such changes. BERTScore (Zhang et al., 2020a), for example, relies on pre-trained BERT embeddings (Devlin et al., 2019) to compute the similarity between reference and candidate responses; thus, it does not guarantee a good correlation for the specificity quality (Table 1). Another limitation is the difficulty of enhancing only a particular aspect of the metric. Suppose there is a single metric that can capture both sensibleness and specificity, and a new state-of-the-art metric on the latter quality is subsequently developed; it would be complicated to modify the existing metrics (i.e., BLEU or ADEM) to include this new SOTA metric.

Table 1: Examples of how metrics on overall quality may not capture specificity. B, R, U, and H denote scores from BERTScore, BERT-RUBER, USL-H (proposed), and human, respectively.
Aside from evaluating a response using only a single overall score, some studies evaluate the response on multiple aspects, i.e., fluency, relevancy, specificity, and empathy (Zhang et al., 2018;Weston et al., 2018;Smith et al., 2020). The limitation of this approach is that with multiple scores to consider, it becomes unclear to determine which response is better. Is a specific response more preferable than an empathetic one?
To address these issues, we first propose simplifying the various qualities by grouping them into three main aspects: understandability (Nübel, 1997), sensibleness (Adiwardana et al., 2020), and likability. We assume these groups have hierarchical properties in the following way: (i) a response first needs to be understandable; (ii) then it needs to make sense to the context; (iii) other qualities, i.e., specificity, are just additional qualities that make an acceptable response more likable for a given task ( Figure 1). If we want the score to capture empathy instead, we only need to replace specificity in the top layer with empathy. In other words, the likability aspect does not need to implicitly capture the understandability or sensibleness, as it will be checked by the lower layers in the hierarchy. Based on these properties, we propose a simple method to combine scores of each aspect to obtain USL-H score, which stands for Understandability, Sensibleness, and Likability in Hierarchy. USL-H can be modified to add or remove a quality and to replace a metric with a more optimal alternative. This configurability removes the barrier of requiring a single complicated model and instead enables a combination of multiple sub-metrics.
For simplicity, we demonstrate the configurability using only specificity as our likability aspect. Experimenting on the DailyDialog dataset (Li et al., 2017), we show that valid utterance prediction, next utterance prediction, and masked language models have good correlations with human judgments on understandability, sensibleness, and specificity, respectively. Moreover, combining these sub-metrics as a single metric using our USL-H also correlates well with overall quality on both Pearson and Spearman correlations. Alternatively, we replace specificity with the empathy aspect, recombine the sub-metrics, and put it up against other metrics to select the most empathetic responses. We find that this new configuration can detect better empathetic responses compared to the rest. Through various experiments, we show that USL-H is configurable to capture certain qualities of a response and can be improved further upon replacing a sub-metric with a better performing alternative.
The main contributions of this paper are the following: (i) the grouping of various qualities of dialogue responses into three main aspects: understandability, sensibleness, and likability, (ii) introducing a configurable hierarchical evaluation metric that can be modified to work with a set of response's qualities and sub-metrics according to the task while achieving good correlation with human judgments.
Related Work
Automatic Evaluation Metrics Many automatic evaluation metrics have been proposed to evaluate the overall quality of a response. BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), and ROUGE (Lin, 2004) metrics are word overlap-based approaches, which utilize the model's responses and reference responses to compute the number of overlapping words. The more words overlap, the higher the scores are. However, Liu et al. (2016) showed that these metrics have a weak correlation with human judgment. Alternatively, embedding-based metrics (Wieting et al., 2016;Rus and Lintean, 2012;Forgues et al., 2014;Zhang et al., 2020a) are used as measurements in the previous studies, for which the embeddings of context and response are used to obtain the similarity score (Zhang et al., 2018;Zeng et al., 2019). However, due to many possible responses to a context, it is inaccurate to use these metrics.
Learning-based metrics have been explored recently, especially with the next utterance prediction setting (Tao et al., 2018; Ghazarian et al., 2019). The model learns to determine whether or not a context-response pair is valid. It is typically trained with context-response pairs that appear in the dialogue as the positive examples. Then, negative sampling is used to obtain negative examples. Training in this setting demonstrates a moderate correlation with human judgment. However, since such learning-based metrics rely on the positive and negative examples, a sophisticated sampling technique is required to obtain appropriate examples that reflect a certain quality.
Score Composition Some studies have attempted to develop metrics by combining scores from different aspects into a single score (Zhan et al., 2019; Adiwardana et al., 2020). Zhan et al. (2019) proposed a metric that combines scores from semantic and syntactic sub-metrics with a weighted average. This metric had a moderate correlation with human judgment and outperformed all of the word-overlap-based metrics. However, it does not consider qualities such as specificity or diversity.
Alternatively, Adiwardana et al. (2020) proposed a human evaluation metric that considers both sensibleness and specificity, where specificity is dependent on sensibleness and is scored only if the response is sensible; otherwise, it is zero by default. Then, they obtained the final human overall score by averaging them together. Unlike Zhan et al. (2019), they did not use any metric for any aspect. Instead, they suggest using perplexity as the evaluation metric to capture both qualities. However, using a single metric, like perplexity, to monitor multiple aspects is not configurable when we want to evaluate another aspect, for instance, sensibleness and empathy.
Evaluation Criteria
Fundamental Aspects
The overall quality of a response contains various qualities, such as readability, fluency, relevancy, sensibleness, and specificity. However, not every aspect is equally important. A response may contain an interesting detail, but such information is not usable if it is completely off-topic. Likewise, if a response is not understandable, we suspect that it is difficult to determine whether it is a suitable reply. Based on this observation, we propose to cluster the qualities into three groups -understandability (Nübel, 1997), sensibleness (Adiwardana et al., 2020), and likability -as illustrated in Figure 1. We assume these aspects to be independent from each other, except for sensibleness, which we will discuss later.
Understandability quantifies whether or not a response is understandable. A response does not have to be grammatically correct to be considered understandable due to the nature of conversations being compact, unstructured, and noisy (i.e., "Not really"). Also, it does not need to be interesting, nor does it need context to be understandable. Due to such independency, we consider this aspect as the fundamental building block among the three groups.
Sensibleness measures how suitable a response is to a given context (Adiwardana et al., 2020). For example, the response to the context "Dinner's ready!" can be short ("10 minutes"), generic ("Okay"), or intriguing ("It smells good already"). Any of these responses are considered as sensible. This quality is comprised of relevancy, consistency, common sense, and more. However, in this work, we do not focus on these sub-qualities. Instead, we consider the sensibleness quality as a whole. Please note that although a sensible response can be generic, short, and boring, it is rather important for the response to be on-topic than to have unique tokens. Thus, we place this quality on the second level of the hierarchy.
Likability quantifies how much a set of one or more qualities makes a response more likable for a particular task. These qualities can be diversity (Li et al., 2016), sentiment (Rashkin et al., 2019), specificity (Ke et al., 2018), engagement (Yi et al., 2019), fluency (Kann et al., 2018) and more. A likable response may or may not be sensible to the context. For example, a diverse response may contain many unique words, although it might be off-topic or completely incomprehensible. However, when combining with sensibleness, it can quantify how likable a sensible response is. Due to the enhancement that the likability aspect has on the response, we position it on the highest level of the hierarchy.
USL-H Metric
Each aspect, by itself, is not enough to evaluate the overall quality of a response. An understandable response might not be relevant to the context. A sensible response might be bland and generic. A response with many unique words (diversity) or rare words (specificity) does not guarantee that it is understandable or sensible. On the contrary, incorporating these qualities together with the following procedure gives more useful information. First, we examine whether the response is understandable. Then, we check whether it makes sense to the context. After that, we determine how likable that response is, e.g., how many unique or rare words it contains. If the response fails at any aspect of the hierarchy, the subsequent aspects will not be considered. With such a construction, the likability score does not need to capture understandability or sensibleness; those criteria will be checked by aspects lower in the hierarchy. Such a composition allows flexibility and configurability in utilizing different metrics, as there is no need to search for a single metric that satisfies multiple aspects. Instead, we find metrics for those multiple aspects and combine them with our proposed hierarchy to get a single metric.
Formally, let us denote s_U, s_S, and s_L as the scores of understandability, sensibleness, and likability, respectively, where s_L can be comprised of one or more qualities q_j, i.e., specificity or empathy. In prior work, to combine scores, Zhan et al. (2019) uses a weighted average over syntactic and semantic scores, whereas Adiwardana et al. (2020) uses the arithmetic average to combine the sensibleness and specificity scores. In particular, Adiwardana et al. (2020) considers specificity to be dependent on sensibleness, such that if sensibleness is 0, so is specificity. Although they limit themselves to sensibleness and specificity, we extend these simple heuristics with our hierarchy concept into the following equations:
$s_{USL\text{-}H} = \alpha_1 s_U + \alpha_2 s_U s_S + \alpha_3 s_U s_S s_L$    (1)

$s_L = \sum_j \beta_j q_j$    (2)
where $s_U$, $s_S$, $s_L$, and $q_j$ are continuous variables ranging between [0, 1], and $\sum_i \alpha_i = \sum_j \beta_j = 1$. $\alpha_i$ and $\beta_j$ are coefficients for each quality. These formulations can be applied to obtain scores for both automatic and human evaluations.
There are two intuitions behind this heuristic. (i) The understandability score s_U adds clarity and interpretability when the response is unsensible. Otherwise, it is difficult to determine whether the unsensibleness is due to the response being completely incomprehensible or off-topic. (ii) Likability brings in other qualities that are not covered by the other two aspects, although it is considered in the final score only if the response is understandable and sensible. There is one key property of likability. Suppose the response already makes sense to the context. In that case, the likability score does not have to be context-dependent, as it is just an extra quality on top of a response that is already sensible. This composition allows flexibility in using different metrics, i.e., context-independent or context-dependent, for the likability aspect. To sum up, we consider understandability to be context-independent and sensibleness to be context-dependent, whereas likability can be either.
Simplified USL-H During our preliminary analysis, we found that having s_U in all three terms is unstable. If the automatic evaluation metric corresponding to understandability misevaluates the response, the other qualities will be disregarded. To alleviate this instability, we assume that if a response is not understandable, it is unlikely to be sensible (s_S ≈ s_U s_S). In other words, sensibleness, at the observation level, is dependent on understandability. Thus, we do not need the term s_U s_S, as s_U is already captured implicitly by s_S. With this assumption, we arrive at Equation (3), which we use for the remainder of the paper.
$s_{USL\text{-}H} = \alpha_1 s_U + \alpha_2 s_S + \alpha_3 s_S s_L$    (3)
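To make the composition concrete, the following is a minimal Python sketch of Equations (2) and (3); the sub-scores and the equal default weights are placeholder assumptions, since the actual coefficients are estimated later from human judgments.

```python
import numpy as np

def likability(qualities, betas=None):
    """Likability from Eq. (2): a convex combination of quality scores q_j."""
    qualities = np.asarray(qualities, dtype=float)
    if betas is None:  # assumption: uniform weights if none are supplied
        betas = np.full(len(qualities), 1.0 / len(qualities))
    return float(np.asarray(betas) @ qualities)

def usl_h(s_u, s_s, s_l, alphas=(1/3, 1/3, 1/3)):
    """Simplified USL-H from Eq. (3): a1*s_U + a2*s_S + a3*s_S*s_L.

    All sub-scores are assumed to lie in [0, 1] and the alphas to sum to 1.
    """
    a1, a2, a3 = alphas
    return a1 * s_u + a2 * s_s + a3 * s_s * s_l

# An understandable, sensible, but unspecific response is still rewarded
# for the first two aspects; likability only matters once s_S is high.
print(usl_h(s_u=1.0, s_s=1.0, s_l=0.0))  # ~0.667 with equal weights
print(usl_h(s_u=1.0, s_s=0.0, s_l=1.0))  # ~0.333: likability ignored when unsensible
```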
Automatic Evaluation Metrics
Problem Setting
Each dialogue D is comprised of utterances u_1, u_2, ..., u_n, where each utterance contains words (w_1, w_2, ..., w_m). Two consecutive utterances, u_i and u_{i+1}, where i < n, are selected to form a context-response pair (c, r_0), with c as the context and r_0 as the ground-truth response. For each context c, we use a different generative or retrieval system, as described in Section 5, to obtain a candidate response r.
Baseline Metrics for Overall Quality
Word-Overlap-based Metrics We use BLEU (Papineni et al., 2002), ROUGE-L (Lin, 2004), and METEOR (Banerjee and Lavie, 2005) to measure the word-overlap score between r_0 and r.
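As a rough illustration (not necessarily the authors' evaluation scripts), these word-overlap baselines can be computed with NLTK; the tokenization and smoothing choices here are our own assumptions.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from nltk.translate.meteor_score import meteor_score  # requires nltk.download('wordnet')

reference = "OK! How about Scrabble?".lower().split()
candidate = "Sure.".lower().split()

# BLEU with smoothing, since short dialogue responses rarely share 4-grams.
bleu = sentence_bleu([reference], candidate,
                     smoothing_function=SmoothingFunction().method1)
# METEOR over pre-tokenized inputs (NLTK >= 3.6 expects token lists).
meteor = meteor_score([reference], candidate)
print(bleu, meteor)
```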
Embedding-based Metrics Different responses may contain different lexical words, although they may share a similar meaning. Thus, we also experiment with embedding-based metrics by comparing semantic information between r_0 and r using the following metrics: Embedding Averaging (Wieting et al., 2016), Greedy Matching (Rus and Lintean, 2012), Vector Extrema (Forgues et al., 2014), and BERTScore (Zhang et al., 2020a).
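For instance, BERTScore can be computed with the bert-score package; the package choice and the example strings are ours, not taken from the paper.

```python
from bert_score import score  # pip install bert-score

candidates = ["Go straight ahead until you see the roundabout."]
references = ["I'm sorry. I can't quite follow you."]

# Returns precision, recall, and F1 tensors; F1 is the usual reported score.
P, R, F1 = score(candidates, references, lang="en", verbose=False)
print(F1.item())
```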
Learning-based Metrics
We also include reference-free automatic evaluation metrics, which have recently emerged as a topic of interest. We use (i) BERT-RUBER (Ghazarian et al., 2019), which computes the embeddings for c and r and predicts the probability that these two are the right pair; and (ii) PONE, an extension of the BERT-RUBER metric, which utilizes generative responses as augmentation for positive labels and BERT-based retrieval responses as negative responses. However, through our experiments, we were not able to reproduce the performance reported in the study. Thus, we only use PONE with augmented negative responses, denoted as EN-PONE.
Metrics for Fundamental Aspects
Metric for Understandability We train a model using a valid utterance prediction (BERT-VUP) setting to capture the understandability of an utterance u by classifying whether or not it is valid. Unlike a sentence, which should be grammatically correct, an utterance does not need to satisfy this property, and the auxiliary verb or punctuation may be missing. We use these properties to build a training set of valid and invalid utterances. First, we randomly determine if u should be valid. If it is, we assign it label one and randomly apply one of the following rules: (i) remove punctuation at the end, (ii) remove stop words, or (iii) no modification. Alternatively, we label it as zero and apply one of the following rules from Sinha et al. (2020) to obtain a negative sample: (i) word reorder (shuffle the order of all words), (ii) word drop (randomly drop x% of words), or (iii) word repeat (randomly select span(s) of words and randomly repeat them up to 3 times). For an utterance u with words (w_1, w_2, ..., w_m), we fine-tune BERT (Devlin et al., 2019) by obtaining the contextual embedding h_i for each word w_i and using max-pooling to obtain the utterance-level embedding. Then, we use a softmax layer to obtain the probability and use it as the final score s_U.
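A minimal sketch of how such valid/invalid training utterances could be synthesized; the sampling probability, drop rate, and toy stop-word list are our assumptions where the text leaves them unspecified.

```python
import random
import string

def make_vup_example(utterance, p_valid=0.5, drop_rate=0.15):
    """Return (utterance, label); label 1 = valid, 0 = invalid. Assumes a non-empty input."""
    words = utterance.split()
    if random.random() < p_valid:
        rule = random.choice(["strip_punct", "drop_stopwords", "keep"])
        if rule == "strip_punct":
            words[-1] = words[-1].rstrip(string.punctuation)
        elif rule == "drop_stopwords":
            stops = {"a", "an", "the", "is", "are", "do", "to"}  # toy stop list
            words = [w for w in words if w.lower() not in stops] or words
        return " ".join(words), 1
    rule = random.choice(["reorder", "drop", "repeat"])
    if rule == "reorder":          # shuffle the order of all words
        random.shuffle(words)
    elif rule == "drop":           # randomly drop a fraction of the words
        words = [w for w in words if random.random() > drop_rate] or words
    else:                          # repeat a random span up to 3 times
        i = random.randrange(len(words))
        j = random.randrange(i, len(words))
        words = words[:j + 1] + words[i:j + 1] * random.randint(1, 3) + words[j + 1:]
    return " ".join(words), 0
```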
Metric for Sensibleness We train another model using the next utterance prediction (BERT-NUP) task as the metric for sensibleness. Given a context-response pair (c, r), the objective of the model is to classify whether that pair is a valid pair or not. To build labeled data for this binary classification task, we use two consecutive utterances (u_i, u_{i+1}) from a dialogue D, where u_i is the context c and u_{i+1} is its corresponding response r, and label them as a valid pair. Then, we keep u_i as the context, select a random utterance u_j from a pool of all the utterances in the training set, and label the pair (u_i, u_j) as an invalid pair. To fine-tune the BERT model, we first merge a context-response pair into a single array of tokens (w_1, w_2, ..., w_t). Then, we use the same approach as the BERT-VUP metric to obtain the score s_S.
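The pair construction can be sketched as follows; the helper name is ours, and the uniform negative sampling over the utterance pool follows the description above, while everything else is illustrative.

```python
import random

def make_nup_pairs(dialogues):
    """Build (context, response, label) triples for next utterance prediction.

    dialogues: list of dialogues, each a list of utterance strings.
    For every adjacent pair (u_i, u_{i+1}) we emit one positive example and one
    negative whose response is drawn from the whole utterance pool (a random
    draw may occasionally hit the true response; this sketch ignores that).
    """
    pool = [u for d in dialogues for u in d]
    pairs = []
    for d in dialogues:
        for u_i, u_next in zip(d, d[1:]):
            pairs.append((u_i, u_next, 1))               # valid pair
            pairs.append((u_i, random.choice(pool), 0))  # random negative
    return pairs
```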
Metric for Specificity For simplicity of studying the configurability of our proposed metric, we select specificity as our likable quality. Following the use of RoBERTa in Mehri and Eskenazi (2020) to compute the masked language model (MLM) metric, we use a BERT-based model for consistency with the BERT-VUP and BERT-NUP metrics. Moreover, instead of using both (c, r), as in Mehri and Eskenazi (2020), we only use the response r to ensure independence from the context c. Therefore, for a response r with m words, we sequentially mask one word at a time and feed it into BERT-MLM to predict the negative log-likelihood (MLM-Likelihood) of all masked words. We also investigate negative cross-entropy (MLM-NCE), perplexity (MLM-PPL), and MLM-SLOR (Kann et al., 2018) to verify if they can be used for the understandability and specificity aspects.
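A hedged sketch of the per-token masked-LM scoring described above, using the HuggingFace transformers API; averaging the per-token values into a single score is our simplification.

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

def mlm_likelihood(response):
    """Mean negative log-likelihood of each token, masked one at a time.

    Higher values indicate rarer (more specific) wording; assumes a non-empty response.
    """
    ids = tokenizer(response, return_tensors="pt")["input_ids"][0]
    nlls = []
    for pos in range(1, len(ids) - 1):       # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[pos] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, pos]
        log_probs = torch.log_softmax(logits, dim=-1)
        nlls.append(-log_probs[ids[pos]].item())
    return sum(nlls) / max(len(nlls), 1)
```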
Experiment
Training Corpus The corpus used in this study is DailyDialog (Li et al., 2017), which is about day-to-day communication on everyday topics. This dataset consists of 11,118/1,000/1,000 dialogues for the train/valid/test sets with explicit textual information of five dialogue acts and seven emotion labels. We split this dataset evenly into two parts: (i) for training generative and retrieval models to generate candidate responses, and (ii) for training automatic evaluation metrics for scoring each aspect.
Building Response Candidates To effectively evaluate the evaluation metrics, it is important to have a mix of good and bad responses for the metrics to score. Therefore, we choose two retrieval methods, two generative methods, and one human-generation for a total of five responses per given context. This includes TF-IDF, DualEncoder (Lowe et al., 2017b), Seq2Seq with Attention Mechanism (Bahdanau et al., 2015), and DialoGPT (Zhang et al., 2020b). These five responses vary in quality, i.e., generative models may produce incomprehensible or unspecific responses, whereas retrieval models may select unsensible responses. Overall, we collected five responses from different models for 50 contexts, which accounts for 250 context-response pairs.
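As one illustration, the TF-IDF retrieval baseline can be approximated by indexing training contexts and returning the response of the most similar context; this is a sketch of the general idea, not the exact ParlAI setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

contexts = ["Let's play a game!", "Dinner's ready!"]      # toy training contexts
responses = ["OK! How about Scrabble?", "10 minutes"]      # their observed replies

vec = TfidfVectorizer().fit(contexts + responses)
C = vec.transform(contexts)

def retrieve(query):
    """Return the stored response whose context is most similar to the query."""
    sims = cosine_similarity(vec.transform([query]), C)[0]
    return responses[sims.argmax()]

print(retrieve("Shall we play something?"))  # -> "OK! How about Scrabble?"
```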
Human Judgement It is necessary to evaluate if the evaluation metrics are comparable to human judgment. To verify this, we recruited four volunteers to collect the human judgment on the 50 contexts. For each context, five different responses from different models described in the previous section were presented for evaluation. The annotators were asked to score each context response pair with the following questions: (i) Is this response understandable {0, 1}?, (ii) Does this make sense to the context {0, 1}?, (iii) Does it at least have some detail {0, 1}?, (iv) Overall, how good is this response {0,1,2,3}?
We also instructed the volunteers to consider these questions independently, with understandability and specificity independent from the context. Regarding the overall score, we did not provide fine-grained instructions on what each value represents. Instead, we only mentioned that bad and good responses correspond to scores of 0 and 3, respectively. How they score the responses is entirely subjective to each annotator. This allows us to observe how one would think if they were to judge the overall quality of a response. Then, we use Cohen's Kappa (Cohen, 1960) to measure pairwise inter-annotator agreement for all the aspects, presented in Table 2. The annotators moderately agree on all qualities, with the lowest agreement on the overall score. This result is expected because no detailed instruction was provided to assist their annotations.
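The pairwise agreement in Table 2 can be reproduced in spirit with scikit-learn; the toy ratings below merely stand in for the real annotations.

```python
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

# ratings[a] holds annotator a's binary labels for the same context-response pairs.
ratings = {
    "A1": [1, 0, 1, 1], "A2": [1, 0, 0, 1],
    "A3": [1, 1, 1, 1], "A4": [1, 0, 1, 0],
}  # toy data; the real study has 250 pairs per aspect

kappas = [cohen_kappa_score(ratings[a], ratings[b])
          for a, b in combinations(ratings, 2)]
print(sum(kappas) / len(kappas))  # mean pairwise Cohen's kappa
```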
Experimental Setup
We use a pre-trained base-model of BERT to fine-tune for the BERT-VUP, BERT-NUP, and BERT-MLM metrics separately, by using the HuggingFace framework 2 on an NVIDIA Tesla V100 PCIe 32GB. These three models are trained with an ADAM optimizer (Kingma and Ba, 2015) with a learning rate of 1e-5. We select the best version of each model based on the lowest validation loss.
Results
Analysis of Hierarchical Structure
To understand the relationship between understandability, sensibleness, likability (in this case, specificity), and the overall quality, we first apply linear regression to get the weight of each aspect for each annotator. Then, we apply a softmax function to the weights to make them more interpretable. Figure 2 illustrates that sensibleness has the highest weight among all the aspects. This suggests that the annotators tend to rely on sensibleness as a key factor when determining the overall score.
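A sketch of this weight-fitting step, assuming per-annotator aspect scores as features and the overall score as the target; the toy arrays stand in for the actual annotations.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Aspect scores (understandability, sensibleness, specificity) per judged
# response, and the overall score the annotator assigned; toy stand-in data.
X = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([2, 1, 0, 3], dtype=float)

w = LinearRegression().fit(X, y).coef_
weights = np.exp(w) / np.exp(w).sum()  # softmax for interpretability
print(dict(zip(["U", "S", "L"], weights.round(3))))
```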
To further investigate how the three aspects affect the overall score, we grouped the responses into five groups in Figure 3, where each group is denoted by (s_U, s_S, s_{L_s}). Then, for each group, we computed the mean of the annotated overall score (Human Vanilla), and also composited the annotated scores of the three aspects using our hierarchy method (Human USL-H). For comparison, we also used simple averaging (Human USL-A). The result is shown in Figure 3.
For Human Vanilla, the score for G1 is extremely low compared to the ones of other groups. This means that the score of sensibleness and specificity has no influence when the response is not understandable. Thus, understandability is a crucial building block before other qualities. Also, G2 is almost identical to G3. This suggests that the specificity does not influence the overall score if the response is unsensible. On the contrary, G4 achieves better scores than G3 even though it is entirely unspecific. This validates our hypothesis that sensibleness should be prioritized over specificity in evaluating the overall quality.
Table 3: Correlation for each response quality between the human score and automatic evaluation metrics. Bold denotes the best metric for the corresponding quality, and (*) refers to p < 0.01.

Table 4: Pearson correlation between automatic evaluation metrics and two types of human scores on overall quality. Vanilla score refers to a single overall score that the annotators assigned, whereas USL_S-HH refers to the human score obtained using our method. Bold denotes the best metric for each type of overall score, and (*) refers to p < 0.01.

Note that the Human Vanilla scores of G2 and G3 are almost as low as that of G1, which indicates that even if a response is understandable, it does not significantly affect the overall score unless the response is sensible. We suspect that this problem is due to the subjectivity of annotators because, in a real conversation, it is rare for a speaker to say an incomprehensible utterance. Moreover, we did not provide any concrete instructions on how the overall score should be evaluated. Thus, the annotators fail to consider the understandability aspect properly. Recently, Mehri and Eskenazi (2020) found a similar result when they asked annotators to evaluate the overall quality with respect to five different aspects. Their study also showed that every annotator prioritizes each quality differently. Compared to Human Vanilla and Human USL-A, Human USL-H can perform better due to two factors. (1) Human USL-H explicitly considers understandability. It assigns higher scores to G2 as an incentive to a response for being understandable. This makes distinguishing between G1 and G2 easier.
(2) It mimics the characteristics of Human Vanilla, especially between G2 and G3, when the unsensible responses deserve the same score. However, this is the opposite with Human USL-A that assigns scores to G3 as high as G4, which contradicts with Human Vanilla. Due to the benefits of Human USL-H, we will use that as the human overall score, unless stated otherwise.
Suitable Metrics for Fundamental Aspects
In this section, we determine which metric is the most suitable for each aspect. We experiment with all the metrics described in Section 4 by comparing their scores with human judgment on understandability, sensibleness, and specificity using Pearson and Spearman rank correlations. Based on Pearson correlations, the four most highly correlated metrics for each aspect are selected, as presented in Table 3. Among the selected metrics, the most suitable ones are BERT-VUP for understandability, BERT-NUP for sensibleness, and MLM-Likelihood for specificity. We notice that MLM-Likelihood and MLM-PPL are not appropriate measures for understandability. These two metrics tend to assign a high score to repetitive responses (i.e., "I've got a lot of time to get a new place to be a good place to get a new place."). However, our BERT-VUP metric can recognize and correctly assign a low score to responses with such repetitions.

Figure 4: Pearson correlation between overall score and the USL_S-H metric with changeable sub-metrics for each aspect. Each sub-figure corresponds to a different composition with one sub-metric changeable. The x-axis denotes the correlation of sub-metrics on one aspect. The y-axis denotes the correlation of the USL_S-H scores. Parentheses (x) denote that the metric of x is changeable.
BERT-NUP outperforms the other metrics on the sensibleness quality. Unlike BERT-RUBER and EN-PONE, which obtain embeddings for the context and response separately and concatenate them to obtain a context-response pair embedding, BERT-NUP combines them into an array of tokens and may utilize BERT's capability to find contextual patterns between their tokens.
The MLM-based metrics achieve moderate correlations with the human score on specificity. This may be due to the simple assumption that a response is specific if it contains at least one uncommon word. Furthermore, the language model tends to assign a lower probability to any rare word occurrence, which is consistent with our assumption.
Analysis of USL-H Metric
We select BERT-VUP, BERT-NUP, and MLM-Likelihood as the metrics for understandability, sensibleness, and specificity, respectively. Because the MLM-Likelihood score is not between [0, 1], we normalize it using MinMax normalization (Jain et al., 2005) to ensure consistency between scores. Then, we composite these scores into the USL_S-H score, a variant of the USL-H score focusing only on specificity as part of likability. We also implement a weighted average (α_1 s_U + α_2 s_S + α_3 s_L), similar to Mehri and Eskenazi (2020), denoted as USL_S-A. We utilize the weights obtained from the linear regression (Figure 2) and assign them as the coefficients α_1, α_2, and α_3. Table 4 shows Pearson correlations between the automatic evaluation metrics and two types of human overall score (vanilla and USL_S-H). To avoid ambiguity between the human and metric USL_S-H scores, we denote the human USL_S-H as USL_S-HH. Table 4 shows that the weighted USL_S-H metric outperforms all other baselines; the BERT-NUP metric achieves the second-best performance. This agrees with our hypothesis that incorporating additional information, such as understandability and specificity, with the sensibleness score can further enhance the evaluation metric's performance. On the other hand, USL_S-A has a lower correlation compared with BERT-NUP and USL_S-H. This may be because the metric attempts to incorporate the specificity quality even if the response is incomprehensible or unsensible. This scenario would not occur with our proposed hierarchy, since specificity becomes less important as understandability or sensibleness drops. A sketch of the normalization and composition follows.
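This is a minimal sketch of the normalization and composition step; the weight values shown are illustrative stand-ins for the regression-derived coefficients.

```python
import numpy as np

def minmax(scores, eps=1e-8):
    """MinMax-normalize raw MLM-Likelihood scores into [0, 1]."""
    scores = np.asarray(scores, dtype=float)
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo + eps)

s_u = np.array([1.0, 0.9])                 # understandability sub-scores
s_s = np.array([0.8, 0.1])                 # sensibleness sub-scores
s_l = minmax(np.array([4.2, 7.9]))         # raw negative log-likelihoods -> [0, 1]

a = np.array([0.25, 0.45, 0.30])           # illustrative regression-derived weights
usl_s_h = a[0] * s_u + a[1] * s_s + a[2] * s_s * s_l   # hierarchy, Eq. (3)
usl_s_a = a[0] * s_u + a[1] * s_s + a[2] * s_l         # plain weighted average
print(usl_s_h, usl_s_a)
```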
Configurability
Improving an Aspect It is uncertain whether the USL_S-H metric can be improved further by utilizing a better sub-metric. Therefore, we tested different combinations of sub-metrics, each of which has a different correlation. We use BERT-VUP, BERT-NUP, and MLM-Likelihood as the base metrics. To observe the effect of understandability on USL_S-H, we hold BERT-NUP and MLM-Likelihood constant as we change only the understandability metric. Additionally, we assume that there is an ideal function for each aspect such that it is perfectly correlated with the human score; to obtain such a score, we use the human score itself. We apply this procedure for all three aspects. Figure 4-a, 4-b, and 4-c show the correlation of the USL_S-H metric with human USL_S-H as we change only the understandability, sensibleness, and specificity metric, respectively. Different metrics in Figure 4-a and Figure 4-c do not have any significant impact on the correlation of the USL_S-H scores, whereas using a perfectly correlated score does. This does not suggest that these two aspects are insignificant, since the performance would decrease drastically if we used only BERT-NUP. Instead, it suggests that the metrics for these aspects may require further improvement to increase the performance of USL_S-H. Figure 4-b, on the other hand, indicates that the better the sensibleness metric is, the more correlated USL_S-H will be. Thus, even a small improvement in sensibleness could also enhance USL_S-H.
Swapping an Aspect To verify whether the USL-H metric is configurable to different aspects, we swap specificity with the empathy quality. We trained a BERT-based binary classifier similar to BERT-VUP, grouping the seven emotion labels provided in the DailyDialog dataset into two labels: has an emotion label and has no emotion label. We consolidated BERT-VUP, BERT-NUP, and this metric to get another variant of USL-H, denoted USL_E-H, whose E stands for empathy. To demonstrate that the USL_E-H metric can recognize a sensible and empathetic response better than the other metrics, we use the DialoGPT model to generate a pool of 100 responses given a context using two variants of the temperature. We use five metrics to evaluate them. The best response for each metric is selected (a selection sketch is given below) and is paired against the response selected by another metric to determine which metric selects a better one given the same context. We apply this procedure to 50 different contexts. For each sample, we ask three crowdsource workers to choose the response that makes more sense and expresses more understanding of the feeling.
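A sketch of the per-metric selection step; `metric` is any scoring callable, and the toy metric here is purely illustrative.

```python
def best_response(context, pool, metric):
    """Pick the response in `pool` that a metric scores highest for `context`.

    `metric` is any callable (context, response) -> float, e.g. a USL_E-H scorer.
    """
    return max(pool, key=lambda r: metric(context, r))

# Toy usage: a dummy metric that simply prefers longer responses.
pool = ["Sure.", "I'm so sorry to hear that, I hope you feel better soon."]
print(best_response("I failed my exam...", pool, lambda c, r: len(r)))
```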
As shown in Table 5, the human evaluators agree that the responses selected by USL_E-H have higher quality in terms of sensibleness and empathy compared to the ones selected by the other metrics. Furthermore, USL_E-H outperforms USL_S-H by a huge margin. This suggests that although USL_S-H achieves good performance on specificity, it does not consider the empathy quality. However, we can configure the metric by replacing specificity with the empathy sub-metric to obtain a more suitable variant for the task.
Conclusion and Future Work
This study demonstrated a bottom-up approach to building an automatic evaluation metric by deconstructing the overall quality of a response into three fundamental aspects (understandability, sensibleness, and likability), exploring a suitable metric for each aspect, and reconstructing them back to obtain a single metric. However, we restricted the likability aspect to only specificity or empathy. For our future work, we intend to investigate other likability scores, such as engagement or diversity, to ensure that this metric is usable across different tasks and datasets.
Appendix C. Case Studies

Table 7 consists of some examples comparing the USL_S-H score with other baselines. The METEOR metric does not perform well due to the few overlapping words between the golden and candidate responses. Not only are the two responses lexically different, but they are also semantically different; this yields a relatively low score for BERTScore. On the other hand, the BERT-RUBER metric assigns a better score compared with the former metrics; however, it does not consider the understandability aspect. For example, the BERT-RUBER metric assigned a score of 0 to example 3 even though the response is understandable and deserved a score higher than 0. The USL_S-H metric is on par with the human score. It recognizes when the response is incomprehensible, unsensible, or unspecific and can present these qualities in an interpretable manner. However, combining three sub-metrics into a single metric has one limitation: it requires every sub-metric to perform well; otherwise, it will not attain a score of 1.00, as shown in example 1.
Figure 1: Decomposition of the structure of a response quality.

Figure 2: Coefficient of each quality on the overall score per annotator A_i.

Figure 3: Average of human overall score of the five groups. (s_U, s_S, s_{L_s}) denotes the human score for understandability, sensibleness, and specificity, respectively. x denotes any score.
Table 2: Inter-annotator agreement on Cohen's Kappa.

        Understandable  Sensible  Specific  Overall
Kappa   0.4333          0.6110    0.4572    0.4137
Table 5: A/B testing by human comparing "sensible and feeling-expressive" response pairs that are selected by each metric, reporting wins rate for A over B (excluding ties).
Table 6: Correlation between the overall quality and different composite functions on combining human score of each aspect. Every correlation has p < 0.001.

Correlation  Arithmetic  Geometric  Harmonic  Ours
Pearson      0.8038      0.8909     0.8909    0.9490
Spearman     0.8059      0.8666     0.8365    0.9399
Table 7: Examples of different metrics. BERTScore is within the range [-1, 1], whereas the range of the other metrics is in [0, 1]. The human score in this case refers to the human score of USL_S-H.

Context | Ground-Truth Response | Candidate Response | Meteor | BERTScore | RUBER | USL_S-H | Human
Can you tell me how to get to the Jade Buddha Temple? | I'm sorry. I can't quite follow you. Would you mind speaking slowly? | Go straight ahead until you see the roundabout, then take a left turn. | 0.03 | -0.16 | 1.00 | 0.78 | 1.00
Let's play a game! | OK! How about Scrabble? | Sure. | 0.00 | 0.08 | 0.41 | 0.55 | 0.66
Are you trying to act like a real man? | I am sorry. The flight is late... | Sir what's the boarding time? | 0.02 | -0.04 | 0.00 | 0.33 | 0.33
Excuse me, sir, but are you Mr. Richard of World Trading Company? | Yes. That's right. | I'm afraid I'm afraid I'm afraid I'm afraid I'm afraid I can't. | 0.05 | -0.20 | 0.41 | 0.04 | 0.08
1 The implementation of our metrics is available at https://github.com/vitouphy/usl_dialogue_metric.
This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/.
2 https://huggingface.co/
3 https://parl.ai/
4 https://www.mturk.com/
5 http://pyenchant.github.io/pyenchant/
Acknowledgements

We would like to thank An Tuan Dao, Takuma Udagawa, Thanakrit Julavanich, and Xanh Thi Ho for valuable discussions and the anonymous reviewers for insightful comments. This work was supported by the National Institute of Informatics.

Appendix A. Dataset Collection

A.1 Building Response Candidates

Retrieval Methods The response from this category is expected to be understandable, but may not be relevant to the context. We use TF-IDF and Dual Encoder (Lowe et al., 2017b) models, using ParlAI 3 for this experiment. During training, the models are provided with 2 candidates (1 correct response and 1 randomly sampled from the training set) and are trained to select the best one. During inference, we follow the method in (Liu et al., 2016) and use the whole corpus as candidates, with the correct response removed.

Generative Methods Generative models can typically produce a response that is somewhat relevant to the given context; however, they sometimes lack particular qualities, such as specificity, which results in generic and dull responses, e.g., "I don't know" or "Thank you" (Sordoni et al., 2015; Vinyals and Le, 2015). Therefore, we select a simple seq2seq model with an attention mechanism (Bahdanau et al., 2015), which is also trained using ParlAI. We also use the pretrained DialoGPT (Zhang et al., 2020b) model because it is claimed to generate responses that are relevant and context-consistent.

Human-Generated Response Using golden data as a response can introduce a bias in the results because the annotator knows the whole context during annotation. Moreover, within the experiment, the number of contexts visible to the models is limited to only a single turn. Hence, we conduct this data collection to ensure fairness. To complete this task, we use Mechanical Turk 4 and ask participants to write a response for a given context. To ensure the quality of the responses, we instruct them with the following requirements: (i) the response must have at least 5 words, (ii) the response must not contain any offensive language, and (iii) the response must not contain any emojis. We reject responses with emojis because the DailyDialog dataset does not contain them; we want to ensure that the human-generated responses remain as close to the original distribution as possible. Subsequently, we use PyEnchant 5 to detect irregular words and manually correct them.

A.2 Human Judgement Score

Before running the actual annotation to collect human judgements from our volunteers, we conducted two trial runs to verify that our volunteers understood the task and to ensure that the inter-annotator agreement was acceptable.

Appendix B. Further Analysis

B.1 Composite Functions

Although the human USL_S-H score explicitly includes understandability, it does not guarantee that the score of this composite function is a suitable replacement for the human overall quality. To ensure that the human USL_S-H score maintains the quality of the human overall score, we computed their correlation. We also experimented with other composite functions, such as (i) arithmetic mean, (ii) geometric mean, and (iii) harmonic mean, and present the results in Table 6. Using the geometric mean or harmonic mean yields a better correlation than using the arithmetic mean; however, our USL_S-H score outperforms all these functions on both the Pearson correlation and Spearman rank correlation. This implies that the USL_S-H score can merge understandability, sensibleness, and specificity explicitly into a single score to reflect the required qualities of a response; a toy comparison of these composite functions is sketched below.
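Under the stated assumptions (random stand-in scores rather than the real annotations), the composite functions above can be compared as follows.

```python
import numpy as np
from scipy.stats import hmean, pearsonr, spearmanr

def composites(s_u, s_s, s_l):
    """Arithmetic, geometric, and harmonic means versus the USL-H hierarchy."""
    stacked = np.stack([s_u, s_s, s_l])
    return {
        "arithmetic": stacked.mean(axis=0),
        "geometric": np.exp(np.log(stacked + 1e-8).mean(axis=0)),
        "harmonic": hmean(stacked + 1e-8, axis=0),
        "usl_h": (s_u + s_s + s_s * s_l) / 3.0,  # Eq. (3) with equal weights
    }

rng = np.random.default_rng(0)
s_u, s_s, s_l = rng.random((3, 250))   # stand-ins for the per-aspect human scores
overall = rng.random(250)               # stand-in for the human overall quality
for name, score in composites(s_u, s_s, s_l).items():
    print(name, pearsonr(score, overall)[0], spearmanr(score, overall)[0])
```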
B.2 Additional Sub-Metric Analysis

Language Modeling We also implemented a 2-layer LSTM-based Seq2Seq model, and it achieves similar performance compared to the BERT-based MLM. We choose this BERT-MLM metric over Seq2Seq
References

Daniel Adiwardana, Minh-Thang Luong, David R. So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, et al. 2020. Towards a human-like open-domain chatbot. arXiv preprint arXiv:2001.09977.

Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3874-3884.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.

Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72.

Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1):37-46.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.

Gabriel Forgues, Joelle Pineau, Jean-Marie Larchevêque, and Réal Tremblay. 2014. Bootstrapping dialog systems with word embeddings. In NIPS, Modern Machine Learning and Natural Language Processing Workshop.

Sarik Ghazarian, Johnny Wei, Aram Galstyan, and Nanyun Peng. 2019. Better automatic evaluation of open-domain dialogue systems with contextualized embeddings. In Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation, pages 82-89.

Anil Jain, Karthik Nandakumar, and Arun Ross. 2005. Score normalization in multimodal biometric systems. Pattern Recognition, 38(12):2270-2285.

Katharina Kann, Sascha Rothe, and Katja Filippova. 2018. Sentence-level fluency evaluation: References help, but can be spared! In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 313-323.

Pei Ke, Jian Guan, Minlie Huang, and Xiaoyan Zhu. 2018. Generating informative responses with controlled sentence function. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1499-1508.

Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations.

Tian Lan, Xian-Ling Mao, Wei Wei, Xiaoyan Gao, and Heyan Huang. 2020. PONE: A novel automatic evaluation metric for open-domain generative dialogue systems. arXiv preprint arXiv:2004.02399.

Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 110-119.

Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. DailyDialog: A manually labelled multi-turn dialogue dataset. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 986-995.

Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Proc. ACL Workshop on Text Summarization Branches Out, page 10.

Chia-Wei Liu, Ryan Lowe, Iulian Vlad Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2122-2132.

Ryan Lowe, Michael Noseworthy, Iulian Vlad Serban, Nicolas Angelard-Gontier, Yoshua Bengio, and Joelle Pineau. 2017a. Towards an automatic Turing test: Learning to evaluate dialogue responses. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1116-1126.

Ryan Thomas Lowe, Nissan Pow, Iulian Vlad Serban, Laurent Charlin, Chia-Wei Liu, and Joelle Pineau. 2017b. Training end-to-end dialogue systems with the Ubuntu Dialogue Corpus. Dialogue & Discourse, 8(1):31-65.

Shikib Mehri and Maxine Eskenazi. 2020. USR: An unsupervised and reference free evaluation metric for dialog generation. arXiv preprint arXiv:2005.00456.

Rita Nübel. 1997. End-to-end evaluation in Verbmobil I. In Proceedings of the MT Summit VI.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318.

Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. Towards empathetic open-domain conversation models: A new benchmark and dataset. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5370-5381.

Vasile Rus and Mihai Lintean. 2012. A comparison of greedy and optimal assessment of natural language student input using word-to-word similarity metrics. In Proceedings of the Seventh Workshop on Building Educational Applications Using NLP, pages 157-162.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725.

Iulian V. Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Thirtieth AAAI Conference on Artificial Intelligence.

Koustuv Sinha, Prasanna Parthasarathi, Jasmine Wang, Ryan Lowe, William L. Hamilton, and Joelle Pineau. 2020. Learning an unreferenced metric for online dialogue evaluation. arXiv preprint arXiv:2005.00583.

Eric Michael Smith, Mary Williamson, Kurt Shuster, Jason Weston, and Y-Lan Boureau. 2020. Can you put it all together: Evaluating conversational agents' ability to blend skills. arXiv preprint arXiv:2004.08449.

Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 196-205.

Chongyang Tao, Lili Mou, Dongyan Zhao, and Rui Yan. 2018. RUBER: An unsupervised method for automatic evaluation of open-domain dialog systems. In Thirty-Second AAAI Conference on Artificial Intelligence.

Oriol Vinyals and Quoc Le. 2015. A neural conversational model. In Proceedings of the International Conference on Machine Learning, Deep Learning Workshop.

Jason Weston, Emily Dinan, and Alexander Miller. 2018. Retrieve and refine: Improved sequence generation models for dialogue. In Proceedings of the 2018 EMNLP Workshop SCAI: The 2nd International Workshop on Search-Oriented Conversational AI, pages 87-92.

John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Towards universal paraphrastic sentence embeddings. In ICLR.

Sanghyun Yi, Rahul Goel, Chandra Khatri, Alessandra Cervone, Tagyoung Chung, Behnam Hedayatnia, Anu Venkatesh, Raefer Gabriel, and Dilek Hakkani-Tur. 2019. Towards coherent and engaging spoken dialog response generation using automatic conversation evaluators. In Proceedings of the 12th International Conference on Natural Language Generation, pages 65-75.

Min Zeng, Yisen Wang, and Yuan Luo. 2019. Dirichlet latent variable hierarchical recurrent encoder-decoder in dialogue generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1267-1272.

Zhiqiang Zhan, Zifeng Hou, Qichuan Yang, Jianyu Zhao, Yang Zhang, and Changjian Hu. 2019. SSA: A more humanized automatic evaluation method for open dialogue generation. In 2019 International Joint Conference on Neural Networks (IJCNN), pages 1-8. IEEE.

Ruqing Zhang, Jiafeng Guo, Yixing Fan, Yanyan Lan, Jun Xu, and Xueqi Cheng. 2018. Learning to control the specificity in neural response generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1108-1117.

Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020a. BERTScore: Evaluating text generation with BERT. In International Conference on Learning Representations.

Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020b. DIALOGPT: Large-scale generative pre-training for conversational response generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 270-278, Online. Association for Computational Linguistics.
| [
"https://github.com/vitouphy/usl_dialogue_metric."
] |
[
"Which Shortcut Solution Do Question Answering Models Prefer to Learn?",
"Which Shortcut Solution Do Question Answering Models Prefer to Learn?"
] | [
"Kazutoshi Shinoda shinoda@is.s.u-tokyo.ac.jp \nThe University of Tokyo\n\n\nNational Institute of Informatics\n\n",
"Saku Sugawara \nNational Institute of Informatics\n\n",
"Akiko Aizawa aizawa@nii.ac.jp \nThe University of Tokyo\n\n\nNational Institute of Informatics\n\n"
] | [
"The University of Tokyo\n",
"National Institute of Informatics\n",
"National Institute of Informatics\n",
"The University of Tokyo\n",
"National Institute of Informatics\n"
] | [] | Question answering (QA) models for reading comprehension tend to learn shortcut solutions rather than the solutions intended by QA datasets. QA models that have learned shortcut solutions can achieve human-level performance in shortcut examples where shortcuts are valid, but these same behaviors degrade generalization potential on anti-shortcut examples where shortcuts are invalid. Various methods have been proposed to mitigate this problem, but they do not fully take the characteristics of shortcuts themselves into account. We assume that the learnability of shortcuts, i.e., how easy it is to learn a shortcut, is useful to mitigate the problem. Thus, we first examine the learnability of the representative shortcuts on extractive and multiple-choice QA datasets. Behavioral tests using biased training sets reveal that shortcuts that exploit answer positions and word-label correlations are preferentially learned for extractive and multiple-choice QA, respectively. We find that the more learnable a shortcut is, the flatter and deeper the loss landscape is around the shortcut solution in the parameter space. We also find that the availability of the preferred shortcuts tends to make the task easier to perform from an information-theoretic viewpoint. Lastly, we experimentally show that the learnability of shortcuts can be utilized to construct an effective QA training set; the more learnable a shortcut is, the smaller the proportion of anti-shortcut examples required to achieve comparable performance on shortcut and anti-shortcut examples. We claim that the learnability of shortcuts should be considered when designing mitigation methods. | 10.48550/arxiv.2211.16220 | [
"https://export.arxiv.org/pdf/2211.16220v1.pdf"
] | 254,069,973 | 2211.16220 | ad7c788824aba70deb141617410dd70b390c4ae8 |
Which Shortcut Solution Do Question Answering Models Prefer to Learn?
Kazutoshi Shinoda shinoda@is.s.u-tokyo.ac.jp
The University of Tokyo
National Institute of Informatics
Saku Sugawara
National Institute of Informatics
Akiko Aizawa aizawa@nii.ac.jp
The University of Tokyo
National Institute of Informatics
Question answering (QA) models for reading comprehension tend to learn shortcut solutions rather than the solutions intended by QA datasets. QA models that have learned shortcut solutions can achieve human-level performance in shortcut examples where shortcuts are valid, but these same behaviors degrade generalization potential on anti-shortcut examples where shortcuts are invalid. Various methods have been proposed to mitigate this problem, but they do not fully take the characteristics of shortcuts themselves into account. We assume that the learnability of shortcuts, i.e., how easy it is to learn a shortcut, is useful to mitigate the problem. Thus, we first examine the learnability of the representative shortcuts on extractive and multiple-choice QA datasets. Behavioral tests using biased training sets reveal that shortcuts that exploit answer positions and word-label correlations are preferentially learned for extractive and multiple-choice QA, respectively. We find that the more learnable a shortcut is, the flatter and deeper the loss landscape is around the shortcut solution in the parameter space. We also find that the availability of the preferred shortcuts tends to make the task easier to perform from an information-theoretic viewpoint. Lastly, we experimentally show that the learnability of shortcuts can be utilized to construct an effective QA training set; the more learnable a shortcut is, the smaller the proportion of anti-shortcut examples required to achieve comparable performance on shortcut and anti-shortcut examples. We claim that the learnability of shortcuts should be considered when designing mitigation methods.
Introduction
Natural language understanding (NLU) models based on deep neural networks (DNNs) have been shown to exploit spurious correlations (also called dataset bias (Torralba and Efros 2011) or annotation artifacts (Gururangan et al. 2018)) in the training set, and to learn shortcut solutions (Geirhos et al. 2020) rather than the solutions intended by datasets. Shortcut learning by NLU models causes poor generalization to anti-shortcut examples where the spurious correlations no longer hold and the learned shortcuts fail (McCoy, Pavlick, and Linzen 2019; Gardner et al. 2020).
To date, question answering (QA) models for reading comprehension have been reported to learn several types of shortcut solutions (Jia and Liang 2017; Sugawara et al. 2018; Ko et al. 2020). Various approaches have been proposed to mitigate these problems in QA, such as data augmentation (Shinoda, Sugawara, and Aizawa 2021a) and debiasing methods (Ko et al. 2020; Wu et al. 2020). However, those methods have not fully taken the characteristics of shortcuts themselves into account.
We assume that studying the learnability of each shortcut in QA datasets should be useful to construct training sets or design data augmentation methods for mitigating the problem. This assumption is supported by the work by Lovering et al. (2021), who show that the learnability of a shortcut and the proportion of anti-shortcut examples in a training set are the two important factors that affect the shortcut learning behavior in grammatical tasks.
To verify our assumption, we first examine the learnability of representative shortcuts in extractive and multiple-choice QA. In addition, we investigate how the learnability of a shortcut is related to the proportion of anti-shortcut examples required to mitigate the shortcut learning. Namely, we aim to answer the following research questions (RQs): 1) When every shortcut is valid for answering every question in biased training sets, which shortcut do QA models prefer to learn? 2) Why are certain shortcuts learned in preference to other shortcuts from the biased training sets? 3) How quantitatively different is the learnability for each shortcut? 4) What proportion of anti-shortcut examples in a training set is required to avoid learning a shortcut? Is it related to the learnability of shortcuts?
We answer the first question with behavioral tests using biased training sets as illustrated in Figure 1. These experiments reveal which shortcut solution is preferred by QA models when every shortcut is applicable to the biased training sets. We show that, in extractive QA, the shortcut based on answer-position is preferred over the word matching and question-answer type matching shortcuts. In multiple-choice QA, the shortcut exploiting word-label correlations is preferred to the one using lexical overlap.
We answer the second question qualitatively from the perspective of loss landscapes. We show that the flatness and depth of the loss surface around each shortcut solution in the parameter space can explain the preference.
To quantitatively explain the preference for shortcuts, we answer the third question by quantifying the learnability of shortcuts using minimum description lengths. We show that the availability of a more preferred shortcut in a dataset tends to make the task easier to learn.
Lastly, we answer the fourth question by simply changing the proportion of anti-shortcut examples in training sets and showing how the gap between the scores on shortcut and anti-shortcut examples changes. We show that more learnable shortcuts require a smaller proportion of anti-shortcut examples during training to achieve comparable performance on shortcut and anti-shortcut examples. Moreover, we find that only controlling the proportion of anti-shortcut examples is not sufficient to avoid learning less-learnable shortcuts. Our findings suggest that the learnability of shortcuts should be considered when designing mitigation methods.
Shortcut Solutions

Notation
When a training or test set D of a dataset is given, we define a rule-based function for each shortcut k to split D into shortcut examples D_k that are solvable with shortcut k and anti-shortcut examples D̄_k that are not solvable with shortcut k. Our rule-based functions are deterministic and easy to reproduce, while partial-input baselines that are widely used for detecting shortcut examples (Gururangan et al. 2018) depend on model choice and random seeds.
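As a minimal sketch (the data layout and helper names are our assumptions, not the authors' released code), the split can be implemented as a deterministic predicate applied to every example; the answer-position shortcut defined below is shown as one concrete predicate:

def split_by_shortcut(dataset, is_shortcut_example):
    # Partition a dataset D into shortcut examples D_k and
    # anti-shortcut examples D̄_k.
    d_k, d_k_bar = [], []
    for example in dataset:
        (d_k if is_shortcut_example(example) else d_k_bar).append(example)
    return d_k, d_k_bar

def answer_in_first_sentence(example, split_sentences):
    # Answer-position predicate: the character-level answer span starts
    # inside the first sentence of the context.
    first_sentence = split_sentences(example["context"])[0]
    return example["answer_start"] < len(first_sentence)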
Examined Shortcuts in Extractive QA
For extractive QA, we compared and analyzed the following three shortcuts, which were found in the existing literature.
Answer-Position
Finding answers from the first sentence (Ko et al. 2020): When QA models are trained on examples where answers are contained in the first sentence of the context, they learn to extract answers from the first sentence. (k = Position)

Word Matching

Finding the answer from the most similar sentence (Sugawara et al. 2018): When an answer is contained in the sentence that is the most similar to the question, simple word matching is sufficient to find the correct answer. We define the most similar sentence as the one that contains the longest n-gram in common with the question. (k = Word)
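A rough sketch of this definition (our reading of it, not the authors' exact code) selects the sentence that shares the longest common token n-gram with the question:

def longest_common_ngram(a, b):
    # Length of the longest contiguous token sequence shared by token
    # lists a and b (longest-common-substring dynamic programming).
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
                best = max(best, dp[i][j])
    return best

def most_similar_sentence(question, sentences):
    q = question.lower().split()
    return max(sentences, key=lambda s: longest_common_ngram(q, s.lower().split()))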
Type Matching
Matching question and answer types (Weissenborn, Wiese, and Seiffe 2017): When the entity type of the answer to the question can be specified, and the textual span corresponding to the expected answer type appears only once in the context, models can answer the question correctly by simply extracting the phrase of that entity type. When the context contains two or more named entities of the same type as the answer, we classify the example into D̄_k. To define this shortcut rigorously, we omit answers that are not named entities. We used spaCy (Honnibal et al. 2020) for named entity recognition. (k = Type)
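The following sketch illustrates the test with spaCy (the pipeline name and the exact matching of answer spans to entities are our simplifications):

import spacy

nlp = spacy.load("en_core_web_sm")  # assumed English pipeline

def classify_type_example(context, answer):
    # Returns True for D_Type, False for D̄_Type, and None when the answer
    # is not recognized as a named entity (such examples are omitted).
    answer_ents = list(nlp(answer).ents)
    if not answer_ents:
        return None
    answer_type = answer_ents[0].label_
    same_type = [e for e in nlp(context).ents if e.label_ == answer_type]
    # Type matching alone suffices only if exactly one span of the
    # expected entity type appears in the context.
    return len(same_type) == 1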
Examined Shortcuts in Multiple-choice QA
For multiple-choice QA, we defined and analyzed the following two shortcuts. We adopted the two shortcuts following the work on natural language inference (NLI) (Gururangan et al. 2018; McCoy, Pavlick, and Linzen 2019) because multiple-choice QA and NLI are similar tasks as models predict whether the context+question (premise) entails the option (hypothesis).
Word-label Correlation
Previous studies have shown that multiple-choice QA models can even make correct predictions with options only (Sugawara et al. 2020; Yu et al. 2020). NLI models can similarly make correct predictions with hypotheses only because certain words such as negation in hypotheses are highly correlated with labels (Gururangan et al. 2018). When considered in relation to the hypothesis-only bias in NLI, we assumed that multiple-choice QA datasets contain words in options that are highly correlated with binary labels. Based on this assumption, we attempt to identify words in options that are highly correlated with the labels to define a realistic shortcut that exploits the word-label correlation. Gardner et al. (2021) assumed that no single feature by itself should be informative about the class label. Here, we generally follow their assumption. We use z-statistics proposed by Gardner et al. (2021) to identify words w in options whose conditional probability p(y|w) significantly deviates from the uniform distribution. Specifically, we compute the z-statistics as
z* = (p(y|w) − p_0) / √(p_0 (1 − p_0) / n),   (1)
where p_0 is the uniform distribution of label y, n is the frequency of word w, and p(y|w) is the empirical distribution over the n samples where word w is contained in the options. p_0 is 1/4 in the RACE and ReClor datasets because they have four options for each question. The top-7 words with the highest z-statistics in RACE and ReClor are shown in Table 1. We choose the top-1 word for the analysis of the word-label correlation shortcut for simplicity. (k = Top-1)
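A sketch of Eq. (1) over a multiple-choice training set is given below; the per-option counting scheme, where p(y|w) is taken as the empirical probability that the option containing w is the correct one, is our assumption:

import math
from collections import Counter

def z_statistics(examples, p0=0.25):
    # examples: list of (options, answer_idx) pairs; options is a list of strings.
    n = Counter()          # how often word w occurs in any option
    n_correct = Counter()  # how often w occurs in the correct option
    for options, answer_idx in examples:
        for i, option in enumerate(options):
            for w in set(option.lower().split()):
                n[w] += 1
                if i == answer_idx:
                    n_correct[w] += 1
    return {w: (n_correct[w] / n[w] - p0) / math.sqrt(p0 * (1 - p0) / n[w])
            for w in n}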
Lexical Overlap

NLI models exploit the lexical overlap between premise and hypothesis to make predictions (McCoy, Pavlick, and Linzen 2019). We assume that multiple-choice QA models can learn a similar shortcut solution using lexical overlap. We define the lexical overlap shortcut as judging the option that has the maximum lexical overlap with the context+question among the options to be the answer.
We define the lexical overlap as the ratio of the common unigrams contained in both sequences to the number of words in an option. (k = Overlap)
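A minimal sketch of this shortcut as a standalone predictor (tokenization details are our assumptions):

def overlap_ratio(option, source_text):
    option_tokens = option.lower().split()
    source_tokens = set(source_text.lower().split())
    # Ratio of the option's unigrams that also appear in context+question.
    return sum(t in source_tokens for t in option_tokens) / max(len(option_tokens), 1)

def predict_by_overlap(context, question, options):
    source = context + " " + question
    return max(range(len(options)), key=lambda i: overlap_ratio(options[i], source))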
Experiments

Experimental Setup
Datasets For extractive QA, we used SQuAD 1.1 (Rajpurkar et al. 2016) and NaturalQuestions (Kwiatkowski et al. 2019), which contain more than a thousand examples in the biased training sets in Figure 1. For multiple-choice QA, we used RACE (Lai et al. 2017) and ReClor (Yu et al. 2020), where option-only models can perform better than the random baselines (Sugawara et al. 2020; Yu et al. 2020), suggesting that options in these datasets have unintended biases.
Models We used BERT-base (Devlin et al. 2019) and RoBERTa-base (Liu et al. 2019) as encoders, which are widely adopted for extractive and multiple-choice QA (Yu et al. 2020). Task-specific output layers were added on top of the encoders. For extractive QA, models output the probability distributions of the start and end positions of answer spans over context tokens. For multiple-choice QA, models predict the probability distribution of the correct option over four options. The models were trained with cross-entropy loss minimization. Except for the training steps, we followed the hyperparameters suggested by the original papers. 1
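One plausible instantiation of the two task heads with HuggingFace Transformers (a sketch, not necessarily the authors' exact setup):

from transformers import (AutoModelForMultipleChoice,
                          AutoModelForQuestionAnswering, AutoTokenizer)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# Extractive QA: a span-prediction head over the context tokens.
extractive_model = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")
# Multiple-choice QA: a scoring head applied to each (context+question, option) pair.
choice_model = AutoModelForMultipleChoice.from_pretrained("bert-base-uncased")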
Evaluation Metrics For extractive QA, we used the F1 score as the evaluation metric, whereas for multiple-choice QA, we used accuracy.
Learning from Biased Training Sets
To compare the learnability of the examined shortcuts, we first answer the following research question (RQ).

RQ1 When every shortcut is valid for answering every question in biased training sets, which shortcut do QA models prefer to learn?
To answer this question, we conducted behavioral tests by training on a biased training set and testing on unbiased test sets as illustrated in Figure 1.
The important factors of shortcut learning are 1) the frequency of anti-shortcut examples in a training set and 2) how easy it is to learn the shortcut from shortcut examples (Lovering et al. 2021). In our biased training sets, all the examples are equally solvable with the examined shortcuts. Therefore, our biased training enabled the impact of pure learnability to be compared.
Setup
We first trained the models on D_Position ∩ D_Word ∩ D_Type sampled from the training sets. Then, the models were evaluated on subsets such as D_Position ∩ D̄_Word ∩ D̄_Type sampled from the evaluation sets to clarify which shortcut the models learn preferentially. To gain insights into the process of learning shortcut solutions, we also examined the scores during training.
Results of Extractive QA

Figure 2 (left) shows the F1 score on each subset of the extractive QA datasets during training. We assume that the higher the score on a subset where only one of the three shortcuts is valid, the more preferentially the model learns that shortcut.
Regardless of the datasets and models, the F1 score on D_Position ∩ D̄_Word ∩ D̄_Type is higher than the F1 scores on D̄_Position ∩ D_Word ∩ D̄_Type and D̄_Position ∩ D̄_Word ∩ D_Type throughout the training. This observation supports that, among the three, the shortcut using answer position is the most learnable.
Moreover, the scores on D_Position ∩ D̄_Word ∩ D̄_Type increased significantly during the first several hundred training steps. This observation is consistent with the experimental (Utama, Moosavi, and Gurevych 2020; Lai et al. 2021) and theoretical results (Hu et al. 2020); neural networks learn simpler functions in the early phase of training.
Conversely, the F1 scores on D̄_Position ∩ D_Word ∩ D̄_Type and D̄_Position ∩ D̄_Word ∩ D_Type were higher than that on D̄_Position ∩ D̄_Word ∩ D̄_Type. If the models exclusively learned the answer-position shortcut, the scores on these subsets would be similarly low regardless of the availability of the word and type matching shortcuts. Therefore, this observation implies that the models did not exclusively learn only one shortcut, but a mixture of multiple shortcuts.
Of the two models, RoBERTa generalized better to D̄_Position ∩ D̄_Word ∩ D̄_Type, indicating that RoBERTa is able to learn sophisticated solutions other than the predefined shortcuts. As BERT and RoBERTa have the same model architecture, this observation shows that initialization points also affect the shortcut learning behavior.
Results of Multiple-choice QA

Figure 2 (right) shows the accuracy curve on each subset of the multiple-choice QA datasets during training. At the end of the training, regardless of the models and the datasets, the models learned to exploit word-label correlations more preferentially than lexical overlap, because the accuracy on D_Top-1 ∩ D̄_Overlap is ultimately greater than that on D̄_Top-1 ∩ D_Overlap.
Interestingly, learning the shortcut using lexical overlap conversely took precedence over the shortcut using word-label correlation at the early stage of the training. This may be because recognizing the dataset-specific word-label correlation requires hundreds of training steps to accumulate statistical evidence, while transformer-based language models might be originally equipped to recognize lexical overlap via self-attention (Vaswani et al. 2017).
Visualizing the Loss Landscape

RQ2 Why are certain shortcuts learned in preference to other shortcuts from the biased training sets?
We attempt to answer this question from the perspective of loss landscapes, as done by Scimeca et al. (2022) in image classification tasks. Specifically, we visualize the loss landscapes around shortcut solutions and compare them. The loss values were computed on subsets that are used as the biased training sets in the previous behavioral tests. By doing so, we aim to compare the flatness of loss surfaces and gain insights into the preference.
Setup To visualize the loss landscape around a shortcut solution in the parameter space, we prepared models that use that shortcut. We assume that models trained on subsets where only one shortcut is valid learn to use that shortcut. For example, models trained on D_Position ∩ D̄_Word ∩ D̄_Type are likely to exclusively learn the answer-position shortcut. We verified this assumption by confirming that the models achieved the best performance on the same subsets of the evaluation sets as the training sets. For visualization, we first randomly selected two directions in the parameter space. We then displayed the loss values computed on D_Position ∩ D_Word ∩ D_Type and D_Top-1 ∩ D_Overlap on the hyperplane spanned by the two directions, following Li et al. (2018).
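A minimal sketch of this visualization (function names are ours; Li et al. (2018) use filter-wise normalization of the random directions, which we approximate per parameter tensor):

import torch

def loss_surface(model, loss_fn, steps=21, radius=1.0):
    # loss_fn(model) is assumed to return the loss on a fixed biased subset.
    theta = [p.detach().clone() for p in model.parameters()]

    def random_direction():
        d = [torch.randn_like(p) for p in theta]
        # Rescale each tensor so its norm matches the parameter norm.
        return [di * (p.norm() / (di.norm() + 1e-10)) for di, p in zip(d, theta)]

    d1, d2 = random_direction(), random_direction()
    grid = torch.linspace(-radius, radius, steps)
    surface = torch.zeros(steps, steps)
    with torch.no_grad():
        for i, a in enumerate(grid):
            for j, b in enumerate(grid):
                for p, t, u, v in zip(model.parameters(), theta, d1, d2):
                    p.copy_(t + a * u + b * v)  # theta + a*d1 + b*d2
                surface[i, j] = loss_fn(model)
        for p, t in zip(model.parameters(), theta):  # restore the trained weights
            p.copy_(t)
    return surface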
Results
The visualization results for extractive and multiple-choice QA are displayed in Figures 3 and 4. The center of each figure represents each shortcut solution.
The results show that the QA models that learn the preferred shortcuts (Position and Top-1) tend to lie in flatter and deeper loss surfaces. 2 The orders of the flatness and depth of the loss surfaces are roughly correlated with the preferential order of learning shortcuts in the previous behavioral tests. These observations explain why models trained on D_Position ∩ D_Word ∩ D_Type and D_Top-1 ∩ D_Overlap learned to use the answer-position and word-label correlation shortcuts, respectively.

2 We follow the definition of flatness as the size of the connected region in the parameter space where the loss remains approximately constant (Hochreiter and Schmidhuber 1997).
Rissanen Shortcut Analysis
RQ3 How quantitatively different is the learnability of each shortcut?

By answering this question, we aim to quantitatively explain the preference for shortcuts. To this end, we approximately computed the minimum description length (MDL) (Rissanen 1978) on the biased datasets where only one of the predefined shortcuts is applicable, such as D_Position ∩ D̄_Word ∩ D̄_Type, and investigated how the MDL changed for each shortcut. Formally, MDL measures the number of bits needed to communicate the labels y given the inputs x in a biased subset of a dataset. We name this method Rissanen Shortcut Analysis (RSA), after the father of the MDL principle. Intuitively, RSA is a simple yet effective way to examine, in a theoretically grounded manner, how much the availability of a shortcut in a training set makes the task easier to learn.
Setup We used the online code (Rissanen 1984) to approximate MDL. In this algorithm, a training set is given to a model in a sequence of portions. At each step, a model is trained from scratch on the portions given up to that point and is used to predict the next portion. Practically, when the dataset is split into S subsets with the time steps set to {t_1, t_2, ..., t_S} 3, the MDL is estimated with the online code as follows:
L = Σ_{i=0}^{S−1} Σ_{n=t_i+1}^{t_{i+1}} − log_2 p_{θ_i}(y_n | x_n),   (2)
where θ_i is the parameter of a QA model trained on {(x_j, y_j)}_{j=1}^{t_i} and p_{θ_0} is the uniform distribution. Intuitively, the online code is related to the area under the loss curve and measures how much effort is required for the training. See Voita and Titov (2020) and Perez, Kiela, and Cho (2021) for more details about the online code. The sizes of the biased datasets were 1400, 4000, 3000, and 300 for SQuAD 1.1, NaturalQuestions, RACE, and ReClor, respectively. The size was set equally for each shortcut within a dataset.
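A sketch of the estimator in Eq. (2); the training callback and the model's log2_prob interface are our assumptions:

import math

def online_code(blocks, train_from_scratch, num_classes=4):
    # blocks: the dataset split at time steps t_1 < ... < t_S.
    # The first block is encoded with the uniform distribution p_{theta_0};
    # num_classes=4 matches the four-option multiple-choice setting, while
    # extractive QA would use a uniform distribution over candidate spans.
    mdl = len(blocks[0]) * math.log2(num_classes)
    seen = list(blocks[0])
    for block in blocks[1:]:
        model = train_from_scratch(seen)     # theta_i, trained on the prefix
        for x, y in block:
            mdl += -model.log2_prob(y, x)    # -log2 p_{theta_i}(y_n | x_n)
        seen.extend(block)
    return mdl  # description length in bits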
Results
The results are shown in Table 2. Note that the MDLs cannot be compared across datasets because they depend on the dataset size t_S, as shown in Eq. 2. For SQuAD 1.1 and NaturalQuestions, the availability of the answer-position shortcut made the dataset the easiest to learn among the three shortcuts, with the exception of RoBERTa on SQuAD 1.1. This exception may be because RoBERTa can learn the word matching shortcut better than BERT, as shown in Figure 2. The MDLs for the word and type matching shortcuts differed between SQuAD 1.1 and NaturalQuestions. For RACE and ReClor, the availability of the word-label correlation shortcut achieved lower MDLs than that of the lexical overlap shortcut. Except for some cases, these observations align with the results of our behavioral tests in Figure 2 and the visualization in Figures 3 and 4.
In addition, RoBERTa consistently lowered the MDLs compared to BERT in all the cases. Given that RoBERTa was more robust to anti-shortcut examples than BERT in Figure 2, the MDLs may also reflect the generalization capability of models as well as the characteristics of shortcuts.
Balancing Shortcut and Anti-shortcut Examples
RQ4 What proportion of anti-shortcut examples in a training set is required to avoid learning a shortcut? Is it related to the learnability of shortcuts?
One of the simplest approaches to mitigate shortcut learning is to reduce the dataset bias by adding anti-shortcut examples to training sets manually or automatically. When a training set contains unintended biases or annotation artifacts, and the majority is solvable with shortcut solutions, models that adopt the shortcuts achieve low loss on the training set. Therefore, increasing the proportion of anti-shortcut examples is a promising approach to avoid learning shortcuts (Lovering et al. 2021).
In addition, Lovering et al. (2021) showed that the required proportion of anti-shortcut examples is related to the extractability of shortcut cues. We assume that there should be a similar relationship in QA datasets. If we know how many anti-shortcut examples are required to avoid learning shortcuts, this knowledge can be utilized to construct new QA training sets or to design data augmentation approaches (Shinoda, Sugawara, and Aizawa 2021a) to make QA models learn more generalizable solutions.
Setup We changed the proportion of anti-shortcut examples from 0 to 1 with the sizes of the training sets fixed at 5k and 4k for extractive and multiple-choice QA, respectively. For example, for the answer-position shortcut, the proportion of D̄_Position was changed from 0 to 1, and the scores on D_Position and D̄_Position were reported. We conducted the experiment for each shortcut separately on SQuAD 1.1 and RACE using BERT-base.
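The mixing procedure can be sketched as follows (a minimal version; the sampling details are our assumptions):

import random

def mix_training_set(shortcut_examples, anti_shortcut_examples, size, rho, seed=0):
    # rho: proportion of anti-shortcut examples in the final training set.
    rng = random.Random(seed)
    n_anti = int(round(rho * size))
    mixed = (rng.sample(anti_shortcut_examples, n_anti)
             + rng.sample(shortcut_examples, size - n_anti))
    rng.shuffle(mixed)
    return mixed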
Results Figures 5 and 6 show the results. When the training sets consist of only shortcut examples, i.e., the x-axis value is 0, the gaps between the scores on D_k and D̄_k are significant in all cases. When the proportion of anti-shortcut examples is 0.7, 0.8, and 0.9, the scores on D_k and D̄_k are equal for Position, Top-1, and Overlap, respectively. At these points, the models do not use the shortcut but a solution that generalizes equally to both subsets. In contrast, increasing the proportion of anti-shortcut examples beyond these points degraded the scores on D_k.
When considering the learnability of each shortcut studied in our previous experiments, it is clear that more learnable shortcuts require a smaller proportion of anti-shortcut examples to achieve comparable performance on shortcut and anti-shortcut examples. Moreover, for less-learnable shortcuts, such as Word and Type, we find that the score on D_k is greater than that on D̄_k for almost all the points. This result suggests that controlling the proportion of anti-shortcut examples alone is insufficient to mitigate the learning of less-learnable shortcuts. For these less-learnable shortcuts, we may need to apply model-centric approaches such as that of Clark, Yatskar, and Zettlemoyer (2019) to further mitigate the gap.
Related Work
Shortcut learning in deep neural networks (DNNs) (Geirhos et al. 2020) has received significant interest because it degrades the generalization of DNNs, causing humans to lose trust in AI (Jacovi et al. 2021). QA models for reading comprehension are no exception. Although QA models have achieved human-level performance on some benchmarks (Rajpurkar et al. 2016), they lack robustness to challenging test sets such as adversarial attacks (Jia and Liang 2017), questions that cannot be solved with partial-input baselines (Sugawara et al. 2018), paraphrased questions (Gan and Ng 2019), answers in unseen positions (Ko et al. 2020), and natural perturbations (Gardner et al. 2020).
The causes of this problem can be grouped into two categories: dataset and model. For the data-centric cause, existing studies have found that a substantial number of examples in QA datasets are solvable with question-answer type matching (Weissenborn, Wiese, and Seiffe 2017) and word matching (Sugawara et al. 2018) for extractive QA, and partial-input baselines (Sugawara et al. 2020; Yu et al. 2020) for multiple-choice QA. As such, various shortcut solutions in QA have been studied individually. To counter these problems, data augmentation approaches have been studied in QA. Jiang and Bansal (2019) constructed adversarial documents. Bartolo et al. (2020) proposed model-in-the-loop annotation. Shinoda, Sugawara, and Aizawa (2021b) found that automatic question-answer pair generation can improve the robustness.
For the model-centric cause, several approaches have been applied to QA. Ko et al. (2020) used ensemble-based methods to unlearn an answer-position shortcut. Wu et al. (2020) proposed concurrent modeling of multiple biases. Liu et al. (2020) used virtual adversarial training to improve the robustness to adversarial attacks. Wang et al. (2021) introduced mutual-information-based regularizers.
In contrast to the above studies, several studies have attempted to understand shortcut learning. Lai et al. (2021) found that shortcut solutions are learned at an earlier stage of training than a sophisticated solution on SQuAD. Lovering et al. (2021) showed that the more extractable a shortcut cue is with a probing classifier, the more anti-shortcut examples are needed to achieve low error on anti-shortcut examples in simple grammatical tasks. Scimeca et al. (2022) compared several shortcut cues in image classification tasks.
We also attempt to understand the characteristics of shortcuts in extractive and multiple-choice QA from the perspective of learnability, that is, how easy it is to learn a shortcut. To the best of our knowledge, we are the first to compare the difference in learnability across shortcuts in QA. Moreover, our study suggests that the learnability of shortcuts should be considered when designing mitigation methods. This perspective is lacking in the existing mitigation studies.
Conclusion
We deepened the understanding of shortcut solutions in extractive and multiple-choice QA by comparing the learnability of shortcuts, that is, how easy it is to learn a shortcut, in a series of experiments. We first showed that when every shortcut is applicable to a training set, extractive QA models prefer the answer-position shortcut, whereas multiple-choice QA models prefer the word-label correlation shortcut among the examined shortcuts. From the perspective of the parameter space, QA models that learn the preferred shortcuts tend to lie in flatter and deeper loss surfaces, which explains the cause of the preference. To quantify the learnability of each shortcut, we estimated the MDLs on biased datasets where only one shortcut is valid. The experimental results showed that the availability of more preferred shortcuts tends to make the task easier to learn. To mitigate the shortcut learning behavior, we showed that more learnable shortcuts require a smaller proportion of anti-shortcut examples during training. The results also suggested that controlling the proportion of anti-shortcut examples alone is insufficient to avoid learning less-learnable shortcuts, such as word and type matching in extractive QA. We claim that approaches for mitigating shortcut learning should be appropriately designed according to the learnability of shortcuts.
Figure 1: An illustration of the behavioral test to reveal which shortcut solution QA models prefer to learn.

Figure 2: Left: F1 score on each subset of the SQuAD 1.1 and NaturalQuestions evaluation sets during training. Right: Accuracy on each subset of the RACE and ReClor test sets during training. The mean±standard deviations over 5 random seeds are displayed.

Figure 3: Visualization of loss landscapes around each shortcut in extractive QA datasets. The x and y directions are randomly selected in the parameter space. The center of the surface corresponds to the model that uses each shortcut.

Figure 4: Visualization of loss landscapes around each shortcut in multiple-choice QA datasets.

Figure 5: F1 scores on shortcut and anti-shortcut examples from SQuAD with different proportions of anti-shortcut examples in the training set, with the size set to 5k. The mean±standard deviations over 5 random seeds are displayed.

Figure 6: Accuracies on shortcut and anti-shortcut examples from RACE with different proportions of anti-shortcut examples in the training set, with the size set to 4k. The mean±standard deviations over 5 random seeds are displayed.
Shortcut           BERT            RoBERTa
SQuAD 1.1
Position           4.65 ± 0.12     4.22 ± 0.23
Word               4.94 ± 0.24     3.73 ± 0.17
Type               5.75 ± 0.30     4.52 ± 0.06
NaturalQuestions
Position           6.28 ± 0.15     5.37 ± 0.24
Word               12.24 ± 0.14    9.08 ± 0.20
Type               11.76 ± 0.55    8.83 ± 0.38
RACE
Top-1              0.52 ± 0.34     0.41 ± 0.29
Overlap            4.16 ± 0.55     3.55 ± 0.10
ReClor
Top-1              0.33 ± 0.07     0.28 ± 0.03
Overlap            0.55 ± 0.03     0.52 ± 0.02

Table 2: Minimum description lengths (kbits) on biased datasets where only one of the examined shortcut solutions is valid. The means±standard deviations over five random seeds are reported.
1 Our code is publicly available at https://github.com/KazutoshiShinoda/ShortcutLearnability.
3 The time steps were 0.1, 0.2, 0.4, 0.8, 1.6, 3.2, 6.25, 12.5, 25, 50, and 100 percent of the datasets, following Voita and Titov (2020).
Acknowledgements

We would like to thank the anonymous reviewers for their valuable comments. This work was supported by JSPS KAKENHI Grant Numbers 21H03502, 22J13751, and 22K17954. This work was also supported by NEDO SIP-2 "Big-data and AI-enabled Cyberspace Technologies".
References

Bartolo, M.; Roberts, A.; Welbl, J.; Riedel, S.; and Stenetorp, P. 2020. Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension. Transactions of the Association for Computational Linguistics, 8: 662-678.

Clark, C.; Yatskar, M.; and Zettlemoyer, L. 2019. Don't Take the Easy Way Out: Ensemble Based Methods for Avoiding Known Dataset Biases. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 4069-4082. Hong Kong, China: Association for Computational Linguistics.

Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 4171-4186. Minneapolis, Minnesota: Association for Computational Linguistics.

Gan, W. C.; and Ng, H. T. 2019. Improving the Robustness of Question Answering Systems to Question Paraphrasing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 6065-6075. Florence, Italy: Association for Computational Linguistics.

Gardner, M.; Artzi, Y.; Basmov, V.; Berant, J.; Bogin, B.; Chen, S.; Dasigi, P.; Dua, D.; Elazar, Y.; Gottumukkala, A.; Gupta, N.; Hajishirzi, H.; Ilharco, G.; Khashabi, D.; Lin, K.; Liu, J.; Liu, N. F.; Mulcaire, P.; Ning, Q.; Singh, S.; Smith, N. A.; Subramanian, S.; Tsarfaty, R.; Wallace, E.; Zhang, A.; and Zhou, B. 2020. Evaluating Models' Local Decision Boundaries via Contrast Sets. In Findings of the Association for Computational Linguistics: EMNLP 2020, 1307-1323. Online: Association for Computational Linguistics.

Gardner, M.; Merrill, W.; Dodge, J.; Peters, M.; Ross, A.; Singh, S.; and Smith, N. A. 2021. Competency Problems: On Finding and Removing Artifacts in Language Data. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 1801-1813. Online and Punta Cana, Dominican Republic: Association for Computational Linguistics.

Geirhos, R.; Jacobsen, J.-H.; Michaelis, C.; Zemel, R.; Brendel, W.; Bethge, M.; and Wichmann, F. A. 2020. Shortcut learning in deep neural networks. Nature Machine Intelligence, 2(11): 665-673.

Gururangan, S.; Swayamdipta, S.; Levy, O.; Schwartz, R.; Bowman, S.; and Smith, N. A. 2018. Annotation Artifacts in Natural Language Inference Data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), 107-112. New Orleans, Louisiana: Association for Computational Linguistics.

Hochreiter, S.; and Schmidhuber, J. 1997. Flat Minima. Neural Computation, 9(1): 1-42.

Honnibal, M.; Montani, I.; Van Landeghem, S.; and Boyd, A. 2020. spaCy: Industrial-strength Natural Language Processing in Python.

Hu, W.; Xiao, L.; Adlam, B.; and Pennington, J. 2020. The Surprising Simplicity of the Early-Time Learning Dynamics of Neural Networks. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS'20. Red Hook, NY, USA: Curran Associates Inc. ISBN 9781713829546.

Jacovi, A.; Marasović, A.; Miller, T.; and Goldberg, Y. 2021. Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT '21, 624-635. New York, NY, USA: Association for Computing Machinery. ISBN 9781450383097.

Jia, R.; and Liang, P. 2017. Adversarial Examples for Evaluating Reading Comprehension Systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 2021-2031. Copenhagen, Denmark: Association for Computational Linguistics.
Avoiding Reasoning Shortcuts: Adversarial Evaluation, Training, and Model Development for Multi-Hop QA. Y Jiang, M Bansal, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsJiang, Y.; and Bansal, M. 2019. Avoiding Reasoning Short- cuts: Adversarial Evaluation, Training, and Model Develop- ment for Multi-Hop QA. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2726-2736. Florence, Italy: Association for Computational Linguistics.
Look at the First Sentence: Position Bias in Question Answering. M Ko, J Lee, H Kim, G Kim, J Kang, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)OnlineAssociation for Computational LinguisticsKo, M.; Lee, J.; Kim, H.; Kim, G.; and Kang, J. 2020. Look at the First Sentence: Position Bias in Question Answering. In Proceedings of the 2020 Conference on Empirical Meth- ods in Natural Language Processing (EMNLP), 1109-1121. Online: Association for Computational Linguistics.
T Kwiatkowski, J Palomaki, O Redfield, M Collins, A Parikh, C Alberti, D Epstein, I Polosukhin, J Devlin, K Lee, K Toutanova, L Jones, M Kelcey, M.-W Chang, A M Dai, J Uszkoreit, Q Le, S Petrov, Natural Questions: A Benchmark for Question Answering Research. Transactions of the Association for Computational Linguistics. 7Kwiatkowski, T.; Palomaki, J.; Redfield, O.; Collins, M.; Parikh, A.; Alberti, C.; Epstein, D.; Polosukhin, I.; Devlin, J.; Lee, K.; Toutanova, K.; Jones, L.; Kelcey, M.; Chang, M.- W.; Dai, A. M.; Uszkoreit, J.; Le, Q.; and Petrov, S. 2019. Natural Questions: A Benchmark for Question Answering Research. Transactions of the Association for Computa- tional Linguistics, 7: 452-466.
RACE: Large-scale ReAding Comprehension Dataset From Examinations. G Lai, Q Xie, H Liu, Y Yang, E Hovy, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. the 2017 Conference on Empirical Methods in Natural Language ProcessingCopenhagen, DenmarkAssociation for Computational LinguisticsLai, G.; Xie, Q.; Liu, H.; Yang, Y.; and Hovy, E. 2017. RACE: Large-scale ReAding Comprehension Dataset From Examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 785- 794. Copenhagen, Denmark: Association for Computational Linguistics.
Why Machine Reading Comprehension Models Learn Shortcuts?. Y Lai, C Zhang, Y Feng, Q Huang, D Zhao, Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Online: Association for Computational LinguisticsLai, Y.; Zhang, C.; Feng, Y.; Huang, Q.; and Zhao, D. 2021. Why Machine Reading Comprehension Models Learn Shortcuts? In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, 989-1002. Online: Associ- ation for Computational Linguistics.
Visualizing the Loss Landscape of Neural Nets. H Li, Z Xu, G Taylor, C Studer, T Goldstein, S Bengio, H Wallach, H Larochelle, K Grauman, N Cesa-Bianchi, R Garnett, H Cheng, P He, W Chen, Y Wang, H Poon, J Gao, arXiv:2004.08994Adversarial training for large neural language models. Curran Associates, Inc. Liu31arXiv preprintAdvances in Neural Information Processing SystemsLi, H.; Xu, Z.; Taylor, G.; Studer, C.; and Goldstein, T. 2018. Visualizing the Loss Landscape of Neural Nets. In Bengio, S.; Wallach, H.; Larochelle, H.; Grauman, K.; Cesa-Bianchi, N.; and Garnett, R., eds., Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc. Liu, X.; Cheng, H.; He, P.; Chen, W.; Wang, Y.; Poon, H.; and Gao, J. 2020. Adversarial training for large neural lan- guage models. arXiv preprint arXiv:2004.08994.
Y Liu, M Ott, N Goyal, J Du, M Joshi, D Chen, O Levy, M Lewis, L Zettlemoyer, V Stoyanov, arXiv:1907.11692Roberta: A robustly optimized bert pretraining approach. arXiv preprintLiu, Y.; Ott, M.; Goyal, N.; Du, J.; Joshi, M.; Chen, D.; Levy, O.; Lewis, M.; Zettlemoyer, L.; and Stoyanov, V. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.
Predicting Inductive Biases of Pre-Trained Models. C Lovering, R Jha, T Linzen, E Pavlick, International Conference on Learning Representations. Lovering, C.; Jha, R.; Linzen, T.; and Pavlick, E. 2021. Pre- dicting Inductive Biases of Pre-Trained Models. In Interna- tional Conference on Learning Representations.
Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natural Language Inference. T Mccoy, E Pavlick, T Linzen, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsMcCoy, T.; Pavlick, E.; and Linzen, T. 2019. Right for the Wrong Reasons: Diagnosing Syntactic Heuristics in Natu- ral Language Inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 3428-3448. Florence, Italy: Association for Computational Linguistics.
Rissanen Data Analysis: Examining Dataset Characteristics via Description Length. E Perez, D Kiela, K Cho, PMLRProceedings of the 38th International Conference on Machine Learning. Meila, M.and Zhang, T.the 38th International Conference on Machine Learning139Perez, E.; Kiela, D.; and Cho, K. 2021. Rissanen Data Analysis: Examining Dataset Characteristics via Descrip- tion Length. In Meila, M.; and Zhang, T., eds., Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, 8500-8513. PMLR.
SQuAD: 100,000+ Questions for Machine Comprehension of Text. P Rajpurkar, J Zhang, K Lopyrev, P Liang, Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. the 2016 Conference on Empirical Methods in Natural Language ProcessingRajpurkar, P.; Zhang, J.; Lopyrev, K.; and Liang, P. 2016. SQuAD: 100,000+ Questions for Machine Comprehension of Text. In Proceedings of the 2016 Conference on Empir- ical Methods in Natural Language Processing, 2383-2392.
. Texas Austin, Association for Computational LinguisticsAustin, Texas: Association for Computational Linguistics.
Modeling by shortest data description. J Rissanen, Automatica. 145Rissanen, J. 1978. Modeling by shortest data description. Automatica, 14(5): 465-471.
Universal coding, information, prediction, and estimation. J Rissanen, IEEE Transactions on Information theory. 304Rissanen, J. 1984. Universal coding, information, predic- tion, and estimation. IEEE Transactions on Information the- ory, 30(4): 629-636.
Which Shortcut Cues Will DNNs Choose? A Study from the Parameter-Space Perspective. L Scimeca, S J Oh, S Chun, M Poli, Yun , S , International Conference on Learning Representations. Scimeca, L.; Oh, S. J.; Chun, S.; Poli, M.; and Yun, S. 2022. Which Shortcut Cues Will DNNs Choose? A Study from the Parameter-Space Perspective. In International Conference on Learning Representations.
Can Question Generation Debias Question Answering Models? A Case Study on Question-Context Lexical Overlap. K Shinoda, S Sugawara, A Aizawa, Proceedings of the 3rd Workshop on Machine Reading for Question Answering. the 3rd Workshop on Machine Reading for Question AnsweringPunta Cana, Dominican RepublicAssociation for Computational LinguisticsShinoda, K.; Sugawara, S.; and Aizawa, A. 2021a. Can Question Generation Debias Question Answering Models? A Case Study on Question-Context Lexical Overlap. In Pro- ceedings of the 3rd Workshop on Machine Reading for Ques- tion Answering, 63-72. Punta Cana, Dominican Republic: Association for Computational Linguistics.
Improving the Robustness of QA Models to Challenge Sets with Variational Question-Answer Pair Generation. K Shinoda, S Sugawara, A Aizawa, Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research Workshop. the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research WorkshopOnlineAssociation for Computational LinguisticsShinoda, K.; Sugawara, S.; and Aizawa, A. 2021b. Improv- ing the Robustness of QA Models to Challenge Sets with Variational Question-Answer Pair Generation. In Proceed- ings of the 59th Annual Meeting of the Association for Com- putational Linguistics and the 11th International Joint Con- ference on Natural Language Processing: Student Research Workshop, 197-214. Online: Association for Computational Linguistics.
What Makes Reading Comprehension Questions Easier?. S Sugawara, K Inui, S Sekine, A Aizawa, Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsSugawara, S.; Inui, K.; Sekine, S.; and Aizawa, A. 2018. What Makes Reading Comprehension Questions Easier? In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 4208-4219. Brussels, Bel- gium: Association for Computational Linguistics.
Assessing the Benchmarking Capacity of Machine Reading Comprehension Datasets. S Sugawara, P Stenetorp, K Inui, A Aizawa, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence34Sugawara, S.; Stenetorp, P.; Inui, K.; and Aizawa, A. 2020. Assessing the Benchmarking Capacity of Machine Reading Comprehension Datasets. Proceedings of the AAAI Confer- ence on Artificial Intelligence, 34(05): 8918-8927.
Unbiased look at dataset bias. A Torralba, A A Efros, CVPR 2011. Torralba, A.; and Efros, A. A. 2011. Unbiased look at dataset bias. In CVPR 2011, 1521-1528.
Towards Debiasing NLU Models from Unknown Biases. P A Utama, N S Moosavi, I Gurevych, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)OnlineAssociation for Computational LinguisticsUtama, P. A.; Moosavi, N. S.; and Gurevych, I. 2020. To- wards Debiasing NLU Models from Unknown Biases. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 7597-7610. On- line: Association for Computational Linguistics.
Voita, E.; and Titov, I. 2020. Information-Theoretic Probing with Minimum Description Length. A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, A N Gomez, L U Kaiser, I Polosukhin, I Guyon, U V Luxburg, S Bengio, H Wallach, R Fergus, S Vishwanathan, R Garnett, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing. the 2020 Conference on Empirical Methods in Natural Language ProcessingOnlineAssociation for Computational Linguistics30Advances in Neural Information Processing SystemsVaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L. u.; and Polosukhin, I. 2017. Attention is All you Need. In Guyon, I.; Luxburg, U. V.; Bengio, S.; Wallach, H.; Fergus, R.; Vishwanathan, S.; and Garnett, R., eds., Advances in Neural Information Process- ing Systems, volume 30. Curran Associates, Inc. Voita, E.; and Titov, I. 2020. Information-Theoretic Prob- ing with Minimum Description Length. In Proceedings of the 2020 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), 183-196. Online: Association for Computational Linguistics.
Info{BERT}: Improving Robustness of Language Models from An Information Theoretic Perspective. B Wang, S Wang, Y Cheng, Z Gan, R Jia, B Li, J Liu, D Weissenborn, G Wiese, L Seiffe, Proceedings of the 21st Conference on Computational Natural Language Learning. the 21st Conference on Computational Natural Language LearningCanadaAssociation for Computational LinguisticsInternational Conference on Learning Representations. VancouverWang, B.; Wang, S.; Cheng, Y.; Gan, Z.; Jia, R.; Li, B.; and Liu, J. 2021. Info{BERT}: Improving Robustness of Lan- guage Models from An Information Theoretic Perspective. In International Conference on Learning Representations. Weissenborn, D.; Wiese, G.; and Seiffe, L. 2017. Making Neural QA as Simple as Possible but not Simpler. In Pro- ceedings of the 21st Conference on Computational Natu- ral Language Learning (CoNLL 2017), 271-280. Vancou- ver, Canada: Association for Computational Linguistics.
Improving QA Generalization by Concurrent Modeling of Multiple Biases. M Wu, N S Moosavi, A Rücklé, I Gurevych, Findings of the Association for Computational Linguistics: EMNLP 2020. Online: Association for Computational LinguisticsWu, M.; Moosavi, N. S.; Rücklé, A.; and Gurevych, I. 2020. Improving QA Generalization by Concurrent Modeling of Multiple Biases. In Findings of the Association for Compu- tational Linguistics: EMNLP 2020, 839-853. Online: Asso- ciation for Computational Linguistics.
Semi-Supervised QA with Generative Domain-Adaptive Nets. Z Yang, J Hu, R Salakhutdinov, W Cohen, Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. the 55th Annual Meeting of the Association for Computational LinguisticsVancouver, CanadaAssociation for Computational Linguistics1Yang, Z.; Hu, J.; Salakhutdinov, R.; and Cohen, W. 2017. Semi-Supervised QA with Generative Domain-Adaptive Nets. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 1040-1050. Vancouver, Canada: Association for Computational Linguistics.
ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning. W Yu, Z Jiang, Y Dong, J Feng, International Conference on Learning Representations. ICLRYu, W.; Jiang, Z.; Dong, Y.; and Feng, J. 2020. ReClor: A Reading Comprehension Dataset Requiring Logical Rea- soning. In International Conference on Learning Represen- tations (ICLR).
| [] |
[
"Information Extraction from Broadcast News",
"Information Extraction from Broadcast News"
] | [
"Yoshihiko Gotoh y.gotoh@dcs.shef.ac.uk \nDepartment of Computer Science Regent Court\nUniversity of Sheffield\n211 Portobello StreetS1 4DPSheffieldUK\n",
"Steve Renals s.renals@dcs.shef.ac.uk \nDepartment of Computer Science Regent Court\nUniversity of Sheffield\n211 Portobello StreetS1 4DPSheffieldUK\n"
] | [
"Department of Computer Science Regent Court\nUniversity of Sheffield\n211 Portobello StreetS1 4DPSheffieldUK",
"Department of Computer Science Regent Court\nUniversity of Sheffield\n211 Portobello StreetS1 4DPSheffieldUK"
] | [
"Philosophical Transactions of the Royal Society of London, series A: Mathematical, Physical and Engineering Sciences"
] | This paper discusses the development of trainable statistical models for extracting content from television and radio news broadcasts. In particular we concentrate on statistical finite state models for identifying proper names and other named entities in broadcast speech. Two models are presented: the first represents name class information as a word attribute; the second represents both word-word and class-class transitions explicitly. A common n-gram based formulation is used for both models. The task of named entity identification is characterized by relatively sparse training data and issues related to smoothing are discussed. Experiments are reported using the DARPA/NIST Hub-4E evaluation for North American Broadcast News. | 10.1098/rsta.2000.0587 | [
"https://arxiv.org/pdf/cs/0003084v1.pdf"
] | 14,213,669 | cs/0003084 | 7a34d125151d6deac877b5b69ae7d62405eb88ab |
Information Extraction from Broadcast News
April 2000
Yoshihiko Gotoh y.gotoh@dcs.shef.ac.uk
Department of Computer Science, Regent Court
University of Sheffield
211 Portobello Street, Sheffield S1 4DP, UK
Steve Renals s.renals@dcs.shef.ac.uk
Department of Computer Science, Regent Court
University of Sheffield
211 Portobello Street, Sheffield S1 4DP, UK
Information Extraction from Broadcast News
Philosophical Transactions of the Royal Society of London, series A: Mathematical, Physical and Engineering Sciences
358(1769), April 2000. Keywords: named entity; information extraction; language modelling
This paper discusses the development of trainable statistical models for extracting content from television and radio news broadcasts. In particular we concentrate on statistical finite state models for identifying proper names and other named entities in broadcast speech. Two models are presented: the first represents name class information as a word attribute; the second represents both word-word and class-class transitions explicitly. A common n-gram based formulation is used for both models. The task of named entity identification is characterized by relatively sparse training data and issues related to smoothing are discussed. Experiments are reported using the DARPA/NIST Hub-4E evaluation for North American Broadcast News.
Introduction
Simple statistical models underlie many successful applications of speech and language processing. The most accurate document retrieval systems are based on unigram statistics. The acoustic model of virtually all speech recognition systems is based on stochastic finite state machines referred to as hidden Markov models (HMMs). The language (word sequence) model of state-of-the-art large vocabulary speech recognition systems uses an n-gram model (an $[n-1]$th order Markov model), where $n$ is typically 4 or less. Two important features of these simple models are their trainability and scalability: in the case of language modelling, model parameters are frequently estimated from corpora containing up to $10^9$ words. These approaches have been extensively investigated and optimized for speech recognition, in particular, resulting in systems that can perform certain tasks (e.g., large vocabulary dictation from a cooperative speaker) with a high degree of accuracy. More recently, similar statistical finite state models have been developed for spoken language processing applications beyond direct transcription to enable, for example, the production of structured transcriptions which may include punctuation or content annotation.
In this paper we discuss the development of trainable statistical models for extracting content from television and radio news broadcasts. In particular, we concentrate on named entity (NE) identification, a task which is reviewed in §2. Section 3 outlines a general statistical framework for NE identification, based on an n-gram model over words and classes. We discuss two formulations of this basic approach. The first ( §4) represents class information as a word attribute; the second ( §5) explicitly represents wordword and class-class transitions. In both cases we discuss the implementation of the model and present results using an evaluation framework based on North American broadcast news data. Finally, in §6, we discuss our work in the context of other approaches to NE identification in spoken language and outline some areas for future work.
Named Entity Identification

Review

Proper names account for around 9% of broadcast news output, and their successful identification would be useful for structuring the output of a speech recognizer (through punctuation, capitalization and tokenization), and as an aid to other spoken language processing tasks, such as summarization and database creation. The task of NE identification involves identifying and classifying those words or word sequences that may be classified as proper names, or as certain other classes such as monetary expressions, dates and times. This is not a straightforward problem. While Wednesday 1 September is clearly a date, and Alan Turing is a personal name, other strings, such as the day after tomorrow, South Yorkshire Beekeepers Association and Nobel Prize, are more ambiguous.

NE identification was formalized for evaluation purposes as part of the 5th Message Understanding Conference (MUC-5 1993), and the evaluation task definition has evolved since then. In this paper we follow the task definition specified for the recent broadcast news evaluation (referred to as Hub-4E IE-NE) sponsored by DARPA and NIST (Chinchor, Robinson, & Brown 1998). This specification defined seven classes of named entity: three types of proper name (<location>, <person> and <organization>), two types of temporal expression (<date> and <time>), and two types of numerical expression (<money> and <percentage>). According to this definition the following NE tags would be correct:

<date>Wednesday 1 September</date>
<person>Alan Turing</person>
the day after tomorrow
<organization>South Yorkshire Beekeepers Association</organization>
Nobel Prize
The day after tomorrow is not tagged as a date, since only "absolute" time or date expressions are recognized; Nobel is not tagged as a personal name, since it is part of a larger construct that refers to the prize. Similarly, South Yorkshire is not tagged as a location since it is part of a larger construct tagged as an organization.
Both rule-based and statistical approaches have been used for NE identification. Wakao, Gaizauskas, & Wilks (1996) and Hobbs, Appelt, Bear, Israel, Kameyama, Stickel, & Tyson (1997) adopted grammarbased approaches using specially constructed grammars, gazetteers of personal and company names, and higher level approaches such as name co-reference. Some grammar-based systems have utilized a trainable component, such as the Alembic system (Aberdeen, Burger, Day, Hirschman, Robinson, & Vilain 1995). The LTG system (Mikheev, Grover, & Moens 1998) employed probabilistic partial matching, in addition to a non-probabilistic grammar and gazetteer look-up. Bikel, Miller, Schwartz, & Weischedel (1997) introduced a purely trainable system for NE identification, which is discussed in greater detail in Bikel, Schwartz, & Weischedel (1999). This approach was based on an ergodic HMM (i.e., an HMM in which every state is reachable from every state) where the hidden states corresponded to NE classes, and the observed symbols corresponded to words. Training was performed using an NE annotated corpus, so the state sequence was known at training time. Thus likelihood maximization could be accomplished directly without need for the expectation-maximization (EM) algorithm. The transition probabilities of this model were conditioned on both the previous state and the previous word, and the emission probabilities attached to each state could be regarded as a word-level bigram for the corresponding NE class.
NE identification systems are evaluated using an unseen set of evaluation data: the hypothesised NEs are compared with those annotated in a human-generated reference transcription. 1 In this situation there are two possible types of error: type, where an item is tagged as the wrong kind of entity, and extent, where the wrong number of word tokens are tagged. For example, <location>South Yorkshire</location> Beekeepers Association has errors of both type and extent since the ground truth for this excerpt is <organization>South Yorkshire Beekeepers Association</organization>. These two error types each contribute 0.5 to the overall error count, and precision (P) and recall (R) can be calculated in the usual way. A weighted harmonic mean (P&R), sometimes called the F-measure (van Rijsbergen 1979), is often calculated as a single summary statistic:
$$P\&R = \frac{2RP}{R+P}.$$
In a recent evaluation, using newswire text, the best performing system (Mikheev et al. 1998) returned a P&R of 0.93. Although precision and recall are clearly informative measures, Makhoul, Kubala, Schwartz, & Weischedel (1999) have criticized the use of P&R, since it implicitly deweights missing and spurious identification errors compared with incorrect identification errors. They proposed an alternative measure, referred to as the slot error rate (SER), that weights three types of identification error equally. 2
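To make the relationship between these statistics concrete, the following minimal Python sketch computes them from the four error counts defined in footnote 2; the function name and dictionary output are illustrative assumptions rather than part of any evaluation software.

```python
# A minimal sketch of the summary statistics above, assuming the counts of
# correct (C), incorrect (I), missing (M) and spurious (S) identifications
# have already been obtained by aligning hypothesis and reference.

def ne_scores(C, I, M, S):
    recall = C / (C + I + M)          # fraction of reference slots recovered
    precision = C / (C + I + S)       # fraction of hypothesised slots correct
    p_and_r = 2 * recall * precision / (recall + precision)  # F-measure
    ser = (I + M + S) / (C + I + M)   # slot error rate: all errors weighted equally
    return {"R": recall, "P": precision, "P&R": p_and_r, "SER": ser}

# e.g. 80 correct, 10 incorrect, 10 missing and 5 spurious identifications:
print(ne_scores(80, 10, 10, 5))  # {'R': 0.8, 'P': ~0.842, 'P&R': ~0.821, 'SER': 0.25}
```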
Identifying Named Entities in Speech
A straightforward approach to identifying named entities in speech is to transcribe the speech automatically using a recognizer, then to apply a text-based NE identification method to the transcription. It is more difficult to identify NEs from automatically transcribed speech compared with text, since speech recognition output is missing features that may be exploited by "hard-wired" grammar rules or by attachment to vocabulary items, such as punctuation, capitalization and numeric characters. More importantly, no speech recognizer is perfect, and spoken language is rather different from written language. Although planned, low-noise speech (such as dictation, or a news bulletin read from a script) can be recognized with a word error rate (WER) of less than 10%, speech which is conversational, in a noisy (or otherwise cluttered) acoustic environment or from a different domain may suffer a WER in excess of 40%. Additionally, the natural unit seems to be the phrase, rather than the sentence, and phenomena such as disfluencies, corrections and repetitions are common. It could thus be argued that statistical approaches, that typically operate with limited context and very little notion of grammatical constructs, are more robust than grammar-based approaches. Appelt & Martin (1999) oppose this argument, and have developed a finite-state grammar-based approach for NE identification of broadcast news. However, this relied on large, carefully constructed lexica and gazetteers, and it is not clear how portable between domains this approach is. Some further discussion of rule-based approaches follows in §6.
Spoken NE identification was first demonstrated by Kubala, Schwartz, Stone, & Weischedel (1998), who applied the model of Bikel et al. (1999) to the output of a broadcast news speech recognizer. An important conclusion of that work -supported by the experiments reported here -was that the error of an NE identifier degraded linearly with WER, with the largest errors due to missing and spuriously tagged names. Since then several other researchers, including ourselves, have investigated the problem within the Hub-4E evaluation framework.
Evaluation of spoken NE identification is more complicated than for text, since there will be speech recognition errors as well as NE identification errors (i.e., the reference tags will not apply to the same word sequence as the hypothesised tags). This requires a word level alignment of the two word sequences, which may be achieved using a phonetic alignment algorithm developed for the evaluation of speech recognizers (Fisher & Fiscus 1993). Once an alignment is obtained, the evaluation procedure outlined above may be employed, with the addition of a third error type, content, caused by speech recognition errors. The same statistics (P&R and SER) can still be used, with the three error types contributing equally to the error count.
Statistical Framework
First, let V denote a vocabulary and C be a set of name classes. We consider that V is similar to a vocabulary for conventional speech recognition systems (i.e., typically containing tens of thousands of words, and no case information or other characteristics). In what follows, C contains the proper names, temporal and number expressions used in the Hub-4E IE-NE evaluation described above. When there is no ambiguity, these named entities are referred to as "name(s)". As a convention here, a class <other> is included in C for those words not belonging to any of the specified names. Because each name may consist of one word or a sequence of words, we also include a marker <+> in C, implying that the corresponding word is a part of the same name as the previous word. The following example is taken from a human-generated reference transcription for the 1997 Hub-4E Broadcast News evaluation data:
AT THE RONALD REAGAN CENTER <organization> IN SIMI VALLEY <location> CALIFORNIA <location>
The corresponding class sequence is <other> <+> <organization> <+> <+> <other> <location> <+> <location> because SIMI VALLEY and CALIFORNIA are considered two different names by the specification (Chinchor et al. 1998).
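As an illustration of this convention, the following Python sketch derives the class sequence from a tagged transcription. The (word, class, span id) triples are an assumed representation for the example, not the annotation format used in the evaluation; the span id groups the words of one name (or of one run of <other> words).

```python
# A sketch of the class-sequence convention described above.
example = [
    ("AT", "other", 0), ("THE", "other", 0),
    ("RONALD", "organization", 1), ("REAGAN", "organization", 1),
    ("CENTER", "organization", 1),
    ("IN", "other", 2),
    ("SIMI", "location", 3), ("VALLEY", "location", 3),
    ("CALIFORNIA", "location", 4),   # a separate name from SIMI VALLEY
]

def to_class_sequence(tokens):
    classes, prev_span = [], None
    for word, name_class, span_id in tokens:
        if span_id == prev_span:
            classes.append("<+>")    # continues the previous token's span
        else:
            classes.append("<" + name_class + ">")
        prev_span = span_id
    return classes

assert to_class_sequence(example) == [
    "<other>", "<+>", "<organization>", "<+>", "<+>",
    "<other>", "<location>", "<+>", "<location>"]
```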
Class information may be interpreted as a word attribute (the left model of figure 1). Formally, we define a class-word token $\langle c, w \rangle \in C \times V$ and consider a probability

$$p(\langle c,w \rangle_1, \cdots, \langle c,w \rangle_m) = \prod_{i=1}^{m} p(\langle c,w \rangle_i \mid \langle c,w \rangle_1, \cdots, \langle c,w \rangle_{i-1}) \qquad (1)$$

that generates a sequence of class-word tokens $\langle c,w \rangle_1, \cdots, \langle c,w \rangle_m$. Alternatively, word-word and class-class transitions may be explicitly formulated (the right model of figure 1). Then we consider a probability

$$p(c_1, w_1, \cdots, c_m, w_m) = \prod_{i=1}^{m} p(c_i, w_i \mid c_1, w_1, \cdots, c_{i-1}, w_{i-1}) \qquad (2)$$
that generates a sequence of words $w_1, \cdots, w_m$ and a corresponding sequence of classes $c_1, \cdots, c_m$. The first approach is simple and analogous to conventional n-gram language modelling; however, the performance is sub-optimal in comparison to the second approach, which is more complex and needs greater attention to the smoothing procedure. For both formulations, we have performed experiments using data produced for the Hub-4E IE-NE evaluation. The training data for this evaluation consisted of manually annotated transcripts of the Hub-4E Broadcast News acoustic training data (broadcast in 1996-97). This data contained approximately one million words (corresponding to about 140 hours of audio). Development was performed using the 1997 evaluation data (3 hours of audio broadcast in 1996, about 32,000 words) and evaluation results are reported on the 1998 evaluation data (3 hours of audio broadcast in 1996 and 1998, about 33,000 words).
Modelling Class Information as a Word Attribute
In this section, we describe an NE model based on direct word-word transitions, with class information treated as a word attribute. This approach suffers seriously from data sparsity. We briefly summarize why this is so.
Technical Description
Formulation (1) may be best viewed as a straightforward extension to standard n-gram language modelling. Denoting e = <c, w>, (1) is rewritten as
$$p(e_1, \cdots, e_m) = \prod_{i=1}^{m} p(e_i \mid e_1, \cdots, e_{i-1}) \qquad (3)$$
and this is identical to the n-gram model widely used for large vocabulary speech recognition systems. Because each token $e \in C \times V$ is treated independently, those having the same word but a different class (e.g., <date,MAY>, <person,MAY>, and <other,MAY>) are considered different members. Using this formulation, class-class transitions are implicit. Further, it may be interpreted as a classical HMM, in which tokens $e_i$ correspond to states, with observations $c_i$ and $w_i$ generated from each $e_i$. Maximum likelihood estimates for model parameters can be obtained from the frequency count of each n-gram given text data annotated with name information. Since the state sequence is known, the forward-backward algorithm is not required. Standard discounting and smoothing techniques may be applied. The search process is based on n-gram relations. Given a sequence of words $w_1, \cdots, w_m$, the most probable sequence of names may be identified by tracing the Viterbi path across the class-word trellis such that
$$\langle \hat{c}_1, \cdots, \hat{c}_m \rangle = \operatorname*{argmax}_{c_1 \cdots c_m} \, p(\langle c,w \rangle_1, \cdots, \langle c,w \rangle_m). \qquad (4)$$
This process may be slightly elaborated by looking into a separate list of names that augments n-grams of <c, w> tokens. Further technical details of this formulation are in Gotoh & Renals (1999).
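A minimal sketch of the search in equation (4) under bigram constraints follows. Here `bigram_prob` is a hypothetical stand-in for the smoothed model probability $p(\langle c,w \rangle_i \mid \langle c,w \rangle_{i-1})$ and is assumed to return non-zero values; for clarity the sketch omits the continuation marker <+> and the separate name list mentioned above.

```python
import math

def viterbi_classes(words, classes, bigram_prob):
    # Initialise from the sentence start (None marks the start context).
    delta = {c: (math.log(bigram_prob(None, (c, words[0]))), [c]) for c in classes}
    for i in range(1, len(words)):
        new_delta = {}
        for c in classes:
            # Best predecessor class for the token <c, words[i]>.
            score, path = max(
                (lp + math.log(bigram_prob((pc, words[i - 1]), (c, words[i]))), path)
                for pc, (lp, path) in delta.items()
            )
            new_delta[c] = (score, path + [c])
        delta = new_delta
    return max(delta.values())[1]   # class sequence on the most probable path
```

The cost of the search is $O(m \, |C|^2)$ in the number of words $m$, since only the previous class-word token conditions each step.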
Experiment
Using the experimental setup described in §3, we estimated a back-off trigram language model that contained 18,964 class-word tokens in a trigram vocabulary, with a further 3,697 words modelled as unigram extensions.
A hand transcription (provided by NIST) and four speech recognizer outputs (three distributed by NIST representing the range of systems that participated in the 1998 broadcast news transcription evaluation, and our own system (Robinson, Cook, Ellis, Fosler-Lussier, Renals, & Williams)) were automatically marked with NEs, then scored against the human-generated reference transcription. The results are summarized in table 1. The combined P&R score was about 83% for a hand transcription. For recognizer outputs, the scores declined as WER increased. As noted by other researchers (e.g., Miller, Schwartz, Weischedel, & Stone (1999)) a linear relationship between the WER and the NE identification scores is observed.
Table 1: NE identification scores on 1998 Hub-4E evaluation data, using the NE model with implicit class transitions. A hand transcription and three recognizer outputs were provided by NIST; the bottom row is by our own recognizer. WER and SER indicate word and slot error rates; R, P, and P&R denote recall, precision, and the combined precision & recall score, respectively. This table contains further improvement since our participation in the 1998 Hub-4E evaluation. In this experiment, we used transcripts of Broadcast News acoustic training data (1996-97) for NE model generation, but did not rely on external sources.

|                            | WER  | SER  | R    | P    | P&R  |
| hand transcription (NIST)  | .000 | .286 | .799 | .865 | .831 |
| recognizer output (NIST 1) | .135 | .394 | .738 | .797 | .766 |
| recognizer output (NIST 2) | .145 | .399 | .741 | .791 | .765 |
| recognizer output (NIST 3) | .283 | .563 | .618 | .713 | .662 |
| recognizer output (own)    | .210 | .452 | .700 | .769 | .733 |

We have previously made an error analysis of this approach (Gotoh & Renals 1999), where it was observed that most correctly marked names were identified through bigram or trigram constraints around each name (i.e., the name itself and the words before/after that name). When the NE model was forced to back off to unigram statistics, names were often missed (causing a decrease in recall) or occasionally a bigram of words attributed with another class was preferred (a decrease in precision). For example, consider the phrase ... DIRECTOR ADRIAN LAJOUS SAYS ..., taken from the 1997 evaluation data, where LAJOUS was not found in the vocabulary. The maximum likelihood decoding for this phrase was:
... <other,DIRECTOR> <other,unknown> <other,unknown> <other,SAYS> ...
Unigram statistics for <person,ADRIAN> and <person,unknown> existed in the model; however, none of the trigrams or bigrams outperformed the bigram entry p(<other,SAYS> | <other,unknown>).
Further, <other,unknown> had higher unigram probability than <person,ADRIAN>, and no other trigram or bigram was able to recover this name. (There was no unigram entry for <other,ADRIAN>.) As a consequence, ADRIAN LAJOUS was not identified as <person>.
This is an example of a data sparsity problem that is observed in almost every aspect of spoken language processing. Although NE models cannot accommodate probability parameters for a complete set of n-gram occurrences, a successful recovery of name expressions is heavily dependent on the existence of higher order n-grams in the model. The implicit class transition approach contributes adversely to the data sparsity problem because it causes the set of possible tokens to increase in size from |V| to |C × V|.
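The growth of the token space is easy to quantify. A back-of-envelope calculation follows, using the 27,280 word types observed in the training data (see §5); the class count is our reading of the task definition (the seven NE classes plus <other> and <+>) rather than a figure stated in the text.

```python
# Illustrative arithmetic for the sparsity argument above (the class count
# of nine is an assumption: seven NE classes plus <other> and <+>).
vocab_size = 27_280       # word types observed in the training data
num_classes = 7 + 2
print(vocab_size * num_classes)   # 245520 possible <c,w> unigram tokens
```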
Explicit Modelling of Class and Word Transitions
In this section, an alternative formulation is presented that explicitly models constraints at the class level, compensating for the fundamental sparseness of n-gram tokens on a vocabulary set. Recent work by Miller et al. (1999) has indicated that such explicit modelling is a promising direction, as P&R scores of up to 90% for hand transcribed data have been achieved using an ergodic HMM. These formulations may be regarded as a two-level architecture, in which the state transitions in the HMM represent transitions between classes (upper level), and the output distributions from each state correspond to the sequence of words within each class (lower level).
The formulation developed here is simpler because, rather than introducing a two-level architecture, we describe a flat state machine that models the probabilities of the current word and class conditioned on the previous word and class (the right model of figure 1). We do not describe this formulation as an HMM, as the probabilities are conditioned both on the previous word and on the previous class. Only a bigram model is considered; however it outperforms the trigram modelling of §4.
Technical Description
Formulation (2) treats class and word tokens independently. Using bigram level constraints, (2) is reduced to

$$p(c_1, w_1, \cdots, c_m, w_m) = \prod_{i=1}^{m} p(c_i, w_i \mid c_{i-1}, w_{i-1}). \qquad (5)$$
The right side of (5) may be decomposed as
$$p(c_i, w_i \mid c_{i-1}, w_{i-1}) = p(w_i \mid c_i, c_{i-1}, w_{i-1}) \cdot p(c_i \mid c_{i-1}, w_{i-1}). \qquad (6)$$
The conditioned current word probability $p(w_i \mid c_i, c_{i-1}, w_{i-1})$ and the current class probability $p(c_i \mid c_{i-1}, w_{i-1})$ are in the same form as a conventional n-gram, hence may be estimated from annotated text data. The amount of annotated text data available is orders of magnitude smaller than the amount of text data typically used to estimate n-gram language models for large vocabulary speech recognition. Smoothing the maximum likelihood probability estimates is therefore essential to avoid zero probabilities for events that were not observed in the training data. We have applied standard techniques in which more specific models are smoothed with progressively less specific models. The following smoothing path was chosen for the first term on the right side of (6):
$$p(w_i \mid c_i, c_{i-1}, w_{i-1}) \longrightarrow p(w_i \mid c_i, c_{i-1}) \longrightarrow p(w_i \mid c_i) \longrightarrow p(w_i) \longrightarrow \frac{1}{|W|},$$
where $|W|$ is the size of the possible vocabulary that includes both observed and unobserved words from the training text data (i.e., $|W|$ is sufficiently greater than $|V|$). We preferred smoothing to $p(w_i \mid c_i, c_{i-1})$, rather than to $p(w_i \mid c_i, w_{i-1})$, since we believed that the former would be better estimated from the annotated training data. Similarly, the smoothing path for the current class probability (the final term in (6)) was:
$$p(c_i \mid c_{i-1}, w_{i-1}) \longrightarrow p(c_i \mid c_{i-1}) \longrightarrow p(c_i).$$
This assumes that each class occurs sufficiently in training text data; otherwise, further smoothing to some constant probability may be required. Given the smoothing path, the current word probability may be computed using an interpolation method based on that of Jelinek & Mercer (1980):
$$p(w_i \mid c_i, c_{i-1}, w_{i-1}) = \tilde{f}(w_i \mid c_i, c_{i-1}, w_{i-1}) + \{1 - \alpha(c_i, c_{i-1}, w_{i-1})\} \cdot p(w_i \mid c_i, c_{i-1}) \qquad (7)$$
where $\tilde{f}(w_i \mid c_i, c_{i-1}, w_{i-1})$ is a discounted relative frequency and $\alpha(c_i, c_{i-1}, w_{i-1})$ is a non-zero probability estimate (i.e., the probability that $\tilde{f}(w_i \mid c_i, c_{i-1}, w_{i-1})$ exists in the model).
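A recursive sketch of equation (7) is given below. The `discounted` and `alpha` dictionaries are hypothetical lookup tables holding $\tilde{f}$ and $\alpha$ for each context on the smoothing path; they stand in for whatever data structures an implementation would actually use.

```python
def interpolated_prob(word, contexts, discounted, alpha, vocab_size):
    """contexts: the smoothing path from most to least specific, e.g.
    [(ci, ci_1, wi_1), (ci, ci_1), (ci,), ()], where () is the unigram."""
    if not contexts:                   # beyond the unigram: uniform 1/|W|
        return 1.0 / vocab_size
    head, rest = contexts[0], contexts[1:]
    f = discounted.get(head, {}).get(word, 0.0)   # discounted frequency f~
    a = alpha.get(head, 0.0)                      # non-zero probability estimate
    # Equation (7): f~ plus the left-over mass times the next weaker model.
    return f + (1.0 - a) * interpolated_prob(word, rest, discounted, alpha, vocab_size)
```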
Alternatively, the back-off smoothing method of Katz (1987) could be applied:
$$p(w_i \mid c_i, c_{i-1}, w_{i-1}) = \begin{cases} \tilde{f}(w_i \mid c_i, c_{i-1}, w_{i-1}) & \text{if } E(c_i, w_i \mid c_{i-1}, w_{i-1}) \text{ exists,} \\ \beta(c_i, c_{i-1}, w_{i-1}) \cdot p(w_i \mid c_i, c_{i-1}) & \text{otherwise.} \end{cases} \qquad (8)$$
In (8), $\beta(c_i, c_{i-1}, w_{i-1})$ is a back-off factor and is calculated by
$$\beta(c_i, c_{i-1}, w_{i-1}) = \frac{1 - \alpha(c_i, c_{i-1}, w_{i-1})}{1 - \sum_{w_i \in E(c_i, w_i \mid c_{i-1}, w_{i-1})} \tilde{f}(w_i \mid c_i, c_{i-1})} \qquad (9)$$
where $E(c_i, w_i \mid c_{i-1}, w_{i-1})$ denotes the event that current class $c_i$ and word $w_i$ occur after previous class $c_{i-1}$ and word $w_{i-1}$. 3 Discounted relative frequencies and non-zero probability estimates may be obtained from training data using standard discounting techniques such as Good-Turing, absolute discounting, or deleted interpolation. Further discussion of discounting and smoothing approaches may be found in, e.g., Katz (1987) or Ney, Essen, & Kneser (1995).

Given a sequence of words $w_1, \cdots, w_m$, named entities can be identified by searching for the Viterbi path such that

$$\langle \hat{c}_1, \cdots, \hat{c}_m \rangle = \operatorname*{argmax}_{c_1 \cdots c_m} \, p(c_1, w_1, \cdots, c_m, w_m). \qquad (10)$$
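For comparison with the interpolation sketch above, the following is a sketch of the back-off alternative in equations (8) and (9), using the same hypothetical `discounted` and `alpha` tables. `lower_prob` is the already-smoothed weaker model, e.g. $p(w_i \mid c_i, c_{i-1})$ when the full context is $(c_i, c_{i-1}, w_{i-1})$; note that equation (9)'s denominator is approximated here by the weaker model's mass on the observed words.

```python
def backoff_prob(word, context, discounted, alpha, lower_prob):
    seen = discounted.get(context, {})   # events E observed in this context
    if word in seen:
        return seen[word]                # observed: use f~ directly
    # Back-off factor beta of equation (9): redistribute the left-over
    # probability mass 1 - alpha over the words unseen in this context.
    beta = (1.0 - alpha.get(context, 0.0)) / (1.0 - sum(lower_prob(w) for w in seen))
    return beta * lower_prob(word)
```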
Although the smoothing scheme should handle novel words well, the introduction of conditional probabilities for unknown (which represents those words not included in the vocabulary V) may be used to model unknown words directly. In practice, this is achieved by setting a certain cutoff threshold when estimating discounting probabilities. Those words that occur less than this threshold are treated as unknown tokens. This does not imply that smoothing is no longer needed, but that conditional probabilities containing the unknown token may occasionally pick up the context correctly without smoothing with weaker models. The drawback is that some uncommon words are lost from the vocabulary. Below we compare two NE models experimentally: one with unknown and fewer vocabulary words and the other without unknown but with more vocabulary words.
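The cutoff itself is simple to implement; a sketch follows, with the token name `<unk>` and the count-based threshold as illustrative assumptions.

```python
from collections import Counter

def apply_cutoff(corpus_words, cutoff=1, unk="<unk>"):
    counts = Counter(corpus_words)
    vocab = {w for w, n in counts.items() if n > cutoff}  # keep words seen > cutoff times
    mapped = [w if w in vocab else unk for w in corpus_words]
    return mapped, vocab

# With cutoff=1, the singletons (here nearly 10,000 of 27,280 word types)
# collapse to <unk>, leaving the 17,560-word vocabulary described below.
```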
Experiment
Experiments were performed using the evaluation conditions described in §3. Two NE models (with explicit class transitions) were derived from transcripts of the hand annotated Broadcast News acoustic training data. One model contained no unknown token; there existed 27,280 different words in the training data, all of which were accommodated in the vocabulary list. Another model selected 17,560 words (those occurring more than once in the training data) as a vocabulary, and the rest (those occurring exactly once, nearly 10,000 words) were replaced by the unknown token. Firstly, NE models were discounted using the deleted interpolation, absolute, Good-Turing, and combined Good-Turing/absolute discounting schemes. 4 For each discounting scheme, and with/without an unknown token, figure 2 shows P&R scores using the hand transcription of the 1997 evaluation data. For most cases, P&R was slightly better when unknown was introduced, although the vocabulary size was substantially smaller. Among the discounting schemes, there was hardly any difference between Good-Turing, absolute, and combined Good-Turing/absolute, regardless of the smoothing method used. Non-zero probability parameters derived using deleted interpolation did not seem well matched to back-off smoothing.
We suspect, however, that the difference in performance would be negligible if a sufficient amount of training data was available for the deleted interpolation case.
Using unknown and the combined Good-Turing/absolute discounting scheme, followed by back-off smoothing, table 2 summarizes NE identification scores for the 1998 Hub-4E evaluation data. For the hand transcription and the four speech recognition outputs, this explicit class transition NE model improved P&R scores by 4-6% absolute over the implicit model of §4.

Table 2: NE identification scores on 1998 Hub-4E evaluation data, using the NE model with explicit class transitions. A hand transcription and three recognizer outputs were provided by NIST; the bottom row is by our own recognizer. WER and SER indicate word and slot error rates; R, P, and P&R denote recall, precision, and the combined precision & recall score, respectively. The NE model contained 17,560 vocabulary words plus the unknown token. A combined Good-Turing/absolute discounting scheme was applied, followed by back-off smoothing. The best performing model in the 1998 Hub-4E IE-NE (Miller et al. 1999) had P&R scores of .906, .815, .826, and .703 for the hand transcription and NIST recognizer outputs 1, 2, 3.
Although more complex in formulation, it is beneficial to model class-class transitions explicitly. Consider again the phrase ... DIRECTOR ADRIAN LAJOUS SAYS ... discussed in §4. Here, ADRIAN LAJOUS was correctly identified as <person> although LAJOUS was not included in the vocabulary. It was identified using the product of conditional probabilities p(unknown | <+>, <person>) · p(<+> | <person>, ADRIAN) between ADRIAN and unknown, as well as the product p(SAYS | <other>, <person>, unknown) · p(<other> | <person>, unknown) between unknown and SAYS.
An Alternative Decomposition
There exists an alternative approach to decomposing the right side of Equation (5):
$$p(c_i, w_i \mid c_{i-1}, w_{i-1}) = p(c_i \mid w_i, c_{i-1}, w_{i-1}) \cdot p(w_i \mid c_{i-1}, w_{i-1}). \qquad (11)$$
Theoretically, if the "true" conditional probability can be estimated, decompositions by (6) and by (11) should produce identical results. This ideal case does not occur, and various discounting and smoothing techniques will cause further differences between the two decompositions. In practice, the conditional probabilities on the right side of (11) can be estimated in the same fashion as described in §4: counting the occurrences of each token in annotated text data, then applying certain discounting and smoothing techniques. The adopted smoothing path for the current word probability was
$$p(w_i \mid c_{i-1}, w_{i-1}) \longrightarrow p(w_i \mid c_{i-1}) \longrightarrow p(w_i) \longrightarrow \frac{1}{|W|}$$

and a path for the current class probability was

$$p(c_i \mid w_i, c_{i-1}) \longrightarrow p(c_i \mid w_i) \longrightarrow p(c_i).$$

Figure 3: P&R scores on the 1997 hand transcription using mixtures of the two decompositions. NE models were built using unknown, combined Good-Turing/absolute discounting, then back-off smoothing.
In the latter case, a slight approximation $p(c_i \mid w_i, c_{i-1}, w_{i-1}) \sim p(c_i \mid w_i, c_{i-1})$ was made, since it was observed that $w_{i-1}$ did not contribute much when calculating the probability of $c_i$ in this manner. This second decomposition alone did not work as well as the initial decomposition. When applied to the 1997 hand transcription, the P&R score declined by 8% absolute (using unknown, combined Good-Turing/absolute discounting, and back-off smoothing). In general, decomposition by (11) accurately tagged words that occurred frequently in the training data, but performed less well for uncommon words. Crudely speaking, it calculated the distribution over classes for each word; consequently it had reduced accuracy for uncommon words with less reliable probability estimates. Decomposition by (6) makes a more balanced decision because it relies on the distribution over words for each class, and there are orders of magnitude fewer classes than words.
The two decompositions can be combined by
$$p(c_i, w_i \mid c_{i-1}, w_{i-1}) = p_1(c_i, w_i \mid c_{i-1}, w_{i-1})^{1-k} \cdot p_2(c_i, w_i \mid c_{i-1}, w_{i-1})^{k} \qquad (12)$$
where $p_1$ refers to the initial method and $p_2$ the alternative. Figure 3 shows precision and recall scores for the mixture (with factors $0.0 \le k \le 1.0$) of the two decompositions. It is observed that, for values of $k$ around 0.5, this modelling improved the precision without degrading the overall P&R.
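Equation (12) is a log-linear (geometric) mixture, computed most safely in the log domain; the sketch below assumes both estimates are strictly positive, which the smoothing guarantees.

```python
import math

def mixed_prob(p1, p2, k=0.5):
    # p1, p2: the two smoothed estimates of p(c_i, w_i | c_{i-1}, w_{i-1});
    # k = 0 recovers decomposition (6), k = 1 recovers decomposition (11).
    return math.exp((1.0 - k) * math.log(p1) + k * math.log(p2))
```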
Discussion
We have described trainable statistical models for the identification of named entities in television and radio news broadcasts. Two models were presented, both based on n-gram statistics. The first model, in which class information was implicitly modelled as a word attribute, was a straightforward extension of conventional language modelling. However, it suffered seriously from the problem of data sparsity, resulting in sub-optimal performance (a P&R score of 83% on a hand transcription). We addressed this problem in a second approach which explicitly modelled class-class and word-word transitions. With this approach the P&R score improved to 89%. These scores were based on a relatively small amount of training data (one million words). Like other language modelling problems, a simple way to improve the performance is to increase the amount of training data. Miller et al. (1999) have noted that there is a log-linear relation between the amount of training data and the NE identification performance; our experiments indicate that the P&R score improves by a few percent for each doubling of the training data size (between 0.1 and 1.0 million words).
The development of the second model was motivated by the success of the approach of Bikel et al. (1999) and Miller et al. (1999). This model shares the same principle of an explicit, statistical model of class-class and word-word transitions, but the model formulation, and the discounting and smoothing procedures differ. In particular, the model presented here is a flat state machine, that is not readily interpretable as a two-level HMM architecture. Our experience indicates that an appropriate choice and implementation of discounting/smoothing strategies is very important, since a more complex model structure is being trained with less data, compared with conventional language models for speech recognition systems. The overall results that we have obtained are similar to those of Miller et al., but there are some differences which we cannot immediately explain away. In particular, although the combined P&R scores were similar, Miller et al. reported balanced recall and precision, whereas we have consistently observed substantially higher precision and lower recall.
The models presented here were trained using a corpus of about one million words of text, manually annotated. No gazetteers, carefully-tuned lexica or domain-specific rules were employed; the brittleness of maximum likelihood estimation procedures when faced with sparse training data was alleviated by automatic smoothing procedures. Although the fact that an accurate NE model can be estimated from sparse training data is of considerable interest and import, it is clear that it would be of use to be able to incorporate much more information in a statistical NE identifier. To this end, we are investigating two basic approaches: the incorporation of prior information; and unsupervised learning.
The most developed uses of prior information for NE identification are in the form of the rule-based systems developed for the task. Some initial work, carried out with Rob Gaizauskas and Mark Stevenson using a development of the system described by Wakao et al. (1996), has analysed the errors of rule-based and statistical approaches. This has indicated that there is a significant difference between the annotations produced by the two systems for the three classes of proper name. This leads us to believe that there is some scope for either merging the outputs of the two systems, or incorporating some aspects of the rule-based systems as prior knowledge in the statistical system.
Unsupervised learning of statistical NE models is attractive, since manual NE annotation of transcriptions is a labour intensive process. However, our preliminary experiments indicate that unsupervised training of NE models is not straightforward. Using a model built from 0.1 million words of manually annotated text, the rest of the training data was automatically annotated, and the process iterated. P&R scores stayed at the same level (around 73%) regardless of iteration.
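The iterative procedure just described corresponds to a simple self-training loop; the sketch below uses hypothetical `train` and `annotate` helpers, since the text does not specify the implementation.

```python
def self_train(seed_annotated, unlabelled, train, annotate, iterations=5):
    # seed_annotated: manually annotated transcripts (0.1 million words here);
    # unlabelled: the remaining training text, to be annotated automatically.
    model = train(seed_annotated)
    for _ in range(iterations):
        auto = annotate(model, unlabelled)     # tag the rest of the data
        model = train(seed_annotated + auto)   # re-estimate from the union
    return model
```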
Finally, we note that the NE annotation models discussed here, and all other state-of-the-art approaches, act as a post-processor to a speech recognizer. Hence the strong correlation between the P&R scores of the NE tagger and the WER of the underlying speech recognizer is to be expected. The development of NE models that incorporate acoustic information such as prosody (Hakkani Tür, Tür, Stolcke, & Shriberg 1999) and confidence measures is a future direction of interest.
Figure 1: Topologies for NE models. The left model assumes that class information is a word attribute. The right model explicitly models word-word and class-class transitions.

Figure 2: NE identification scores (P&R) on 1997 Hub-4E hand transcription, calculated using interpolation and back-off smoothing. NE models were built with and without the unknown token, using deleted interpolation (del), Good-Turing (GT), absolute (abs), and a combination of Good-Turing/absolute (GTabs) discounting schemes. We used 1997 data for system development (as in figure 2), then applied to 1998 data for system evaluation (as in table 2).
1 Inter-annotator agreement for reference transcriptions is around 97-98% (Robinson, Brown, Burger, Chinchor, Douthat, Ferro, & Hirschman 1999).
2 SER is analogous to word error rate (WER), a performance measure for automatic speech transcription. It is obtained by $SER = (I + M + S)/(C + I + M)$, where $C$, $I$, $M$, and $S$ denote the numbers of correct, incorrect, missing, and spurious identifications. Using this notation, recall and precision scores may be calculated as $R = C/(C + I + M)$ and $P = C/(C + I + S)$, respectively.
The weaker models - p(w_i | c_i, c_{i-1}), p(w_i | c_i), and p(w_i) - may be obtained in a way analogous to that used for p(w_i | c_i, c_{i-1}, w_{i-1}). The smoothing approach is similar for the conditioned current class probabilities, i.e., p(c_i | c_{i-1}, w_{i-1}), p(c_i | c_{i-1}), and p(c_i).
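For illustration, a minimal Python sketch of backing off through such a model hierarchy is given below; the probability tables and back-off weights are hypothetical placeholders, and the discounting actually used to construct them is described in the following note.

def backoff_word_prob(w, c, c_prev, w_prev, p, alpha):
    # Back off from p(w | c, c_prev, w_prev) through the weaker models
    # p(w | c, c_prev) and p(w | c), down to the unigram p(w).
    # p[k]: dict mapping (context, w) -> discounted estimate (k = 0, 1, 2);
    # p[3]: dict mapping w -> unigram estimate.
    # alpha[k]: dict mapping context -> left-over back-off mass.
    contexts = [(c, c_prev, w_prev), (c, c_prev), (c,)]
    weight = 1.0
    for k, ctx in enumerate(contexts):
        if (ctx, w) in p[k]:
            return weight * p[k][(ctx, w)]
        weight *= alpha[k].get(ctx, 1.0)  # redistribute unseen-event mass
    return weight * p[3].get(w, 1e-7)     # small floor for unseen words

# Hypothetical tables, for illustration only:
p = [{(("PERSON", "ORG", "mr."), "smith"): 0.4}, {}, {}, {"smith": 0.001}]
alpha = [{}, {}, {}]
print(backoff_word_prob("smith", "PERSON", "ORG", "mr.", p, alpha))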
The Good-Turing discounting formula is applied only when it actually discounts, i.e., when the inequality (r + 1) n_{r+1} ≤ r n_r is satisfied, where r is a sample count and n_r denotes the number of samples that occurred exactly r times. Empirically, and for most cases, this inequality holds only when r is small. This may be modified slightly by applying absolute discounting to samples with higher r, which cannot be discounted using the Good-Turing formula (i.e., combined Good-Turing/absolute discounting).
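A minimal sketch of such a combined scheme, assuming a hypothetical count-of-counts table and an absolute-discount constant D (our choice here, not necessarily the paper's), could look as follows:

def discounted_count(r, n, D=0.5):
    # n: count-of-counts table, n[r] = number of samples seen exactly r times.
    # Apply Good-Turing only where it is well defined and actually discounts,
    # i.e. where n[r+1] > 0 and (r + 1) * n[r+1] <= r * n[r]
    # (the n[r+1] > 0 guard is our addition, to avoid zeroed counts);
    # otherwise fall back to absolute discounting.
    n_r, n_r1 = n.get(r, 0), n.get(r + 1, 0)
    if n_r > 0 and n_r1 > 0 and (r + 1) * n_r1 <= r * n_r:
        return (r + 1) * n_r1 / n_r   # Good-Turing discounted count r*
    return max(r - D, 0.0)            # absolute discounting for higher r

# Hypothetical count-of-counts table, for illustration only:
n = {1: 1000, 2: 400, 3: 180, 4: 100, 5: 60, 6: 2, 7: 3}
for r in (1, 2, 6):
    print(r, discounted_count(r, n))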
Acknowledgements. We have benefited greatly from cooperation and discussions with Robert Gaizauskas and Mark Stevenson. We thank BBN and MITRE for the provision of manually-annotated training data. The evaluation infrastructure was provided by MITRE, NIST and SAIC. This work was supported by EPSRC grant GR/M36717.
Aberdeen, J., Burger, J., Day, D., Hirschman, L., Robinson, P., & Vilain, M. (1995). MITRE: Description of the Alembic system used for MUC-6. In Proceedings of the 6th Message Understanding Conference (MUC-6), Maryland, pp. 141-155.
Appelt, D. E. & Martin, D. (1999). Named entity extraction from speech: Approach and results using the TextPro system. In Proceedings of DARPA Broadcast News Workshop, Herndon, VA, pp. 51-54.
Bikel, D. M., Miller, S., Schwartz, R., & Weischedel, R. (1997). Nymble: a high-performance learning name-finder. In Proceedings of the 5th ANLP, Washington, DC, pp. 194-201.
Bikel, D. M., Schwartz, R., & Weischedel, R. M. (1999). An algorithm that learns what's in a name. Machine Learning 34, 211-231.
Chinchor, N., Robinson, P., & Brown, E. (1998). Hub-4 Named Entity Task Definition (version 4.8). SAIC. (http://www.nist.gov/speech/hub4 98/hub4 98.htm).
Fisher, W. & Fiscus, J. (1993). Better alignment procedures for speech recognition evaluation. In Proceedings of ICASSP-93, Volume 2, Minneapolis, pp. 59-62.
Gotoh, Y. & Renals, S. (1999). Statistical annotation of named entities in spoken audio. In Proceedings of the ESCA Workshop: Accessing Information in Spoken Audio, Cambridge, pp. 43-48. (http://svr-www.eng.cam.ac.uk/~ajr/esca99/).
Hakkani Tür, D., Tür, G., Stolcke, A., & Shriberg, E. (1999). Combining words and prosody for information extraction from speech. In Proceedings of Eurospeech-99, Volume 5, Budapest, pp. 1991-1994.
Hobbs, J., Appelt, D., Bear, J., Israel, D., Kameyama, M., Stickel, M., & Tyson, M. (1997). FASTUS: A cascaded finite state transducer for extracting information from natural language text. In E. Roche & Y. Schabes (Eds.), Finite State Language Processing, pp. 381-406. MIT Press.
Jelinek, F. & Mercer, R. L. (1980). Interpolated estimation of Markov source parameters from sparse data. In Proceedings of the Workshop: Pattern Recognition in Practice, Amsterdam, pp. 381-397.
Katz, S. M. (1987). Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Transactions on Acoustics, Speech, and Signal Processing 35(3), 400-401.
Kubala, F., Schwartz, R., Stone, R., & Weischedel, R. (1998). Named entity extraction from speech. In Proceedings of DARPA Broadcast News Transcription and Understanding Workshop, Lansdowne, VA.
Makhoul, J., Kubala, F., Schwartz, R., & Weischedel, R. (1999). Performance measures for information extraction. In Proceedings of DARPA Broadcast News Workshop, Herndon, VA, pp. 249-252.
Mikheev, A., Grover, C., & Moens, M. (1998). Description of the LTG system used for MUC-7. In Proceedings of the 7th Message Understanding Conference (MUC-7).
Miller, D., Schwartz, R., Weischedel, R., & Stone, R. (1999). Named entity extraction from broadcast news. In Proceedings of DARPA Broadcast News Workshop, Herndon, VA, pp. 37-40.
MUC-5 (1993). Proceedings of the fifth message understanding conference.
Ney, H., Essen, U., & Kneser, R. (1995). On the estimation of 'small' probabilities by leaving-one-out. IEEE Transactions on Pattern Analysis and Machine Intelligence 17(12), 1202-1212.
Palmer, D. D., Burger, J. D., & Ostendorf, M. (1999). Information extraction from broadcast news speech data. In Proceedings of DARPA Broadcast News Workshop, Herndon, VA, pp. 41-46.
Palmer, D. D., Ostendorf, M., & Burger, J. D. (1999). Robust information extraction from spoken language data. In Proceedings of Eurospeech-99, Volume 3, Budapest, pp. 1035-1038.
Robinson, A. J., Cook, G. D., Ellis, D. P. W., Fosler-Lussier, E., Renals, S. J., & Williams, D. A. G. Connectionist speech recognition of broadcast news. Submitted to Speech Communication.
Robinson, P., Brown, E., Burger, J., Chinchor, N., Douthat, A., Ferro, L., & Hirschman, L. (1999). Overview: Information extraction from broadcast news. In Proceedings of DARPA Broadcast News Workshop, Herndon, VA, pp. 27-30.
van Rijsbergen, C. J. (1979). Information Retrieval (2nd ed.). London: Butterworths.
Wakao, T., Gaizauskas, R., & Wilks, Y. (1996). Evaluation of an algorithm for the recognition and classification of proper names. In Proceedings of COLING-96, Copenhagen, pp. 418-423.
| [] |
[
"NewsEdits: A Dataset of Revision Histories for News Articles (Technical Report: Data Processing)"
] | [
"Alexander Spangher spangher@usc.edu \nInformation Sciences Institute\nUniversity of Southern California\n\n",
"Jonathan May jonmay@isi.edu \nInformation Sciences Institute\nUniversity of Southern California\n\n"
] | [
"Information Sciences Institute\nUniversity of Southern California\n"
] | [] | News article revision histories have the potential to give us novel insights across varied fields of linguistics and social sciences. In this work, we present, to our knowledge, the first publicly available dataset of news article revision histories, or NewsEdits. Our dataset is multilingual; it contains 1,278,804 articles with 4,609,430 versions from over 22 English- and French-language newspaper sources based in three countries. Across version pairs, we count 10.9 million added sentences; 8.9 million changed sentences and 6.8 million removed sentences. Within the changed sentences, we derive 72 million atomic edits. NewsEdits is, to our knowledge, the largest corpus of revision histories of any domain. | null | [
"https://arxiv.org/pdf/2104.09647v2.pdf"
] | 233,307,327 | 2104.09647 | 4004fc3c0f438ac94201266467e14e5f2294daa5 |
NewsEdits: A Dataset of Revision Histories for News Articles (Technical Report: Data Processing)
Alexander Spangher spangher@usc.edu
Information Sciences Institute
University of Southern California
Jonathan May jonmay@isi.edu
Information Sciences Institute
University of Southern California
News article revision histories have the potential to give us novel insights across varied fields of linguistics and social sciences. In this work, we present, to our knowledge, the first publicly available dataset of news article revision histories, or NewsEdits. Our dataset is multilingual; it contains 1,278,804 articles with 4,609,430 versions from over 22 English- and French-language newspaper sources based in three countries. Across version pairs, we count 10.9 million added sentences; 8.9 million changed sentences and 6.8 million removed sentences. Within the changed sentences, we derive 72 million atomic edits. NewsEdits is, to our knowledge, the largest corpus of revision histories of any domain.
Introduction
Revision histories gathered from various natural language domains like Wikipedia (Grundkiewicz and Junczys-Dowmunt, 2014), Wikihow (Faruqui et al., 2018) and student learner essays (Zhang and Litman, 2015) have been studied widely in NLP. These corpora have been used for tasks as varied as language modeling (Yin et al., 2018), sentence modification (Shah et al., 2020), argumentation design (Afrin et al., 2020), and even group collaboration dynamics (Murić et al., 2019).
By utilizing a novel domain for revision histories, news article revision histories, we see the potential for strengthening methods for these aforementioned tasks, as well as for introducing a novel set of tasks. Because news covers events (or world-states) that are constantly updating, we hypothesize that many edits in news either (1) incorporate new information, (2) update events or (3) broaden perspectives. 1 Each of these categories of edits poses new questions that news edits are uniquely positioned to answer: What information is likely to change in the current state of a document? What must be incorporated? What perspectives need to be layered in, and when?
We offer, to our knowledge, the first publicly available corpus of news article revision histories, called NewsEdits. We compile our corpus from various subcorpora collected by internet activists who track news article version changes. Each subcorpus is collected by monitoring article URLs and downloading article text when a new version of the same news article is published. 2 Our dataset consists of 1,278,804 articles and 4,609,430 versions from over 22 English-language newspaper outlets. These outlets are based in the U.S., Great Britain and Canada. As in , we compare article versions to each other on the sentence level and then, for matched sentences, on the word level. We count 10,929,051 added sentences; 8,908,254 changed sentences and 6,807,624 removed sentences.
Our contributions are the following:
1. We introduce NewsEdits, the first publicly available academic corpus of news article version histories, as well as, to our knowledge, the largest version-history corpus.
2. We process the dataset to identify various forms of structural edits, in order to facilitate a wide range of possible analyses. We provide simple, lightweight visualization tools to render article comparisons in an intuitive format.
In the remainder of the paper, we start by comparing this work to other edit corpora. We then discuss the dataset curation and processing. In upcoming work, we will present a schema for characterizing edits in news articles, developed with journalists and graduate students in the communications field.

1 This is in contrast with the distribution of edits in other domains like Wikipedia (Yang et al., 2017), which tend to focus on counter-vandalism and syntax edits.
2 As such, our dataset only contains differences between published versions of news articles, not intermediate drafts.
Related Work
Previous work in natural language revision histories has focused on two primary domains: student learner essays and Wikipedia. There is, additionally, emerging work using WikiHow for similar ends as Wikipedia (Anthonio et al., 2020; Bhat et al., 2020).
Wikipedia Revisions
Wikipedia is a resource that is often used in text-revision research. Many tasks have benefited from studying Wikipedia revisions, such as text simplification (Yatskar et al., 2010), textual entailment (Zanzotto and Pennacchiotti, 2010) and discourse learning in edits (Daxenberger and Gurevych, 2012, 2013; Fong and Biuk-Aghai, 2010). Discourse learning for edits, as in other branches of discourse learning, focuses on developing schemas to elucidate the function or purpose of each edit, and then performing classification on these schemas (Faigley and Witte, 1981). One such work, by Yang et al. (2017), developed a schema for Wikipedia edit intentions, which included Wikipedia-specific edit categories, such as Counter-Vandalism and Wikification, as well as general categories like Elaboration and Refactoring. We take the general categories as a starting point for our own work.
The two largest-scale corpora to be processed and released, to our knowledge, are the WikEd Error Corpus, in English, which contains 12 million sentences and 14 million revisions (Grundkiewicz and Junczys-Dowmunt, 2014), and WikiAtomicEdits, which contains 43 million "atomic edits". 4 The WikEd Error Corpus has been used primarily for Grammar Error Correction (GEC), while WikiAtomicEdits has been used primarily for language modeling (Yin et al., 2018; Shah et al., 2020).
While our work has roughly the same number of changed sentences as the Wikipedia-based corpora (8.9 million), we have roughly twice as many atomic edits (72 million), as well as added/removed sentences, which neither Wikipedia corpus reports.

4 The authors are unclear about how many sentences this corresponds to, but in our work we found, on average, roughly 4 atomic edits per changed sentence.
Student Learner Essays
Student learner essays are another area of focus in revisions research. Such research focuses on revisions made during student essay-writing, particularly focusing on non-English speakers. Because of the difficulties in collecting data in this domain, most natural datasets tend to be small (Leacock et al., 2010). In recent work, researchers create an editing platform; they instruct students to write 3 drafts of an essay using their platform, and gather revisions made during the writing process . They collect 60 essays with 180 versions, or 3 drafts per essay, and they focus on classifying the discursive purpose of each edit (i.e. what the edit introduces, like Evidence or Claims).
In this vein, researchers have constructed Automated Writing Evaluation (AWE) systems and used these systems to evaluate writing when a student submits a draft (Zhang, 2020; Zhang and Litman, 2015). In one recent work by Afrin et al. (2020), researchers compile 143 essays with 286 versions, or 2 versions each. They develop an annotation scheme focused on improving the writing (ex: Evidence: Relevant or Irrelevant).
While such work is interesting because the generation process is fully controlled - i.e., students can be instructed to write about similar topics and be given similar feedback between rounds - the corpora collected are small by modern machine learning standards.
News
Academic Work
Despite the centrality of published news articles as sources for some of the most fundamental corpora in NLP (Marcus et al., 1993; Carlson et al., 2003; Pustejovsky et al., 2003; Walker, 2006), there is, to our knowledge, no revision-histories corpus based on news edits currently published in the academic literature. For research on linguistics, corpora from multiple domains can capture relevant mechanisms, but for research on language describing real-world events - such as event extraction or temporality - news is still the gold standard.
We are aware of just one academic work, and its follow-up, that focuses on news edits . The authors analyzed news edits to predict the quality of news articles, as well as to predict editing operations performed during text generation. A dataset of news revisions from a Japanese newspaper was used, but not released publicly. 5 In our work, in contrast, we publicly release a large dataset and outline several tasks for which, in our opinion, news corpora are best suited. In previously studied revision corpora, researchers typically assume, implicitly, that the writing focuses on a relatively static world-state - both Wikipedia and student learner essays tend to be focused on historical events or arguments (Yang et al., 2017). Thus, in previous corpora, the nature of edits is primarily argumentative or corrective. However, news articles very often cover updating events. This difference has important implications for the kinds of edits we expect in our corpora.
We are not aware of any work using WikiNews 6 as a source of revision histories, despite its proximity, as a domain, to other corpora that have been used in revision-history research, like Wikipedia and WikiHow. WikiNews corpora have been used for study in the communications literature (Thorsen, 2008; Roessing, 2019), as well as in subfields of NLP, including document summarization (Bravo-Marquez and Manriquez, 2012), timeline synthesis (Zhang and Wan, 2017; Minard et al., 2016) and word-identification tasks (Yimam et al., 2017). While we are aware of one work, on identifying entity salience, that uses WikiNews editor-annotations as part of their task (Wu et al., 2020), we are not aware of any other works that utilize the wiki structure of WikiNews, including its revision histories. We considered using WikiNews as an additional source, but were concerned that its generation process (i.e. as a community-generated resource) was significantly different from the other sources we compiled (i.e. professionally developed articles). In future work, when we are better able to do more extensive comparison, we will consider including it in our collection of news revision corpora.
Nonacademic Work
Since at least 2006, internet activists have tracked changes made to major digital news articles (Herrmann, 2006). In 2012, NewsDiffs.org became one of the first websites to gain popular media attention for tracking changes to articles published in the New York Times, CNN, Politico, and others (Brisbane, 2012; Burke, 2016; Jones and Neubert, 2017; Fass and Main, 2014). Other similar trackers have since (or concurrently) emerged, such as NewsSniffer 7 and DiffEngine. 8 These tools collect article versions from RSS feeds and the Internet Archive. Using open-sourced technology, dozens of major newspapers are being analyzed, 9 as well as thousands of government websites. 10 Such diffs have been used by journalists, communications scholars and media critics to study instances of gender and racial bias in earlier article drafts, 11 shifting portrayals of social events (Johnson et al., 2016), and lack of media transparency (Gourarie, 2015). We utilize this work in the construction of our corpora, and are actively exploring efforts in this space for additional sources of data and analysis.
Dataset
We combine data from two primary sources: NewsSniffer (NS) and Twitter accounts powered by DiffEngine (DE). Currently, we have gathered data for 22 media outlets. The number of articles per outlet, as well as the time-frames for which we have collected data, is listed in Table 2. We are in the process of accumulating more data from La Nacion, Clarín, Le Soir, and Le Monde from internet activists to broaden the number of languages we currently have access to.
Dataset Tables and Fields
Our dataset is released in a set of 5 SQLite tables. Three of them are primary data tables, and two are summary-statistic tables. Our primary data tables are: articles, sentence_diffs, and word_diffs; the first two of these are shown in Tables 3a and 3b (word_diffs shares a similar structure with sentence_diffs). We compile two summary-statistics tables to cache statistics from sentence_diffs and word_diffs; they calculate metrics such as NUM_SENTENCES_ADDED and NUM_SENTENCES_REMOVED per article. 12 The sentence_diffs data table's schema is shown in Table 3 and some column-abbreviated sample rows are shown in Table 4. As can be seen, the diffs are calculated and organized on the sentence level. Each row shows a comparison of sentences between two adjacent versions of the same article. 13 Every row in sentence_diffs contains index columns: SOURCE, A_ID, VERSION_OLD, and VERSION_NEW. These columns can be used to uniquely map each row in sentence_diffs to two rows in article. 14

14 One mapping for sentence_diffs.VERSION_OLD = article.VERSION_ID and one mapping for sentence_diffs.VERSION_NEW = article.VERSION_ID.
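For instance, the following is a minimal sketch of how a user might join these tables with Python's built-in sqlite3 module; the database file name is hypothetical, and the table and column names follow the schemas in Table 3 (which label the version index columns V_OLD_ID and V_NEW_ID).

import sqlite3

conn = sqlite3.connect("newsedits.db")  # hypothetical file name

# Join each sentence-level diff row to the two article versions it compares.
query = """
SELECT a_old.TITLE, d.SENT_OLD, d.SENT_NEW, d.TAG_OLD, d.TAG_NEW
FROM sentence_diffs AS d
JOIN article AS a_old
  ON d.SOURCE = a_old.SOURCE
 AND d.A_ID = a_old.A_ID
 AND d.V_OLD_ID = a_old.VERSION_ID
JOIN article AS a_new
  ON d.SOURCE = a_new.SOURCE
 AND d.A_ID = a_new.A_ID
 AND d.V_NEW_ID = a_new.VERSION_ID
LIMIT 5
"""
for row in conn.execute(query):
    print(row)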
TAG columns in sentence_diffs
The columns TAG_OLD and TAG_NEW in sentence_diffs have specific meaning: how to transform from a version to its adjacent version. In other words, TAG_OLD conveys where to find SENT_OLD in VERSION_NEW and whether to change it, whereas TAG_NEW does the same for SENT_NEW in VERSION_OLD.
More concretely, consider the examples in Tables 4b, 4a and 4c. As can be seen, each tag is 3-part and has the following components. Component 1 can be either M, A, or R. M means that the sentence in the current version was Matched with a sentence in the adjacent version, A means that a sentence was Added to the new version and R means the sentence was Removed from the old version. 15 Component 2 is only present for Matched sentences, and refers to the index or indices of the sentence(s) in the adjacent version. 16 Additionally, Component 3 is also only present if the sentence is Matched. It can be either C or U. C refers to whether the matched sentence was Changed and U to whether it was Unchanged.

15 I.e. an Added row is not present in the old version and a Removed row is not present in the new version. They have essentially the same meaning and we could have condensed notation, but we felt this was more intuitive.
16 I.e. in TAG_OLD, the index refers to the SENTENCE_ID of the matched sentence in the adjacent version.
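As a small illustration, the three components of a tag might be parsed as follows; we assume here, purely for the sketch, that tags are serialized as space-separated strings such as "M 2 U" (the exact serialization in the released tables may differ).

def parse_tag(tag):
    # Split a 3-part tag like "M 2 U" into its components.
    parts = tag.split()
    match_type = parts[0]  # "M" (Matched), "A" (Added) or "R" (Removed)
    if match_type in ("A", "R"):
        # Added/Removed sentences have no match, hence no other components.
        return match_type, None, None
    indices = [int(i) for i in parts[1:-1]]  # matched sentence index/indices
    change_flag = parts[-1]                  # "C" (Changed) or "U" (Unchanged)
    return match_type, indices, change_flag

print(parse_tag("M 2 U"))  # -> ("M", [2], "U")
print(parse_tag("A"))      # -> ("A", None, None)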
Although not shown or described in detail, all M sentences have corresponding entry-matches in the word_diffs table, which has a similar schema and tagging aim.
A user might use these tags in the following ways (sketches of the first and third use-cases as queries are shown after this list):
1. To compare only atomic edits, as in Faruqui et al. (2018), a user could filter sentence_diffs to sentences where M..C is in TAG_OLD (or, equivalently, TAG_NEW). Then, they would join TAG_OLD.Component_2 with SENTENCE_ID. Finally, they would select SENT_OLD, SENT_NEW. 17
2. To view only refactorings, or when a sentence is moved from one location in the article to another, a user could filter sentence_diffs to only sentences containing M..U and follow a similar join process as in use-case 1.
3. To model which sentences might be added, i.e. p(sentence_i ∈ article_{t+1} | sentence_i ∉ article_t), a user would select all sentences in SENT_OLD, and all sentences in SENT_NEW where A is in TAG_NEW.
4. To model the inverse of use-case 3, i.e. which sentences would be removed, or p(sentence_i ∉ article_{t+1} | sentence_i ∈ article_t), a user would select all sentences in SENT_NEW, and all sentences in SENT_OLD where R is in TAG_OLD.
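As forward-referenced above, here is a minimal sketch of use-cases 1 and 3 as SQL selections, again assuming the hypothetical database file and space-separated tag serialization from the previous sketches.

import sqlite3

conn = sqlite3.connect("newsedits.db")  # hypothetical file name

# Use-case 1: atomic edits, i.e. matched sentences that changed (M..C).
atomic_edits = conn.execute(
    "SELECT SENT_OLD, SENT_NEW FROM sentence_diffs "
    "WHERE TAG_OLD LIKE 'M%C'"
).fetchall()

# Use-case 3: sentences added in the new version (A in TAG_NEW) form the
# positive class; all old-version sentences form the candidate pool.
added = conn.execute(
    "SELECT SENT_NEW FROM sentence_diffs WHERE TAG_NEW LIKE 'A%'"
).fetchall()
candidates = conn.execute(
    "SELECT SENT_OLD FROM sentence_diffs WHERE SENT_OLD IS NOT NULL"
).fetchall()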
Assigning sentence_diff Tag Columns
To assign the tags, we need to determine which sentences are Added, Removed and Matched. We seek to have our dataset reflect a general principle: sentences tagged as Added should contain novel information, and sentences tagged as Removed should delete information that the journalist wishes to delete. Sentences tagged as Matched should be substantially similar except for syntactic changes, rephrasing, and updated information.

(c) Demo 3: Two features shown: (1) Refactoring, or order-swapping, makes sentences appear as though they have been deleted and then added; swapped sentences are matched through their tags. (2) The last sentence is a newly added sentence and is not matched with any other sentence.
Matching Algorithm
To do this, we develop an asymmetrical sentence-matching algorithm. The examples shown in Tables 4b, 4a and 4c illustrate our requirements. The first example, shown in Table 4b, occurs when a sentence is edited syntactically, but its meaning does not change. 18 So, we need our sentence-matching algorithm to use a sentence-similarity measure that considers semantic changes and does not consider surface-level changes. The second example, shown in Table 4a, occurs when a sentence is split (or, inversely, two sentences are merged). Thus, we need our sentence-matching algorithm to consider many-to-one matchings for sentences. The third example, shown in Table 4c, occurs when sentence order is rearranged, arbitrarily, throughout a piece. Finally, we need our sentence-matching algorithm to perform all pairwise comparisons of sentences. Our algorithm is given in Algorithm 1. Given a list of sentences for pairwise versions, our algorithm computes the asymmetrical similarity between sentence pairs in the Cartesian product of the sentences of adjacent article versions. It returns two mappings: (1) from sentences in the old version to the new, and (2) from sentences in the new to the old. This relies on an effective sentence-similarity score.
Sentence Matching
There is a wide body of research in sentence matching (Quan et al., 2019; Abujar et al., 2019; Chen et al., 2018), including BERT-based approaches (Reimers and Gurevych, 2019), dependency-tree approaches (Le et al., 2018) and earlier approaches (Allan et al., 2003; Achananuparp et al., 2008). We desire a measure that considers semantics over syntactical changes, yet can appropriately match named entities (names, places or organizations) or other specific words not likely to be found in pretrained models.
While we are still evaluating the effectiveness of several methods, our primary method so far is based on a matching algorithm similar to the maximum alignment method described by Kajiwara and Komachi (2016), shown below, where φ(x_i, y_j) := the similarity between words x_i and y_j.
Sim_asym(x, y) = (1 / |x|) Σ_{i=1}^{|x|} max_j φ(x_i, y_j)

input : Article versions v_old, v_new; match threshold T
output : maps m_old→new, m_old←new
initialize m_old→new, m_old←new = {}, {}
// match v_old → v_new
for (i, s_i) ∈ v_old do
    d = max_{s_j ∈ v_new} Sim_asym(s_i, s_j)
    j = argmax_{s_j ∈ v_new} Sim_asym(s_i, s_j)
    m_old→new[i] = j × 1[d > T]
end
// match v_old ← v_new
for (j, s_j) ∈ v_new do
    d = max_{s_i ∈ v_old} Sim_asym(s_j, s_i)
    i = argmax_{s_i ∈ v_old} Sim_asym(s_j, s_i)
    m_old←new[j] = i × 1[d > T]
end
Algorithm 1: Asymmetrical sentence-matching algorithm. Input v_old, v_new are lists of sentences, and the output is an index mapper. If a sentence maps to 0 (i.e. d < T), there is no match. Sim_asym is described in the text.
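A minimal Python sketch of Algorithm 1 is given below, paired with the lexical-overlap word-similarity function φ described in the next subsection; whitespace tokenization and lower-casing stand in for lemmatization here, and the threshold value is a hypothetical placeholder.

def phi(x, y):
    # Lexical-overlap word similarity (a stand-in for lemma equality).
    return 1.0 if x == y else 0.0

def sim_asym(x, y):
    # Sim_asym(x, y) = (1 / |x|) * sum_i max_j phi(x_i, y_j)
    xs, ys = x.lower().split(), y.lower().split()
    return sum(max(phi(xi, yj) for yj in ys) for xi in xs) / len(xs)

def match(v_old, v_new, T=0.5):
    # Asymmetrical sentence matching; sentences are indexed from 1,
    # and a mapping to 0 means "no match" (i.e. d < T).
    m_old_to_new, m_new_to_old = {}, {}
    for i, s_i in enumerate(v_old, start=1):
        scores = [sim_asym(s_i, s_j) for s_j in v_new]
        d = max(scores)
        m_old_to_new[i] = (scores.index(d) + 1) if d > T else 0
    for j, s_j in enumerate(v_new, start=1):
        scores = [sim_asym(s_j, s_i) for s_i in v_old]
        d = max(scores)
        m_new_to_old[j] = (scores.index(d) + 1) if d > T else 0
    return m_old_to_new, m_new_to_old

v1 = ["The suspect was arrested on Monday.",
      "Police gave no further details."]
v2 = ["Police gave no further details.",
      "The suspect, a 32-year-old man, was arrested on Monday."]
print(match(v1, v2))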
This approach allows us to identify unidirectional matches, where sentence a might be a subsentence of sentence b, as shown in Table 4a. We use this method because it is similar to one used in prior work on news article revision histories.
We test several word-similarity functions, φ. The first uses a simple lexical overlap, where φ(x, y) = 1 if lemma(x) = lemma(y) and 0 otherwise. This is based on the hypothesis that proper nouns make up the majority of important components in our desired similarity measure, and that these are poorly captured by large language models. The second uses contextual word-embeddings, where φ(x, y) = Emb(x) ⋅ Emb(y), and Emb(x) is the word embedding derived from a pretrained language model (Albert-XXLarge-Uncased) (Lan et al., 2020). We are still testing different similarity thresholds (T in Algorithm 1) and still determining how well this embedding function captures proper nouns and entities.
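A hedged sketch of the second, embedding-based φ using the transformers library is below; the checkpoint name (albert-xxlarge-v2) and the use of normalized last-layer hidden states are our assumptions for illustration, not necessarily the paper's exact setup.

import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("albert-xxlarge-v2")   # assumed checkpoint
model = AutoModel.from_pretrained("albert-xxlarge-v2")

def word_embeddings(sentence):
    # Contextual embeddings for each subword token of the sentence.
    inputs = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, dim)
    return torch.nn.functional.normalize(hidden, dim=-1)

def phi_matrix(x, y):
    # phi(x_i, y_j) = Emb(x_i) . Emb(y_j) for all token pairs at once.
    return word_embeddings(x) @ word_embeddings(y).T

print(phi_matrix("Police arrested the suspect.",
                 "The suspect was arrested.").shape)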
Discussion and Possible Tasks
In this work, we have introduced, to our knowledge, the largest dataset of revision histories to date, and the first public dataset of news revision histories in the academic literature.
Tasks
We hope this resource will prove useful as a domain that is far more general in terms of standards and style than domains such as Wikipedia and WikiHow, for which revisions data already exist. Thus, tasks based on these existing corpora, such as edit language modeling (Yin et al., 2018) and fact-guided revisions (Shah et al., 2020), should stand to benefit.
However, as mentioned previously, news articles are, more often than Wikipedia or Student Learner articles, based on a world-state that is dynamically changing. Thus, we expect that edits in news articles are more likely to describe events that are changing and include primary-source material (in contrast to another source we considered, WikiNews). This has important implications for linguistic inquiries, such as:
1. Event-temporal relations, as edits update events that are changing through time (Ning et al., 2018)
2. Headline Generation (Shen et al., 2017)
3. Fact-guided updates (Shah et al., 2020)
This dataset is also interesting as it captures the lifecycles of news articles, which have standard arcs. Breaking news typically gets published first as a bare-bones paragraph containing a main event.
Within the next few hours and days, additional quotes, contextual paragraphs and explainer paragraphs are added, until the article comes to resemble a more standard news article. 19 Thus, there are tasks this corpus sheds light on that have important implications for computational journalism:
1. News information gathering (Spangher et al., 2020)
2. Contextualization: discourse and article structure (Choubey et al., 2020; Spangher et al., 2021a)
3. How framing changes in response to an unfolding story (Spangher et al., 2021b)
Finally, our article updates - each version is captured upon publication, not drafting - also contain fewer grammatical and spelling errors than, say, student learner essays. Thus, edits made to change the syntax of a sentence, rather than to introduce or change information, are more likely to have stylistic purpose. This might lead to interesting subtasks in several fields of NLP, such as:
1. Style transfer (Fu et al., 2018)
2. Bias in news articles (Mehrabi et al., 2020)
3. Cross-cultural sensitivity (Tian et al., 2020)
In short, there are many tasks that we see emerging from such a dataset, and several tasks that are currently ongoing.
Future Work
To make this work more useful, we wish to develop a schema to describe the types of edits occurring, similar to the types of schemas that exist in other work examining revision corpora (Yang et al., 2017; Afrin et al., 2020). We are inspired by the Wikipedia Intentions schema developed by Yang et al. (2017), and are working in collaboration with journalists to further clarify the differences. The development of a news edits schema would help to clarify the nature of these edits as well as focus further questions.
We are also interested in extending prior work (Spangher et al., 2020, 2021a) in this domain. We are pursuing collaborations (Spangher et al., 2021b). Please do not hesitate to reach the authors by email to discuss any possible collaborations or usages of the dataset.
References
Table 1: A comparison of natural language revision history corpora.

7 https://www.newssniffer.co.uk/
8 https://github.com/DocNow/diffengine
9 https://twitter.com/i/lists/821699483088076802
10 https://envirodatagov.org/federal-environmental-web-tracker-about-page/
Source             # Articles  # Versions  Start    End      Ctry.  Lang.   Coll.
BBC                   307,616   1,244,490  2006-08  2021-01  U.K.   En.     NS
Guardian              231,252     852,324  2012-01  2021-01  U.K.   En.     NS
Nytimes                87,556     395,643  2012-08  2020-12  U.S.   En.     NS
Telegraph              78,619     124,128  2017-01  2018-09  U.K.   En.     NS
Fox                    78,566     117,171  2017-01  2018-09  U.S.   En.     DE
CNN                    58,569     117,202  2017-01  2018-09  U.S.   En.     DE
Independent            55,009     158,881  2014-01  2018-05  U.K.   En.     NS
CBC                    54,012     387,292  2017-08  2018-09  Ca.    En.     DE
Dailymail              50,639     166,260  2017-01  2018-09  U.K.   En.     DE
BBC                    42,797      99,082  2017-01  2018-09  U.K.   En.     DE
La Presse              40,978      73,447  2017-08  2018-09  Ca.    Fr-Ca.  DE
Torontostar            33,523     310,112  2017-08  2018-07  Ca.    En.     DE
Globemail              32,552      91,820  2017-08  2018-09  Ca.    En.     DE
Reuters                31,359     143,303  2017-01  2018-09  U.K.   En.     DE
National Post          22,934      63,085  2017-08  2018-09  Ca.    En.     DE
Associated Press       22,381      97,314  2017-01  2018-09  U.S.   En.     DE
Washington Post        19,184      68,612  2014-01  2020-07  U.S.   En.     NS
Toronto Sun            19,121      46,353  2017-08  2018-09  Ca.    En.     DE
Calgary Herald          7,728      33,427  2017-08  2018-09  Ca.    En.     DE
The Rebel               4,344      19,383  2017-08  2018-09  Ca.    En.     DE
Canada Land                65         101  2017-12  2018-09  Ca.    En.     DE

Table 2: A summary of the total number of articles and versions for the different media outlets which comprise our dataset. Also shown is the original collection from which they were derived (DE for DiffEngine, NS for NewsSniffer), and the date ranges during which articles from each outlet were collected.
(a) DB schema for the article table. SOURCE, A_ID and VERSION_ID are the primary key columns.

Column Name    Type     Column Name    Type     Column Name     Type
SOURCE         index    TITLE          text     CREATED         text
A_ID           index    URL            text     ARCHIVE_URL     text
VERSION_ID     index    TEXT           text     NUM_VERSIONS    int

(b) DB schema for the sentence_diffs table (word_diffs is similar). The table compares version pairs of articles. The rows in the table are on the sentence level; V_OLD_ID refers to the index of the old version, V_NEW_ID refers to the index of the new version. TAG_OLD gives information for how to transition from the old version to the new version; TAG_NEW is the inverse.

Column Name    Type     Column Name    Type     Column Name     Type
SOURCE         index    V_NEW_ID       index    TAG_OLD         text
A_ID           index    SENTENCE_ID    index    SENT_NEW        text
V_OLD_ID       index    SENT_OLD       text     TAG_NEW         text
Table 3: Schemas for two databases central to our content organization scheme.

Table 4: Here we show demos of three tricky edge-cases and how our tagging scheme handles them. Old Tag annotates an Old Version relative to changes in the New Version (or "converts" the Old Version to the New Version); New Tag is the inverse. Tag components: Component 1: M, A, R. Whether the sentence is Matched, Added, or Removed. Component 2: Index. If Matched, the index of the sentence in the version that it is matched to. Component 3: C, U. If Matched, whether the sentence is Changed or Unchanged.
18 Syntactic changes: synonyms are used, or phrasing is condensed, but substantially new information is not added.
Sheikh Abujar, Mahmudul Hasan, and Syed Akhter Hossain. 2019. Sentence similarity estimation for text summarization using deep learning. In Proceedings of the 2nd International Conference on Data Engineering and Communication Technology, pages 155-164. Springer.
Palakorn Achananuparp, Xiaohua Hu, and Xiajiong Shen. 2008. The evaluation of sentence similarity measures. In International Conference on Data Warehousing and Knowledge Discovery, pages 305-316. Springer.
Tazin Afrin, Elaine Lin Wang, Diane Litman, Lindsay Clare Matsumura, and Richard Correnti. 2020. Annotation and classification of evidence and reasoning revisions in argumentative writing. In Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 75-84.
James Allan, Courtney Wade, and Alvaro Bolivar. 2003. Retrieval and novelty detection at the sentence level. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 314-321.
5 Dataset could not be released due to copyright infringement, according to the authors in response to inquiry.
6 https://en.wikinews.org/wiki/Main_Page
12 These summary-statistic tables make it convenient to, say, filter sentence_diffs in order to train a model on all articles that have one sentence added, or all articles that have no sentences removed.
13 So, for instance, article A, with versions 1 and 2, where each version has sentences i, ii, iii, would have 3 rows (assuming the sentences were similar): A.1-2.i, A.1-2.ii, A.1-2.iii.
17 Or simply look in the word_diffs table.
19 Much of our insight in this section is derived from our own professional experience working in newsrooms, as well as from journalists we interviewed before and during the writing of this article.
Talita Anthonio, Irshad Bhat, and Michael Roth. 2020. wikihowtoimprove: A resource and analyses on edits in instructional texts. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 5721-5729.
Irshad Bhat, Talita Anthonio, and Michael Roth. 2020. Towards modeling revision requirements in wikiHow instructions. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8407-8414, Online. Association for Computational Linguistics.
Felipe Bravo-Marquez and Manuel Manriquez. 2012. A zipf-like distant supervision approach for multi-document summarization using wikinews articles. In International Symposium on String Processing and Information Retrieval, pages 143-154. Springer.
Arthur S. Brisbane. 2012. Insider's view of changes, from outside. The New York Times.
Austin Burke. 2016. Newsdiffs: A tool for tracking changes to online news articles - vr research - public records research: Opposition research.
Lynn Carlson, Daniel Marcu, and Mary Ellen Okurowski. 2003. Building a discourse-tagged corpus in the framework of rhetorical structure theory. In Current and New Directions in Discourse and Dialogue, pages 85-112. Springer.
Qingyu Chen, Sun Kim, W John Wilbur, and Zhiyong Lu. 2018. Sentence similarity measures revisited: ranking sentences in pubmed documents. In Proceedings of the 2018 ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics, pages 531-532.
Prafulla Kumar Choubey, Aaron Lee, Ruihong Huang, and Lu Wang. 2020. Discourse as a function of event: Profiling discourse structure in news articles around the main event. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5374-5386, Online. Association for Computational Linguistics.
Johannes Daxenberger and Iryna Gurevych. 2012. A corpus-based study of edit categories in featured and non-featured Wikipedia articles. In Proceedings of COLING 2012, pages 711-726, Mumbai, India. The COLING 2012 Organizing Committee.
Johannes Daxenberger and Iryna Gurevych. 2013. Automatically classifying edit categories in wikipedia revisions. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 578-589.
Lester Faigley and Stephen Witte. 1981. Analyzing revision. College Composition and Communication, 32(4):400-414.
Manaal Faruqui, Ellie Pavlick, Ian Tenney, and Dipanjan Das. 2018. WikiAtomicEdits: A multilingual corpus of Wikipedia edits for modeling language and discourse. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 305-315.
John Fass and Angus Main. 2014. Revealing the news: How online news changes without you noticing. Digital Journalism, 2(3):366-382.
Peter Kin-Fong Fong and Robert P Biuk-Aghai. 2010. What did they do? deriving high-level edit histories in wikis. In Proceedings of the 6th International Symposium on Wikis and Open Collaboration, pages 1-10.
Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2018. Style transfer in text: Exploration and evaluation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32.
Chava Gourarie. 2015. Why 'diffing' could make news organizations more transparent. Columbia Journalism Review.
Roman Grundkiewicz and Marcin Junczys-Dowmunt. 2014. The wiked error corpus: A corpus of corrective wikipedia edits and its application to grammatical error correction. In International Conference on Natural Language Processing, pages 478-490. Springer.
Steve Herrmann. 2006. The editors: Sniffing out edits. BBC.
Yuta Hitomi, Hideaki Tamori, Naoaki Okazaki, and Kentaro Inui. 2017. Proofread sentence generation as multi-task learning with editing operation prediction. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 436-441.
Erik W Johnson, Jonathan P Schreiner, and Jon Agnone. 2016. The effect of new york times event coding techniques on social movement analyses of protest data. In Narratives of Identity in Social Movements, Conflicts and Change. Emerald Group Publishing Limited.
Gina M Jones and Michael Neubert. 2017. Using rss to improve web harvest results for news web sites. Journal of Western Archives, 8(2):3.
Tomoyuki Kajiwara and Mamoru Komachi. 2016. Building a monolingual parallel corpus for text simplification using sentence similarity based on alignment between word embeddings. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1147-1158.
Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Yuquan Le, Zhi-Jie Wang, Zhe Quan, Jiawei He, and Bin Yao. 2018. Acv-tree: A new method for sentence similarity modeling. In IJCAI, pages 4137-4143.
Claudia Leacock, Martin Chodorow, Michael Gamon, and Joel Tetreault. 2010. Automated grammatical error detection for language learners. Synthesis Lectures on Human Language Technologies, 3(1):1-134.
Mitchell Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of english: The penn treebank.
Ninareh Mehrabi, Thamme Gowda, Fred Morstatter, Nanyun Peng, and Aram Galstyan. 2020. Man is to person as woman is to location: Measuring gender bias in named entity recognition. In Proceedings of the 31st ACM Conference on Hypertext and Social Media, pages 231-232.
Anne-Lyse Minard, Manuela Speranza, Ruben Urizar, Begona Altuna, Marieke Van Erp, Anneleen Schoen, and Chantal Van Son. 2016. Meantime, the newsreader multilingual event and time corpus. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 4417-4422.
Goran Murić, Andres Abeliuk, Kristina Lerman, and Emilio Ferrara. 2019. Collaboration drives individual productivity. Proceedings of the ACM on Human-Computer Interaction, 3(CSCW):1-24.
Qiang Ning, Hao Wu, and Dan Roth. 2018. A multi-axis annotation scheme for event temporal relations. arXiv preprint arXiv:1804.07828.
James Pustejovsky, Patrick Hanks, Roser Sauri, Andrew See, Robert Gaizauskas, Andrea Setzer, Dragomir Radev, Beth Sundheim, David Day, Lisa Ferro, et al. 2003. The timebank corpus. In Corpus Linguistics, volume 2003, page 40. Lancaster, UK.
Zhe Quan, Zhi-Jie Wang, Yuquan Le, Bin Yao, Kenli Li, and Jian Yin. 2019. An efficient framework for sentence similarity modeling. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 27(4):853-865.
Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv preprint arXiv:1908.10084.
Thomas Roessing. 2019. Wikis and wikinews. The International Encyclopedia of Journalism Studies, pages 1-5.
Darsh Shah, Tal Schuster, and Regina Barzilay. 2020. Automatic fact-guided sentence modification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8791-8798.
Shi-Qi Shen, Yan-Kai Lin, Cun-Chao Tu, Yu Zhao, Zhi-Yuan Liu, Mao-Song Sun, et al. 2017. Recent advances on neural headline generation. Journal of Computer Science and Technology, 32(4):768-784.
Alexander Spangher, Jonathan May, Emilio Ferrara, and Nanyun Peng. 2020. "don't quote me on that": Finding mixtures of sources in news articles. In Proceedings of Computation+Journalism Conference.
Alexander Spangher, Jonathan May, Sz-rung Shiang, and Lingjia Deng. 2021a. Multitask learning for class-imbalanced discourse classification. arXiv preprint arXiv:2101.00389.
Alexander Spangher, Amberg-Lynn Scott, and Ke Huang-Isherwood. 2021b. "what's the diff?": Examining news article updates and changing narratives during the uss theodore roosevelt coronavirus crisis. In Annenberg Scymposium.
Hideaki Tamori, Yuta Hitomi, Naoaki Okazaki, and Kentaro Inui. 2017. Analyzing the revision logs of a japanese newspaper for article quality assessment. In Proceedings of the 2017 EMNLP Workshop: Natural Language Processing meets Journalism, pages 46-50.
Einar Thorsen. 2008. Journalistic objectivity redefined? wikinews and the neutral point of view. New Media & Society, 10(6):935-954.
Yufei Tian, Tuhin Chakrabarty, Fred Morstatter, and Nanyun Peng. 2020. Identifying cultural differences through multi-lingual wikipedia. arXiv preprint arXiv:2004.04938.
Christopher Walker et al. 2006. Ace 2005 multilingual training corpus ldc2006t06. Philadelphia: Linguistic Data Consortium.
Elaine Lin Wang, Lindsay Clare Matsumura, Richard Correnti, Diane Litman, Haoran Zhang, Emily Howe, Ahmed Magooda, and Rafael Quintana. 2020. erevis (ing): Students' revision of text evidence use in an automated writing evaluation system. Assessing Writing, 44:100449.
Chuan Wu, Evangelos Kanoulas, Maarten de Rijke, and Wei Lu. 2020. Wn-salience: A corpus of news articles with entity salience annotations. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 2095-2102.
Diyi Yang, Aaron Halfaker, Robert Kraut, and Eduard Hovy. 2017. Identifying semantic edit intentions from revisions in wikipedia. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2000-2010.
Haipeng Yao, Huiwen Liu, and Peiying Zhang. 2018. A novel sentence similarity model with word embedding based on convolutional neural network. Concurrency and Computation: Practice and Experience, 30(23):e4415.
Mark Yatskar, Bo Pang, Cristian Danescu-Niculescu-Mizil, and Lillian Lee. 2010. For the sake of simplicity: Unsupervised extraction of lexical simplifications from wikipedia. arXiv preprint arXiv:1008.1986.
Seid Muhie Yimam, Sanja Štajner, Martin Riedl, and Chris Biemann. 2017. Cwig3g2-complex word identification task across three text genres and two user groups. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 401-407.
Pengcheng Yin, Graham Neubig, Miltiadis Allamanis, Marc Brockschmidt, and Alexander L Gaunt. 2018. Learning to represent edits. arXiv preprint arXiv:1810.13337.
Fabio Massimo Zanzotto and Marco Pennacchiotti. 2010. Expanding textual entailment corpora from wikipedia using co-training. In Proceedings of the 2nd Workshop on The People's Web Meets NLP: Collaboratively Constructed Semantic Resources, pages 28-36.
A corpus of annotated revisions for studying argumentative writing. Fan Zhang, B Homa, Rebecca Hashemi, Diane Hwa, Litman, Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. the 55th Annual Meeting of the Association for Computational LinguisticsLong Papers1Fan Zhang, Homa B Hashemi, Rebecca Hwa, and Di- ane Litman. 2017. A corpus of annotated revisions for studying argumentative writing. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1568-1578.
Annotation and classification of argumentative writing revisions. Fan Zhang, Diane Litman, Grantee SubmissionFan Zhang and Diane Litman. 2015. Annotation and classification of argumentative writing revisions. Grantee Submission.
Towards automatic construction of news overview articles by news synthesis. Jianmin Zhang, Xiaojun Wan, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. the 2017 Conference on Empirical Methods in Natural Language ProcessingJianmin Zhang and Xiaojun Wan. 2017. Towards au- tomatic construction of news overview articles by news synthesis. In Proceedings of the 2017 Con- ference on Empirical Methods in Natural Language Processing, pages 2111-2116.
Engaging with automated writing evaluation (awe) feedback on l2 writing: Student perceptions and revisions. Zhe Victor Zhang, Assessing Writing. 43100439Zhe Victor Zhang. 2020. Engaging with automated writing evaluation (awe) feedback on l2 writing: Stu- dent perceptions and revisions. Assessing Writing, 43:100439.
Automated Rating of Recorded Classroom Presentations using Speech Analysis in Kazakh

Akzharkyn Izbassarova, Aidana Irmanova, Alex Pappachen James
School of Engineering, Nazarbayev University, Astana
www.biomicrosystems.info/alex
Abstract-Effective presentation skills can help one succeed in business, career, and academia. This paper presents the design of a speech assessment system for oral presentations and an algorithm for speech evaluation based on criteria of optimal intonation. As the pace of speech and its optimal intonation vary from language to language, automatic identification of the language used during the presentation is required. The proposed algorithm was tested on presentations delivered in the Kazakh language. For testing purposes, the features of Kazakh phonemes were extracted using the MFCC and PLP methods, and a Hidden Markov Model (HMM) [5] of Kazakh phonemes was created. Kazakh vowel formants were defined, and the correlation between the deviation rate in fundamental frequency and the liveliness of the speech was analyzed to evaluate the intonation of the presentation. It was established that the threshold value between monotone and dynamic speech is 0.16 and that the error for intonation evaluation is 19%.
Index Terms-MFCC, PLP, presentations, speech, images, recognition
I. INTRODUCTION
Delivering an effective presentation in today's information world is becoming a critical factor in an individual's career, business, or academic success. The Internet is full of sources on how to improve presentation skills and give a successful presentation. These sources emphasize the important aspects of a presentation that grasp the audience's attention.
Since there is no single template for an ideal oral presentation, opinions differ on how to prepare for oral presentations so as to make a good impression on the audience. For example, [1] claims that passion about the topic is the number one characteristic of the exceptional presenter. The author suggests that passion can be expressed through posture, gestures and movement, voice, and the removal of hesitation and verbal graffiti. Whereas the criteria for the content of a presentation depend on the particular field, the standards for the visual aspect and non-verbal communication are almost universal for presentations given in business, academia, or politics. In illustrating examples of different postures and their interpretation, the author emphasizes voice usage aspects such as volume, inflection, and tempo. It is important to mention that the author, Timothy Koegel, has twenty years of experience as a presentation consultant to famous business companies, politicians, and business schools [1]. That is why the criteria for a successful presentation in terms of intonation given in this source can be used as a basis for speech evaluation as part of presentation assessment.
However, it can be questioned how the assessment of speech is normally conducted based on these criteria. [2] examined the different criterion-referenced assessment models used to evaluate oral presentations in secondary schools and at the university level. These criterion-referenced assessment rubrics are designed to provide instructions for students as well as to increase objectivity during evaluation. It was suggested that intonation, volume, and pitch are usually evaluated based on comments in the rubrics, such as "outstandingly appropriate use of voice" or "poor use of voice". The comments used in the evaluation sheets can be subjective [2], which is why the average relation between how people perceive the speech during a presentation and the level of change in intonation and tempo should be addressed.
In this paper we present software for evaluating the presentation skills of a speaker in terms of intonation. We use the pitch to identify the intonation of the speech. We also aim to implement automatic identification of the language of the speech during the presentation, as the presentations used for testing the proposed algorithm were delivered in the Kazakh language. This poses an additional problem, as Kazakh speech recognition is still not fully addressed in previous research. The recognition of Kazakh speech itself is not within the scope of this paper, and the adaptation to other languages such as Russian or English is considered a next step.
The paper is organized as follows: Section II presents the methodology used for presentation evaluation, Section III reports the results of testing the developed software, Section IV provides an overall discussion of the main issues of the software design, and Section V concludes the paper.
II. METHODOLOGY
Figure 1 illustrates the approach used to identify language and intonation. First, the features corresponding to the Kazakh phonemes are extracted. Then the model for language recognition is developed based on a Hidden Markov Model (HMM).

MATLAB is used to create an HMM for the Kazakh phonemes. The block diagram in Fig. 2 illustrates the algorithm used in the code.
The program should be able to evaluate the intonation and tempo of the speech. It is assumed that there is a direct correlation between the deviation rate in fundamental frequency and the liveliness of the speech; pitch analysis is therefore conducted to test whether this hypothesis is true. From the pitch contour of the audio files, the pitch variation quotient is derived, where the pitch variation quotient is the ratio of the standard deviation of the pitch to its mean.

In order to identify the variation of pitch during presentations, a database of presentations given in the Kazakh language was created. It consists of five presentations of ten minutes each, obtained by video-recording students' class presentations given during the "Kazakh Music History" and "History of Kazakhstan" courses at Nazarbayev University. For simplicity of analysis, the presentations were divided into one-minute audio files converted to WAV format. As a result, we obtained 32 audio files, of which seven contain male voices and the rest female. Using the WaveSurfer program, the pitch value is found for every 7.5 ms of speech. Two different sampling frequencies, 16 kHz and 44.1 kHz (the values available in WaveSurfer), are tested to identify which sampling rate gives better results, and the pitch is measured at both rates. The mean and standard deviation of the pitch corresponding to each audio file are then obtained, and the pitch variation quotient is calculated by dividing the standard deviation of the pitch by its mean.

Finally, the results of the pitch variation quotient are compared to the results of a perception test. The same speech files used for pitch extraction are used to test how people perceive the speech in terms of intonation. The purpose of this test is to identify the correlation between how people evaluate the presentation and the value of the pitch variation quotient. Since this work aims to evaluate presentation skills based on criteria such as the intonation and tempo of the speech and to give feedback to users, the program's assessments should be consistent with how professionals and a general audience would evaluate the presentation. We therefore asked students and professors to participate in this test. They listen to speech segments from the presentations and categorize each one as "monotone" ("emotionless") or "dynamic" ("lively"); since the intonation during a presentation is not always constant, the speech is divided into small segments and the participants give feedback for each segment. Each segment is marked on the following scale: 1 for monotone, 2 for middle, and 3 for dynamic. All results are then analyzed and the average mark for each presentation is calculated; these average marks are compared with the pitch variation quotients.
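To make the quotient concrete, the following sketch (not part of the original WaveSurfer/MATLAB pipeline) computes the pitch variation quotient from an arbitrary pitch track; the 7.5 ms frame step and the synthetic f0 values are assumptions for illustration only.

```python
import numpy as np

def pitch_variation_quotient(f0):
    """Ratio of the standard deviation of the pitch to its mean.

    f0: 1-D array of fundamental-frequency estimates in Hz, one value
    per analysis frame (here, every 7.5 ms); unvoiced frames are NaN.
    """
    voiced = f0[~np.isnan(f0)]          # discard unvoiced frames
    return float(np.std(voiced) / np.mean(voiced))

# Synthetic one-minute pitch track: 60 s / 7.5 ms = 8000 frames.
rng = np.random.default_rng(0)
f0 = 200.0 + 25.0 * rng.standard_normal(8000)   # hypothetical speaker
print(round(pitch_variation_quotient(f0), 3))
```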
III. RESULTS
A. Formants
From the data analysis results we defined the first, second, and third formants of Kazakh vowels. Table 1 and Table 2 show the results for vowels produced by male and female voices, respectively. These phonemes were obtained by manually extracting each phoneme from KLC audio files.
The data given in Table 1 and Table 2 are used to observe the position of vowels according to their first and second formants. Figure 3 and Figure 4 illustrate the distribution of vowels for male and female voices respectively.
B. Intonation evaluation
The test was conducted in order to identify how listeners perceive presentations based on intonation. In total, 32 fragments from different presentations given in the Kazakh language were tested. The participants ranked the fragments from 1 to 3, where 1 stands for a monotone presentation and 3 for a dynamic one. In addition, the variation of pitch in each presentation was measured and the pitch variation quotient was found.

Figure 3. First and second formant frequencies of Kazakh vowels produced by male speakers

The pitch was measured for different values of the sampling frequency. The average value of the pitch variation quotient at f=16 kHz is 0.32, while at f=44.1 kHz the average quotient over the 32 presentation fragments is 0.16. Figure 5 and Figure 6 show the pitch variation quotient of each presentation and the corresponding average marks based on the test results. Since the presentations were marked from 1 to 3, the average mark is 2. Thus, the boundary between monotone and dynamic presentations is taken to be 2 along the x-axis and the average pitch variation quotient along the y-axis. In order to estimate the error, we count the presentations whose pitch variation quotient is below the average but whose average marks are high and, inversely, the presentations with high pitch variation but low marks. At a sampling frequency of 16 kHz the error is 34%, while at 44.1 kHz the estimated error is 19%. Finally, the same presentation was recorded twice with different intonations. The pitch variation quotient of the monotone recording is 0.092, whereas the second, more dynamic recording has a quotient of 0.179.
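As an illustration of how the 0.16 boundary could be applied in practice, the sketch below labels fragments and estimates the disagreement with listeners' marks; the fragment data are hypothetical, and only the threshold and the boundary mark of 2 come from the experiments above.

```python
import numpy as np

THRESHOLD = 0.16        # average quotient at 44.1 kHz (boundary value)
BOUNDARY_MARK = 2.0     # middle of the 1 (monotone) .. 3 (dynamic) scale

def estimated_error(quotients, avg_marks):
    """Fraction of fragments where the quotient-based label disagrees
    with the listeners' average mark."""
    predicted_dynamic = np.asarray(quotients) > THRESHOLD
    perceived_dynamic = np.asarray(avg_marks) > BOUNDARY_MARK
    return float(np.mean(predicted_dynamic != perceived_dynamic))

# Hypothetical results for four fragments:
print(estimated_error([0.09, 0.18, 0.25, 0.14], [1.4, 2.6, 2.8, 2.4]))
```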
C. Phone recognition
Since phone recognition does not recognize whole utterances, there is no need for lexical decoding or syntactic and semantic analysis; phonemes are therefore used as the matching units. In this paper, training of the Kazakh phonemes for further phone recognition [9] was conducted in MATLAB. Results are given for simulations of HMMs with one and with two emission states. Models of context-independent phones with one or two emission states are shown in Figures 7 and 8, where a_ij is the transition probability from state i to state j, S1...S4 are the states, b_i(O_i) is the probability density function (emission probability) of each state, and O_i are the observations. In Figure 7, S1 is the initial state, S3 is the end state, and S2 is the emission state. In the 2-emission-state HMM (Figure 8), S2 and S3 are the emission states.
The phoneme recognition rate is calculated using the Viterbi algorithm. Different sets of simulations were run with varying train and test data. Table 3 gives the recognition rates for the 1-emission and 2-emission state models; the train and test data contain phonemes recorded by female and male voices.
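The MATLAB training and scoring code is not reproduced in the paper; as a hedged sketch of the scoring step, a Viterbi pass over a left-to-right topology like that of Figure 7 could look as follows, with dummy transition and emission values standing in for the trained model.

```python
import numpy as np

def viterbi_score(log_trans, log_emit):
    """Log-probability of the best state path through a left-to-right HMM.

    log_trans: (S, S) matrix of log transition probabilities a_ij.
    log_emit:  (T, S) matrix of log emission likelihoods b_j(O_t).
    """
    T, S = log_emit.shape
    delta = np.full(S, -np.inf)
    delta[0] = log_emit[0, 0]                     # path starts in S1
    for t in range(1, T):
        # best predecessor for every state j: max_i (delta_i + log a_ij)
        delta = np.max(delta[:, None] + log_trans, axis=0) + log_emit[t]
    return delta[-1]                              # path ends in the last state

# Dummy 3-state topology (S1 -> S2 -> S3, self-loop on the emitting S2):
trans = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.6, 0.4],
                  [0.0, 0.0, 1.0]])
with np.errstate(divide="ignore"):
    log_trans = np.log(trans)
log_emit = np.log(np.full((20, 3), 1e-2))         # placeholder likelihoods
print(viterbi_score(log_trans, log_emit))
```

In recognition, this score would be computed once per phone model, and the phone with the highest-scoring model would be selected for each test token.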
IV. DISCUSSION
MFCC and PLP coefficients were extracted to develop phoneme-based automatic language identification [4]. As a result, 12 cepstral coefficients and one energy feature were obtained for each feature extraction technique [4], [8]. After that, the first and second derivatives of these 13 features were taken, which gives a 39-dimensional feature vector per frame to represent each phoneme. The mean and covariance vectors for each phoneme were then calculated, and these values were used to create the training model for Kazakh phoneme recognition. MATLAB code was used to train the phonemes and create an HMM for them. As the results show, the 2-emission-state HMM gives a higher recognition rate than the 1-emission-state one. In order to train for Kazakh language identification, a Kazakh corpus with labeling at the phoneme level should be used. However, only word-level labeling is currently available in the Kazakh Language Corpus [3]. This limits further analysis for phone recognition and language identification; more time is required to create a corpus with phoneme labeling. In this paper, we analyzed the Kazakh phonemes by extracting them manually in the Praat program from a set of recordings made in a soundproof studio as well as in real environment conditions. For Kazakh language identification based on the phonological features of the language itself, a bigger phoneme database is required.
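For readers who want to reproduce the 39-dimensional front end outside MATLAB, an equivalent feature stack can be assembled with the librosa package (our choice here, not the toolchain of [5]); the file name below is a placeholder.

```python
import numpy as np
import librosa

# Load one phoneme recording ("phoneme.wav" is a placeholder path).
y, sr = librosa.load("phoneme.wav", sr=16000)

# 13 static features per frame (12 cepstral coefficients + C0 as energy).
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# First and second time derivatives -> 39 features per frame in total.
delta1 = librosa.feature.delta(mfcc)
delta2 = librosa.feature.delta(mfcc, order=2)
features = np.vstack([mfcc, delta1, delta2])      # shape: (39, n_frames)

# Per-phoneme mean and covariance, as used to initialise the HMM states.
mu = features.mean(axis=1)
cov = np.cov(features)
print(mu.shape, cov.shape)                        # (39,), (39, 39)
```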
V. CONCLUSION
To conclude, in this paper we presented a system that can be used to evaluate the presentation skills of a speaker based on the intonation of the voice. To test the proposed design we used data in the Kazakh language, which consequently led us to consider a language identification system. As language identification and speech recognition are relatively new fields for Kazakh language processing, we believe that the development of such a system could be useful for the further popularization of the Kazakh language and for the realization of different projects that build on top of Kazakh speech recognition systems.
Future work covers the development of a Kazakh language corpus with analysis and labeling down to the phoneme level. After that, a language model for the Kazakh language can be developed. Finally, a larger database of presentations in the Kazakh language should be created to analyze presentation styles in Kazakh, as well as to conduct further tests and design the intonation evaluator.
Figure 1. Flow chart for speech evaluation
Figure 2. Block diagram for phone recognition
Figure 4. First and second formant frequencies of Kazakh vowels produced by female speakers
Figure 5. Pitch variation quotient vs perception test results at 16 kHz sampling rate
Figure 6. Pitch variation quotient vs perception test results at 44.1 kHz sampling rate
Figure 7. HMM with one emission state
Table I. AVERAGE FORMANT FREQUENCIES OF KAZAKH VOWELS PRODUCED BY MALE SPEAKERS

Vowel    F1, Hz    F2, Hz    F3, Hz
  -        734      1627      2769
  -        517      1437      2500
  -        540      1700      2705
  -        513      1405      2505
  -        811      1258      2640
  -        577       808      2765
  -        590      1307      2652
  -        566       961      2605
  -        443      2087      2900
Table II. AVERAGE FORMANT FREQUENCIES OF KAZAKH VOWELS PRODUCED BY FEMALE SPEAKERS

Vowel    F1, Hz    F2, Hz    F3, Hz
  -        858      1929      3180
  -        662      1424      2892
  -        697      1844      2986
  -        572      1529      2801
  -        948      1397      3048
  -        583       969      3220
  -        743      1175      3072
  -        696      1116      3155
  -        554      2559      3150
Table III. RECOGNITION RATE FOR 1-EMISSION AND 2-EMISSION STATE HMM

Train/Test       Recognition rate for       Recognition rate for
                 1-emission state HMM       2-emission state HMM
Female/Female           61.76                      64.71
Male/Male                5.88                       8.82
Male/Female             11.76                      14.71
Female/Male              5.88                       8.82
[1] T. Koegel, The Exceptional Presenter. Austin, TX: Greenleaf Book Group Press, 2007.
[2] I. Michelle and L. Michelle, "Orals ain't orals: How instruction and assessment practices affect delivery choices with prepared student oral presentations," in Australian and New Zealand Communication Association Conference, Brisbane, 2009.
[3] O. Makhambetov, A. Makazhanov, Zh. Yessenbayev, B. Matkarimov, I. Sabyrgaliyev, and A. Sharafudinov, "Assembling the Kazakh Language Corpus," in Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, Seattle, Washington, USA, 2013, pp. 1022-1031.
[4] M. Zissman, "Automatic language identification using Gaussian mixture and hidden Markov models," in IEEE International Conference on Acoustics, Speech and Signal Processing, 1993.
[5] D. Ellis, "PLP and RASTA (and MFCC, and inversion) in Matlab," 2015. [Online]. Available: http://labrosa.ee.columbia.edu/matlab/rastamat/. [Accessed: 19-Nov-2015].
[6] J. Hamar, "Using Sub-Phonemic Units for HMM Based Phone Recognition," PhD thesis, Norwegian University of Science and Technology, 2013.
[7] A. Moore, "Hidden Markov Models," Autonlab.org, 2016. [Online].
[8] D. Jurafsky, "Feature Extraction and Acoustic Modeling," 2007.
[9] R. Jang, "ASR (Automatic Speech Recognition) Toolbox," Mirlab.org, 2016. [Online]. Available: http://mirlab.org/jang. [Accessed: 14-Apr-2016].
Learning to Order Facts for Discourse Planning in Natural Language Generation

Aggeliki Dimitromanolaki, Ion Androutsopoulos

Department of Information & Communication Systems Engineering, University of the Aegean
Institute of Informatics & Telecommunications, NCSR "Demokritos", 15310 Ag. Paraskevi, Greece
Department of Informatics, Athens University of Economics & Business, Patission 76, 10434 Athens, Greece
This paper presents a machine learning approach to discourse planning in natural language generation. More specifically, we address the problem of learning the most natural ordering of facts in discourse plans for a specific domain. We discuss our methodology and how it was instantiated using two different machine learning algorithms. A quantitative evaluation performed in the domain of museum exhibit descriptions indicates that our approach performs significantly better than manually constructed ordering rules. Being retrainable, the resulting planners can be ported easily to other similar domains, without requiring language technology expertise.
Introduction
Along the lines of Reiter and Dale (2000), we view natural language generation (NLG) as consisting of six tasks: content determination, discourse planning, aggregation, lexicalization, referring expression generation, and linguistic realization. This paper is concerned with the second task, i.e., discourse planning. Discourse planning determines the ordering and rhetorical relations of the logical messages, hereafter called facts, that the generated document is intended to convey. Most existing approaches to discourse planning are based on either rhetorical structure theory (RST) (Mann and Thompson, 1988;Hovy, 1993) or schemata (McKeown, 1985). In both cases, the rules that determine the order and the rhetorical relations are typically written by hand. This is a time-consuming process, which requires domain and linguistic expertise, and has to be repeated whenever the system is ported to a new domain; see also Rambow (1990).
This paper presents a machine learning (ML) approach to the subtask of discourse planning that attempts to find the most natural ordering of facts in each generated document. Our approach was motivated by experience obtained from the M-PIRO project (Androutsopoulos et al., 2001). Building upon ILEX (O'Donnell et al., 2001), M-PIRO is developing technology that allows personalized descriptions of museum exhibits to be generated in several languages, starting from symbolic, language-independent information stored in a database, and small fragments of text (Isard et al., 2003). One of M-PIRO's most ambitious goals is to develop authoring tools that will allow domain experts, e.g., museum curators, with no language technology expertise to configure the system for new application domains. While this goal has largely been achieved for resources such as the domain-dependent parts of the ontology, or domain-dependent settings that affect content selection, lexicalization, and referring expression generation (Androutsopoulos et al., 2002), designing tools that will allow domain experts to edit discourse planning rules has proven difficult. In contrast, domain experts, in our case museum curators, were happy to reorder the clauses of sample generated texts, thus indicating the preferred orderings of the facts in the corresponding discourse plans. We have, therefore, opted for a machine learning approach that allows fact-ordering rules to be captured automatically from sets of manually reordered facts. We view this approach as a first step towards learning richer discourse plans, which apart from ordering information will also include rhetorical relations, although the experience from M-PIRO indicates that even just ordering the facts in a natural way can lead to quite acceptable texts. Being automatically retrainable, the planners that our approach produces can be easily ported to other similar domains, e.g., descriptions of products for e-commerce catalogues, provided that samples of ideal fact orderings can be made available.
Our method introduces a new representation of the fact-ordering task, and employs supervised learning algorithms. It is assumed that the number of facts to be conveyed by each generated document, in effect the desired length of the generated texts, has been fixed to a particular value; i.e., all the documents contain the same number of facts. In ILEX and M-PIRO, this number is provided by the user model. Furthermore, it is assumed that a content determination module is available, which selects the particular facts to be conveyed by each document. Our method consists of a sequence of stages, the number of stages being equal to the number of facts to be conveyed by each document. Each stage is responsible for the selection of the fact to be placed at the corresponding position in the resulting document. In our experiments, we set the number of facts per document to six, which seems to be an appropriate value for our particular domain and an average adult user, but this number could vary depending on the application and user type. Two learning algorithms, decision trees (Quinlan, 1993) and instance-based learning (Aha and Kibler, 1991), were explored. The results are compared against two baselines: a simple hand-crafted planner, which always assigns a predefined order, and the majority scheme. The latter selects, among the facts that are available at each position, the fact that occurred most frequently at that position in the training data. Overall, the results indicate that with either of the two learning algorithms our method significantly outperforms both of the baselines, and that there is no significant difference in the performance of the two learning algorithms.
The remainder of this paper is organized as follows. Section 2 presents previous learning approaches to NLG, and discusses their relevance to the work presented here. Section 3 describes our learning approach, including issues such as data representation and system architecture. Section 4 discusses our experiments and their results. Section 5 concludes and highlights plans for future work.
Previous work
In recent years, ML approaches have been introduced to NLG to address problems such as the construction and maintenance of domain and language resources, which is a time-consuming process in systems that use hand-crafted rules.¹ To the best of our knowledge, only two of these approaches (Duboue and McKeown, 2001; Duboue and McKeown, 2002) consider discourse planning. Duboue and McKeown (2001) present an unsupervised ML algorithm based on pattern matching and clustering, which is used to learn ordering constraints among facts. The same authors have also used evolutionary algorithms to learn the tree representation of a planner (Duboue and McKeown, 2002). These works are similar to ours in that we also address the problem of ordering facts. However, Duboue and McKeown follow the lines of schema-based planning, where content determination is not an independent stage, but is interleaved with discourse planning. This means that the discourse planner has overall control of content determination, and cannot handle inputs from an independent content determination module. In contrast, our method can be used with any content determination mechanism that returns a fixed number of facts. This has the benefit that alternative content determination modules can be used without affecting the discourse planner. Moreover, while Duboue and McKeown (2002) learn a tree structure representing the best sequence of facts, our method directly manipulates facts. Mellish et al. (1998) also experiment with genetic algorithms to find the optimal RST tree, which is then mapped to the corresponding sequence of facts. Karamanis and Manurung (2002) use a similar approach that employs constraints from Centering Theory in the genetic search. However, these approaches do not involve any learning: the genetic search is repeated every time the text planner is invoked, i.e., for each new document. In contrast, our method induces a single discourse planner from the training data, which is then used to order any set of facts provided by the content determinator.
ML approaches to NLG have also been used in syntactic and lexical realization (Langkilde and Knight, 1998; Bangalore and Rambow, 2000; Ratnaparkhi, 2000; Varges and Mellish, 2001; Shaw and Hatzivassiloglou, 1999; Malouf, 2000), as well as in sentence planning tasks (Walker et al., 2001; Poesio et al., 2000). In the context of spoken dialogue systems, learning techniques have been used to select among different templates (Oh and Rudnicky, 2000; Walker, 2000). These approaches, however, are not directly relevant to discourse planning.
The problem of ordering semantic units has also been addressed in the context of summarization. Kan and McKeown (2002) use an n-gram model to infer ordering constraints between facts, while Barzilay et al. (2002) manually identify constraints on ordering, using a corpus of ordering preferences among subjects and clustering techniques that identify commonalities among these preferences. The approach presented here, instead of identifying ordering constraints, "learns" the overall ordering of the input facts.
Learning to order facts
In our approach, the discourse planner is trained on manually ordered sequences of facts of a fixed length. Once trained, it is able to determine what it considers to be the most natural ordering of any set of facts, as output by a content determination module, provided that the cardinality of the set is the same as the length of the training sequences. This section describes our approach in more detail, starting from the required data and the pre-processing that they undergo.
Data and pre-processing
Our data was derived from the database of M-PIRO. This database currently contains information about 50 museum exhibits, each of which is associated with a large number of facts. For example, the left column of Table 1 shows the database facts associated with the entity exhibit9. Each generated document is intended to describe a museum exhibit. As already mentioned, in our experiments the number of facts to be conveyed by each document was set to six. That is, when asked to describe exhibit9, the content determination module would choose six of the facts in the left column of Table 1, possibly depending on user modeling information, such as the interests and backgrounds of the users, or information indicating which facts have already been conveyed to the users. We did not use a particular content determination module, because we wanted the discourse planner to be independent from the content determination process. Our goal was to be able to order any set of six facts that could be provided as input by an arbitrary content determination module.
In order to create the dataset of our experiments, we used a program that yields all the possible combinations of six facts for each exhibit. The right column of Table 1 shows an example set of six facts, which can be used as input to the discourse planner. Many combinations, however, looked unreasonable in our domain; e.g., combinations that do not include the subclass fact (descriptions in the museum domain must always inform the reader about the type of the exhibit), or combinations that include facts providing background information about an entity that is not present in the discourse (for instance, combinations that include opposite-technique but not painting-technique-used in Table 1). A refinement operation was performed manually to discard such combinations. We note that in real-life applications, the combinations would be obtained by calling a content determination module several times; hence, no refinement operation would be necessary, as the content determination module would, presumably, never return unreasonable combinations of facts.
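The combination-generating program is not listed in the paper; a minimal equivalent is sketched below, with hypothetical fact tuples and with a "must contain subclass" check standing in for the manual refinement step.

```python
from itertools import combinations

# Hypothetical facts for one exhibit, as (fact-type, arguments) pairs.
facts = [
    ("subclass", ("exhibit9", "vessel")),
    ("creation-period", ("exhibit9", "classical-period")),
    ("painting-technique-used", ("exhibit9", "red-fig-techn")),
    ("exhibit-depicts", ("exhibit9", "entity-1786")),
    ("current-location", ("exhibit9", "mus-du-petit-palais")),
    ("museum-country", ("mus-du-petit-palais", "france")),
    ("creation-time", ("exhibit9", "entity-xyz")),
]

# All candidate six-fact inputs; drop those without the subclass fact
# (the full refinement was done manually in our experiments).
candidates = [c for c in combinations(facts, 6)
              if any(fact_type == "subclass" for fact_type, _ in c)]
print(len(candidates))   # 7 choose 6 = 7 sets, 6 of which keep subclass
```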
After the refinement operation, 880 combinations of 6 facts were left. The facts of each set were manually assigned an order, to reflect what a domain expert considered to be the most natural ordering of the corresponding clauses in the generated texts. Each one of the 880 sets was then used as an instance in the learning algorithms, as will be explained in the following section.

Instance representation and planner architecture

Figure 1 shows the discourse planning architecture that our approach adopts, along with an example of inputs and outputs at each stage. We decompose the fact-ordering task into six multi-class classification problems. Each of the six classifiers selects the fact to be placed at the corresponding position. Each input set of six facts is represented as a vector in a multidimensional space, where dimensions correspond to values of attributes. 42 binary attributes, representing the fact types of the domain, were used. The vector at the top left corner of Figure 1 represents the set of six facts of the right column of Table 1. Each attribute shows whether a particular fact type exists in the input (e.g., creation-period:1) or not (e.g., painting-technique-used:0). Classifiers 2-6 have additional attributes representing the fact types that have already been selected for positions 1-5. More specifically, as shown in Figure 1, the attribute 1st-fact is added from the 2nd classifier onwards, the attribute 2nd-fact is added from the 3rd classifier onwards, and so forth. Therefore, the classifiers make their decisions based on the fact types that are present in their inputs (the set of remaining facts to be ordered) and the fact types that have been selected at the previous positions. We assume that it is not possible to have more than one fact of the same type in the input set of facts, because this is the case in the M-PIRO domain (e.g., we cannot have two facts of type creation-period) as well as in other similar domains. In a more general case, however, one could differentiate between facts of the same type, by enriching, for instance, the attributes, so as to represent information about the entities related to each fact, or by adding new attributes. The output of each classifier is the class value representing the fact type that has been selected for the corresponding position. In the example of Figure 1, the classifiers select the following order: subclass, creation-period, creation-time, painted-by, original-location, current-location. As shown in Figure 1, the sixth classifier has no substantial role, since there is only one fact left in the input, and, consequently, this fact will be placed at the sixth position.
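As a structural sketch of this cascade (the original experiments used WEKA, not the hypothetical scikit-learn setup below), each position gets its own classifier and the attribute vector grows with the previously selected fact types; the toy single-sequence training set is only there to make the example runnable.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Six fact types for the sketch; the real domain has 42 binary attributes.
FACT_TYPES = ["subclass", "creation-period", "creation-time",
              "painted-by", "original-location", "current-location"]
IDX = {f: i for i, f in enumerate(FACT_TYPES)}

def encode(remaining, selected, n_prev):
    """Presence bit per fact type, plus one attribute per earlier position
    holding the index of the fact type selected there."""
    bits = [1 if f in remaining else 0 for f in FACT_TYPES]
    return bits + [IDX[f] for f in selected][:n_prev]

# Toy training data: a single manually ordered sequence
# (the real dataset has 880 ordered sets).
gold = ["subclass", "creation-period", "creation-time",
        "painted-by", "original-location", "current-location"]
classifiers = []
for pos in range(6):
    X = np.array([encode(set(gold[pos:]), gold[:pos], pos)])
    clf = DecisionTreeClassifier().fit(X, [gold[pos]])
    classifiers.append(clf)

def order_facts(fact_set):
    """Run the cascade: each classifier picks the fact for its position."""
    remaining, selected = set(fact_set), []
    for pos, clf in enumerate(classifiers):
        x = np.array([encode(remaining, selected, pos)])
        choice = clf.predict(x)[0]
        selected.append(choice)
        remaining.discard(choice)
    return selected

print(order_facts(set(gold)))
```

A full implementation would, of course, train on all 880 ordered sets and restrict each prediction to the facts still present in the input.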
In a similar manner, a sequence of n classifiers can be used when each document is to convey n, rather than 6, facts. A limitation of this approach is that it cannot be used when n varies across the documents. However, this is not a problem in M-PIRO, where n, in effect the length of the documents, is fixed for each user type: if there are t user types, we train t different document planners, one for each user type; each planner is a sequence of n i classifiers, where n i is the value of n for the corresponding user type (i = 1, …, t).
Experiments and results
In order to evaluate our approach, we performed four experiments. The first experiment was conducted using the majority scheme, where each classifier selects among the available classes (i.e., among the facts that are present in the input set and have not been selected by the previous classifiers) the class (i.e., fact) that was most frequent in its training data. However, this scheme is too primitive, and could not be seen as a safe benchmark for our experiments. For this reason, we constructed a simple planner, hereafter base planner, which always assigns a predefined fixed order defined in collaboration with a museum expert; e.g., subclass should always be placed before creation-period, creation-period should always be placed before creation-time, etc. The base planner was used as a second baseline. In this way, we had a safer benchmark for the performance of the learning schemes. In the two remaining experiments we used instance-based and decision-tree learning. More specifically, we experimented with the k-nearest neighbour algorithm (Aha and Kibler, 1991), with k = 1, and the C4.5 algorithm (Quinlan, 1993). All the experiments were performed using the machine learning software of WEKA (Witten and Frank, 1999). Figure 2 presents the accuracy scores of each of the six classifiers, for each learning scheme. The results were obtained using 10-fold cross-validation. That is, the dataset (880 vectors) was divided into ten disjoint parts (folds), and each experiment was repeated 10 times. Each time, a different part was used for testing, and the remaining 9 parts were used for training. The dataset was stratified, i.e., the class distribution in each fold was approximately the same as in the full dataset. The reported scores are averaged over the 10 iterations. Accuracy measures the percentage of correct selections at each classifier (position) compared to the selections made by the human annotator. All schemes have 100% accuracy at the selection of the 1st and 6th fact. This happens because the first classifier always selects the fact subclass, which is always the first fact in our domain, while the sixth classifier has no alternative choice, since only one fact has been left in the input. At the other positions, both C4.5 and 1-NN perform better than the two baselines; C4.5 seems to have a slightly better performance than 1-NN. Paired two-tailed t-tests at p = 0.005 indicate that the observed differences in accuracy between the baselines and the ML schemes are statistically significant; the only exception is the selection of the 2nd fact, where there is no significant difference between the base planner and 1-NN.
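The significance tests can be reproduced with any standard statistics package; the sketch below uses scipy's paired t-test on hypothetical per-fold accuracies (the actual fold scores are not listed in the paper).

```python
from scipy.stats import ttest_rel

# Hypothetical 10-fold accuracies for C4.5 and the base planner at one position.
c45_folds  = [0.91, 0.89, 0.93, 0.90, 0.92, 0.88, 0.94, 0.90, 0.91, 0.92]
base_folds = [0.80, 0.78, 0.83, 0.79, 0.81, 0.77, 0.84, 0.80, 0.79, 0.82]

t_stat, p_value = ttest_rel(c45_folds, base_folds)   # paired, two-tailed
print(f"t = {t_stat:.2f}, p = {p_value:.5f}, "
      f"significant at 0.005: {p_value < 0.005}")
```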
The word "but" in the human text of Figure 4 implies the use of a rhetorical relation; the presence of this relation suggests a possible explanation of why the human text is ordered differently than the one produced by the system. The misplaced fact is penalized three times when computing the accuracy scores of the six classifiers: at the second classifier, where the fact exhibit-portrays is selected instead of made-of, at the third classifier, where creation-period is selected instead of exhibit-portrays, and at the fourth classifier, where made-of is selected instead of creation-period. This implies that the accuracy scores that were presented above are a very strict measure of the performance of our method, and, in fact, our method may actually be performing even better than what the scores indicate.
This exhibit is a portrait. It portrays Alexander the Great and was created during the Hellenistic period. It is made of marble. What we see in the picture is a roman copy. Today it is located at the archaeological museum of Thassos. This exhibit is a portrait. It is made of marble and portrays Alexander the Great. It was created during the Hellenistic period, but what we see in the picture is a roman copy. Today it is located in the archaeological museum of Thassos.
Figure 4: Ordering of facts as specified by the human annotator
We are currently trying to devise evaluation measures that are better suited to discourse planning, and to NLG in general. More specifically, we plan to apply metrics that assign different penalties depending on the importance of an error, based on the edit distance between the output of the discourse planner and the reference corpus. We also plan to correlate these metrics with human evaluation, as proposed by Reiter and Sripada (2002).

Conclusions and future work

This paper has presented a machine learning approach to the fact-ordering subtask of discourse planning. We have decomposed the problem into a sequence of multi-class classification stages, where each stage selects the fact to be placed at the corresponding position. Experiments performed using the C4.5 and k-NN learning algorithms indicate that our method performs significantly better than both a sequence of simple majority classifiers and a set of manually constructed ordering rules.
Our method can be used with any content determination module that selects a fixed number of facts per document and user type, and gives rise to planners that can be easily retrained for other similar application domains, where sample manually ordered sequences of facts can be obtained. Compared to approaches that employ manually constructed rules, our method has the advantage that it does not require language technology expertise, and, hence, can be used to construct authoring tools that will allow domain experts to control the order of the facts in the generated documents. Furthermore, unlike previous machine learning approaches, our method does not interleave fact ordering with content determination.
As already mentioned, we plan to move towards learning richer discourse plans, which apart from ordering information will also include rhetorical relations, although our experience so far indicates that even just ordering the facts in a natural way can lead to quite acceptable texts. We are currently investigating a more flexible representation that will not be limited by a fixed number of facts per page and, apart from the absolute order of facts, will take into account the relative ordering between facts (e.g., by using n-grams). Further work is planned in order to devise better evaluation measures, and improve the performance of our planners by considering other learning algorithms.
Figure 1: Architecture diagram
Figure 2: Accuracy scores at each classification stage
Table 1: Database facts and facts selected as input to the discourse planner
¹ For an extensive bibliography on statistical and machine learning approaches to NLG, see: http://www.iit.demokritos.gr/~adimit/bibliography.html.
| [] |
[
"A MULTISET VERSION OF EVEN-ODD PERMUTATIONS IDENTITY",
"A MULTISET VERSION OF EVEN-ODD PERMUTATIONS IDENTITY"
] | [
"Hossein Teimoori ",
"Faal "
] | [] | [] | In this paper, we give a new bijective proof of a multiset analogue of even-odd permutations identity. This multiset version is equivalent to the original coin arrangements lemma which is a key combinatorial lemma in the Sherman's Proof of a conjecture of Feynman about an identity on paths in planar graphs related to combinatorial solution of two dimensional Ising model in statistical physics. | 10.1142/s0129054119500163 | [
"https://arxiv.org/pdf/2206.01291v1.pdf"
] | 202,127,705 | 2206.01291 | 4e97933463d573dbdb98b5f977444182c40e1fd1 |
A MULTISET VERSION OF EVEN-ODD PERMUTATIONS IDENTITY
2 Jun 2022
Hossein Teimoori Faal
In this paper, we give a new bijective proof of a multiset analogue of even-odd permutations identity. This multiset version is equivalent to the original coin arrangements lemma which is a key combinatorial lemma in the Sherman's Proof of a conjecture of Feynman about an identity on paths in planar graphs related to combinatorial solution of two dimensional Ising model in statistical physics.
Introduction and Motivation
The Ising model [1] is a theoretical physics model of the nearest-neighbor interactions in a crystal structure. In the Ising model, the vertices of a graph G = (V, E) represent particles and the edges describe interactions between pairs of particles. The most common example of a two-dimensional Ising model is a planar square lattice where each particle interacts only with its neighbors. A factor (weight) J_{ij} is assigned to each edge {i, j}, where this factor describes the nature of the interaction between particles i and j. A physical state of the system is an assignment of σ_i ∈ {+1, −1} to each vertex i. The Hamiltonian (or energy function) of the system is defined as
$$H(\sigma) = -\sum_{\{i,j\} \in E} J_{ij}\, \sigma_i \sigma_j.$$
The distribution of the physical states over all possible energy levels is encapsulated in the partition function:
$$Z(\beta, G) = \sum_{\sigma} e^{-\beta H(\sigma)},$$
where β stands for 1/(KT), in which K is a constant and T is a variable representing the temperature. Motivated by a generalization of a cycle in a graph, a set A of edges is called even if each vertex of V is incident with an even number of edges of A. The generating function of even subsets, denoted by E(G, x), can be defined as
$$E(G, x) = \sum_{A:\ A \text{ is even}}\ \prod_{e \in A} x_e.$$
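As a concrete illustration (not from the paper), the even sets of a small graph can be enumerated by brute force. The Python sketch below, with hypothetical names, lists the even edge subsets of a triangle; for that graph only the empty set and the full edge set are even, so E(G, x) = 1 + x_{01} x_{02} x_{12}.

```python
from itertools import combinations
from collections import Counter

def even_edge_subsets(edges):
    """Return all subsets A of `edges` in which every vertex is incident
    with an even number of edges of A (brute force over 2^|E| subsets)."""
    result = []
    for r in range(len(edges) + 1):
        for A in combinations(edges, r):
            deg = Counter()
            for (u, v) in A:
                deg[u] += 1
                deg[v] += 1
            if all(d % 2 == 0 for d in deg.values()):
                result.append(A)
    return result

# Triangle on vertices {0, 1, 2}: only () and the full edge set are even.
print(even_edge_subsets([(0, 1), (0, 2), (1, 2)]))
```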
It turns out that the Ising partition function for a graph G may be expressed in terms of the generating function of the even sets of the same graph G.
More precisely, we have the following Van der Waerden's formula [2]
$$Z(G, \beta) = 2^{|V|} \prod_{\{i,j\} \in E} \cosh(\beta J_{ij}) \; E(G, x)\Big|_{x_{ij} = \tanh(\beta J_{ij})}.$$
Now, let G = (V, E) be a planar graph embedded in the plane, and with each edge e we associate a formal variable x_e, which can be seen as a weight of that edge. Let A = (V, A(G)) be an arbitrary orientation of G. If e ∈ E, then a_e will denote the orientation of e in A(G) and a_e^{-1} will be the orientation reversed to a_e. We put x_{a_e} = x_{a_e^{-1}} = x_e. A circular sequence p = v_1, a_1, v_2, a_2, . . . , a_n, (v_{n+1} = v_1) is called a non-periodic closed walk if the following conditions are satisfied: a_i ∈ {a_e, a_e^{-1} : e ∈ E}, a_{i+1} ≠ a_i^{-1}, and (a_1, . . . , a_n) ≠ Z^m for every sequence Z and m > 1. We also let X(p) = \prod_{i=1}^{n} x_{a_i}. We further let sign(p) = (−1)^{n(p)}, where n(p) is the rotation number of p, i.e., the number of integral revolutions of the tangent vector. Finally, put W(p) = sign(p)X(p). There is a natural equivalence on non-periodic closed walks: p is equivalent to the reversed p. Each equivalence class has two elements and will be denoted by [p]. We set W([p]) = W(p) and note that this definition is correct since equivalent walks have the same sign. The following beautiful formula is due to Feynman, who conjectured it but did not give a proof of it. It was Sherman who gave a proof, based on a key combinatorial lemma on coin arrangements [3].
$$E(G, x) = \prod_{[p]} \big(1 - W([p])\big),$$
where the product is over all equivalence classes of non-periodic closed walks of G.
Here is the original statement of the coin arrangements lemma: Suppose we have a fixed collection of N objects of which m_1 are of one kind, m_2 are of a second kind, . . ., and m_n are of an n-th kind. Let b_{N,k} be the number of exhaustive unordered arrangements of these symbols into k disjoint, nonempty, circularly ordered sets such that no two circular orders are the same and none are periodic. Then, we have
$$\sum_{k=1}^{N} (-1)^k b_{N,k} = 0, \qquad (N > 1).$$
It is worth noting that when the collection of objects constitutes a set of n elements, the numbers b_{n,k} are exactly the Stirling cycle numbers, that is, the numbers of permutations of the set {1, 2, . . . , n} (or n-permutations) with exactly k cycles in their decompositions into disjoint cycles. In this particular case, the coin arrangements lemma can be reformulated as the following well-known identity in the combinatorics of permutations.
Proposition 1.2 (Even-Odd Permutations Identity). For any integer n > 1, the number of even n-permutations is the same as the number of odd n-permutations.
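As a quick sanity check (added here for illustration, not part of the original paper), the identity can be verified by brute force for small n: a permutation is even or odd according to the parity of its cycle index n − k. The following Python sketch counts both classes.

```python
from itertools import permutations
from math import factorial

def cycle_count(perm):
    """Number of cycles of a 0-based permutation given as a tuple."""
    seen, k = set(), 0
    for i in range(len(perm)):
        if i not in seen:
            k += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return k

for n in range(2, 7):
    even = sum((n - cycle_count(p)) % 2 == 0 for p in permutations(range(n)))
    odd = factorial(n) - even
    print(n, even, odd)   # the two counts agree: n!/2 and n!/2
```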
Our main goal here is to formulate a weighted version of the even-odd permutations identity in the multiset setting.
Basic Definitions and Notation
As Knuth has noted in [7, p. 36], the term multiset was suggested by N. G. de Bruijn in a private communication to him. Roughly speaking, a multiset is an unordered collection of elements in which repetition is allowed.
Definition 2.1 (Multiset). Let Σ = {a_1, . . . , a_n} be a finite alphabet. A multiset M over Σ, denoted by [a_1^{m_1}, a_2^{m_2}, . . . , a_n^{m_n}], is a finite collection of elements of Σ with m_1 occurrences of a_1, m_2 occurrences of a_2, . . ., and m_n occurrences of a_n. The number N = m_1 + m_2 + · · · + m_n is called the cardinality of M, and m_i (1 ≤ i ≤ n) is called the multiplicity of the element a_i. Definition 2.2 (Permutation of a multiset). Let M be a multiset over a finite alphabet Σ of cardinality N. We also let i ≤ N be a given integer. Then an i-permutation of M is defined as an ordered arrangement of i elements of M. In particular, an N-permutation of M is also called a permutation of M. It is worth noting that, by a simple counting argument, one can show that the number of permutations of the multiset M = [a_1^{m_1}, a_2^{m_2}, . . . , a_n^{m_n}] of cardinality N is equal to
$$\frac{N!}{m_1!\, m_2! \cdots m_n!}.$$
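For instance (an illustrative check, not from the paper), the count can be confirmed by enumerating the distinct arrangements directly:

```python
from itertools import permutations
from math import factorial

word = "aabcba"                    # the multiset M = [a^3, b^2, c^1], N = 6
formula = factorial(len(word))
for letter in set(word):
    formula //= factorial(word.count(letter))
brute = len(set(permutations(word)))
print(formula, brute)              # 60 60
```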
In the rest of this section, we quickly review the basics of the combinatorics of words; the reader can consult the reference [5]. Let Σ be a finite alphabet. The elements of Σ are called letters. A finite sequence of elements of Σ is called a word (or string) over the alphabet Σ. An empty sequence of letters is called an empty word and is denoted by λ. The set of all words over the alphabet Σ will be denoted by Σ⋆. We also denote the set of non-empty words by Σ⁺. A word u is called a factor (resp. a prefix, resp. a suffix) of a word w if there exist words w_1 and w_2 such that w = w_1 u w_2 (resp. w = u w_2, resp. w = w_1 u). The k-th power of a word w is defined by w^k = w w^{k-1}, with the convention that w^0 = λ. A word w ∈ Σ⁺ is called primitive if the equation w = u^n (u ∈ Σ⁺) implies n = 1. Two words w and u are conjugate if there exist two words w_1 and w_2 such that w = w_1 w_2 and u = w_2 w_1. It is easy to see that the conjugacy relation is an equivalence relation. A conjugacy class (or necklace) is a class of this equivalence relation. For an ordered alphabet (Σ, <), the lexicographic order ⪯ on (Σ⋆, <) is defined by letting w_1 ⪯ w_2 if
• w_2 = w_1 u for some u ∈ Σ⋆, or
• w_1 = ras, w_2 = rbt with a < b, for a, b ∈ Σ and r, s, t ∈ Σ⋆.
In particular, if w_1 ⪯ w_2 and w_1 is not a proper prefix of w_2, we write w_1 ⊳ w_2.
A word is called a Lyndon word if it is primitive and the smallest word, with respect to the lexicographic order, in its conjugacy class. The following factorization of words as a non-increasing product of Lyndon words is of fundamental importance in the combinatorics of words. From now on, we will denote the set of all Lyndon words by L.
Theorem 2.1 (Lyndon Factorization). Any word w ∈ Σ⁺ can be written uniquely as a non-increasing product of Lyndon words:
$$w = l_1 l_2 \cdots l_h, \qquad l_i \in L, \qquad l_1 \succeq l_2 \succeq \cdots \succeq l_h.$$
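This factorization can be computed in linear time by Duval's algorithm. The following Python sketch (added for illustration; the paper itself does not discuss algorithms) returns the non-increasing Lyndon factors of a word:

```python
def lyndon_factorization(w):
    """Duval's algorithm: factor w into a lexicographically
    non-increasing product of Lyndon words in O(len(w)) time."""
    factors, i, n = [], 0, len(w)
    while i < n:
        j, k = i + 1, i
        while j < n and w[k] <= w[j]:
            k = i if w[k] < w[j] else k + 1
            j += 1
        while i <= k:                 # emit the Lyndon factor(s) found
            factors.append(w[i:i + j - k])
            i += j - k
    return factors

print(lyndon_factorization("banana"))   # ['b', 'an', 'an', 'a']
print(lyndon_factorization("21312"))    # ['2', '13', '12']
```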
One of the important results about the characterization of Lyndon words is the following.
Proposition 2.2. A word w ∈ Σ⁺ is a Lyndon word if and only if w ∈ Σ or w = rs with r, s ∈ L and r ⊳ s. Moreover, if there exists a pair (r, s) with w = rs such that s, w ∈ L and s is of maximal length, then r ∈ L and r ⊳ rs ⊳ s. Definition 2.3. For w ∈ L\Σ, a Lyndon word consisting of more than a single letter, the pair (r, s) with w = rs such that r, s ∈ L and s of maximal length is called the standard factorization of the Lyndon word w.
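Equivalently, the right factor s of the standard factorization is the longest proper suffix of w that is itself Lyndon. A brute-force sketch (illustrative only, using the folklore test that a word is Lyndon iff it is strictly smaller than all of its proper suffixes):

```python
def is_lyndon(w):
    """A nonempty word is Lyndon iff it is strictly smaller than
    every one of its proper suffixes."""
    return len(w) > 0 and all(w < w[i:] for i in range(1, len(w)))

def standard_factorization(w):
    """Return (r, s) with w = rs, both Lyndon, and s of maximal length."""
    assert is_lyndon(w) and len(w) >= 2
    for i in range(1, len(w)):        # scan suffixes from longest to shortest
        if is_lyndon(w[i:]):
            return w[:i], w[i:]

print(standard_factorization("aab"))    # ('a', 'ab')
print(standard_factorization("aabab"))  # ('aab', 'ab')
```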
Multiset Version of Even-Odd Permutations
In this section, we first briefly review the basics of the combinatorics of permutations; for a more detailed introduction, see [6]. From now on, we will denote the set {1, 2, . . . , n} by [n]. Recall that a permutation τ of a set [n] (or simply an n-permutation) is a bijective function τ : [n] → [n]. A one-line representation of τ is denoted by τ = τ(1)τ(2)···τ(n). Recall from abstract algebra that any permutation can be written as a product of disjoint cycles; a representation of a permutation in terms of disjoint cycles is called a cycle representation. The set of all permutations of the set [n] will be denoted by S_n. Definition 3.1 (Cycle Index). Let τ = c_1 c_2 ··· c_k be the cycle representation of the permutation τ ∈ S_n. Then, the number n − k is called the cycle index of τ and will be denoted by ind_c(τ). Definition 3.2 (Inversion). Let τ = τ(1)τ(2)···τ(n) ∈ S_n be a permutation. We say that (τ(i), τ(j)) is an inversion of τ if i < j and τ(i) > τ(j).
We will denote the number of inversions of a permutation τ by inv(τ). We recall the well-known fact, due to Cauchy [8], that for any permutation τ ∈ S_n the parities of inv(τ) and ind_c(τ) are the same. Therefore, we can divide the class of all permutations S_n into two important subclasses.
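This parity fact is easy to check exhaustively for small n; the sketch below (illustrative only) compares the two parities for every permutation of [5].

```python
from itertools import permutations

def inversions(p):
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def cycle_count(p):
    seen, k = set(), 0
    for i in range(len(p)):
        if i not in seen:
            k += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = p[j]
    return k

n = 5
assert all(inversions(p) % 2 == (n - cycle_count(p)) % 2
           for p in permutations(range(n)))
print(f"Cauchy's parity fact verified for n = {n}")
```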
Definition 3.3 (Even-Odd Permutations). A permutation τ = c_1 c_2 ··· c_k in S_n is called an even (resp. odd) n-permutation if ind_c(τ) is even (resp. odd). Considering the above discussion, the coin arrangements lemma in the case that there exists exactly one coin of each type can be restated as Proposition 3.1 below. In the rest of this section, we attempt to formulate a multiset version of this well-known result in the combinatorics of permutations. To find the right formulation of the coin arrangements lemma for multisets, we first have to replace permutations of the set [n] with words of length N defined on the multiset M = [1^{m_1}, 2^{m_2}, . . . , n^{m_n}] of cardinality N. The next step is to find the analogue of the decomposition of permutations into disjoint cycles; the Lyndon factorization of a word in which all factors are distinct is a suitable candidate. Hence, we come up with the following analogue of the cycle index (Definition 3.4): we call any permutation w = w_1 w_2 ··· w_N of M an N-word over M, and if w = l_1 l_2 ··· l_k is the Lyndon factorization of w in which l_1 ⊲ l_2 ⊲ . . . ⊲ l_k, then the tuple tup(w) = (l_k, . . . , l_2, l_1) is called the Lyndon tuple of the word w over M. An N-word w ∈ Σ⋆ over M is said to be an even (resp. odd) N-word if its Lyndon index i_l(w) = N − k (Definition 3.5) is even (resp. odd). Thus, we finally obtain the reformulation of Sherman's original coin arrangements lemma stated in Proposition 3.2. In the next section, we will give a bijective proof of a weighted version of this coin arrangements lemma.
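Combining the pieces above, Proposition 3.2 can be verified computationally for a small multiset. The sketch below (an illustrative check; variable names are ours) factorizes every distinct 4-word over M = [1^2, 2, 3], keeps only the words whose Lyndon factors are all distinct, and compares the two parity classes of the Lyndon index.

```python
from itertools import permutations

def lyndon_factorization(w):           # Duval's algorithm, as sketched earlier
    factors, i, n = [], 0, len(w)
    while i < n:
        j, k = i + 1, i
        while j < n and w[k] <= w[j]:
            k = i if w[k] < w[j] else k + 1
            j += 1
        while i <= k:
            factors.append(w[i:i + j - k])
            i += j - k
    return factors

def lyndon_index(w):
    """N - k when the Lyndon factors are pairwise distinct, else None."""
    fac = lyndon_factorization(w)
    return len(w) - len(fac) if len(set(fac)) == len(fac) else None

words = {''.join(p) for p in permutations("1123")}   # M = [1^2, 2^1, 3^1]
indices = [lyndon_index(w) for w in words]
even = sum(1 for i in indices if i is not None and i % 2 == 0)
odd = sum(1 for i in indices if i is not None and i % 2 == 1)
print(even, odd)   # 5 5: equal, as Proposition 3.2 asserts
```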
Weighted Coin Arrangements Lemma
In this section, we first give a weighted reformulation of the coin arrangements lemma. Then, we present a bijective proof of our main result by constructing a weight-preserving involution on the set of words. Before doing so, for the sake of completeness, we present the original proof of Sherman, which is based on the so-called Witt identity from combinatorial group theory [4].
$$\prod_{m_1, \dots, m_k \ge 0} \big(1 - x_1^{m_1} \cdots x_k^{m_k}\big)^{M(m_1, \dots, m_k)} = 1 - x_1 - \cdots - x_k.$$
Proof. By using Lyndon factorization and formal power series identities on words, we have
$$\frac{1}{1 - x_1 - \cdots - x_k} \;=\; \sum_{w \in \{x_1, \dots, x_k\}^{\star}} w \;=\; \prod_{l \in L} \frac{1}{1 - l} \;=\; \prod_{m_1, \dots, m_k \ge 0} \frac{1}{\big(1 - x_1^{m_1} \cdots x_k^{m_k}\big)^{M(m_1, \dots, m_k)}}.$$
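As an aside (our illustration, not part of the proof), the one-variable specialization of the identity, obtained by setting every x_i = x, reads prod_n (1 - x^n)^{M(n)} = 1 - kx, where M(n) counts all Lyndon words of length n over k letters. It can be checked numerically with truncated power series:

```python
from itertools import product

def is_lyndon(w):
    return all(w < w[i:] for i in range(1, len(w)))

def num_lyndon(k, n):
    """Brute-force count of Lyndon words of length n over k letters."""
    letters = [chr(ord('a') + i) for i in range(k)]
    return sum(is_lyndon(''.join(t)) for t in product(letters, repeat=n))

k, N = 2, 8
poly = [1] + [0] * N                    # coefficients of the truncated series
for n in range(1, N + 1):
    for _ in range(num_lyndon(k, n)):   # multiply by (1 - x^n) per Lyndon word
        for d in range(N, n - 1, -1):
            poly[d] -= poly[d - n]
print(poly)                             # [1, -2, 0, 0, 0, 0, 0, 0, 0]
```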
Now, considering the Witt identity, the proof of the coin arrangements lemma can be obtained simply by equating the coefficients of monomials of the same degree on both sides of the identity. To obtain a weighted generalization of the coin arrangements lemma, we first associate a formal variable u_a with each letter a of the alphabet Σ, which can be viewed as a weight of that letter. For any Lyndon word l = i_1 i_2 ··· i_h, we define the weight wt(l) of the Lyndon word l ∈ L as the product of the weights of its letters, that is, wt(l) = u_{i_1} u_{i_2} ··· u_{i_h}. The weight of an N-word w ∈ Σ⋆ is defined as wt(w) = \prod_{l ∈ tup(w)} wt(l). From now on, we will denote the set of all even (resp. odd) N-words over M by E (resp. O). Thus, a weighted version of the coin arrangements lemma can be read as follows (Theorem 4.2). The following lemma is the key in the proof of the above theorem.
Proof.
i: In this case, it is obvious that s is of maximal length. Hence, by Definition 2.3, the result is immediate. ii: Assume on the contrary that s is not of maximal length. Then there exists a Lyndon word s′ = s′_1 s (s′ ⊳ s) where s′ is of maximal length and l = r′_1 s′_1 with r′_1 ∈ L. Now if s_1 ⊳ s′_1, since s′_1 ⊳ s, it implies that s_1 ⊳ s, which is a contradiction. On the other hand, since r = (r_1, s_1) is the standard factorization of r, s′_1 must be a proper right factor of s_1. But we already know that every Lyndon word is smaller than any of its proper right factors. Thus we get s_1 ⊳ s′_1, which is again a contradiction.
The Proof of Theorem 4.2. For a given N-word w with Lyndon tuple tup(w) = (l_1, l_2, . . . , l_k), we call the Lyndon word l_1 splittable if l_1 is not a single letter and the standard factorization l_1 = (r_1, s_1) satisfies s_1 ⊳ l_2. Now, one of the following cases may happen:
• The Lyndon word l_1 is splittable. Then the mapping f : E → O, w′ = f(w), tup(w′) = (r_1, s_1, l_2, . . . , l_k), is a well-defined weight-preserving mapping (because r_1 ⊳ s_1 ⊳ l_2 and wt(l_1) = wt(r_1) wt(s_1)). • The Lyndon word l_1 is not splittable. Then the mapping g : O → E, w′ = g(w), tup(w′) = (l_0, l_3, . . . , l_k) with l_0 = l_1 l_2, is a well-defined weight-preserving mapping (because l_0 ∈ L and l_0 ⊳ l_2 ⊳ l_3, with wt(l_0) = wt(l_1) wt(l_2)).
Clearly, the mappings f and g are inverses of one another. Thus, the function f is a weight-preserving bijection from the set of even N-words to the set of odd N-words, and the conclusion immediately follows.
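The involution is easy to experiment with. The sketch below (our illustration; tuples are written in increasing order l_1 ⊳ l_2 ⊳ ···) splits l_1 when it is splittable and otherwise merges l_1 with l_2; applied to the Lyndon tuple of the odd word 2131 from Example 3.3 it produces the tuple of the even word 2113, and applying it twice returns the original tuple.

```python
def is_lyndon(w):
    return all(w < w[i:] for i in range(1, len(w)))

def standard_factorization(l):
    for i in range(1, len(l)):
        if is_lyndon(l[i:]):
            return l[:i], l[i:]

def toggle(tup):
    """Parity-switching involution on increasing tuples of distinct Lyndon words."""
    l1, rest = tup[0], tup[1:]
    if len(l1) > 1:
        r1, s1 = standard_factorization(l1)
        if not rest or s1 < rest[0]:      # l1 is splittable
            return (r1, s1) + rest
    return (l1 + rest[0],) + rest[1:]     # merge l1 and l2 into l0

t = ('1', '13', '2')      # Lyndon tuple of the word 2131 (factors 2, 13, 1)
print(toggle(t))          # ('113', '2'): the tuple of the word 2113
print(toggle(toggle(t)) == t)   # True
```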
Theorem 1.1 (Feynman and Sherman). Let G be a planar graph. Then
$$E(G, x) = \prod_{[p]} \big(1 - W([p])\big),$$
where the product is over all equivalence classes of non-periodic closed walks of G.
Example 2.1. For the alphabet Σ = {a, b, c}, the string σ = aabcba is a permutation of the multiset M = [a^3, b^2, c^1].
Example 2.2. Let Σ = {1, 2, 3} be an ordered alphabet. Then l_1 = 1123 and l_2 = 1223 are Lyndon words, but l_3 = 1131 is not a Lyndon word.
Example 3.1. Consider the bijective function τ : [5] → [5] with τ(1) = 3, τ(2) = 4, τ(3) = 1, τ(4) = 5, τ(5) = 2. A one-line representation of τ is τ = 34152, and the cycle representation of τ is τ = (13)(245).
Example 3.2. For n = 5, the permutation τ = 13524 = (1)(2354) has cycle index equal to 3 and hence τ is an odd permutation, but the cycle index of τ′ = 21354 = (12)(3)(45) is 2 and so the permutation τ′ is even.
Proposition 3.1 (Set version of coin arrangements). For any integer n > 1, the number of even n-permutations is the same as the number of odd n-permutations.
Definition 3.4 (Lyndon tuple). Let Σ = {1, 2, . . . , n} be a finite ordered alphabet and M = [1^{m_1}, 2^{m_2}, . . . , n^{m_n}] be a multiset over Σ of cardinality N. We call any permutation w = w_1 w_2 ··· w_N of M an N-word over M. If w = l_1 l_2 ··· l_k is a Lyndon factorization of w in which l_1 ⊲ l_2 ⊲ . . . ⊲ l_k, then the tuple tup(w) = (l_k, . . . , l_2, l_1) is called a Lyndon tuple of the word w over M.
Remark 3.1. It is worth noting that a Lyndon tuple of a word consists only of distinct Lyndon words.
Definition 3.5 (Lyndon index). Let Σ = {1, 2, . . . , n} be a finite ordered alphabet and M = [1^{m_1}, 2^{m_2}, . . . , n^{m_n}] be a multiset over Σ of cardinality N. For an N-word w ∈ Σ⋆ over M with tup(w) = (l_1, l_2, . . . , l_k) such that l_1 ⊳ l_2 ⊳ . . . ⊳ l_k, the Lyndon index of w, denoted by i_l(w), is defined to be the number N − k.
Definition 3.6 (Even-Odd Words). Let Σ = {1, 2, . . . , n} be a finite ordered alphabet and M = [1^{m_1}, 2^{m_2}, . . . , n^{m_n}] be a multiset over Σ of cardinality N. An N-word w ∈ Σ⋆ over M is said to be an even (resp. odd) N-word if the Lyndon index i_l(w) of w is even (resp. odd).
Example 3.3. For an ordered alphabet Σ = {1, 2, 3} and a multiset M = [1^2, 2, 3], the 4-word w_1 = 2113 = (2)(113) has Lyndon index equal to 2 and hence is an even 4-word, but the Lyndon index of w_2 = 2131 = (2)(13)(1) is 1 and so the 4-word w_2 is odd.
Proposition 3.2 (Multiset version of even-odd permutations identity). Let Σ = {1, 2, . . . , n} be a finite ordered alphabet and M = [1^{m_1}, 2^{m_2}, . . . , n^{m_n}] be a multiset over Σ of cardinality N > 1. Then, the number of even N-words over M is the same as the number of odd N-words over M.
Proposition 4.1. Let Σ be a finite alphabet of k letters. Let M(m_1, . . . , m_k) be the number of Lyndon words with m_1 occurrences of a_1, m_2 occurrences of a_2, . . ., and m_k occurrences of a_k. Let x_1, . . . , x_k be commuting variables. Then
$$\prod_{m_1, \dots, m_k \ge 0} \big(1 - x_1^{m_1} \cdots x_k^{m_k}\big)^{M(m_1, \dots, m_k)} = 1 - x_1 - \cdots - x_k.$$
Theorem 4.2 (Weighted Coin Arrangements Lemma). For any multiset M of cardinality N > 1, the weighted sum of even N-words over M is the same as the weighted sum of odd N-words over M. In other words, we have
$$\sum_{w \in E} wt(w) = \sum_{w \in O} wt(w).$$
Lemma 4.3. i: Let l = rs where r, s ∈ L with r ⊳ s, and let r be a single-letter Lyndon word. Then, l = (r, s) is the standard factorization of l. ii: Let l = rs where r, s ∈ L with r ⊳ s, and let r = (r_1, s_1) be the standard factorization of r with r_1 ⊳ s_1. Then, l = (r, s) is the standard factorization of l.
[1] E. Ising, Beitrag zur Theorie des Ferromagnetismus, Z. Phys., 31 (1925), pp. 253-258.
[2] B. L. van der Waerden, Die lange Reichweite der regelmässigen Atomanordnung in Mischkristallen, Z. Phys., 118 (1941), pp. 473-488.
[3] S. Sherman, Combinatorial aspects of the Ising model for ferromagnetism. I. A conjecture of Feynman on paths and graphs, J. Math. Phys., 1 (1960), pp. 202-217.
[4] M. Hall, The Theory of Groups, Chelsea Publishing Co., New York, 1976. Reprinting of the 1968 edition.
[5] M. Lothaire, Combinatorics on Words, Encyclopedia of Mathematics, vol. 17, Cambridge University Press, Cambridge, 1997. Reprint of the 1983 original.
[6] M. Bona, Combinatorics of Permutations, Discrete Mathematics and its Applications, Chapman & Hall/CRC, 2004.
[7] D. E. Knuth, The Art of Computer Programming, Vol. 2 (Seminumerical Algorithms), Second Edition, Addison-Wesley, Reading, Mass., 1981.
[8] A. L. Cauchy, Mémoire sur les fonctions qui ne peuvent obtenir que deux valeurs égales et de signes contraires par suite des transpositions opérées entre les variables qu'elles renferment, J. de l'École Polytechnique, vol. 10, pp. 29-112; Oeuvres Complètes, ser. II, pp. 91-169.
| [] |
[
"Learning Stylometric Representations for Authorship Analysis",
"Learning Stylometric Representations for Authorship Analysis"
] | [
"Steven H H Ding ",
"Benjamin C M Fung ",
"William K Cheung ",
"\nSchool of Information Studies\nSchool of Information Studies\nMcGill University\nCanada\n",
"\nFARKHUND IQBAL\nCollege of Technological Innovation\nMcGill University\nCanada\n",
"\nDepartment of Computer Science\nZayed University\nUAE\n",
"\nHong Kong Baptist University\nHong Kong\n"
] | [
"School of Information Studies\nSchool of Information Studies\nMcGill University\nCanada",
"FARKHUND IQBAL\nCollege of Technological Innovation\nMcGill University\nCanada",
"Department of Computer Science\nZayed University\nUAE",
"Hong Kong Baptist University\nHong Kong"
] | [] | Authorship analysis (AA) is the study of unveiling the hidden properties of authors from a body of exponentially exploding textual data. It extracts an author's identity and sociolinguistic characteristics based on the reflected writing styles in the text. It is an essential process for various areas, such as cybercrime investigation, psycholinguistics, political socialization, etc. However, most of the previous techniques critically depend on the manual feature engineering process. Consequently, the choice of feature set has been shown to be scenario-or dataset-dependent. In this paper, to mimic the human sentence composition process using a neural network approach, we propose to incorporate different categories of linguistic features into distributed representation of words in order to learn simultaneously the writing style representations based on unlabeled texts for authorship analysis. In particular, the proposed models allow topical, lexical, syntactical, and character-level feature vectors of each document to be extracted as stylometrics. We evaluate the performance of our approach on the problems of authorship characterization and authorship verification with the Twitter, novel, and essay datasets. The experiments suggest that our proposed text representation outperforms the bag-of-lexical-n-grams, Latent Dirichlet Allocation, Latent Semantic Analysis, PVDM, PVDBOW, and word2vec representations. | 10.1109/tcyb.2017.2766189 | [
"https://arxiv.org/pdf/1606.01219v1.pdf"
] | 4,379,758 | 1606.01219 | 48ba033808110445145cc4a93476b63d7c2f523e |
Learning Stylometric Representations for Authorship Analysis
Steven H H Ding
Benjamin C M Fung
William K Cheung
School of Information Studies
School of Information Studies
McGill University
Canada
FARKHUND IQBAL
College of Technological Innovation
McGill University
Canada
Department of Computer Science
Zayed University
UAE
Hong Kong Baptist University
Hong Kong
Learning Stylometric Representations for Authorship Analysis
K.4.1 [Computers and Society]: Public Policy Issues-Abuse and crime involving computers; I.7.5 [Document and Text Processing]: Document Capture-Document analysis; I.2.7 [Natural Language Processing]: Text analysis. General Terms: Design, Algorithms, Experimentation. Additional Key Words and Phrases: Authorship analysis, computational linguistics, feature learning, text mining
Authorship analysis (AA) is the study of unveiling the hidden properties of authors from a body of exponentially exploding textual data. It extracts an author's identity and sociolinguistic characteristics based on the reflected writing styles in the text. It is an essential process for various areas, such as cybercrime investigation, psycholinguistics, political socialization, etc. However, most of the previous techniques critically depend on the manual feature engineering process. Consequently, the choice of feature set has been shown to be scenario-or dataset-dependent. In this paper, to mimic the human sentence composition process using a neural network approach, we propose to incorporate different categories of linguistic features into distributed representation of words in order to learn simultaneously the writing style representations based on unlabeled texts for authorship analysis. In particular, the proposed models allow topical, lexical, syntactical, and character-level feature vectors of each document to be extracted as stylometrics. We evaluate the performance of our approach on the problems of authorship characterization and authorship verification with the Twitter, novel, and essay datasets. The experiments suggest that our proposed text representation outperforms the bag-of-lexical-n-grams, Latent Dirichlet Allocation, Latent Semantic Analysis, PVDM, PVDBOW, and word2vec representations.
INTRODUCTION
The prevalence of the computer information system, personal computational devices, and the globalizing Internet have fundamentally transformed our daily lives and reshaped the way we generate and digest information. Countless pieces of textual snippets and documents are generated every millisecond: This is the era of infobesity. Authorship analysis (AA) is one of the critical approaches to turn the burden of a vast amount of data into practical, useful knowledge. By looking into the reflected linguistic trails, AA is a study to unveil an underlying author's identity and sociolinguistic characteristics. The advancement of authorship analysis backed up by stylometric techniques has a fundamental impact on various areas:
-Cybercrime investigation. The distributed nature of cyber space provides an ideal anonymous channel for computer-mediated malicious activities, e.g., phishing scams, spamming, ransom messages, harassment, money laundering, illegal material distribution, etc., because the network-based origins such as IP address can be easily repudiated. Several authorship identification techniques have been developed for the purpose of cyber investigation on SMS text-messaging slips [Ragel et al. 2014], personal e-mails [Iqbal et al. 2013;Ding et al. 2015], and social blogs [Yang and Chow 2014;Stolerman et al. 2014]. Stylometric techniques have been used as evidence in the form of expert knowledge in the courts of the UK, the US, and Australia [Juola 2006;Brennan et al. 2012]. In a well-known case in the UK, linguistic experts showed Studies of authorship analysis backed up by computational stylometric techniques can be dated back to the 19th century. Many customized approaches focusing on different sub-problems and scenarios have been proposed [Stamatatos 2009]. It has been a successful line of research [Brennan et al. 2012]. Research problems in authorship analysis can be broadly categorized into three types: authorship identification (i.e., identify the most plausible author given a set of candidates [Iqbal et al. 2013;Ding et al. 2015]), authorship verification (i.e., verify whether or not a given candidate is the actual author of the given text ), and authorship characterization (i.e., infer the sociolinguistic characteristics of the author of the given text [Rangel et al. 2014]). Both the problems of authorship identification and authorship characterization can be formulated as a one-class text classification problem. For the authorship attribution problem, the classification label is the identity of the anonymous text snippet; for the authorship attribution problem, the label can be the hidden properties of the anonymous author, such as age and gender. There exists other variances of the authorship attribution problems, such as the open-set problem, the closed-set problem, and the attribution-with-probability-output problem [Stamatatos 2009;Iqbal et al. 2013;Ding et al. 2015]. Regardless of the studied authorship problems, the existing solutions in previous AA studies typically consist of three major processes, as shown in the upper flowchart of Figure 1): the feature engineering process, the solution design process, and the experimental evaluation process. In the first, a set of features are manually chosen by the researchers to represent each unit of textual data as a numeric vector. In the second process, a classification model is carefully adopted or designed. At the end, the entire solution is evaluated based on the specific datasets. Representative solutions are Burger et al. [2011], Nirkhi and Dharaskar [2014], and Cavalcante et al. [2014]. Exceptions are few recent applications of the topic models that actually combine these two process into one [Pratanwanich and Liò 2014;Savoy 2013a;Seroussi et al. 2014]. Still, the two-processes-based studies on authorship analysis problems dominate [Rangel et al. 2014].
During the feature engineering process, given the available dataset and application scenario, authorship analysts manually select a broad set of features based on hypotheses or educated guesses, and then refine the selection based on the experimental feedback. As demonstrated by previous research [Savoy 2012; Zamani et al. 2014a; Savoy 2013b; Ding et al. 2015], the choice of the feature set (i.e., the feature selection method) is a crucial determinant of the prediction result, and it requires explicit knowledge of computational linguistics and tacit experience in analyzing textual data. Manual feature engineering is a time-consuming and labor-intensive task. The hand-crafted feature representations have very limited generalizability across different data and scenarios. We have shown that the amount of data and the complexity of the problem to be solved have strong implications for the analysis result [Ding et al. 2015], even when a full set of n-grams is employed. Thus, the generalizability and sensitivity to different application scenarios are critical.
Manual feature engineering also limits, to a high degree, the potential of the whole available feature space because it imposes a strong simplification on the language model. For example, the bag-of-words (BoW) model, in which the text is represented as an unordered list of words with their respective frequency values, ignores the dependency between words and the sequential information. The lexical n-gram model describes a feature as n consecutive lexical tokens; it can capture co-occurrence relationships within an n-token neighborhood, but it is unable to capture contextual relationships between words across long sentences. Also, most feature selection methods trim the original space based on a specific metric, for example, by picking the top-k most frequent lexical n-grams. In our previous study [Ding et al. 2015], we showed that using a full set of n-gram features greatly promotes the accuracy of authorship identification, which indicates that feature selection reduces the classification power available for authorship analysis.
Inspired by the recent development of the unsupervised representation learning in deep learning [Mikolov et al. 2013], we raise two new research questions for authorship analysis.
(1) Given the unlabeled textual data, can we automatically come up with a vectorized numeric representation of the writing style? (2) Can we contribute to the AA field by discovering and generalizing new, special, and interpretable linguistic features from the available data instead of validating the educated guess and hypotheses on the data?
In this paper, we present a stylometric representation learning approach for authorship analysis (AA); refer to the lower flowchart in Figure 1. The goal is to learn an effective vector representation of the writing styles of different linguistic modalities to mitigate the aforementioned issues in AA study. Following the previous work [Solorio et al. 2011; Sapkota et al. 2013; Ding et al. 2015], we use the term linguistic modalities to denote the categories of linguistic features [Solorio et al. 2011]. We broadly categorize them into four modalities: the topical modality, the lexical modality, the character-level modality, and the syntactic modality. It is noted that the term "modality" used here is different from the term "multi-modality" in machine learning: the former denotes a category of linguistic features, while the latter denotes a combination of different ways in which information is presented, such as text, image, rating, etc. In this paper, "modality" and "linguistic modality" are used interchangeably to denote the categories of linguistic features. Also, we use the terms representation and embedding interchangeably to describe the vectorized representation of features. In the first stage, we learn the stylometric representations for the different linguistic modalities based on the unlabeled texts. In the second stage, an authorship analyst can select the modality according to his or her needs. If the scenario requires the least interference from topic-related information, the analyst can discard the topical modality or, more strictly, both the topical and the lexical modality. Such a design inherits the flexibility of the original hand-crafted stylometric features while enabling the learned representation to fit the available data.
The basic idea of our proposed solution is to simulate how people construct a sentence based on different linguistic modalities. The proposed approach follows the recent ideas and models of estimating word embedding [Mikolov et al. 2013] and paragraph embedding [Le and Mikolov 2014] to efficiently approximate the factorization of different co-occurrence matrices. The proposed approach can be applied to any authorship analysis studies that involve feature engineering processes.
To the best of our knowledge, this is the very first work attempting to automate the feature engineering process and discover the stylometric representations for authorship analysis. Specifically, our major contributions are summarized as follows:
-We propose a different solution flow for authorship analysis. Instead of manually engineering the stylometric features, we learn the representation of the writing style based on the available unlabeled texts according to different linguistic modalities. The user/researcher can pick the modalities based on their needs and interests in the context of the authorship analysis problem. For example, political socialization researchers are interested in content, so they may choose the topical modality. In contrast, cybercrime investigators would prefer to avoid topic-related features since, given a harassment letter, the candidate authors may not have previously written anything on such a topic.
-We propose a joint learning model that can learn simultaneously the distributed word representation as well as the topical bias and lexical bias representations of each document based on unlabeled texts. The joint learning model simulates the sentence composition process and captures the joint effects of topical bias and lexical bias in picking a specific word to form the sentence. The learned topical vector representation of a document is able to capture the global topical context, while the learned lexical representation of a document is able to capture the personal bias in choosing words under the given global context.
-Using a similar representation learning approach, we propose how the character-level and syntactic-level representations of each document can be learned. The character-level model captures the morphological and phonemic biases of an author when he/she is composing a lexical token; it models the probability of how a character element is chosen to form a given lexical token in the sentence. The syntactic-level model captures the syntactic/grammatical bias of an author when he/she is putting words together to construct a sentence; it models a prediction path that maximally avoids the dependency information introduced by the Part-of-Speech (POS) tagger. To the best of our knowledge, this paper proposes the first model that learns a character-level representation and the first model that learns a syntactic-level representation.
-We evaluate the effectiveness of the learned representations as stylometrics via extensive experiments and show their superiority over the state-of-the-art representations and algorithms for authorship verification and authorship characterization tasks. Without using any labels, we achieve the best result on the PAN 2014 English authorship verification problem with respect to the Area Under the Receiver Operating Characteristic curve (AUROC). By using a simple logistic regression classifier over the learned representations, we achieve the best result on the ICWSM 2012 [Al Zamal et al. 2012] authorship characterization dataset without any social-network-related structural information. The characteristics include age range, gender, and political orientation.
The rest of this paper is organized as follows: Section 2 elaborates our stylometric learning models and discusses their relation to the existing stylometric features according to different linguistic modalities. Section 3 elaborates our evaluation of the proposed models on the authorship verification problem with the PAN 2014 dataset. Section 4 presents our evaluation of the proposed models on the problem of authorship characterization with the ICWSM 2012 dataset. Relevant works are situated throughout the discussions in this paper. Finally Section 5 concludes this paper and explores future directions.
MINING STYLOMETRIC REPRESENTATIONS FOR AUTHORSHIP ANALYSIS
In this section, we present the proposed models for learning the stylometric representations on unlabeled training data and estimating the representations for unseen testing data. To start with, we will define several key concepts and the studied stylometric learning problem.
To be consistent in terminology, text dataset refers to the union of available labeled and unlabeled text; document and writing sample are used interchangeably to refer to the minimum unit of text data to be analyzed. A writing sample consists of a list of sentences, and a sentence consists of a sequence of lexical tokens. Each lexical token has its respective POS tag in the corresponding sentence.
This section corresponds to the first process of the lower flowchart in Figure 1, where only unlabeled text data are available. In this process we learn the representation of each chosen unit of text into four vectorized numeric representations, respectively, for four linguistic modalities. We formally define the stylometric feature learning problem as follows:
Definition 2.1 (stylometric representation learning). The given text dataset is denoted by D, and each document is formulated as ω ∈ D. A document ω consists of a list of ordered sentences S(ω) = s[1 : a], where s_a represents one of them. Each sentence consists of an ordered list of lexical tokens T(s_a) = t[1 : b], where t_b represents the token at index b. P(t_b) denotes the Part-of-Speech tag for token t_b. Given D, the task is to learn four vector representations θ^{tp}_ω ∈ R^{D(tp)}, θ^{lx}_ω ∈ R^{D(lx)}, θ^{ch}_ω ∈ R^{D(ch)}, and θ^{sy}_ω ∈ R^{D(sy)}, respectively, for the topical modality tp, the lexical modality lx, the character-level modality ch, and the syntactic modality sy, for each document ω ∈ D. D(·) denotes the dimensionality for a given modality.
We argue that the division of the whole feature space according to linguistic modalities is necessary because different application scenarios have different requirements of the features. Moreover, even for manual feature engineering, writing styles that correspond to different linguistic dimensions are constructed differently by humans [Solorio et al. 2011;Torney et al. 2012]; therefore, they need to be grouped together.
Take the typical dynamic bag-of-lexical-n-grams model, with ranking by frequency, as an example of a lexical modality representation. Given a text dataset D, the top k lexical n-grams G = g[1 : c], where g_c represents one of them, are selected based on their occurring frequency. For each document ω ∈ D, a lexical modality representation is constructed as θ^k_ω ∈ R^k, where θ^k_ω[c] is the frequency value of g_c in ω. In the following section, we describe the proposed models for stylometric learning according to different linguistic modalities.
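Before moving on, here is a minimal sketch of this frequency-ranked n-gram baseline (our illustration; the function and variable names are hypothetical):

```python
from collections import Counter

def top_k_ngram_representation(docs, n=2, k=5):
    """Dynamic bag-of-lexical-n-grams: pick the k most frequent n-grams
    across the corpus, then represent each document by their counts."""
    def ngrams(tokens):
        return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    corpus_counts = Counter(g for d in docs for g in ngrams(d.split()))
    vocab = [g for g, _ in corpus_counts.most_common(k)]
    vectors = [[Counter(ngrams(d.split()))[g] for g in vocab] for d in docs]
    return vectors, vocab

vectors, vocab = top_k_ngram_representation(
    ["it is a great day", "it is a nice day", "a great day it is"])
print(vocab)
print(vectors)
```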
Joint learning model for topical modality and lexical modality
In this section we are interested in both the topical modality and the lexical modality; both operate on the lexical tokens of a document. To start with, we look into their nature and the existing AA solutions that involve these two modalities.
2.1.1. Topical modality. The topical modality concerns the differences of interested topics reflected from the plain text. For example, in web blogs males may talk more about the information technologies, while females may talk more about fashion and cosmetics. It is widely acknowledged that the topics reflected from text actually depend on the distribution of words or combination of words [Fung et al. 2003].
Typical LDA-based AA techniques [Savoy 2013a;Pratanwanich and Liò 2014;Seroussi et al. 2014] construct the topics by drawing a distribution over the lexical tokens and then represent the writing style as the distribution over topics. For this modality it is intuitive to directly use the LDA or the LSA topic models and represent the constructed distribution over topics as the stylometric representation. However, it has been shown that the representation of words learned by a neural network performs significantly better than the LSA model, and the LDA model becomes computationally infeasible on large datasets [Mikolov et al. 2013]. Thus, we seek to customize existing embedding learning neural network models to combine the co-occurrence relationship between words and document and the co-occurrence relationship between words.
2.1.2. Lexical modality. The lexical modality is concerned with the specific choice of words, given the context. Different from the topical modality, in which the stylometric representation captures the relationship between word and document, the lexical modality captures the difference in the co-occurrence relationship between words reflected by the text.
The most widely employed lexical features are function words. They have been shown to be effective for capturing the differences in writing styles [Koppel et al. 2002]. Usually the frequency values of the function words are used to represent the features [Baron 2014; Halvani and Steinebach 2014; HaCohen-Kerner and Margaliot 2014]. They are effective for identifying the first language of the authors [Torney et al. 2012; Argamon et al. 2009], identifying the actual author of French literature [Boukhaled and Ganascia 2014], and characterizing the gender of e-mail authors [Corney et al. 2002]. Especially for first-language detection, the effect of language transfer affects the use of function words in the secondary language [Torney et al. 2012]. These feature representations can be regarded as instances of the typical "bag-of-words" model. Segarra et al. [2014] present the function words differently: they construct a function-word adjacency matrix to capture the relationships among function words and model the probability by regarding it as a Markov chain. Their approach considers the co-occurring relationships among function words.
Another useful type of lexical feature is the lexical n-gram, which denotes a sequence of n consecutive words. Lexical n-grams are becoming popular as they have been shown to be more effective than character n-grams and syntactic n-grams when all the possible n-grams are used as features [Ding et al. 2015]. Moreover, they have been shown to be effective in identifying the gender of tweeters [Burger et al. 2011]. However, the study presented by Rao and Yarowsky [2010] shows that socio-linguistic features outperform the lexical n-gram approach (n ∈ {1, 2}) when characterizing gender and age, but when characterizing the region of the Twitter user, the n-gram-based approach outperforms the socio-linguistic features. This is possibly because people in different regions discuss different topics.
The lexical n-gram approach has two problems. First, the current bag-of-n-grams features fail to capture the co-occurring relationships between words in a longer context, due to the limit of the parameter n and the independence assumption on the n-grams. Simple unigram (i.e., n = 1) and bigram (i.e., n = 2) features can hardly capture the relationships among nouns across a whole sentence, and the relationships between the bigrams/trigrams themselves are considered independent. Second, the current n-gram approach heavily depends on the feature selection method [Zamani et al. 2014a; Savoy 2013a; Pavlyshenko 2014]. The space of the complete n-gram (n ∈ N) features is indeed sparse and can be greatly compressed for the problem of authorship analysis. Most AA studies simply discard some n-gram features based on a threshold of a specific measure, such as occurring frequency; a better way would be to analyze the principal components or conduct a factorization of the co-occurrence matrix. However, it is computationally infeasible to do a full factorization for a large dataset.
2.1.3. Joint modeling of topical and lexical modalities. Both the topical modality and the lexical modality operate on the lexical tokens. A text document ω can be considered to be generated by the author under a mixture of topical bias and lexical bias. In the bag-of-n-grams approaches for AA studies, it is difficult to distinguish whether an n-gram selected to form a sentence is chosen mainly due to the holistic topics of the document or due to personal lexical preference. The LDA-based and LSA-based approaches fail to consider the lexical preference; they only consider the co-occurrence relationship between documents and words instead of the co-occurrence relationship between words.
In order to best separate the mixed effects of topical modality and lexical modality, and to address the aforementioned issues, we propose a joint learning model in which a document is considered as a lexical token picking process and the author picks tokens from his/her vocabulary in sequence to construct sentences and express what his/her interests are. We consider three factors in this token picking process: the topical bias, the local contextual bias, and the lexical bias.
-Topical bias. Based on the certain holistic meaning (i.e., topics) to be conveyed through the text, the author is limited to a vague set of possible thematic tokens. For example, if the previously picked tokens are mostly about Microsoft, then the author will have a higher chance of picking the word "Windows" in the rest of the document because they are probably under a similar topic. Given the topics of the document, the author's selection of the next token in a sentence is influenced by the related vocabulary under these topics.
-Local contextual bias. A document has both holistic topics and local contexts. Both influence how the next word is chosen in a sentence. For example, a document about Microsoft may consist of several parts that cover its different software products. Moreover, the context can be irrelevant to the topic. For example, a web blog may have an opening about the weather that has nothing to do with the holistic topic in the following text.
-Lexical bias. Given the topics and their related vocabularies, the author has different choices for picking the next token to convey a similar meaning. For example, if the author wants to talk about the good weather, he or she may pick the adjective "nice" to describe the word "day". Alternatively, the author can pick other words such as "great", "wonderful", "fantastic", or "fabulous". The variations in choosing different words to convey a similar meaning introduce the lexical bias of an author constructing the document.
The word picking process is a sequence of individual decision problems influenced by the individual's topical bias, contextual bias, and lexical bias; therefore, it is natural to jointly learn the topical representation and the lexical representation in the same model. This has the advantage of modeling their joint effects simultaneously and, as far as possible, of minimizing the interference between the learned representations.
2.1.4. The proposed joint model. This section introduces our proposed joint learning model for the topical modality and lexical modality. The goal is to estimate $\vec{\theta}^{tp}_{\omega} \in \mathbb{R}^{D(tp)}$ and $\vec{\theta}^{lx}_{\omega} \in \mathbb{R}^{D(lx)}$ in Definition 2.1. Figure 2 depicts the model, which is a single-layer neural network with two output layers. The input is the joint effect of topical bias, local contextual bias, and lexical bias. Recall that the contextual bias concerns the local information surrounding the token to be picked. We represent the vectorized local contextual bias surrounding token $t_b$ in its corresponding sentence $s_a$ as $\vec{\theta}^{C(t_b)}_{s_a}$. The first output is the prediction probability of the targeted word to be chosen by the author. The model tries to maximize the log probability for the first output:

$$\arg\max J_1(\theta) = \arg\max \frac{1}{|D|} \sum_{\omega}^{D} \sum_{s_a}^{S(\omega)} \sum_{t_b}^{T(s_a)} \log P\big(t_b \mid \underbrace{\vec{\theta}^{tp}_{\omega}}_{\text{topical}}, \underbrace{\vec{\theta}^{lx}_{\omega}}_{\text{lexical}}, \underbrace{\vec{\theta}^{C(t_b)}_{s_a}}_{\text{contextual}}\big) \quad (1)$$
Similar to other neural-network-based paragraph/word embedding learning models [Bengio et al. 2006; Mikolov et al. 2013; Le and Mikolov 2014], this model maps each lexical token $t_b$ into two vectors: $\vec{w}^{t_b}_{in} \in \mathbb{R}^{d_w}$ (the yellow rectangles in Figure 2) and $\vec{w}^{t_b}_{out} \in \mathbb{R}^{d_w}$ (the blue rectangles in Figure 2), where $d_w$ denotes the dimensionality. $\vec{w}^{t_b}_{in}$ is used to construct the contextual bias input of the neural network, and $\vec{w}^{t_b}_{out}$ is used for the multi-class prediction output of the neural network. All of them are model parameters to be estimated on the text data.
The local context of a token is represented by its surrounding tokens within a window. Given a token $t_b$ in a sentence $s_a$ with a sliding window of size $W(tp)$, the context of $t_b$ is formulated as

$$C(t_b, s_a) = \{t_{b-W(tp)}, \ldots, t_{b-1}, t_{b+1}, \ldots, t_{b+W(tp)}\}, \qquad C(t_b, s_a) \subseteq T(s_a).$$
The contextual bias input to the neural network is defined as the average over the mapped input vectors of $C(t_b)$. We define $\overline{\,\cdot\,}$ as the vector elementwise average function:

$$\vec{\theta}^{C(t_b)}_{s_a} = \overline{\sum_{t}^{C(t_b, s_a)} \vec{w}^{t}_{in}} \quad (2)$$
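To make the context extraction and Equation 2 concrete, here is a minimal Python sketch (our illustration, not the authors' released code; `W_in`, a token-to-vector lookup, is a hypothetical name):

```python
import numpy as np

def context(tokens, b, window):
    # C(t_b, s_a): tokens within `window` positions of index b, excluding t_b itself
    return tokens[max(0, b - window):b] + tokens[b + 1:b + 1 + window]

def contextual_bias(tokens, b, window, W_in):
    # Equation 2: elementwise average of the mapped input vectors of C(t_b, s_a)
    return np.mean([W_in[t] for t in context(tokens, b, window)], axis=0)

# For the sentence of Example 2.2 below, contextual_bias(['it', 'is', 'a',
# 'great', 'day', '!!'], 3, 2, W_in) averages the w_in vectors of
# {'is', 'a', 'day', '!!'}.
```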
The other two inputs to the model are the topical bias $\vec{\theta}^{tp}_{\omega} \in \mathbb{R}^{D(tp)}$ and the lexical bias $\vec{\theta}^{lx}_{\omega} \in \mathbb{R}^{D(lx)}$. For the model to work properly, we need to set $D(lx)$, $D(tp)$, and $d_w$ equal to $d_1$, where $d_1$ is the parameter of the whole model that indicates the dimensionality of both the lexical modality representation and the topical modality representation. Since a fully connected layer would be too costly to train in a reasonable time, we take the average of these three input vectors as the joint input vector $\vec{\theta}^{t_b}_{in}$:
$$\vec{\theta}^{t_b}_{in} = \overline{\underbrace{\vec{\theta}^{tp}_{\omega}}_{\text{topical}} + \underbrace{\vec{\theta}^{lx}_{\omega}}_{\text{lexical}} + \underbrace{\vec{\theta}^{C(t_b)}_{s_a}}_{\text{contextual}}} \quad (3)$$
Example 2.2. Consider a simple sentence $s_a$ = "it is a great day !!" in Figure 2. For each token $\{t_b \mid b \in [1, 6]\}$ we pass it forward through the neural network. We take $b = 4$ and $t_b$ = 'great' as an example; the training process is the same for the other values of $b$. Given a window size of 2, we construct the local context of $t_4$ as $C(t_4, s_a) = \{t_2, t_3, t_5, t_6\}$ = {'is', 'a', 'day', '!!'}. We map these four tokens into their corresponding vector representations $\vec{w}^{t_2}_{in}$, $\vec{w}^{t_3}_{in}$, $\vec{w}^{t_5}_{in}$, and $\vec{w}^{t_6}_{in}$. With $\vec{\theta}^{tp}_{\omega}$ and $\vec{\theta}^{lx}_{\omega}$, we calculate $\vec{\theta}^{t_4}_{in}$ using Equation 3. Suppose that we use the typical soft-max multi-class output layer. The first output of this model captures the probability of picking a word $t_b$ based on the joint bias input $\vec{\theta}^{t_b}_{in}$ as follows:
$$P(t_b \mid \vec{\theta}^{tp}_{\omega}, \vec{\theta}^{lx}_{\omega}, \vec{\theta}^{C(t_b)}_{s_a}) = P(\vec{w}^{t_b}_{out} \mid \vec{\theta}^{t_b}_{in}) = \frac{f(\vec{w}^{t_b}_{out}, \vec{\theta}^{t_b}_{in})}{\sum_{t}^{V} f(\vec{w}^{t}_{out}, \vec{\theta}^{t_b}_{in})}, \qquad f(\vec{w}^{t}_{out}, \vec{\theta}^{t_b}_{in}) = Uh\big((\vec{w}^{t}_{out})^{T} \times \vec{\theta}^{t_b}_{in}\big) \quad (4)$$
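A sketch of this forward computation, under the same hypothetical lookups as above (`theta_tp` and `theta_lx` are the document's topical and lexical vectors; `W_out` maps a token to its output vector); the full soft-max denominator is shown only for clarity, since it is replaced by negative sampling below:

```python
import numpy as np

def sigmoid(x):
    # Uh(.): the element-wise sigmoid that scales scores into [0, 1]
    return 1.0 / (1.0 + np.exp(-x))

def joint_input(theta_tp, theta_lx, theta_ctx):
    # Equation 3: elementwise average of the three bias vectors
    return (theta_tp + theta_lx + theta_ctx) / 3.0

def score(t, theta_in, W_out):
    # f(w_out, theta_in) in Equation 4
    return sigmoid(np.dot(W_out[t], theta_in))

def softmax_probability(t_b, theta_in, W_out, vocab):
    # the normalized probability of Equation 4 over the whole vocabulary V
    return score(t_b, theta_in, W_out) / sum(score(t, theta_in, W_out) for t in vocab)
```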
In Equation 4, $V$ denotes the whole vocabulary constructed upon the text dataset $D$. $Uh(\cdot)$ denotes the element-wise sigmoid function; it corresponds to the red circle under the first output in Figure 2. This function scales the output to the range $[0, 1]$, so the output can be interpreted as a probability. $\vec{w}^{t}_{out}$ is the mapped output vector for the lexical token $t$. By substituting the log probability in Equation 1 with Equation 4 and taking derivatives with respect to $\vec{w}^{t}_{out}$ and $\vec{\theta}^{t_b}_{in}$, we have the gradients to be updated for each $t_b$ at each mini-batch in the back-propagation algorithm that is used to train this model:
$$\frac{\partial}{\partial \vec{w}^{t}_{out}} J(\theta)_{1} = \big(\llbracket t == t_b \rrbracket - P(\vec{w}^{t}_{out} \mid \vec{\theta}^{t_b}_{in})\big) \times \vec{\theta}^{t_b}_{in}, \qquad \frac{\partial}{\partial \vec{\theta}^{t_b}_{in}} J(\theta)_{1} = \vec{w}^{t_b}_{out} - \sum_{t}^{V} P(\vec{w}^{t}_{out} \mid \vec{\theta}^{t_b}_{in}) \times \vec{w}^{t}_{out} \quad (5)$$
$\llbracket \cdot \rrbracket$ is an indicator function: if the expression inside evaluates to true, it outputs 1; otherwise 0. For example, $\llbracket 1 + 2 == 3 \rrbracket = 1$ and $\llbracket 1 + 1 == 3 \rrbracket = 0$. However, using a full soft-max layer is costly and inefficient because for each $t_b$ we need to update $d_1 \times (|V| + 2 + W(tp) \times 2)$ parameters, where $|V|$ can be large. Following the recent development of efficient word embedding learning [Mikolov et al. 2013], we use the negative sampling method to approximate the complete soft-max layer:
$$\log P(t_b \mid \vec{\theta}^{tp}_{\omega}, \vec{\theta}^{lx}_{\omega}, \vec{\theta}^{C(t_b)}_{s_a}) = \log P(\vec{w}^{t_b}_{out} \mid \vec{\theta}^{t_b}_{in}) \approx \log f(\vec{w}^{t_b}_{out}, \vec{\theta}^{t_b}_{in}) + \sum_{i=1}^{k} \mathbb{E}_{t \sim P_n(t_b),\; t \neq t_b} \log f(-1 \times \vec{w}^{t}_{out}, \vec{\theta}^{t_b}_{in}) \quad (6)$$
The negative sampling algorithm tries to distinguish the correct guess $t_b$ from $k$ randomly selected negative samples $\{t \mid t \neq t_b\}$ using $k + 1$ logistic regressions. $\mathbb{E}_{t \sim P_n(t)}$ is a sampling function that samples a token $t$ from the vocabulary $V$ according to the noise distribution $P_n(t)$ of $V$. By substituting the log probability in Equation 1 with Equation 6 and taking derivatives with respect to $\vec{w}^{t}_{out}$ and $\vec{\theta}^{t_b}_{in}$, we have the gradients to be updated:
$$\frac{\partial}{\partial \vec{w}^{t}_{out}} J(\theta)^{ng}_{1} = \big(\llbracket t == t_b \rrbracket - f(\vec{w}^{t}_{out}, \vec{\theta}^{t_b}_{in})\big) \times \vec{\theta}^{t_b}_{in}, \qquad \frac{\partial}{\partial \vec{\theta}^{t_b}_{in}} J(\theta)^{ng}_{1} = \sum_{i}^{k} \mathbb{E}_{t \sim P_n(t)} \big(\llbracket t == t_b \rrbracket - f(\vec{w}^{t}_{out}, \vec{\theta}^{t_b}_{in})\big) \times \vec{w}^{t}_{out} \quad (7)$$
The superscript $ng$ of $J(\theta)^{ng}_{1}$ indicates that the log probability of the objective function is approximated by negative sampling rather than a complete soft-max. For each $t_b$ to be predicted, we only need to update $d_1 \times (k + 1 + W(tp) \times 2)$ parameters for the first output. The term $k$ is contributed by the negative sampling, and $W(tp) \times 2$ is contributed by the contextual input; we propagate the error equally to each lexical token in the context $C(t_b, s_a)$ of $t_b$. The constant 1 is contributed by the lexical bias term $\vec{\theta}^{lx}_{\omega}$. We do not propagate the errors from the first output to the topical modality $\vec{\theta}^{tp}_{\omega}$, since the topical bias is determined by the holistic distribution of the vocabulary rather than by the specific token selection at the local level. Instead, $\vec{\theta}^{tp}_{\omega}$ is updated through the second output, which is described below.
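The following sketch illustrates one negative-sampling training step for the first output (Equations 6 and 7). It is a simplified single-threaded illustration under assumed data structures (`W_out` as a dict of numpy vectors, `vocab` and `noise_probs` describing $P_n$, `rng` a seeded numpy RandomState), not the authors' implementation; the returned gradient is then propagated equally to the context vectors and $\vec{\theta}^{lx}_{\omega}$, but not to $\vec{\theta}^{tp}_{\omega}$:

```python
import numpy as np

def first_output_step(t_b, theta_in, W_out, vocab, noise_probs, k, lr, rng):
    """One update for target token t_b given the joint input theta_in.

    Updates W_out in place (Equation 7, first line) and returns the
    gradient w.r.t. theta_in (Equation 7, second line)."""
    negatives = rng.choice(vocab, size=k, p=noise_probs)
    grad_theta = np.zeros_like(theta_in)
    for t, label in [(t_b, 1.0)] + [(t, 0.0) for t in negatives if t != t_b]:
        s = 1.0 / (1.0 + np.exp(-np.dot(W_out[t], theta_in)))   # f(w_out, theta_in)
        err = label - s                                         # [[t == t_b]] - f(...)
        grad_theta += err * W_out[t]
        W_out[t] = W_out[t] + lr * err * theta_in
    return grad_theta
```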
Example 2.3. Continue from Example 2.2. This example shows how to train the model for the first output. We map $t_4$ into its output vector $\vec{w}^{t_4}_{out}$. Next we calculate $P(\vec{w}^{t_4}_{out} \mid \vec{\theta}^{t_4}_{in})$ using negative sampling (Equation 6). After that we calculate the gradients w.r.t. $\vec{w}^{t_4}_{out}$ and $\vec{\theta}^{t_4}_{in}$ using Equation 7. We update $\vec{w}^{t_4}_{out}$ according to its gradient with a pre-specified learning rate. We also update $\vec{w}^{t_2}_{in}$, $\vec{w}^{t_3}_{in}$, $\vec{w}^{t_5}_{in}$, $\vec{w}^{t_6}_{in}$, and $\vec{\theta}^{lx}_{\omega}$ equally according to the gradient of $\vec{\theta}^{t_4}_{in}$.

The second output of this model captures the topical bias reflected in the document ω. The topics reflected in the text can be interpreted as the union of the effects of all the local contexts in the sentences. Thus, the second output of this single-layer feed-forward neural network (see the left part of Figure 2) is a multi-class prediction of each word in the sentence $s_a$, which is denoted by $T(s_a)$ in Definition 2.1. The goal is to maximize the log probability on $\vec{\theta}^{tp}_{\omega}$ of document ω for each of its sentences $S(\omega)$:
$$\arg\max J_2(\theta) = \arg\max \frac{1}{|D|} \sum_{\omega}^{D} \sum_{s_a}^{S(\omega)} \sum_{t_b}^{T(s_a)} \log P\big(t_b \mid \underbrace{\vec{\theta}^{tp}_{\omega}}_{\text{topical}}\big) \quad (8)$$
Similar to the first output of this model, we map each lexical token at the output to a numeric vector $\vec{w}^{t_b}_{out}$ (the blue rectangles in Figure 2). Suppose that we use the typical soft-max multi-class output layer. The second output of this model captures the probability of picking a word $t_b$ based on the topics $\vec{\theta}^{tp}_{\omega}$ as follows:
$$P(t_b \mid \vec{\theta}^{tp}_{\omega}) = P(\vec{w}^{t_b}_{out} \mid \vec{\theta}^{tp}_{\omega}) = \frac{f(\vec{w}^{t_b}_{out}, \vec{\theta}^{tp}_{\omega})}{\sum_{t}^{V} f(\vec{w}^{t}_{out}, \vec{\theta}^{tp}_{\omega})}, \qquad f(\vec{w}^{t}_{out}, \vec{\theta}^{tp}_{\omega}) = Uh\big((\vec{w}^{t}_{out})^{T} \times \vec{\theta}^{tp}_{\omega}\big) \quad (9)$$
The total number of parameters to be estimated is $(|V| + 1) \times d_1$; however, the term $|V|$ is too large. Similar to the first output of this model, we use negative sampling with $k$ samples to approximate the log probability:
$$\log P(t_b \mid \vec{\theta}^{tp}_{\omega}) = \log P(\vec{w}^{t_b}_{out} \mid \vec{\theta}^{tp}_{\omega}) \approx \log f(\vec{w}^{t_b}_{out}, \vec{\theta}^{tp}_{\omega}) + \sum_{i=1}^{k} \mathbb{E}_{t \sim P_n(t_b),\; t \neq t_b} \log f(-1 \times \vec{w}^{t}_{out}, \vec{\theta}^{tp}_{\omega}) \quad (10)$$
By substituting the log probability in Equation 8 with Equation 10 and taking derivatives with respect to $\vec{w}^{t_b}_{out}$ and $\vec{\theta}^{tp}_{\omega}$, we have the gradients to be updated for each $t_b$:
$$\frac{\partial}{\partial \vec{w}^{t}_{out}} J(\theta)^{ng}_{2} = \big(\llbracket t == t_b \rrbracket - f(\vec{w}^{t}_{out}, \vec{\theta}^{tp}_{\omega})\big) \times \vec{\theta}^{tp}_{\omega}, \qquad \frac{\partial}{\partial \vec{\theta}^{tp}_{\omega}} J(\theta)^{ng}_{2} = \sum_{i}^{k} \mathbb{E}_{t \sim P_n(t)} \big(\llbracket t == t_b \rrbracket - f(\vec{w}^{t}_{out}, \vec{\theta}^{tp}_{\omega})\big) \times \vec{w}^{t}_{out} \quad (11)$$
The total number of parameters to be updated now becomes $(k + 1) \times d_1$ for each $t_b$: the constant $k$ is contributed by the $k$ negative samples, and the constant 1 is contributed by the update of $\vec{\theta}^{tp}_{\omega}$. Essentially, the second output of this model is an approximation of the full factorization of the document-term co-occurrence matrix.
Example 2.4. Continue from Example 2.2. This example shows how to train the model for the second output. For the second output of the model we map each token into a numeric vector $\vec{w}^{t_b}_{out}$, where $t_b \in$ {'it', 'is', 'a', 'great', 'day', '!!'}. For each of the vectors we calculate $P(\vec{w}^{t_b}_{out} \mid \vec{\theta}^{tp}_{\omega})$ in Equation 10 using negative sampling. Then we calculate the gradients for each $\vec{w}^{t_b}_{out}$ and $\vec{\theta}^{tp}_{\omega}$ using Equation 11, and update them accordingly by multiplying the gradients with a pre-specified learning rate.
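A corresponding sketch of the second-output pass (Equations 10 and 11), again only an illustration under the hypothetical structures used above; note that here every token of the sentence is predicted from $\vec{\theta}^{tp}_{\omega}$ alone, so only the topical vector and the sampled output vectors are touched:

```python
import numpy as np

def second_output_pass(sentence_tokens, theta_tp, W_out, vocab, noise_probs, k, lr, rng):
    # Equation 8 restricted to one sentence: predict each token from the topics only.
    for t_b in sentence_tokens:
        negatives = rng.choice(vocab, size=k, p=noise_probs)
        for t, label in [(t_b, 1.0)] + [(t, 0.0) for t in negatives if t != t_b]:
            s = 1.0 / (1.0 + np.exp(-np.dot(W_out[t], theta_tp)))  # f(w_out, theta_tp)
            g = lr * (label - s)
            w_old = W_out[t].copy()
            W_out[t] = W_out[t] + g * theta_tp   # first line of Equation 11
            theta_tp = theta_tp + g * w_old      # second line of Equation 11
    return theta_tp
```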
In this model, we count punctuation marks as lexical tokens; consequently, the information related to punctuation marks is also included. Punctuation marks carry intonation information in linguistics and are useful for authorship analysis [Torney et al. 2012]. After training the model on a given text dataset D, we have a topical modality vector representation $\vec{\theta}^{tp}_{\omega} \in \mathbb{R}^{d_1}$ and a lexical modality vector representation $\vec{\theta}^{lx}_{\omega} \in \mathbb{R}^{d_1}$ for each document ω ∈ D. Also, for each lexical token $t_b \in V$ we have a vectorized representation $\vec{w}^{t_b}_{in} \in \mathbb{R}^{d_1}$. For an unseen document ω′ ∉ D that does not belong to the training text data, we fix all the $\vec{w}^{t_b}_{in} \in \mathbb{R}^{d_1}$ and $\vec{w}^{t_b}_{out} \in \mathbb{R}^{d_1}$ in the trained model and only propagate errors to $\vec{\theta}^{lx}_{\omega'} \in \mathbb{R}^{d_1}$ and $\vec{\theta}^{tp}_{\omega'} \in \mathbb{R}^{d_1}$. At the end, we have both $\vec{\theta}^{lx}_{\omega'}$ and $\vec{\theta}^{tp}_{\omega'}$ for ω′.

The PVDBOW model and PVDM model for paragraph embedding in [Le and Mikolov 2014] are trained in a similar way, and all of them operate on lexical tokens. However, the proposed joint network structure in this paper is customized for authorship analysis: we jointly model the topical bias and the local contextual bias in a single neural network. Thus, it is very different from the PVDBOW and PVDM models.
2.2. The character-level modality
Features of the character-level modality concern the morphological and phonemic biases in the process of constructing/spelling a single lexical word [Torney et al. 2012]. The typical feature used for the character modality is the character n-gram: a sequence of consecutive characters. Usually the character bigrams, trigrams, and four-grams (i.e., n ∈ {2, 3, 4}) with their respective frequency or tf×idf values are used for AA studies. They have been shown to be effective for authorship verification [Halvani and Steinebach 2014; Potha and Stamatatos 2014], attribution [Nasir et al. 2014], and characterization [Burger et al. 2011]. Sapkota et al. [2014] show that character n-gram features are robust and perform well even when the training data and testing data are on different topics. In contrast, our previous studies [Ding et al. 2015] show that the lexical n-gram approach still outperforms the character n-gram approach even on e-mail data where topics vary across documents. This is possibly because the documents used by Sapkota et al. [2014] are newspaper articles, which on average are more formal and longer than e-mails; therefore, the cumulative effect of character n-grams is more stable and robust.
However, similar to the lexical n-gram, the problems of the character n-gram are two-fold. First, it is difficult to determine the parameter n, and the choice actually depends on the data. For formal writing such as theses, academic papers, or newspapers, bigrams and trigrams appear to be sufficient; however, for informal writing such as tweets, some special grams are ignored if only n ∈ {2, 3, 4} is considered. For example, lexical tokens containing repeated letters (e.g., "niceeeeee") are popular on social networks, and they are important for authorship analysis as a socio-linguistic feature [Rao and Yarowsky 2010]. However, short n-grams alone cannot distinguish the style of "niceeeeee" from the repeated letters in "engineer" and "IEEE", because all of them merely reflect frequent usage of the bigram "ee". The question is: can we model the relationship between characters directly from the text instead of choosing the parameter n? This example also illustrates that the character modality has a strong relationship with the lexical token; can we model the variation of the morphological relationship between them as well? Second, the number of possible character n-grams is large, and it is difficult to determine the feature selection method and the hyper-parameter k; both have a strong dependency on the available dataset, as shown in our experiments. The question is: can we get rid of these dataset-dependent configurations?
In order to address the aforementioned issues of character-based stylometric features, we propose a neural-network-based model that learns the character modality representation from plain text data. This model consists of a single neural layer and captures the morphological differences in constructing and spelling lexical tokens across different documents; refer to Figure 3. The input of this model is one of the character bigrams generated by a sliding window over a lexical token $t_b$, together with the document's character-level bias. The output of this model is the vectorized representation of the token $t_b$. The purpose is to learn $\vec{\theta}^{ch}_{\omega} \in \mathbb{R}^{D(ch)}$ for each document ω ∈ D such that the vector $\vec{\theta}^{ch}_{\omega}$ captures the morphological differences in constructing lexical tokens. Let $CH(t_b) = bg[1 : c]$ denote the list of character bigrams of a given token $t_b$, where $bg$ is one of them. The goal is to maximize the following log probability on the given dataset D:
$$\arg\max J(\theta) = \arg\max \frac{1}{|D|} \sum_{\omega}^{D} \sum_{s_a}^{S(\omega)} \sum_{t_b}^{T(s_a)} \sum_{bg}^{CH(t_b)} \log P\big(t_b \mid \underbrace{\vec{\theta}^{ch}_{\omega}}_{\text{char-level}}, \vec{bg}_{in}\big) \quad (12)$$
We use character bigrams instead of unigrams to increase the character-level vocabulary size. Similar to the previous lexical model, we map each lexical token $t_b$ into a numeric vector $\vec{w}^{t_b}_{out}$, which is used for the multi-class prediction output, and we map each character bigram into a numeric vector $\vec{bg}_{in}$, which is used for the network input. Both are model parameters to be estimated. The input vectors of this model are $\vec{bg}_{in}$ and $\vec{\theta}^{ch}_{\omega}$, and both have the same dimensionality $d_2$. After taking their average, the result is fed into the neural network, as depicted in Figure 3, to predict the corresponding lexical token $t_b$. We consider adding a soft-max layer to predict $t_b$:
$$\vec{\theta}^{bg}_{in} = \overline{\vec{\theta}^{ch}_{\omega} + \vec{bg}_{in}}$$
$$P\big(t_b \mid \vec{\theta}^{ch}_{\omega}, \vec{bg}_{in}\big) = P(\vec{w}^{t_b}_{out} \mid \vec{\theta}^{bg}_{in}) = \frac{f(\vec{w}^{t_b}_{out}, \vec{\theta}^{bg}_{in})}{\sum_{t}^{V} f(\vec{w}^{t}_{out}, \vec{\theta}^{bg}_{in})}, \qquad f(\vec{w}^{t}_{out}, \vec{\theta}^{bg}_{in}) = Uh\big((\vec{w}^{t}_{out})^{T} \times \vec{\theta}^{bg}_{in}\big) \quad (13)$$
Again, there are $O(|V|)$ parameters to be updated for each pass of the neural network, which is not efficient. Thus, we use the negative sampling approach to approximate the log probability:
$$\log P\big(t_b \mid \vec{\theta}^{ch}_{\omega}, \vec{bg}_{in}\big) = \log P(\vec{w}^{t_b}_{out} \mid \vec{\theta}^{bg}_{in}) \approx \log f(\vec{w}^{t_b}_{out}, \vec{\theta}^{bg}_{in}) + \sum_{i=1}^{k} \mathbb{E}_{t \sim P_n(t_b),\; t \neq t_b} \log f(-1 \times \vec{w}^{t}_{out}, \vec{\theta}^{bg}_{in}) \quad (14)$$
Similar to the previous model, using negative sampling (indicated by the superscript $ng$) we have the following gradients:
$$\frac{\partial}{\partial \vec{w}^{t}_{out}} J(\theta)^{ng} = \big(\llbracket t == t_b \rrbracket - f(\vec{w}^{t}_{out}, \vec{\theta}^{bg}_{in})\big) \times \vec{\theta}^{bg}_{in}, \qquad \frac{\partial}{\partial \vec{\theta}^{bg}_{in}} J(\theta)^{ng} = \sum_{i}^{k} \mathbb{E}_{t \sim P_n(t)} \big(\llbracket t == t_b \rrbracket - f(\vec{w}^{t}_{out}, \vec{\theta}^{bg}_{in})\big) \times \vec{w}^{t}_{out} \quad (15)$$
The number of parameters to be updated for each bigram $bg$ of token $t_b$ is $(k + 2) \times d_2$: the constant $k$ is contributed by the negative sampling function, and the constant 2 is contributed by $\vec{\theta}^{ch}_{\omega}$ and $\vec{bg}_{in}$. To learn $\vec{\theta}^{ch}_{\omega'}$ for ω′ ∉ D, we fix all $\vec{w}^{t_b}_{out}$ and $\vec{bg}_{in}$ and only propagate errors to $\vec{\theta}^{ch}_{\omega'}$.

Example 2.5. Consider a simple sentence $s_a$ = "Fantastic day !!" in Figure 3. For each token $\{t_b \mid b \in [1, 3]\}$ we extract its character bigrams. Suppose the target word is $t_1$ = 'fantastic', and its bigrams are $CH(t_1) = \{bg_c \mid c \in \{1, \ldots, 8\}\}$ = {'fa', 'an', 'nt', 'ta', 'as', 'st', 'ti', 'ic'}. The training process is the same for the other words in this sentence. Let us take the bigram $bg_1$ = 'fa' as an example. First, we map $bg_1$ to its vectorized representation $\vec{bg}_{in}$ and map $t_1$ to its representation $\vec{w}^{t_1}_{out}$. In combination with $\vec{\theta}^{ch}_{\omega}$, we calculate $\vec{\theta}^{bg}_{in}$ according to the first formula in Equation 13. Then we calculate the forward log probability $P(\vec{w}^{t_1}_{out} \mid \vec{\theta}^{bg}_{in})$ in Equation 14, calculate the corresponding gradients in Equation 15, and update the respective parameters. The training pass for bigram $bg_1$ = 'fa' is then complete, and we move to the next bigram $bg_2$ = 'an' following the same procedure. After all the bigrams are processed, we move to the next token $t_2$ = 'day'.
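A minimal sketch of the bigram extraction and the input construction in Equation 13 (hypothetical names `BG_in` for the bigram lookup and `theta_ch` for the document's character-level vector); the prediction and update steps then mirror the first model, with $\vec{\theta}^{bg}_{in}$ in place of $\vec{\theta}^{t_b}_{in}$:

```python
def char_bigrams(token):
    # CH(t_b): the list of character bigrams of a token
    return [token[i:i + 2] for i in range(len(token) - 1)]

def char_input(theta_ch, BG_in, bg):
    # first formula of Equation 13: average of the document's character-level
    # bias vector and the bigram's input vector
    return (theta_ch + BG_in[bg]) / 2.0

# char_bigrams('fantastic') == ['fa', 'an', 'nt', 'ta', 'as', 'st', 'ti', 'ic']
```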
2.3. The syntactic modality
Syntactic features are usually considered deep linguistic features that are comparatively more difficult to manipulate consciously [Gamon 2004]. Typically, Part-of-Speech (POS) tag n-grams are used as the features for this modality [Boukhaled and Ganascia 2014; Baron 2014; Qian et al. 2014]. Given a token $t_b$, its POS tag represents its grammatical role in the sentence. The number of unique tags is much smaller than the number of character n-grams or lexical n-grams. Due to its limited feature space, this modality is less effective than the other linguistic modalities; however, it is more robust [Ding et al. 2015], especially for the cross-language authorship attribution problem [Bogdanova and Lazaridou 2014]. By re-constructing the POS n-grams based on the traversal order of the parsed tree structure of the sentence, Posadas-Duran et al. [2014] show that complete POS tag or Syntactic Relation (SR) tag n-grams can actually outperform character n-grams when given the same number of features. However, based on their experimental results, we notice that as the limit on the number of features increases, the performance of character n-grams increases steadily without reaching a peak, so their cap on the number of features appears to limit the performance of the character n-gram approach. Recently, Feng and Hirst [2014] presented syntactic information in a different way by considering the sequence of grammatical roles of the entities recognized from the text, e.g., whether the entity is the subject, the object, or neither. Another alternative to POS tag n-grams is the fully parsed POS dependency tree of the sentence [Tschuggnall and Specht 2014; Posadas-Duran et al. 2014]. Syntactic-tree-based approaches have a larger feature space than POS n-grams. However, we find that for most online casual text snippets and informal writing, the accuracy of such parsing drops significantly. Moreover, it is inefficient and computationally infeasible to parse a large dataset into full syntactic trees.
Instead of looking at POS n-grams, we seek an alternative that maximizes the degree of variation we can gain from the POS tags. First, we look into the state-of-the-art tagger models. POS taggers are pre-trained models that take a list of tokens as input and output a list of tags. Suppose we have a sentence $s_a$ with tokens $t_b \in T(s_a)$. Recall from Definition 2.1 that $P(t_b)$ denotes the POS tag of the token $t_b$ in the sentence. To assign a tag $P(t_b)$ to a token $t_b$, there are three typical structures/models [Toutanova et al. 2003]:
- Left-to-Right structure. This structure tries to maximize $P(P(t_b) \mid t_b, P(t_{b-1}))$. The tag for token $t_b$ is determined by both the lexical token itself and the preceding tag $P(t_{b-1})$. Strong dependencies exist between $P(t_{b-1})$ and $P(t_b)$ and between $P(t_b)$ and $t_b$. See Figure 4a; the solid lines indicate the dependencies.
- Right-to-Left structure. This structure tries to maximize $P(P(t_b) \mid t_b, P(t_{b+1}))$. The tag for token $t_b$ is determined by both the lexical token itself and the following tag $P(t_{b+1})$. Strong dependencies exist between $P(t_{b+1})$ and $P(t_b)$ and between $P(t_b)$ and $t_b$. See Figure 4b; the solid lines indicate the dependencies.
- Bidirectional structure. This structure is a combination of the previous two. It maximizes $P(P(t_b) \mid t_b, P(t_{b+1}), P(t_{b-1}))$. The tag for token $t_b$ is determined by both the lexical token itself and the surrounding tags $P(t_{b+1})$ and $P(t_{b-1})$. Strong dependencies exist between $P(t_{b+1})$ and $P(t_b)$, between $P(t_b)$ and $P(t_{b-1})$, and between $P(t_b)$ and $t_b$. See Figure 4c; the solid lines indicate the dependencies.
For all three structures, there exists a strong dependency between contiguous POS tags, as well as between the actual lexical token and its tag. Using POS tag n-grams as a stylometric feature is less effective than using character n-grams or lexical n-grams because the strong dependencies between contiguous POS tags, introduced by the POS taggers themselves, are shared across different documents. An n-gram-based model enlarges the feature space using the contiguous gram dependencies, but for POS n-grams this effect is weakened by the taggers. Therefore, we seek a formulation with fewer dependencies introduced by the POS tagger. In Figure 4c, the strong dependencies introduced by the tagger are shown as solid lines. We select two weak dependency links, from $t_b$ to $P(t_{b+1})$ and from $t_b$ to $P(t_{b-1})$, as indicated by the dashed lines. The tagger only introduces indirect dependencies on these two paths; thus, these two paths vary more across different documents than the paths indicated by solid lines. Formally, our model tries to maximize $P(P(t_{b-1}), P(t_{b+1}) \mid t_b)$, which differs from the typical structures of the taggers.
The number of unique POS tags is quite limited, so we use bigrams of POS tags; see Figure 5. Let $P_2(t_b)$ be the POS tag bigram $[P(t_b), P(t_{b+1})]$, and let $PG(t_b) = \{P_2(t_{b-3}), P_2(t_{b-2}), P_2(t_{b+1}), P_2(t_{b+2})\}$ be the neighbor POS bigrams of token $t_b$, with $pg_b$ denoting one of them. The goal of this model is to maximize:
$$\arg\max J(\theta) = \arg\max \frac{1}{|D|} \sum_{\omega}^{D} \sum_{s_a}^{S(\omega)} \sum_{t_b}^{T(s_a)} \sum_{pg_b}^{PG(t_b)} \log P\big(pg_b \mid \underbrace{\vec{\theta}^{sy}_{\omega}}_{\text{syntactic}}, \vec{w}^{t_b}_{in}\big) \quad (16)$$
Similar to the previous models, this model maps each lexical token $t_b$ into a numeric vector $\vec{w}^{t_b}_{in}$, and each of its neighbor POS bigrams into a numeric vector $\vec{pg}_{out}$. The input of the model, denoted by $\vec{\theta}^{pg}_{in}$, is the average of $\vec{w}^{t_b}_{in}$ and $\vec{\theta}^{sy}_{\omega}$, and the prediction target is one of token $t_b$'s neighbor POS tag bigrams, as shown in Figure 5. $\vec{w}^{t_b}_{in}$ and $\vec{\theta}^{sy}_{\omega}$ share the same dimensionality $d_3$. The prediction can be implemented as a soft-max layer:
$$\vec{\theta}^{pg}_{in} = \overline{\vec{\theta}^{sy}_{\omega} + \vec{w}^{t_b}_{in}}$$
$$P\big(pg_b \mid \vec{\theta}^{sy}_{\omega}, t_b\big) = P(\vec{pg}^{b}_{out} \mid \vec{\theta}^{pg}_{in}) = \frac{f(\vec{pg}^{b}_{out}, \vec{\theta}^{pg}_{in})}{\sum_{pg}^{V_{pg}} f(\vec{pg}_{out}, \vec{\theta}^{pg}_{in})}, \qquad f(\vec{pg}_{out}, \vec{\theta}^{pg}_{in}) = Uh\big((\vec{pg}_{out})^{T} \times \vec{\theta}^{pg}_{in}\big) \quad (17)$$
where $V_{pg}$ denotes the set of all distinct POS bigrams. The number of parameters to be updated for each $pg_b$ is bounded by $|V_{pg}|$, which is around a few hundred, so it is still computationally feasible to use the soft-max layer directly. It is possible to use negative sampling as well:
$$\log P\big(pg_b \mid \vec{\theta}^{sy}_{\omega}, t_b\big) = \log P(\vec{pg}^{b}_{out} \mid \vec{\theta}^{pg}_{in}) \approx \log f(\vec{pg}^{b}_{out}, \vec{\theta}^{pg}_{in}) + \sum_{i=1}^{k} \mathbb{E}_{pg \sim P_n(pg_b),\; pg \neq pg_b} \log f(-1 \times \vec{pg}_{out}, \vec{\theta}^{pg}_{in}) \quad (18)$$
where $P_n(pg_b)$ denotes the negative sampling function for $V_{pg}$. Accordingly, we have the following gradients for back-propagation:
$$\frac{\partial}{\partial \vec{pg}_{out}} J(\theta)^{ng} = \big(\llbracket pg == pg_b \rrbracket - f(\vec{pg}_{out}, \vec{\theta}^{pg}_{in})\big) \times \vec{\theta}^{pg}_{in}, \qquad \frac{\partial}{\partial \vec{\theta}^{pg}_{in}} J(\theta)^{ng} = \sum_{i}^{k} \mathbb{E}_{pg \sim P_n(pg_b)} \big(\llbracket pg == pg_b \rrbracket - f(\vec{pg}_{out}, \vec{\theta}^{pg}_{in})\big) \times \vec{pg}_{out} \quad (19)$$
At the end of the training, we have $\vec{\theta}^{sy}_{\omega}$ for each document ω ∈ D. To estimate $\vec{\theta}^{sy}_{\omega'}$ for ω′ ∉ D, we fix all $\vec{w}^{t_b}_{in}$ and $\vec{pg}_{out}$ and only propagate errors to $\vec{\theta}^{sy}_{\omega'}$.

Example 2.6. Consider a simple sentence and its corresponding sequence of POS tags in Figure 5. For each token $\{t_b \mid b \in [1, 10]\}$ we extract its POS neighbor bigrams. Suppose the target word is $t_5$ = 'contains', and its POS neighbor bigrams are $PG(t_5)$ = {'DT JJ', 'JJ NN', 'RB RB', 'RB JJ'} given a window size of 2. The training process is the same for the other lexical tokens in the sentence. Let us take one of $t_5$'s POS neighbor bigrams, $pg_5$ = 'DT JJ', as an example. First we map $pg_5$ to its vectorized representation $\vec{pg}^{5}_{out}$ and map $t_5$ to its representation $\vec{w}^{t_5}_{in}$. With $\vec{\theta}^{sy}_{\omega}$, we calculate $\vec{\theta}^{pg}_{in}$ according to the first formula in Equation 17. We then calculate the forward log probability $P(\vec{pg}^{5}_{out} \mid \vec{\theta}^{pg}_{in})$ in Equation 18, calculate the corresponding gradients in Equation 19, and update the respective parameters. The training pass for bigram $pg_5$ = 'DT JJ' is then complete, and we move to the next bigram 'JJ NN' following the same procedure. After all the bigrams are processed, we move to the next token $t_6$.
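The extraction of $PG(t_b)$ can be sketched as follows (the boundary handling at sentence edges is our assumption; the tags come from a pre-trained tagger such as the Stanford tagger):

```python
def pos_neighbor_bigrams(tags, b):
    # PG(t_b) = {P2(t_{b-3}), P2(t_{b-2}), P2(t_{b+1}), P2(t_{b+2})},
    # where P2(t_i) = [P(t_i), P(t_{i+1})]; b is a 0-based index here
    bigrams = []
    for i in (b - 3, b - 2, b + 1, b + 2):
        if 0 <= i and i + 1 < len(tags):
            bigrams.append(tags[i] + ' ' + tags[i + 1])
    return bigrams

# For Example 2.6, pos_neighbor_bigrams(tags, 4) with the tag sequence of
# Figure 5 yields ['DT JJ', 'JJ NN', 'RB RB', 'RB JJ'].
```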
2.4. Making the model deterministic
Deterministic and reproducible results are important requirements for most authorship applications, especially in the areas of cyber forensics and linguistic evidence [Iqbal et al. 2013; Ding et al. 2015]. We cannot conclude that a document is written by one author in the first run, and then attribute it to another author in subsequent runs given the same inputs and settings. Unfortunately, the proposed stylometric representation learning approach is based on the stochastic gradient descent back-propagation algorithm, which is random by nature. In order to make the proposed model deterministic, we enforce the following modifications at the implementation level (a minimal sketch follows this list):
- Initializing the parameters to be estimated. To start, we initialize all the neural network input-layer parameters of all the models to small random values around zero. In order to generate the same sequence of random numbers for the parameters on every run, we fix the starting random seed to 0. In this way, we make sure that gradient descent always starts from the same point in the feature space.
- Using a single-threaded implementation. A multi-threaded implementation of stochastic gradient descent can greatly reduce the runtime for learning the stylometric representations; however, the behavior of each thread can hardly be controlled. Even though the parameter space is sparse, there is a high chance that different threads update the parameters of frequent terms at the same time. To avoid the unpredictable behavior of multi-threading, we choose a single-threaded implementation for both the parameter initialization and the stochastic gradient descent. Consequently, using the negative sampling approach is critical for all the models to keep the training efficient.
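The sketch below illustrates these two modifications (our illustration; the dimensions and scaling are placeholders): a fixed seed plus a single-threaded update loop makes every run start from, and follow, the same trajectory:

```python
import numpy as np

rng = np.random.RandomState(0)  # fixed seed: every run draws the same numbers

def init_input_vectors(n_rows, dim):
    # small random values around zero for the input-layer parameters
    return (rng.rand(n_rows, dim) - 0.5) / dim

# Training then iterates over the corpus in a single thread, so parameter
# updates for frequent terms can never interleave and the result is reproducible.
```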
3. EVALUATION ON UNSUPERVISED AUTHORSHIP VERIFICATION PROBLEM
In this section, we evaluate the proposed models on the authorship verification problem: to verify whether or not two anonymous text documents are written by the same author. Unlike the authorship identification problem, where a set of candidate authors is available for comparison, the authorship verification problem has only one target document to be compared. A solution to this problem should provide a confidence value that indicates how likely it is that the two given text documents are written by the same author. Authorship identification is a closed-set classification problem, while authorship verification is an open-set classification problem, and verification is more difficult to solve than identification.
We further divide authorship verification into two types: supervised verification and unsupervised verification. In the supervised authorship verification problem, ground-truth data is available in the training set. The ground-truth data consists of a list of authors with their respective written documents, and it shares similar properties with the two anonymous documents to be verified; for example, all of them are e-mails. Learning-to-rank-based classification schemes fit into this category: given a vector, the classification model learns to output a value ranging from 0 to 1 based on the training vectors. Typically, an SVM model or a logistic regression model is employed.
In the unsupervised authorship verification problem, no such ground-truth data is available. However, a list of documents that share similar properties with the two anonymous documents is available (for example, all of them are e-mails); the authors of these documents are unknown. The unsupervised problem is more difficult than the supervised one. In this section, we focus on the unsupervised authorship verification problem.
3.1. The solution
To solve the targeted problem with our proposed stylometric representation learning models, we first train the three models described in Section 2 on the unlabeled text data, and then we estimate the stylometric representations $\vec{\theta}^{tp}_{\omega} \in \mathbb{R}^{d_1}$, $\vec{\theta}^{lx}_{\omega} \in \mathbb{R}^{d_1}$, $\vec{\theta}^{ch}_{\omega} \in \mathbb{R}^{d_2}$, and $\vec{\theta}^{sy}_{\omega} \in \mathbb{R}^{d_3}$ for the two anonymous documents $\omega_1, \omega_2$ in the testing data, which are previously unseen by the models. The verification score is a simple cosine similarity between the two documents' stylometric representations. Formally, given two anonymous documents $\omega_1, \omega_2$, the solution outputs the similarity value:
$$Q(\omega_1, \omega_2) = \frac{(\vec{\theta}^{v}_{\omega_1})^{T} \times \vec{\theta}^{v}_{\omega_2}}{|\vec{\theta}^{v}_{\omega_1}| \times |\vec{\theta}^{v}_{\omega_2}|}, \qquad v \in \{tp, lx, ch, sy\} \quad (20)$$
where $v$ denotes the selected modality: $tp$ (topical), $lx$ (lexical), $ch$ (character-level), $sy$ (syntactic), or a combination of them. If more than one modality is selected, we concatenate their $\vec{\theta}^{v}_{\omega}$ into a single vector for each ω. To measure the performance of the proposed approaches and the baselines on this problem, we use the Area Under the Receiver Operating Characteristic curve (AUROC) [Fawcett 2006]. It is a well-known evaluation measure for binary classifiers where both positive and negative labels are equally important. Since changing the threshold on the similarity value results in different accuracy and recall measures, the AUROC captures the overall performance of the classifier as the threshold is varied. An AUROC of 1.0 indicates an excellent classifier, and an AUROC of 0.5 implies a worthless random guess.
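Equation 20 and the AUROC evaluation amount to a few lines; the sketch below assumes scikit-learn for the AUROC computation, and `test_pairs` and `true_labels` are hypothetical variables holding the estimated vectors and the ground truth:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def verification_score(v1, v2):
    # Equation 20: cosine similarity between two stylometric vectors
    # (concatenate the per-modality vectors first if several are selected)
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))

# scores = [verification_score(a, b) for a, b in test_pairs]
# auroc = roc_auc_score(true_labels, scores)  # 1.0 excellent, 0.5 random guess
```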
3.2. The English novel and essay dataset
We choose the PAN2014 authorship verification English dataset as the benchmark dataset. PAN provides a series of shared tasks on digital text forensics, so we can directly compare our results with other studies. The latest available dataset (both training and testing) for the authorship verification problem is from PAN2014². At the time of writing this paper, PAN2015 has only published the training dataset for this problem. We currently focus on English text data, even though the aforementioned models can be adapted to different languages.
Refer to Table I. This dataset provides both training data and testing data. The training data consists of 300 verification cases. Each verification case consists of two sets of documents and a label. The label can be true, which indicates that the two sets of documents are written by the same author, or false otherwise. 200 of the training cases are essays, and 100 are novels. The test data follows the same format; it contains 400 verification cases, of which 200 are essays and 200 are novels.

Table I. The PAN2014 authorship verification English dataset.

| | Genre | Verification Cases | Documents | Sentences | Tokens |
|---|---|---|---|---|---|
| Training Data | English-essays | 200 | 729 | 30,038 | 676,966 |
| | English-novels | 100 | 200 | 28,054 | 705,751 |
| Testing Data | English-essays | 200 | 718 | 29,375 | 653,981 |
| | English-novels | 200 | 400 | 119,202 | 2,781,425 |

Table I shows that the numbers of essay verification cases for the training and testing data are comparable, while there are more verification cases in the testing data than in the training data for novels. Figure 6 shows the empirical distribution function over document length, in terms of the number of lexical tokens, for essays and novels. It is apparent that documents in the novel dataset are longer than those in the essay dataset. Based on our previous studies on the factors that influence the quality of AA results [Ding et al. 2015], we expect the proposed model to perform better on the novel dataset than on the essay dataset. We preprocess the data by removing extra spaces and non-ASCII lexical tokens. We also pre-tokenize the texts, detect sentence boundaries, and generate POS tags using the Stanford tagger [Toutanova et al. 2003]. This tagger has a bidirectional structure, as discussed in Section 2.3.
As we focus on the unsupervised authorship verification problem, we treat all the training data as unlabeled (all ground-truth labels are stripped for training). Only a small portion of randomly sampled problems from the training dataset, with their ground-truth labels, is used as a validation set for tuning the hyper-parameters of the proposed models and all the baseline models.
3.3. Baselines
We choose several of the most relevant approaches to compare with our proposed models on the authorship verification problem.
- Style. This approach represents a document as a numeric vector of 302 typical static stylometric features, which have been widely studied; Table II provides a summary. Coupled with classifiers, this feature set has been shown to be effective for the authorship identification problem. It is called a static feature set because the features do not change across datasets. The similarity between two documents is their normalized cosine distance.
- Style+[k-freq-ngram]. This approach represents a document as a numeric vector of the 302 static features in Table II plus k dynamic features constructed from the training dataset. The k dynamic features are the top-k n-grams ranked by occurrence frequency in the training set, where the n-grams include lexical unigrams, bigrams, and trigrams; character unigrams, bigrams, and trigrams; and POS tag unigrams, bigrams, and trigrams. In this experiment, we pick k ∈ {100, 200, 500, 1000, 2000, 5000}. The value of each selected n-gram is its tf × idf score. The similarity between two documents is their normalized cosine distance.
- Style+[k-info-ngram]. This approach is the same as Style+[k-freq-ngram], except that the top-k n-grams are selected by information gain rather than by frequency in the training dataset. Even though previous AA research has experimentally demonstrated that the frequency value carries enough stylistic information and outperforms the information gain scheme [Stamatatos 2009], we include it in the baselines. We pick k ∈ {100, 200, 500, 1000, 2000, 5000}.
- LDA. Latent Dirichlet allocation (LDA) is a generative model that learns a latent semantic representation between documents and words. The latent semantic representation is learned through Gibbs sampling. This approach represents each document as a numeric vector under the document-to-latent-topic distribution learned from the dataset. The numeric vector has k elements, each corresponding to one of the latent topics in the LDA model. We validate the number of iterations and the value of k for the LDA model on the validation set. The similarity between two documents is the cosine distance between their vectorized representations. We pick k = 500, which achieves the best result on the validation set.
- LSA. Latent semantic analysis (LSA) is a technique for analyzing the relationships between words and documents. Given a set of documents, we can represent it as a sparse matrix, where a row denotes a document and a column denotes a word: a word is represented by its occurrences in different documents, and a document is represented as a set of words with their corresponding occurrences. Given such a sparse matrix, LSA learns the latent representation between documents and words by factorizing the matrix using Singular Value Decomposition (SVD). After SVD, each document can be represented as weights over the k singular values. The similarity between two documents is the cosine distance between their vectorized representations. We pick k = 200 by maximizing performance on the validation set.
- w2v-skipgram. w2v-skipgram is a neural network language model that learns vectorized embeddings of the words in a text dataset [Mikolov et al. 2013]. It is efficient and has been adopted in various data mining problems, and it is well known for the word analogy task, solved by mathematically manipulating the vectorized representations of words. It has been shown that this model approximately factorizes the co-occurrence matrix between words. By converting each word of a document into a vector and taking their average, we obtain a vectorized representation of the document. Finally, we use cosine similarity to measure the distance between vectors.
- w2v-cbow. w2v-cbow is another neural network language model that learns vectorized word embeddings [Mikolov et al. 2013]. It is more scalable to large datasets than the w2v-skipgram model. Following the w2v-skipgram approach above, we obtain a vectorized representation of a document by converting each word into a vector and taking their average, and we use cosine similarity to measure the distance between vectors.
- PVDBOW. PVDBOW is a recently proposed model that learns a vectorized document representation based on the neural network language model [Le and Mikolov 2014]. It captures the co-occurrence relationships between words and has been shown to be effective in sentiment analysis [Le and Mikolov 2014]. Our proposed model uses a similar neural network approach, but the overall structure and the motivation are different. We validate the hyper-parameters for PVDBOW on the validation set and pick k = 400 with sub-sampling enabled and a window size of 4 to maximize its performance.
- PVDM. PVDM is another recently proposed model that learns a vectorized document representation based on the neural network language model [Le and Mikolov 2014]. It captures the document-to-word occurrence relationships and has been shown to be more effective than PVDBOW in sentiment analysis [Le and Mikolov 2014]. We pick k = 400 with sub-sampling enabled and a window size of 4 according to performance on the validation set.
- Other approaches reported in PAN2014. For comparison, we select the top 10 approaches from the other studies reported in PAN2014. PAN2014 also has a meta-classifier, called META-CLASSIFIER-PAN2014, which combines all the submitted approaches.
These baselines cover both the recent developments in text embedding learning and authorship verification solutions. We also include several combinations, such as w2v-cbow+skipgram. Following the same procedure as for the baseline approaches, we train our three proposed models on the training set and choose their hyper-parameters based on the validation set: we set $d_1 = 400$, $d_2 = 400$, $d_3 = 400$ and select a window size of 4 for the joint model of the lexical and topical modalities. Evaluation results are reported on the test dataset.
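For reference, the w2v baselines' document embedding (averaging the word vectors) can be reproduced roughly as follows, assuming gensim's Word2Vec (the keyword names are from gensim 4.x, and `tokenized_docs` is a hypothetical list of token lists):

```python
import numpy as np
from gensim.models import Word2Vec

# sg=1 gives skip-gram, sg=0 gives CBOW; workers=1 keeps training
# single-threaded for reproducibility (cf. Section 2.4)
model = Word2Vec(sentences=tokenized_docs, vector_size=400, window=4,
                 sg=1, seed=0, workers=1)

def doc_embedding(tokens, model):
    # average the vectors of the in-vocabulary tokens of the document
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)
```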
3.4. Performance comparison
In this section, we present the evaluation results on the English authorship verification dataset, with respect to the AUROC measure, against all the baselines mentioned above. As indicated in Table III, our proposed modality models achieve the highest AUROC scores on this authorship verification problem. Specifically, the best model on average is the joint learning model for the lexical modality and the topical modality described in Section 2.1; this model also outperforms all the others on the essay dataset. The runner-up is the lexical modality representation learned in the joint model. The character-level modality achieves the highest score on the novel dataset and also outperforms all the aforementioned baselines on average. The syntactic modality does not perform as well as the lexical, topical, and character-level modalities; however, it still achieves better AUROC than PVDBOW, LSA, LDA, and the dynamic n-gram approaches. Notably, our proposed syntactic modality representation outperforms the other POS-tag-based approaches, such as [Harvey 2014], as well as the n-gram approaches that involve POS tags.
Our proposed models perform better than the LSA and LDA approaches, and the LSA approaches outperform the LDA approaches. This is probably because our model captures the joint effect of document-to-word co-occurrences and word-to-word co-occurrences, while LSA and LDA only consider the direct relationship between documents and words. The PVDBOW and PVDM approaches also outperform LSA and LDA; in general, the neural-network-based models achieve better performance. Table III also shows that our proposed lexical modality representation outperforms the dynamic n-gram-based feature representations with a lower dimensionality.
The w2v-related approaches, which learn a document embedding by averaging the word embeddings, do not perform as well as our proposed approaches or the PVDM-related approaches that directly learn the document embedding. We also see that the overall performance on the novel dataset is better than that on the essay dataset, which is consistent with our expectation described in Section 3.2 and the observations in our previous work [Ding et al. 2015].
Considering the feature selection criteria for the dynamic n-gram-based approaches, in this scenario the information gain measure outperforms the frequency measure, which contradicts the results reported in the survey [Stamatatos 2009]. The information-gain-based feature selection method consistently outperforms the frequency-based measure for the authorship verification problem.
4. EVALUATION ON AUTHORSHIP CHARACTERIZATION PROBLEM
We evaluate the proposed models on another important problem in authorship analysis: authorship characterization. The problem is to identify the socio-linguistic characteristics of the author of a given text. As discussed in Section 1, it has wide applications in marketing, political socialization, digital forensics, and social network analysis. The problem can be described as follows: given a set of labeled documents, where each document is assigned a class label such as the gender of its author, identify the class label of a document whose author remains unknown. A classifier is trained on the labeled documents and assigns one of the labels to the targeted document. All labels are considered non-overlapping; for example, the labels for age ranges can be 18-23 and 23+. For the classifier to work with text data, we need to represent the documents in numeric form. To apply the proposed models to this problem, we first train the models on the training text data, treating it as unlabeled data. We then estimate the stylometric representations of all the documents in the training set and attach the available labels to the vectorized representations. An arbitrary classification model can then be trained on these vectors. Given unseen documents, the models estimate their representations and feed them into the classifier to obtain the predicted labels.
In this section, we evaluate the different stylometric representations on the authorship characterization problem. We represent the text documents in different forms based on the selected model and feed the learned document representations into a simple logistic regression model to predict the characteristics of a text's author.
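The characterization pipeline then reduces to standard supervised learning on the learned vectors; a sketch assuming scikit-learn, with `X` (one stylometric vector per user) and `y` (the labels) as hypothetical inputs:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# X: (n_users, d) matrix of learned stylometric vectors; y: labels such as gender
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=10, scoring='accuracy')  # 10-fold CV, as in Section 4.1
print(scores.mean())
```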
4.1. The Twitter characterization dataset
We choose the ICWSM 2012 labeled Twitter dataset [Al Zamal et al. 2012] for our experiment. This dataset consists of three categories of labels and is publicly available. Due to the limitations of Twitter's policy, the actual content of the tweets is not included in the dataset; however, the Twitter users' identification numbers as well as their tweet IDs are available. We retrieve all the data using the Twitter API based on the available information.
To preprocess the dataset, we remove all the non-ASCII characters and replace all URLs with a special lexical token. We also pre-tokenize the tweets and assign POS tags to each token using the pre-trained tagger from [Owoputi et al. 2013]. The dataset also contains other social-network-based information, such as a target user's friends and the friends' tweets. Since we only want to model the writing style of the Twitter user, we omit this information as well as the tweets that are re-tweeted by the given author. We attempt to include only the tweets that are authentically authored by the labeled Twitter user.
The labels in this dataset are generated semi-automatically and manually inspected [Al Zamal et al. 2012]. The dataset consists of three categories of labels for Twitter users: age, gender, and political orientation. The cleaned dataset is summarized in Table IV; there are 1,170 Twitter users in total.
- Gender. The label for this category is either male or female. The labels are automatically generated from the Twitter user's name using a name-gender database and then manually inspected to ensure correctness.
- Age. This dataset only distinguishes individuals in the age ranges 18-23 and 25-30, framing age prediction as a binary classification problem. The labels are constructed by looking at tweets announcing birthdays, e.g., "Happy birthday to me".
- Political orientation. This dataset labels political Twitter users as either Democrat or Republican. The users are collected from the wefollow Twitter directory [Al Zamal et al. 2012].

Figure 7 shows the empirical distribution, kernel density, and histogram of tweet length in terms of the number of lexical tokens. In general, tweets are very short text snippets: 90 percent of tweets have fewer than 25 tokens, and most have around 10 tokens. We combine all tweets of a single user into a single document and treat each tweet as an individual sentence.
To proceed with the experiment, we conduct a 10-fold cross validation on the Twitter dataset and collect the accuracy measure for each characterization approach. First we convert each document into its numeric vector representation using the stylometric representations of our proposed models or the baselines, and then we feed the vectors into a simple logistic classifier to predict the label of the document.

4.1.1. Baselines. We inherit the same set of baselines as in the previous experiment on the authorship verification problem, except the studies reported in PAN2014 [Rangel et al. 2014], for which no results are available for direct comparison. The baselines are configured with the same hyper-parameter settings as in the previous experiment. Additionally, we include several more baselines:
- LDA. In addition to the empirically optimal value k = 500 for the number of latent topics, we include the performance of k ∈ {100, 200, 500, 800} for comparison.
- LSA. Likewise, for the LSA model we include the performance of k ∈ {100, 200, 500, 800} in addition to the original k = 200. Recall that k for LSA denotes the number of singular values.
- Moreover, we include two evaluation results presented by Al Zamal et al. [2012], since we follow the same setup and use the same dataset. Their target user info approach is an SVM-based model trained on features constructed from the user's tweets and other information, including textual features (e.g., stemmed n-grams and hash tags) and socio-linguistic features (e.g., re-tweeting tendency, neighborhood size, and mention frequency). Their all info approach is another SVM-based model trained on the same features plus additional social-network features (e.g., the average of the neighborhood's feature vectors).
We note that the baselines adopted from [Al Zamal et al. 2012] have advantages over our proposed approaches and our other baselines. First, they use an SVM model, which typically outperforms a simple logistic regression model on the same data. Second, our approaches only consider the information reflected in the text, i.e., stylometric information; other socio-linguistic, behavioral, and social-network-related information is discarded. Given these advantages, one would expect their approaches to outperform the others. However, our experiments show that our proposed model achieves even better accuracy, as described in the following section.
4.1.2. Performance comparison. The performance of our proposed models, as well as all the baselines, is listed in Table V. It shows that our first proposed model, which jointly learns the representations of the lexical modality and the topical modality, achieves the highest accuracy. The runner-up is the topical modality; the character-level modality does not perform as well as the other two. The proposed lexical/topical modality model and the character-level modality model also outperform the PVDM-related models, the w2v-related models, and the dynamic n-gram-based models. Unlike the results for the authorship verification problem, the w2v-related baselines perform fairly well here, achieving higher accuracy than PVDM, PVDBOW, LSA, and LDA.
Even though the (target user only) approach and the (all info) approach are given an advantage, they achieve lower accuracy than our proposed joint model for the lexical and topical modalities, which contradicts our expectation. This shows that our proposed approach better models the writing variations than the n-gram language model used in both of these baselines.
Table V also shows that the proposed syntactic representation learning model does not perform well on the ICWSM 2012 dataset, in contrast to the previous authorship verification problem. This is because tweets are relatively more casual than essays and novels and do not introduce much variation in grammatical bias.
Regarding the feature selection measure, in this scenario the frequency-based selection approach outperforms the information-gain-based approach: even the top-100 frequency-ranked n-grams outperform the top-1500 information-gain-ranked n-grams, which is completely different from the result of the previous authorship verification experiments. This difference further confirms our argument that feature selection measures are scenario-dependent/data-dependent: even when the feature set is dynamically constructed from a given dataset, the measure guiding the selection process remains data-dependent. A language model over the text data is better than a feature-selection-based model.
5. CONCLUSIONS AND FUTURE DIRECTIONS
In this article, we presented three models for learning vectorized stylometric representations of different linguistic modalities for authorship analysis. To the best of our knowledge, this is the first work to introduce the problem of stylometric representation learning into the authorship analysis field. Our proposed models are designed to effectively capture the differences in writing styles over different modalities when an author composes text. By using the proposed feature learning scheme, guided by the selected linguistic modality, we mitigate the issues related to the feature engineering process in current authorship studies. Our experiments on the publicly available PAN2014 dataset and the ICWSM 2012 Twitter dataset, for the authorship verification problem and the authorship characterization problem respectively, demonstrate that our proposed models are effective and robust across different datasets and AA problems.
Our future research will focus on exploring better models to capture writing styles and on proposing models for other languages. Currently, the representation learning models are simple one-layer neural networks. A recurrent neural network with long short-term memory is more suitable for capturing the contextual relationships over long text. For learning the syntactic modality representation, a recursive neural network that operates on the fully parsed syntactic tree would fit the nature of grammatical variations better than the current model. Moreover, this work only focuses on capturing the variations in English writing; additional changes need to be applied for text in other languages.
Fig. 1. Overview of the traditional solution and the proposed solution for authorship analysis.
Fig. 2. The joint model for learning the stylometric representation of the topical and lexical modalities.
Fig. 3. The model for learning the stylometric representation of the character-level modality.
Fig. 4. Three typical inference structures for the Part-of-Speech tagger.
Fig. 5. The model for learning the stylometric representation of the syntactic modality.
Fig. 7. Empirical distribution, kernel density, and histogram of tweet length for the ICWSM 2012 Twitter dataset.
Table II. Baseline static features.

| Feature type | Features | Count | Example |
|---|---|---|---|
| Static feature | Lexical features | 105 | Ratio of digits and vocabulary richness, etc. |
| | Function words | 150 | Occurrence of after |
| | Punctuation marks | 9 | Occurrences of punctuation ! |
| | Structural features | 15 | Presence/absence of greetings |
| | Domain-specific features | 13 | Occurrences of words contract, time, and draft, etc. |
| | Gender-preferential features | 10 | Ratio of words ending with ful |
Table III. Performance comparison for the authorship verification problem on the PAN2014 dataset (AUROC). Entries with * are the performance of our proposed approaches.

| Approach | Essay | Novel | Average |
|---|---|---|---|
| Modality [Lexical+Topical]* | 0.8104 | 0.7796 | 0.7950 |
| Modality [Lexical]* | 0.8092 | 0.7356 | 0.7724 |
| Modality [Character]* | 0.7416 | 0.7876 | 0.7646 |
| META-CLASSIFIER-PAN2014 | 0.7810 | 0.7320 | 0.7565 |
| PVDM | 0.7234 | 0.7590 | 0.7412 |
| PVDBOW+PVDM | 0.7294 | 0.7463 | 0.7379 |
| Modality [Topical]* | 0.7208 | 0.7356 | 0.7282 |
| PVDBOW | 0.6818 | 0.7142 | 0.6980 |
| Satyam et al. [2014] | 0.6990 | 0.6570 | 0.6780 |
| Khonji and Iraqi [2014] | 0.5990 | 0.7500 | 0.6745 |
| Frery et al. [2014] | 0.7230 | 0.6120 | 0.6675 |
| Zamani et al. [2014b] | 0.5850 | 0.7330 | 0.6590 |
| Modaresi and Gross [2014] | 0.6030 | 0.7110 | 0.6570 |
| LSA-k=200 | 0.7172 | 0.5956 | 0.6564 |
| Modality [Syntactic]* | 0.6629 | 0.6201 | 0.6415 |
| Mayor et al. [2014] | 0.5720 | 0.6640 | 0.6180 |
| Moreau et al. | 0.6200 | 0.5970 | 0.6085 |
| Halvani and Steinebach [2014] | 0.6290 | 0.5690 | 0.5990 |
| LDA-k=500 | 0.5608 | 0.6320 | 0.5964 |
| Castillo et al. [2014] | 0.5490 | 0.6280 | 0.5885 |
| Static+[5000-info-ngram] | 0.4727 | 0.6765 | 0.5746 |
| Static+[2000-info-ngram] | 0.4760 | 0.6684 | 0.5722 |
| Static+[1500-info-ngram] | 0.4760 | 0.6650 | 0.5705 |
| Static+[1000-info-ngram] | 0.4752 | 0.6637 | 0.5695 |
| Static+[0200-info-ngram] | 0.4752 | 0.6625 | 0.5689 |
| Static+[0100-info-ngram] | 0.4752 | 0.6621 | 0.5687 |
| Static | 0.4752 | 0.6617 | 0.5685 |
| Static+[0500-info-ngram] | 0.4752 | 0.6614 | 0.5683 |
| Static+[0100-freq-ngram] | 0.4716 | 0.6484 | 0.5600 |
| Harvey [2014] | 0.5790 | 0.5400 | 0.5595 |
| Static+[0200-freq-ngram] | 0.4725 | 0.6412 | 0.5569 |
| Static+[0500-freq-ngram] | 0.4712 | 0.6376 | 0.5544 |
| Static+[5000-freq-ngram] | 0.4666 | 0.6361 | 0.5514 |
| Static+[1000-freq-ngram] | 0.4652 | 0.6355 | 0.5504 |
| Layton [2014] | 0.5900 | 0.5100 | 0.5500 |
| Static+[1500-freq-ngram] | 0.4659 | 0.6338 | 0.5499 |
| Static+[2000-freq-ngram] | 0.4623 | 0.6332 | 0.5478 |
| w2v-cbow+skipgram | 0.3936 | 0.6902 | 0.5419 |
| w2v-cbow | 0.3950 | 0.6717 | 0.5334 |
| w2v-skipgram | 0.3465 | 0.6616 | 0.5041 |
Table IV. Summary of the cleaned ICWSM Twitter characterization dataset.

| Label type | Label | Users | Valid tweets | Tokens |
|---|---|---|---|---|
| Gender | Female | 192 | 115,746 | 1,366,699 |
| | Male | 192 | 127,368 | 1,475,018 |
| Age | (18-23) | 194 | 104,686 | 1,473,512 |
| | (25-30) | 192 | 71,883 | 1,122,247 |
| Political orientation | Republican | 200 | 147,423 | 2,545,947 |
| | Democrat | 200 | 170,822 | 2,957,180 |
Table V. Performance comparison for the authorship characterization problem on the ICWSM2012 Twitter dataset.

Measure   Approach                                   Age     Gender  Political Orientation  Average
Accuracy  Modality [Lexical+Topical] *               0.7887  0.8308  0.9318                 0.8504
          Modality [Topical] *                       0.7606  0.8423  0.9205                 0.8411
          Modality [Lexical] *                       0.7782  0.8154  0.9148                 0.8361
          [Al Zamal et al. 2012] (all info)          0.7720  0.8020  0.9150                 0.8297
          Modality [Character] *                     0.7711  0.7846  0.9034                 0.8197
          [Al Zamal et al. 2012] (target user only)  0.7510  0.7950  0.8900                 0.8120
          Static+[5000-freq-ngram]                   0.7606  0.7615  0.8580                 0.7934
          Static+[2000-freq-ngram]                   0.7500  0.7731  0.8523                 0.7918
          Static+[1500-freq-ngram]                   0.7782  0.7308  0.8352                 0.7814
          PVDBOW+PVDM                                0.7323  0.7346  0.8693                 0.7787
          w2v-skipgram                               0.7112  0.7692  0.8380                 0.7728
          w2v-cbow+skipgram                          0.7253  0.7692  0.8238                 0.7728
          w2v-cbow                                   0.7218  0.7807  0.8096                 0.7707
          Static+[1000-freq-ngram]                   0.7465  0.7385  0.8097                 0.7649
          Static+[0500-freq-ngram]                   0.7641  0.7192  0.8011                 0.7615
          LSA-k=200                                  0.6937  0.7577  0.8097                 0.7537
          LSA-k=100                                  0.6937  0.7538  0.8097                 0.7524
          LSA-k=500                                  0.7007  0.7500  0.8040                 0.7516
          PVDBOW                                     0.6936  0.6653  0.8409                 0.7333
          PVDM                                       0.6901  0.6653  0.8409                 0.7321
          Static+[0200-freq-ngram]                   0.7570  0.7000  0.7301                 0.7290
          LDA-k=500                                  0.6338  0.7423  0.8040                 0.7267
          LDA-k=100                                  0.6303  0.7462  0.7869                 0.7211
          Static+[0100-freq-ngram]                   0.7324  0.6846  0.7102                 0.7091
          Static+[1500-info-ngram]                   0.7254  0.7000  0.6847                 0.7034
          Static+[5000-info-ngram]                   0.6866  0.7231  0.6960                 0.7019
          Static+[2000-info-ngram]                   0.7289  0.6962  0.6790                 0.7014
          Static                                     0.6904  0.7324  0.6769                 0.6999
          Static+[0500-info-ngram]                   0.7394  0.6692  0.6847                 0.6978
          Static+[1000-info-ngram]                   0.7113  0.7000  0.6818                 0.6977
          LDA-k=200                                  0.5986  0.7269  0.7585                 0.6947
          Static+[0200-info-ngram]                   0.7183  0.6808  0.6818                 0.6936
          Static+[0100-info-ngram]                   0.7289  0.6654  0.6761                 0.6901
          Modality [Syntactic] *                     0.6303  0.6654  0.6364                 0.6440
Institute for Linguistic Evidence (ILE): http://linguisticevidence.org/
PAN2014 Authorship Verification. Available at http://pan.webis.de/clef14/pan14-web/author-identification.html
ACKNOWLEDGMENTS
REFERENCES
Faiyaz Al Zamal, Wendy Liu, and Derek Ruths. 2012. Homophily and Latent Attribute Inference: Inferring Latent Attributes of Twitter Users from Neighbors. In Proceedings of the 5th International Conference on Weblogs and Social Media (ICWSM).
Mishari Almishari, Ekin Oguz, and Gene Tsudik. 2014. Fighting authorship linkability with crowdsourcing. In Proceedings of the Second ACM Conference on Online Social Networks (COSN).
Shlomo Argamon, Moshe Koppel, James W. Pennebaker, and Jonathan Schler. 2009. Automatically profiling the author of an anonymous text. Communications of the ACM 52, 2 (2009).
Grzegorz Baron. 2014. Influence of Data Discretization on Efficiency of Bayesian Classifier for Authorship Attribution. In Proceedings of the 18th International Conference in Knowledge Based and Intelligent Information and Engineering Systems (KES).
Yoshua Bengio, Holger Schwenk, Jean-Sébastien Senécal, Fréderic Morin, and Jean-Luc Gauvain. 2006. Neural probabilistic language models. In Innovations in Machine Learning. Springer.
Dasha Bogdanova and Angeliki Lazaridou. 2014. Cross-Language Authorship Attribution. In Proceedings of the International Conference on Language Resources and Evaluation.
Mohamed Amine Boukhaled and Jean-Gabriel Ganascia. 2014. Probabilistic Anomaly Detection Method for Authorship Verification. In Proceedings of the Second International Conference on Statistical Language and Speech Processing (SLSP).
Michael Brennan, Sadia Afroz, and Rachel Greenstadt. 2012. Adversarial stylometry: Circumventing authorship recognition to preserve privacy and anonymity. ACM Transactions on Information and System Security (TISSEC) 15, 3 (2012).
John D. Burger, John C. Henderson, George Kim, and Guido Zarrella. 2011. Discriminating Gender on Twitter. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP).
Esteban Castillo, Ofelia Cervantes, Darnes Vilariño, David Pinto, and Saul León. 2014. Unsupervised method for the authorship identification task.
Thiago Cavalcante, Anderson Rocha, and Ariadne Carvalho. 2014. Large-Scale Micro-Blog Authorship Attribution: Beyond Simple Feature Engineering. In Proceedings of the 19th Iberoamerican Congress on Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications (CIARP).
Daniele Cerra, Mihai Datcu, and Peter Reinartz. 2014. Authorship Analysis based on Data Compression. CoRR abs/1402.3405 (2014).
Raviv Cohen and Derek Ruths. 2013. Classifying Political Orientation on Twitter: It's Not Easy!. In Proceedings of the Seventh International Conference on Weblogs and Social Media (ICWSM).
Michael Conover, Bruno Gonçalves, Jacob Ratkiewicz, Alessandro Flammini, and Filippo Menczer. 2011a. Predicting the Political Alignment of Twitter Users. In Proceedings of the 3rd IEEE International Conference on Privacy, Security, Risk and Trust (PASSAT) and the 3rd IEEE International Conference on Social Computing (SocialCom).
Michael Conover, Jacob Ratkiewicz, Matthew R. Francisco, Bruno Gonçalves, Filippo Menczer, and Alessandro Flammini. 2011b. Political Polarization on Twitter. In Proceedings of the 5th International Conference on Weblogs and Social Media (ICWSM).
Malcolm Corney, Olivier Y. de Vel, Alison Anderson, and George M. Mohay. 2002. Gender-Preferential Text Mining of E-mail Discourse. In Proceedings of the 18th Annual Computer Security Applications Conference (ACSAC).
Steven H. H. Ding, Benjamin C. M. Fung, and Mourad Debbabi. 2015. A Visualizable Evidence-Driven Approach for Authorship Attribution. ACM Transactions on Information and System Security (TISSEC) 17, 3 (2015).
Tom Fawcett. 2006. An introduction to ROC analysis. Pattern Recognition Letters 27, 8 (2006).
Vanessa Wei Feng and Graeme Hirst. 2014. Patterns of local discourse coherence as a feature for authorship attribution. LLC 29, 2 (2014).
Jordan Frery, Christine Largeron, and Mihaela Juganaru-Mathieu. 2014. UJM at CLEF in Author Verification based on optimized classification trees. In Proceedings of the International Conference on CLEF, Notebook for PAN.
Benjamin C. M. Fung, Ke Wang, and Martin Ester. 2003. Hierarchical document clustering using frequent itemsets. In Proceedings of the 3rd SIAM International Conference on Data Mining (SDM). SIAM.
Michael Gamon. 2004. Linguistic correlates of style: authorship classification with deep linguistic analysis features. In Proceedings of the 20th International Conference on Computational Linguistics (COLING 2004).
Jennifer Golbeck and Derek Hansen. 2011. Computing political preference among Twitter followers. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM.
Jennifer Golbeck and Derek L. Hansen. 2014. A method for computing political preference among Twitter followers. Social Networks 36 (2014).
Tim Grant. 2008. How text-messaging slips can help catch murderers. http://www.independent.co.uk/voices/commentators/dr-tim-grant-how-textmessaging-slips-can-help-catch-murderers-923503.html
Yaakov HaCohen-Kerner and Orr Margaliot. 2014. Authorship Attribution of Responsa using Clustering. Cybernetics and Systems 45, 6 (2014).
Oren Halvani and Martin Steinebach. 2014. VEBAV - A Simple, Scalable and Fast Authorship Verification Scheme. In Proceedings of the International Conference on CLEF, Work Note.
Niels Dalum Hansen, Christina Lioma, Birger Larsen, and Stephen Alstrup. 2014. Temporal Context for Authorship Attribution - A Study of Danish Secondary Schools. In Proceedings of the 7th Information Retrieval Facility Conference on Multidisciplinary Information Retrieval (IRFC).
Sarah Harvey. 2014. Author Verification using PPM with Parts of Speech Tagging. In Proceedings of the International Conference on CLEF, Notebook for PAN.
Farkhund Iqbal, Hamad Binsalleeh, Benjamin C. M. Fung, and Mourad Debbabi. 2013. A unified data mining solution for authorship analysis in anonymous textual communications. Information Science 231 (2013).
Patrick Juola. 2006. Authorship attribution. Foundations and Trends in Information Retrieval 1, 3 (2006).
Mahmoud Khonji and Youssef Iraqi. 2014. A Slightly-modified GI-based Author-verifier with Lots of Features (ASGALF). In Proceedings of the International Conference on CLEF, Notebook for PAN.
Erica Klarreich. 2003. Bookish Math: Statistical tests are unraveling knotty literary mysteries. Science News 164, 25-26 (2003).
Moshe Koppel, Shlomo Argamon, and Anat Rachel Shimoni. 2002. Automatically Categorizing Written Texts by Author Gender. LLC 17, 4 (2002).
Robert Layton. 2014. A simple Local n-gram Ensemble for Authorship Verification. In Proceedings of the International Conference on CLEF, Notebook for PAN.
Quoc V. Le and Tomas Mikolov. 2014. Distributed Representations of Sentences and Documents. CoRR abs/1405.4053 (2014).
Wendy Liu and Derek Ruths. 2013. What's in a Name? Using First Names as Features for Gender Inference in Twitter. In Proceedings of the AAAI Spring Symposium, Analyzing Microtext.
Cristhian Mayor, Josue Gutierrez, Angel Toledo, Rodrigo Martinez, Paola Ledesma, Gibran Fuentes, and Ivan Meza. 2014. A Single Author Style Representation for the Author Verification Task. In Proceedings of the International Conference on CLEF, Notebook for PAN.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. CoRR abs/1301.3781 (2013).
Pashutan Modaresi and Philipp Gross. 2014. A Language Independent Author Verifier Using Fuzzy C-Means Clustering. In Proceedings of the International Conference on CLEF, Notebook for PAN.
Jamal A. Nasir, Nico Görnitz, and Ulf Brefeld. 2014. An Off-the-shelf Approach to Authorship Attribution. In Proceedings of the 25th International Conference on Computational Linguistics (COLING).
Smita Nirkhi and Rajiv V. Dharaskar. 2014. Comparative study of Authorship Identification Techniques for Cyber Forensics Analysis. CoRR abs/1401.6118 (2014).
Michael P. Oakes. 2004. Ant colony optimisation for stylometry: The federalist papers. In Proceedings of the 5th International Conference on Recent Advances in Soft Computing.
Olutobi Owoputi, Brendan O'Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A. Smith. 2013. Improved part-of-speech tagging for online conversational text with word clusters. Association for Computational Linguistics.
Bohdan Pavlyshenko. 2014. Genetic Optimization of Keyword Subsets in the Classification Analysis of Authorship of Texts. Journal of Quantitative Linguistics 21, 4 (2014).
Marco Pennacchiotti and Ana-Maria Popescu. 2011. A Machine Learning Approach to Twitter User Classification. In Proceedings of the 5th International Conference on Weblogs and Social Media (ICWSM).
Juan Pablo Posadas-Duran, Grigori Sidorov, and Ildar Z. Batyrshin. 2014. Complete Syntactic N-grams as Style Markers for Authorship Attribution. In Proceedings of the 13th Mexican International Conference on Artificial Intelligence: Human-Inspired Computing and Its Applications (MICAI).
Nektaria Potha and Efstathios Stamatatos. 2014. A Profile-Based Method for Authorship Verification. In Artificial Intelligence: Methods and Applications - Proceedings of the 8th Hellenic Conference on AI (SETN).
Naruemon Pratanwanich and Pietro Liò. 2014. Who Wrote This? Textual Modeling with Authorship Attribution in Big Data. In Proceedings of the IEEE International Conference on Data Mining Workshops (ICDM).
Tieyun Qian, Bing Liu, Ming Zhong, and Guoliang He. 2014. Co-training on authorship attribution with very few labeled examples: methods vs. views. In Proceedings of the 37th International ACM Conference on Research and Development in Information Retrieval (SIGIR).
Roshan G. Ragel, P. Herath, and Upul Senanayake. 2014. Authorship detection of SMS messages using unigrams. CoRR abs/1403.1314 (2014).
Francisco Rangel, Paolo Rosso, Irina Chugur, Martin Potthast, Martin Trenkmann, Benno Stein, Ben Verhoeven, and Walter Daelemans. 2014. Overview of the 2nd Author Profiling Task at PAN 2014. In Proceedings of the Conference and Labs of the Evaluation Forum (Working Notes).
Delip Rao and David Yarowsky. 2010. Detecting latent user properties in social media. In Proceedings of the NIPS MLSN Workshop.
Upendra Sapkota, Thamar Solorio, Manuel Montes-y-Gómez, Steven Bethard, and Paolo Rosso. 2014. Cross-Topic Authorship Attribution: Will Out-Of-Topic Data Help?. In Proceedings of the 25th International Conference on Computational Linguistics (COLING).
Upendra Sapkota, Thamar Solorio, Manuel Montes-y-Gómez, and Paolo Rosso. 2013. The use of orthogonal similarity relations in the prediction of authorship. In Computational Linguistics and Intelligent Text Processing. Springer.
Anand Satyam, Arnav Kumar Dawn, and Sujan Kumar Saha. 2014. A Statistical Analysis Approach to Author Identification Using Latent Semantic Analysis.
Jacques Savoy. 2012. Authorship Attribution Based on Specific Vocabulary. ACM Transactions on Information Systems (TOIS) 30, 2 (2012).
Jacques Savoy. 2013a. Authorship attribution based on a probabilistic topic model. Information Processing and Management 49, 1 (2013).
Jacques Savoy. 2013b. Feature selections for authorship attribution. In Proceedings of the 28th Annual ACM Symposium on Applied Computing (SAC '13).
Santiago Segarra, Mark Eisen, and Alejandro Ribeiro. 2014. Authorship Attribution through Function Word Adjacency Networks. CoRR abs/1406.4469 (2014).
Yanir Seroussi, Ingrid Zukerman, and Fabian Bohnert. 2014. Authorship Attribution with Topic Models. Computational Linguistics 40, 2 (2014).
Thamar Solorio, Sangita Pillay, Sindhu Raghavan, and Manuel Montes-y-Gómez. 2011. Modality specific meta features for authorship attribution in web forum posts. In Proceedings of the 5th International Joint Conference on Natural Language Processing. Asian Federation of Natural Language Processing.
Efstathios Stamatatos. 2009. A survey of modern authorship attribution methods. Journal of the American Society for Information Science and Technology 60, 3 (2009).
Efstathios Stamatatos, Walter Daelemans, Ben Verhoeven, Benno Stein, Martin Potthast, Patrick Juola, Miguel A. Sánchez-Pérez, and Alberto Barrón-Cedeño. 2014. Overview of the Author Identification Task at PAN. In Proceedings of the Working Notes for the CLEF Conference.
Ariel Stolerman, Rebekah Overdorf, Sadia Afroz, and Rachel Greenstadt. 2014. Breaking the Closed-World Assumption in Stylometric Authorship Attribution. In Proceedings of the 10th International Conference on Advances in Digital Forensics.
Rosemary Torney, Peter Vamplew, and John Yearwood. 2012. Using psycholinguistic features for profiling first language of authors. JASIST 63, 6 (2012).
Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1. Association for Computational Linguistics.
Michael Tschuggnall and Günther Specht. 2014. Enhancing Authorship Attribution By Utilizing Syntax Tree Profiles. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL).
Edson R. D. Weren, Anderson U. Kauer, Lucas Mizusaki, Viviane Pereira Moreira, José Palazzo Moreira de Oliveira, and Leandro Krug Wives. 2014. Examining Multiple Features for Author Profiling. JIDM 5, 3 (2014).
Felix Ming Fai Wong, Chee Wei Tan, Soumya Sen, and Mung Chiang. 2013. Quantifying Political Leaning from Tweets and Retweets. In Proceedings of the 7th International Conference on Weblogs and Social Media (ICWSM).
Min Yang and Kam-Pui Chow. 2014. Authorship Attribution for Forensic Investigation with Thousands of Authors. In Proceedings of the 29th International Conference on ICT Systems Security and Privacy Protection - IFIP TC 11.
Hamed Zamani, Hossein Nasr Esfahani, Pariya Babaie, Samira Abnar, Mostafa Dehghani, and Azadeh Shakery. 2014a. Authorship Identification Using Dynamic Selection of Features from Probabilistic Feature Set. In Proceedings of the 5th International Conference of the CLEF Initiative - Information Access Evaluation: Multilinguality, Multimodality, and Interaction.
Hamed Zamani, Hossein Nasr Esfahani, Pariya Babaie, Samira Abnar, Mostafa Dehghani, and Azadeh Shakery. 2014b. Authorship identification using dynamic selection of features from probabilistic feature set. In Proceedings of the International Conference on Information Access Evaluation: Multilinguality, Multimodality, and Interaction. Springer.
| [] |
[
"Fine-grained Intent Classification in the Legal Domain",
"Fine-grained Intent Classification in the Legal Domain"
] | [
"Ankan Mullick ankanm@kgpian.iitkgp.ac.in \nDepartment of Computer Science and Engineering\n\n",
"Abhilash Nandy nandyabhilash@kgpian.iitkgp.ac.in \nDepartment of Computer Science and Engineering\n\n",
"Nitin Manav ",
"Kapadnis \nDepartment of Electrical Engineering\n\n",
"Sohan Patnaik sohanpatnaik106@iitkgp.ac.in \nDepartment of Mechanical Engineering\n\n",
"R Raghav \nIndustrial and Systems Engineering Department\n\n",
"\nIndian Institute of Technology Kharagpur\nIndia\n"
] | [
"Department of Computer Science and Engineering\n",
"Department of Computer Science and Engineering\n",
"Department of Electrical Engineering\n",
"Department of Mechanical Engineering\n",
"Industrial and Systems Engineering Department\n",
"Indian Institute of Technology Kharagpur\nIndia"
] | [] | A law practitioner has to go through a lot of long legal case proceedings. To understand the motivation behind the actions of different parties/individuals in a legal case, it is essential that the parts of the document that express an intent corresponding to the case be clearly understood. In this paper, we introduce a dataset of 93 legal documents, belonging to the case categories of either Murder, Land Dispute, Robbery, or Corruption, where phrases expressing intent same as the category of the document are annotated. Also, we annotate finegrained intents for each such phrase to enable a deeper understanding of the case for a reader. Finally, we analyze the performance of several transformer-based models in automating the process of extracting intent phrases (both at a coarse and a fine-grained level), and classifying a document into one of the possible 4 categories, and observe that, our dataset is challenging, especially in the case of fine-grained intent classification. | 10.48550/arxiv.2205.03509 | [
"https://arxiv.org/pdf/2205.03509v1.pdf"
] | 248,571,925 | 2205.03509 | 14a61d07de702bfc206dc37639dde3a2e7135488 |
Fine-grained Intent Classification in the Legal Domain
Ankan Mullick ankanm@kgpian.iitkgp.ac.in
Department of Computer Science and Engineering
Abhilash Nandy nandyabhilash@kgpian.iitkgp.ac.in
Department of Computer Science and Engineering
Nitin Manav
Kapadnis
Department of Electrical Engineering
Sohan Patnaik sohanpatnaik106@iitkgp.ac.in
Department of Mechanical Engineering
R Raghav
Industrial and Systems Engineering Department
Indian Institute of Technology Kharagpur
India
Fine-grained Intent Classification in the Legal Domain
A law practitioner has to go through a lot of long legal case proceedings. To understand the motivation behind the actions of different parties/individuals in a legal case, it is essential that the parts of the document that express an intent corresponding to the case be clearly understood. In this paper, we introduce a dataset of 93 legal documents, belonging to the case categories of either Murder, Land Dispute, Robbery, or Corruption, where phrases expressing intent same as the category of the document are annotated. Also, we annotate finegrained intents for each such phrase to enable a deeper understanding of the case for a reader. Finally, we analyze the performance of several transformer-based models in automating the process of extracting intent phrases (both at a coarse and a fine-grained level), and classifying a document into one of the possible 4 categories, and observe that, our dataset is challenging, especially in the case of fine-grained intent classification.
Introduction
Documents which record legal case proceedings are often perused by many law practitioners. A court judgement can contain as many as 4,500 words (for example, Indian Supreme Court judgements). Knowing beforehand where intent is expressed in the text will help a person understand the case better (intent here refers to the intention latent in a piece of text; e.g., in the sentence 'Mr. XYZ robbed a bank yesterday', the phrase 'robbed a bank' depicts the intent of Robbery).
There can be different levels of intent. For example, stating that a legal case deals with murder is a document-level intent; it conveys generalized information about the document. Sentence-level and phrase-level intents give much more information about the document. Various summarization techniques already exist to help readers process documents more efficiently. However, an analysis of intents conditioned on the legal case, alongside summarization, would significantly improve a reader's understanding of the document's content.
We curate a dataset that consists of 93 legal documents, spread across four intents: Murder, Robbery, Land Dispute and Corruption. We manually annotate certain phrases which bring out the intent of the document. Additionally, we painstakingly assign fine-grained intents (referred to as 'sub-intent' interchangeably from here on) to each phrase. These intent phrases are annotated in a coarse (4 categories) as well as in a fine-grained manner (with several sub-intents in each category of intent). For example, under the intent of Robbery, 'Mr. ABC saw Mr. XYZ picking the lock of the neighbour's house' is an example of a witness. Another example is 'Gold and silver ornaments missing', indicating the stolen items.
Another contribution is the analysis of different off-the-shelf models on intent-based tasks. We finally present a proof-of-concept, which shows that coarse-grained document intent and document classification, as well as fine-grained annotation of phrases in legal documents, can be automated with reasonable accuracy.
Dataset Description
5,000 legal documents are scraped from CommonLII 1 using the 'selenium' Python package. 93 documents belonging to the categories of Corruption, Murder, Land Dispute, or Robbery are randomly sampled from this larger set.
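As a hedged illustration of this scraping step (the actual crawler is not described further in the paper), a minimal Selenium snippet might look as follows; the entry URL comes from footnote 1 and the parsing/saving logic is omitted.

```python
# A minimal sketch, not the authors' actual crawler: fetch a CommonLII page
# with Selenium and keep its raw HTML for later filtering into case documents.
from selenium import webdriver

driver = webdriver.Chrome()  # assumes a local chromedriver installation
driver.get("http://www.commonlii.org/resources/221.html")  # entry page (footnote 1)
html = driver.page_source    # raw HTML to be parsed/saved downstream
driver.quit()
```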
Intent phrases are annotated for each document in the following manner:
1. Initial filtering: 2 annotators filter out sentences that convey an intent matching the category of the document at hand.
2. Intent phrase annotation: 2 other annotators then extract a span from each sentence, so as to exclude any details that do not contribute to the intent (such as the name of a person, the date of an incident, etc.), and only include the words expressing the corresponding intent. The resulting spans are the intent phrases. Inter-annotator agreement (Cohen κ) is 0.79 (a computation sketch follows below).
3. Sub-intent annotation: 1 annotator who is aware of legal terminology is asked to go through the intent phrases of several documents from all 4 intent categories in order to come up with a possible set of sub-intents for each intent category that covers almost all aspects of that category. After coming up with the sets of sub-intents, 4 annotators are then shown some samples on how to annotate the sub-intent for a given phrase. Then, the intent phrases are divided amongst these annotators, and the sub-intent of each intent phrase is annotated thereafter.
Table 1 shows the statistics of our dataset, describing the number of documents, the average length of documents and intent phrases, and the average sentiment score for each of the 4 intent categories. The documents on Corruption and Land Dispute are generally longer than those on Murder and Robbery. Table 1 also shows average sentiment scores across annotated intent phrases (calculated using the sentifish 2 Python package) for each of the four categories. The sentiment scores of the categories follow the order Land Dispute > Corruption > Robbery > Murder, which follows common intuition. Fig. 1 shows the top 200 most frequent words (excluding stopwords) occurring in the intent phrases for each of the four categories, with the font size of a word being proportional to its frequency. In each wordcloud, we can observe that each category has words that match the corresponding intent (e.g., 'bribe' in Corruption, 'property' in Land Dispute, etc.).
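The inter-annotator agreement reported in step 2 above can be reproduced with scikit-learn's implementation of Cohen's kappa; the labels below are hypothetical stand-ins for the two annotators' sentence-level decisions, not the actual annotation data.

```python
# Sketch of the agreement computation; 1 marks a sentence kept as expressing
# the document's intent, 0 marks a sentence filtered out.
from sklearn.metrics import cohen_kappa_score

annotator_a = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical labels
annotator_b = [1, 0, 1, 0, 0, 0, 1, 0]

print(f"Cohen kappa: {cohen_kappa_score(annotator_a, annotator_b):.2f}")
```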
Experiment and Results
This section first describes the use of transformers (Vaswani et al. 2017) for document classification, followed by the use of JointBERT (Chen, Zhuo, and Wang 2019) for joint intent and slot classification. We use two Tesla P100 GPUs with 16 GB of RAM to perform all the experiments.
Document Classification
Recent advancements show that Transformer (Vaswani et al. 2017) based pre-trained language models like BERT (Devlin et al. 2019), RoBERTa (Liu et al. 2019), ALBERT (Lan et al. 2020), and DeBERTa (He et al. 2021) have proven to be very successful in learning robust context-based representations of lexicons and applying these to achieve state-of-the-art performance on a variety of downstream tasks, such as document classification in our case. We implemented the different models mentioned in Table 2 for learning contextual representations of the documents, whose outputs were then fed to a softmax layer to obtain the final predicted class of the document. Along with this, we also implemented a variant of LEGAL-BERT (Chalkidis et al. 2020) and LEGAL-RoBERTa 3, which were pre-trained on large-scale legal domain-specific corpora, which in turn led to much better scores than their counterparts pre-trained on general corpora.
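A minimal sketch of this setup, assuming the HuggingFace Transformers API, is shown below; the checkpoint name is illustrative of the DeBERTa family rather than the exact model variant used, and the fine-tuning loop is omitted.

```python
# Encoder + 4-way softmax head for legal document classification (sketch).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

labels = ["Corruption", "Land Dispute", "Murder", "Robbery"]
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-base", num_labels=len(labels))

inputs = tokenizer("The accused was found demanding a bribe ...",
                   truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, 4)
print(labels[int(logits.argmax(dim=-1))])    # predicted case category
```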
Recent improvements to the state of the art in contextual language models, such as DeBERTa, perform significantly better than BERT. The same is observed from Table 2, which shows that both the accuracy and the macro F1-score for DeBERTa are the highest among the models, while LEGAL-BERT is on par with DeBERTa in terms of accuracy. DeBERTa is pre-trained using a disentangled attention mechanism along with an enhanced mask decoder, while the rest of its training procedure follows that of BERT. Owing to this novel attention mechanism, DeBERTa outperforms the other models in terms of both accuracy and macro F1-score.
LEGAL-BERT, on the other hand, is pre-trained and further fine-tuned on legal domain-specific corpora, which led to its state-of-the-art performance on various legal domain-specific tasks. In our case, leveraging LEGAL-BERT outperforms the general-purpose models since its contextual representations are more attuned to legal matters.
All of the transformer models were implemented using sliding window attention (Masood, Abbasi, and Wee Keong 2020), since the length of every document exceeds the transformer's maximum token size. They were trained with a sliding window ratio of 20% over three epochs, with the learning rate and batch size set to 2e-5 and 32 respectively. The documents in the dataset are randomly split into train, validation and test sets in the ratio 6:2:2. Note that, when classifying fine-grained intents, we only consider those sub-intents that have at least 50 corresponding phrases.
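The sliding-window inference can be sketched as below; this simplified version slides over the raw token ids with 20% overlap (mirroring the ratio above) and averages the window logits, and, unlike a production implementation, it skips special tokens for brevity.

```python
# Hedged sketch of sliding-window classification for long documents.
import torch

def classify_long_document(text, tokenizer, model, max_len=512, overlap=0.2):
    ids = tokenizer(text, add_special_tokens=False,
                    return_tensors="pt")["input_ids"][0]
    stride = max(1, int(max_len * (1 - overlap)))  # 20% window overlap
    window_logits = []
    for start in range(0, len(ids), stride):
        window = ids[start:start + max_len].unsqueeze(0)  # (1, <=max_len)
        with torch.no_grad():
            window_logits.append(model(input_ids=window).logits)
        if start + max_len >= len(ids):
            break
    return torch.stack(window_logits).mean(dim=0)  # averaged class logits
```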
We report the accuracy and the macro-averaged F1-score for each of the models, so as to get an intuition of how state-of-the-art transformer-based architectures perform on document classification in the legal domain.
JointBERT
We implemented BERT for joint intent classification and slot filling (Chen, Zhuo, and Wang 2019) on our dataset. We also replaced the BERT backbone with other transformer-based models such as DistilBERT and ALBERT. Slot filling is a sequence labelling task in which tokens receive BIO tags for the classes 'Corruption', 'Land Dispute', 'Robbery' and 'Murder'; intent classification then assigns one of these classes to the sentence as a whole. The dataset is prepared in the following manner: since the 'O' tag dominates the slot filling task, only sentences containing an intent phrase, the one before, and the one after are used for training, to mitigate class imbalance. Each token has a BIO slot tag, and each sentence with an intent phrase has a target intent. We randomly selected 20% of the samples for testing and 20% for validation; the remaining 60% were used for training.
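The BIO tag construction described above can be sketched as follows; the whitespace tokenization and exact-match logic here are deliberately naive stand-ins for the real preprocessing.

```python
# Derive BIO slot tags: tokens inside an annotated intent phrase get B-/I-
# tags for the document's category, all other tokens get 'O'.
def bio_tags(tokens, phrase_tokens, intent):
    tags = ["O"] * len(tokens)
    n = len(phrase_tokens)
    for i in range(len(tokens) - n + 1):
        if tokens[i:i + n] == phrase_tokens:
            tags[i] = f"B-{intent}"
            tags[i + 1:i + n] = [f"I-{intent}"] * (n - 1)
    return tags

tokens = "the accused demanded a bribe from the complainant".split()
print(bio_tags(tokens, "demanded a bribe".split(), "Corruption"))
# ['O', 'O', 'B-Corruption', 'I-Corruption', 'I-Corruption', 'O', 'O', 'O']
```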
The models were trained over 10 epochs with a batch size of 16, at a learning rate of 2e-5; at each epoch checkpoint, the model was evaluated on the validation set.

Table 3: Results on Intent classification

Table 4 gives the evaluation metric scores for each intent separately, and the analysis provides evidence that the transformer-based models perform poorly on the Corruption intent, due to the number of documents in that category being the lowest, whereas they perform significantly better on the other intents. Table 6 provides the classification accuracy and intent macro F1-score on the fine-grained intent classification task. As the intent becomes more specific, the scores drop significantly, showing that the models are unable to capture the in-depth context of the intent phrases. However, the model with the BERT backbone still performs the best. This can be attributed to the fact that BERT has the highest number of parameters (~110 million) as compared to ALBERT (~31 million) and DistilBERT (~50 million). Table 7 provides the precision, recall and macro F1-score of fine-grained intent classification for the best performing model among the three, i.e., JointBERT with a BERT backbone. The labels are presented in the form X_Y, where X is an intent (e.g. Robbery) and Y is a fine-grained intent/sub-intent (e.g. action). We observe that, even though the number of training samples per fine-grained class is quite low, performance on the test set is quite good: the F1-score for all classes is above 0.4 and, except for two classes, above the halfway mark of 0.5. Note that we have not reported slot classification results for the fine-grained intents. This is because the number of labels becomes almost twice as large in this case as compared to intent classification (due to the presence of both B and I tags corresponding to each fine-grained intent, and additionally an O class, as we consider BIO tags for annotation). Hence, the number of samples per class is insufficient to learn a good slot classifier.
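For reference, the joint objective optimized by JointBERT-style models can be sketched as the sum of a sentence-level intent loss and a token-level slot loss over the encoder outputs; the module below is illustrative, not the exact training code used in these experiments.

```python
# Joint intent + slot classification head (sketch).
import torch.nn as nn

class JointHead(nn.Module):
    def __init__(self, hidden_size, n_intents, n_slots):
        super().__init__()
        self.intent_clf = nn.Linear(hidden_size, n_intents)
        self.slot_clf = nn.Linear(hidden_size, n_slots)
        self.ce = nn.CrossEntropyLoss(ignore_index=-100)  # -100 masks padding

    def forward(self, sequence_output, pooled_output, intent_y, slot_y):
        intent_logits = self.intent_clf(pooled_output)  # (B, n_intents)
        slot_logits = self.slot_clf(sequence_output)    # (B, T, n_slots)
        loss = self.ce(intent_logits, intent_y) + \
               self.ce(slot_logits.view(-1, slot_logits.size(-1)),
                       slot_y.view(-1))
        return loss, intent_logits, slot_logits
```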
Discussion
We observe that, although transformer-based models perform well on document classification and coarse-grained intent classification, there is a need for better performance on fine-grained intent classification. Hence, we argue that our dataset could be a crucial starting point for research on fine-grained intent classification in the legal domain.
Conclusion
This paper presents a new dataset with coarse and fine-grained annotation, and shows a proof-of-concept of how document as well as intent classification can be automated with reasonably good results. We use different transformer-based models for document classification, and observe that DeBERTa performs the best. We try transformer-based models such as BERT, ALBERT and DistilBERT as the backbones of a joint intent and slot classification neural network, and observe that BERT performs the best among the three, in both coarse and fine-grained intent classification. However, our dataset is challenging, as there is a lot of scope for improvement in the results, especially in fine-grained intent classification. Hence, our dataset could serve as a crucial benchmark for fine-grained intent classification in the legal domain.
Table 2: Results of Transformer Models

Table 4: Results of Joint BERT on Intent Classification

               Precision  Recall  F1-score  Support
Corruption        0.75     0.89     0.81       27
Land Dispute      0.95     0.88     0.91       42
Murder            0.94     0.94     0.94       50
Robbery           0.96     0.89     0.92       27
Macro Average     0.90     0.90     0.90      146

Table 5 enumerates the results of Joint BERT on the task of slot classification. The model performs best on the Murder intent when compared with the others, which is again due to the number of samples in the Murder category being the largest.

Table 5: Results of Joint BERT on Slot Classification

               Precision  Recall  F1-score  Support
Corruption        0.74     0.38     0.51      326
Land Dispute      0.71     0.55     0.62      317
Murder            0.80     0.63     0.70      361
Robbery           0.66     0.53     0.59      137
Macro Average     0.73     0.52     0.60     1041

Table 6: Results on fine-grained Intent Classification

Model Name   Intent Accuracy  Intent Macro F1-score
BERT              0.53                0.50
DistilBERT        0.46                0.40
ALBERT            0.48                0.47

Table 7: Results of Joint BERT on fine-grained Intent Classification
http://www.commonlii.org/resources/221.html
https://pypi.org/project/sentifish/
https://huggingface.co/saibo/legal-roberta-base
Chalkidis, I.; Fergadiotis, M.; Malakasiotis, P.; Aletras, N.; and Androutsopoulos, I. 2020. LEGAL-BERT: The Muppets straight out of Law School. arXiv:2010.02559.
Chen, Q.; Zhuo, Z.; and Wang, W. 2019. BERT for Joint Intent Classification and Slot Filling. arXiv:1902.10909.
Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805.
He, P.; Liu, X.; Gao, J.; and Chen, W. 2021. DeBERTa: Decoding-enhanced BERT with Disentangled Attention. arXiv:2006.03654.
Lan, Z.; Chen, M.; Goodman, S.; Gimpel, K.; Sharma, P.; and Soricut, R. 2020. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv:1909.11942.
Liu, Y.; Ott, M.; Goyal, N.; Du, J.; Joshi, M.; Chen, D.; Levy, O.; Lewis, M.; Zettlemoyer, L.; and Stoyanov, V. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv:1907.11692.
Masood, M. A.; Abbasi, R. A.; and Wee Keong, N. 2020. Context-Aware Sliding Window for Sentiment Classification. IEEE Access, 8: 4870-4884.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017. Attention Is All You Need. arXiv:1706.03762.
| [] |
[
"Inducing Language Networks from Continuous Space Word Representations",
"Inducing Language Networks from Continuous Space Word Representations"
] | [
"Bryan Perozzi bperozzi@cs.stonybrook.edu \nDepartment of Computer Science\nStony Brook University\n\n",
"Rami Al-Rfou' \nDepartment of Computer Science\nStony Brook University\n\n",
"Vivek Kulkarni vvkulkarni@cs.stonybrook.edu \nDepartment of Computer Science\nStony Brook University\n\n",
"Steven Skiena skiena@cs.stonybrook.edu \nDepartment of Computer Science\nStony Brook University\n\n"
] | [
"Department of Computer Science\nStony Brook University\n",
"Department of Computer Science\nStony Brook University\n",
"Department of Computer Science\nStony Brook University\n",
"Department of Computer Science\nStony Brook University\n"
] | [] | Recent advancements in unsupervised feature learning have developed powerful latent representations of words. However, it is still not clear what makes one representation better than another and how we can learn the ideal representation. Understanding the structure of latent spaces attained is key to any future advancement in unsupervised learning. In this work, we introduce a new view of continuous space word representations as language networks. We explore two techniques to create language networks from learned features by inducing them for two popular word representation methods and examining the properties of their resulting networks. We find that the induced networks differ from other methods of creating language networks, and that they contain meaningful community structure. | 10.1007/978-3-319-05401-8_25 | [
"https://arxiv.org/pdf/1403.1252v2.pdf"
] | 2,060,108 | 1403.1252 | 18205c96fcac85002a190b9fbf64c84c9bc78ff0 |
Inducing Language Networks from Continuous Space Word Representations
Bryan Perozzi bperozzi@cs.stonybrook.edu
Department of Computer Science
Stony Brook University
Rami Al-Rfou'
Department of Computer Science
Stony Brook University
Vivek Kulkarni vvkulkarni@cs.stonybrook.edu
Department of Computer Science
Stony Brook University
Steven Skiena skiena@cs.stonybrook.edu
Department of Computer Science
Stony Brook University
Inducing Language Networks from Continuous Space Word Representations
Recent advancements in unsupervised feature learning have developed powerful latent representations of words. However, it is still not clear what makes one representation better than another and how we can learn the ideal representation. Understanding the structure of latent spaces attained is key to any future advancement in unsupervised learning. In this work, we introduce a new view of continuous space word representations as language networks. We explore two techniques to create language networks from learned features by inducing them for two popular word representation methods and examining the properties of their resulting networks. We find that the induced networks differ from other methods of creating language networks, and that they contain meaningful community structure.
Introduction
Unsupervised feature learning (deep learning) utilizes huge amounts of raw data to learn representations that model knowledge structure and disentangle the explanatory factors behind observed events. Under this framework, symbolic sparse data is represented by lower-dimensional continuous spaces. Integrating knowledge in this format is the secret behind many recent breakthroughs in machine learning based applications such as speech recognition, computer vision, and natural language processing (NLP) [3]. We focus here on word representations (word embeddings), where each word representation consists of a dense, real-valued vector. During the pre-training stage, the representations acquire the desirable property that similar words have lower distance to each other than to unrelated words [15]. This allows the representations to utilize the abundance of raw text available to learn features and knowledge that are essential for supervised learning applications such as part-of-speech tagging, named entity recognition, machine translation, language modeling, sentiment analysis, etc. [11,13,17,24]. Several methods and algorithms have been proposed to learn word representations, along with different benchmarks for evaluation [10]. However, these evaluations are hard to comprehend as they squash the analysis of a representation's quality into abstract numbers. To enable better understanding of the actual structure of the word relationships which have been captured, we have to address the problems that come with analyzing high-dimensional spaces (typically between 50-1000 dimensions). We believe that network induction and graph analysis are appropriate tools to give us new insights. In this work, we seek to induce meaningful graphs from these continuous space language models. Specifically, our contributions include:
- Analysis of Language Network Induction - We propose two criteria to induce networks out of continuous embeddings (a minimal induction sketch appears after this list). For both methods, we study and analyze the characteristics of the induced networks. Moreover, the networks generated lead to easy-to-understand visualizations.
- Comparison Between Word Representation Methods - We evaluate the quality of two well-known word embeddings. We contrast their characteristics using the analysis developed earlier.
The remainder of this paper is set up as follows. First, in Section 2, we describe the continuous space language models that we consider. In Section 3, we discuss the choices involved with inducing a network from these embeddings and examine the resulting networks. Finally, we finish with a discussion of future work and our conclusions.
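As a concrete preview of the induction step studied in Section 3, the sketch below shows one natural criterion, a k-nearest-neighbor graph over the embedding space; the specific criteria analyzed in this paper are defined there, so this snippet is illustrative only.

```python
# Induce a language network by linking each word to its k most cosine-similar
# neighbors in the embedding space.
import numpy as np
import networkx as nx

def knn_graph(words, vectors, k=5):
    X = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    sims = X @ X.T                           # pairwise cosine similarities
    np.fill_diagonal(sims, -np.inf)          # exclude self-edges
    G = nx.Graph()
    G.add_nodes_from(words)
    for i, w in enumerate(words):
        for j in np.argsort(-sims[i])[:k]:   # k nearest neighbors of word i
            G.add_edge(w, words[int(j)], weight=float(sims[i, j]))
    return G
```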
Continuous Space Language Models
The goal of a language model is to assign a probability to any given sequence of words, estimating the likelihood of observing such a sequence. The training objective usually maximizes the joint probability of the training corpus. A continuous space probabilistic language model aims to estimate such a probability distribution by first learning continuous representations for the words and phrases observed in the language. Such a mapping is useful to cope with the curse of dimensionality in cases where the data distribution is sparse, as in natural language. Moreover, these representations can be used as features for natural language processing applications, domain adaptation, and learning transfer scenarios that involve text or speech. More precisely, given a sequence of words $S = [w_1 \ldots w_k]$, we want to maximize $P(w_1, \ldots, w_k)$ and learn representations for words. During the training process the continuous space language model learns a mapping of words to points in $\mathbb{R}^d$, where $d$ usually ranges between 20 and 200. Prior to training we build a vocabulary $V$ that consists of the most frequent $|V|$ words, and we map each word to a unique identifier that indexes an embeddings matrix $C$ of size $|V| \times d$. The sequence $S$ is now represented by the matrix $[C[w_1]^T \ldots C[w_k]^T]^T$, enabling us to compose a new representation of the sequence using one of several compositional functions. The simplest is to concatenate all the rows into a bigger vector of size $kd$. Another option is to sum the matrix row-wise to produce a smaller representation of size $d$. While the first respects the order of the words, it is more expensive to compute.
Given a specific sequence representation, we define a task that the model should solve using that representation as its only input. Our choice of task ranges from predicting the next/previous word(s) to distinguishing between observed phrases and corrupted copies of them. The chosen task and/or the compositional function influence the learned representations greatly, as we will discuss later. We focus our investigations here on two embeddings which are trained with different tasks and compositional functions: the Polyglot and SkipGram embeddings.
Polyglot
The Polyglot project offers word representations for each language in Wikipedia [22]. For large enough Wikipedias, the vocabulary consists of the most frequent 100,000 words. The representations are learned through a procedure similar to the one proposed by Collobert et al. [11]. For a given sequence of words $S_t = [w_{t-k} \ldots w_t \ldots w_{t+k}]$ observed in the corpus $T$, a corrupted sequence $S'_t$ is constructed by replacing the word in the middle, $w_t$, with a word $w_j$ chosen randomly from the vocabulary $V$. Once the vectors are retrieved, we compose the sequence representation by concatenating the vectors into one vector called the projection layer $S_t$. The model is penalized through the hinge loss function
$$\frac{1}{|T|} \sum_{t=1}^{|T|} \left| 1 - \mathrm{score}(S_t) + \mathrm{score}(S'_t) \right|_+$$
where the score is calculated through a hidden-layer neural network: $\mathrm{score}(S_t) = W_2 \tanh(W_1 S_t + b_1) + b_2$.
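To make the objective concrete, the following is a minimal PyTorch sketch of this scoring network and ranking loss. The hidden size and batch handling are illustrative assumptions, and this is not the released Polyglot training code:

```python
import torch
import torch.nn as nn

class WindowScorer(nn.Module):
    """score(S) = W2 tanh(W1 S + b1) + b2 over a concatenated window vector."""
    def __init__(self, window_dim, hidden_dim=32):       # hidden size is an assumption
        super().__init__()
        self.hidden = nn.Linear(window_dim, hidden_dim)  # W1, b1
        self.out = nn.Linear(hidden_dim, 1)              # W2, b2

    def forward(self, s):                  # s: (batch, window_dim)
        return self.out(torch.tanh(self.hidden(s))).squeeze(-1)

def ranking_loss(scorer, s_obs, s_corrupt):
    """Hinge loss |1 - score(S_t) + score(S'_t)|_+ , averaged over a batch:
    the observed window should outscore the corrupted one by a margin of 1."""
    margin = 1.0 - scorer(s_obs) + scorer(s_corrupt)
    return torch.clamp(margin, min=0.0).mean()
```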
For this work, we use the Polyglot English embeddings 1 , which consist of the 100,000 most frequent words in the English Wikipedia, each represented by a vector in $\mathbb{R}^{64}$.
SkipGram
While the Polyglot embeddings consider the order of words to build the representation of any sequence of words, the SkipGram model proposed by Mikolov et al. [16] maximizes the average log probability of the context words independently of their order:
$$\frac{1}{T} \sum_{t=1}^{T} \sum_{j=-k}^{k} \log p(w_{t+j} \mid w_t)$$
where k is the size of the training window. This allows the model to scale to larger context windows. In our case, we train a SkipGram model 2 on the English Wikipedia corpus offered by the Polyglot project for the most frequent 350,000 words with context size k set to 5 and the embeddings vector size set to 64.
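A comparable model can be trained with gensim, whose `Word2Vec` class implements the same SkipGram objective (the experiments here used the word2vec tool linked in footnote 2; the parameter names below follow gensim 4.x and the corpus iterator is our assumption):

```python
from gensim.models import Word2Vec

# `sentences` is assumed to be an iterable of tokenized Wikipedia sentences.
model = Word2Vec(
    sentences=sentences,
    vector_size=64,            # embedding dimension, as in the paper
    window=5,                  # context size k = 5
    sg=1,                      # SkipGram (rather than CBOW) objective
    max_final_vocab=350_000,   # keep roughly the most frequent 350,000 words
    workers=4,
)
vec = model.wv["language"]     # 64-dimensional vector for an in-vocabulary word
```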
Random
In order to have a baseline, we also generate random embeddings for the most frequent 100,000 words. The initial positions of words in the Polyglot embeddings were sampled from a uniform distribution; therefore, we generate the random embedding vectors by sampling from $U(\bar{m} - \sigma, \bar{m} + \sigma)$, where $\bar{m}$ and $\sigma$ are the mean and standard deviation of the trained Polyglot embeddings' values, respectively. This baseline allows us to see how the language networks we construct differ from networks induced from randomly initialized points.
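A sketch of this baseline, where the matrix `polyglot_emb` is assumed to be the trained embeddings already loaded as a NumPy array:

```python
import numpy as np

# polyglot_emb: (100_000, 64) trained Polyglot embedding matrix, assumed loaded
m_bar, sigma = polyglot_emb.mean(), polyglot_emb.std()
rng = np.random.default_rng(seed=0)
random_emb = rng.uniform(m_bar - sigma, m_bar + sigma, size=polyglot_emb.shape)
```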
Word Embedding Networks
We now consider the problem of constructing a meaningful network given a continuous space language model. As there are a variety of ways in which such a network could be induced, we start by developing a list of desirable properties for a language network. Specifically, we are seeking to build a network which:
1. Is Connected: In a connected graph, all the words can be related to each other. This allows for a consistent approach when trying to use the network to solve real-world problems.
2. Has Low Noise: Minimizing the spurious correlations captured by our discrete representation will make it more useful for application tasks.
3. Has Understandable Clusters: We desire that the community structure in the network reflects the syntactic and semantic information encoded in the word embeddings.

We also require a method to compute the distance in the embedding space. While there are a variety of metrics that could be used, we found that Euclidean distance worked well. So we use:
$$\mathrm{dist}(x, y) = \|x - y\|_2 = \left( \sum_{i=1}^{d} (x_i - y_i)^2 \right)^{1/2} \qquad (1)$$
where x and y are words in a d-dimensional embedding space ($x, y \in \mathbb{R}^d$). With these criteria and a distance function in hand, we are ready to proceed. We examine two approaches for constructing graphs from word embeddings, both of which seek to link together words which are close in the embedding space. For each method, we induce networks for the 20,000 most frequent words of each embedding type, and compare their properties.
k-Nearest Neighbors
The first approach we will consider is to link each word to the k closest points in the embedding space. More formally, we induce a set of directed edges through this method:
$$E_{knn} = \{(u, v) : \min{}_x\, \mathrm{dist}(u, v),\; x \le k\} \quad \forall u, v \in V \qquad (2)$$
where $\min_x$ denotes the rank of the x-th number in ascending sorted order (e.g., $\min_0$ is the minimum element, $\min_1$ the next smallest number). After obtaining a directed graph in this fashion, we convert it to an undirected one. The resulting undirected graph does not have a constant degree distribution. This is due to the fact that the nearest-neighbor relation may not be symmetric: although all vertices in the original directed graph have an out-degree of k, their orientation in the embedding space means that some vertices will have higher in-degrees than others.
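The induction in Eq. (2) can be sketched directly with NumPy and NetworkX; computing distances one row at a time keeps memory manageable at 20,000 words. The function below is our illustration, not the authors' code:

```python
import numpy as np
import networkx as nx

def knn_graph(emb, k):
    """Undirected k-NN graph of Eq. (2) from an (n, d) embedding matrix."""
    emb = np.asarray(emb, dtype=np.float32)
    n = emb.shape[0]
    G = nx.Graph()
    G.add_nodes_from(range(n))
    sq = (emb ** 2).sum(axis=1)
    for i in range(n):
        d2 = sq + sq[i] - 2.0 * (emb @ emb[i])  # squared distances from node i
        d2[i] = np.inf                          # exclude the self-loop
        for j in np.argpartition(d2, k)[:k]:    # indices of the k nearest words
            G.add_edge(i, int(j))               # undirected edges symmetrize k-NN
    return G
```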
[Figure 1 plots: (a) k-NN Coverage; (b) d-Threshold Coverage. Y-axis: GCC Coverage = |GCC|/|V|; series: SkipGram, Polyglot, and Random, each shown as number of components (# Comp) and coverage. See the Fig. 1 caption below.]
Results from our investigation of basic network properties of the k-NN embedding graphs are shown in Figures 1 and 2. In (1a) we find that the embedding graphs have few disconnected components, even for small values of k. In addition, there is an obvious GCC which quickly emerges. In this way, the embeddings are similar to the network induced on random points (which is fully connected at k = 2). We performed an investigation of the smaller connected components when k was small, and found them to contain dense groupings of words with very similar usage characteristics (including ordinal values, such as Roman numerals (II, III, IV)). In (2a) we see that the clustering coefficient initially grows quickly as we add edges to our network (k ≤ 6), but has leveled off by k = 20. This tendency to bridge new clusters together, rather than just expand existing ones, may be related to the instability of the nearest neighbor [6] in high-dimensional spaces. In (2b), we see that the networks induced by the k-NN method are not only connected, but have a highly modular community structure.
d-Proximity
The second approach we will consider is to link each word to all those within a fixed distance d of it:
$$E_{proximity} = \{(u, v) : \mathrm{dist}(u, v) < d\} \quad \forall u, v \in V \qquad (3)$$
We perform a similar investigation of the network properties of embedding graphs constructed with the d-Proximity method. The results are shown in Figures 1 and 2. We find that networks induced through this method quickly connect words that are near each other in the embedding space, but do not bridge distant groups together. They have a large number of connected components, and connecting 90% of the vertices requires using a relatively large value of d (1b). The number of connected components is closely related to the average distance between points in the embedding space (around d = (3.25, 3.80, 2.28) for (SkipGram, Polyglot, Random)). As the value of d grows closer to this average distance, the graph quickly approaches the complete graph. Figure 2a shows that as we add more edges to the network, we add triangles at a faster rate than with the k-NN method.
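The analogous sketch for Eq. (3), under the same assumptions as the k-NN function above:

```python
import numpy as np
import networkx as nx

def proximity_graph(emb, d):
    """d-Proximity graph of Eq. (3): link every pair of words closer than d."""
    emb = np.asarray(emb, dtype=np.float32)
    n = emb.shape[0]
    G = nx.Graph()
    G.add_nodes_from(range(n))
    sq = (emb ** 2).sum(axis=1)
    for i in range(n):
        d2 = sq + sq[i] - 2.0 * (emb @ emb[i])  # squared distances from node i
        for j in np.nonzero(d2 < d * d)[0]:
            if j > i:                           # each unordered pair once, no self-loops
                G.add_edge(i, int(j))
    return G
```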
Discussion
Here we discuss the differences exposed between the methods for inducing networks from word embeddings, and the differences exposed between the embeddings themselves.
Comparison of Network Induction Methods. Which method, then, provides the better networks from word embeddings? To answer this question, we will use the properties raised at the beginning of this section:
1. Connectedness: Networks induced through the k-NN method connect much faster (as a function of edges) than those induced through d-Proximity (Fig. 1). Specifically, the network induced for k = 6 has nearly full coverage (1a) with only 100K edges (2a).
2. Spurious Edges: We desire that our resulting networks should be modular. As such, we would prefer to add edges between members of a community instead of bridging communities together. For low values of |E|, the k-NN approach creates networks which have more closed triangles (2a); however, this does not hold in networks with more edges.
3. Understandable Clusters: In order to qualitatively examine the quality of such a language network, we induced a subgraph with the k-NN of the most frequent 5,000 words in the Polyglot embeddings for English. Figure 3 presents the language network constructed for k = 6.
According to our three criteria, k-NN seems better than d-Proximity. In addition to the reasons we already listed, we prefer k-NN as it seems to require less parameterization (d-Proximity has a different optimal d for each embedding type).
Comparison of Polyglot and SkipGram. Having chosen k-NN as our preferred method for inducing language networks, we now examine the difference between the Polyglot and SkipGram networks.

Clustering Coefficient. We note that in Figure 2a, the SkipGram model has a consistently higher clustering coefficient than Polyglot in k-NN networks. A larger clustering coefficient denotes more triangles, and this may indicate that points in the SkipGram space form more cohesive local clusters than those in Polyglot. Tighter local clustering may explain some of the interesting regularities observed in the SkipGram embedding [18].

Modularity. In Figure 2b, we see that Polyglot modularity is consistently above the SkipGram modularity. SkipGram's embeddings capture more semantic information about the relations between words, and this may cause a less optimal community structure than Polyglot, whose embeddings are syntactically clustered.

Clustering Visualizations. In order to better understand the differences between the language networks, we conducted an examination of the clusters found using the Louvain method [8] for modularity maximization. Figure 4 examines communities from both Polyglot and SkipGram in detail.
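The statistics discussed above can be reproduced on an induced graph with NetworkX (version 2.8 or later for the built-in Louvain implementation); `G` is assumed to be a graph from one of the induction functions sketched earlier:

```python
import networkx as nx

C = nx.average_clustering(G)                               # clustering coefficient
communities = nx.community.louvain_communities(G, seed=0)  # Louvain method [8]
Q = nx.community.modularity(G, communities)                # modularity of the partition
print(f"C = {C:.3f}, Q = {Q:.3f}, #communities = {len(communities)}")
```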
Related Work
Here we discuss the relevant work on language networks and word embeddings. There is also related work on the theoretical properties of nearest neighbor graphs; consult Eppstein, Paterson, and Yao [12] for some basic investigations.
Language Networks
Word Co-occurrences. One branch of the study of language as networks seeks to build networks directly from a corpus of raw text. Cancho and Solé [9] examine word co-occurrence graphs as a method to analyze language. In their graph, edges connect words which appear within a fixed threshold (d ≤ 2) of each other in sentences. They find that networks constructed in this manner show both small-world structure and a power-law degree distribution. Language networks based on word co-occurrence have been used in a variety of natural language processing tasks, including motif analysis of semantics [7], text summarization [1], and resolving disambiguation of word usages [26].
Hypernym relations. Another approach to studying language networks relies on the relationships between words exposed by a written language reference. Motter et al. [21] use a thesaurus to construct a network of synonyms, which they find to exhibit small-world structure. In [25], Sigman and Cecchi investigate the graph structure of the Wordnet lexicon. They find that the semantic edges in Wordnet follow scale-invariant behavior and that the inclusion of polysemous edges drastically raises the clustering coefficient, creating a small-world effect in the network.

Relation to our work. Much of the previous work in language networks builds networks that are prone to noise from spurious correlations in word co-occurrence or infrequent word senses [9,25]. Dimensionality reduction techniques have been successful in mitigating the effects of noise in a variety of domains. The word embedding methods we examine are a form of dimensionality reduction that has improved performance on several NLP tasks and benchmarks. The networks produced in our work are considerably different from the language networks created by previous work that we are aware of. We find that our degree distribution does appear to follow a power law (like [9,21,25]) and we have some small-world properties like those present in those works (such as $C \gg C_{random}$). However, the average path length in our graphs is considerably larger than the average path length in random graphs with the same node and edge cardinalities. Table 1 shows a comparison of metrics from different approaches to creating language networks. 3
Word Embeddings
Distributed representations were first proposed by Hinton [14] to learn a mapping of symbolic data to continuous space. These representations are able to capture fine-grained structures and regularities in the data [18]. However, training these models is slow due to their complexity. Usually, these models are trained using the back-propagation algorithm [23], which requires a large amount of computational resources. With recent advancements in hardware performance, Bengio et al. [4] used distributed representations to produce a state-of-the-art probabilistic language model. The model maps each word in a predefined vocabulary $V$ to a point in $\mathbb{R}^d$ space (word embeddings), and was trained on a cluster of machines for days. More applications followed: Collobert et al. [11] developed SENNA, a system that offers a part-of-speech tagger, chunker, named entity recognizer, semantic role labeler, and discriminative syntactic parser using distributed word representations. To speed up the training procedure, importance sampling [5] and hierarchical softmax models [19,20] were proposed to reduce the computational costs. The training of word representations involves a minimal amount of language-specific knowledge and expertise: Al-Rfou', Perozzi, and Skiena [22] trained word embeddings for more than a hundred languages and showed that the representations help build multilingual applications with minimal human effort. Recently, the SkipGram and Continuous Bag-of-Words models were proposed by Mikolov et al. [16] as simpler and faster alternatives to neural network based models.
Conclusions
We have investigated the properties of recently proposed distributed word representations, which have shown promising results in several machine learning applications. Despite their usefulness, understanding the mechanisms which afford them their characteristics is still a hard problem.
In this work, we presented an approach for viewing word embeddings as a language network. We examined the characteristics of the induced networks and their community structure. Using this analysis, we were able to develop a procedure which produces a connected graph with meaningful clusters. We believe that this work will set the stage for advances both in NLP techniques which utilize distributed word representations and in understanding the properties of the machine learning processes which generate them. Much remains to be done. In the future we would like to focus on comparing word embeddings to other well-known distributional representation techniques (e.g., LDA/LSA), examining the effects of different vocabulary types (e.g., topic words, entities) on the induced graphs, and the stability of the graph properties as a function of network size.
Fig. 1: Graph Coverage. The connected components and relative size of the Giant Connected Component (GCC) in graphs created by both methods. We see that very low values of k quickly connect the entire network (1a), while values of d appear to have a transition point before a GCC emerges (1b).
Fig. 2: Community Metrics. In (2a), the clustering coefficient C is shown for k = [2, 30] and d = [0.8, 1.6] against the number of edges in the induced graph. When the total number of edges is low (|E| < 150,000), networks induced through the k-NN method have more closed triangles than those created through d-Proximity. In (2b), the k-NN modularity Q_knn starts high, but slowly drops as larger values of k include more spurious edges.
Fig. 3: Polyglot Nearest Neighbor Graph. Here we connect the nearest neighbors (k = 6) of the top 5,000 most frequent words from the Polyglot English embeddings. Shown is the giant connected component of the resulting graph (|V| = 11,239; |E| = 26,166). Colors represent clusters found through the Louvain method (modularity Q = 0.849). Vertex label size is determined by its PageRank. Best viewed in color.
Fig. 4: Comparison of clusters found in Polyglot and SkipGram language networks. Polyglot clusters are viewed in the context of the surrounding graph; SkipGram clusters have been isolated to aid in visualization. SkipGram's bag-of-words approach favors a more semantic meaning between words, which can make its clusters less understandable (note how in Figure 4c Petersburg is included in a cluster of religious words, because of Saint). Images created with Gephi [2].
Fig. 5: Additional close-ups of clusters in Polyglot embeddings (from Figure 3).

Fig. 6: Visualization of the 6-NN for the GCC of the top 5,000 most frequent words in the SkipGram embeddings. SkipGram's representations are more semantic, and so language features like polysemous words make global visualization harder.
                            |V|       |E|           C      C_random      pl     pl_random   γ
Cancho and Solé [9] (UWN)   478,773   1.77 × 10^7   0.687  1.55 × 10^-4  2.63*  3.03        -1.50, -2.70
Cancho and Solé [9] (RWN)   460,902   1.61 × 10^7   0.437  1.55 × 10^-4  2.67*  3.06        -1.50, -2.70
Motter et al. [21]          30,244    -             0.53   0.002         3.16   -           -
Polyglot, 6-NN              20,000    96,592        0.241  0.0004        6.78*  4.62*       -1.31
SkipGram, 6-NN              20,000    94,172        0.275  0.0004        6.57*  4.62*       -1.32

Table 1: A comparison of properties of language networks from the literature against those induced on the 20,000 most frequent words in the Polyglot and SkipGram embeddings. (C: clustering coefficient; pl: average path length; γ: exponent of power-law fits to the degree distribution.) '*' denotes values which have been estimated on a random subset of the vertices.
1 Polyglot embeddings and corpus available at http://bit.ly/embeddings
2 SkipGram training tool available at https://code.google.com/p/word2vec/
3 Our induced networks are available at http://bit.ly/inducing_language_networks
Acknowledgments

This research was partially supported by NSF Grants DBI-1060572 and IIS-1017181, with additional support from TASC Inc, and a Google Faculty Research Award.
References

[1] L. Antiqueira, O. N. O. Jr., L. da Fontoura Costa, and M. das Graças Volpe Nunes. "A complex network approach to text summarization." Information Sciences 179.5 (2009), pp. 584-599.
[2] M. Bastian, S. Heymann, and M. Jacomy. "Gephi: An Open Source Software for Exploring and Manipulating Networks." 2009.
[3] Y. Bengio, A. Courville, and P. Vincent. "Representation learning: A review and new perspectives." 2013.
[4] Y. Bengio, H. Schwenk, J.-S. Senécal, F. Morin, and J.-L. Gauvain. "Neural probabilistic language models." In: Innovations in Machine Learning. Springer, 2006, pp. 137-186.
[5] Y. Bengio and J.-S. Senecal. "Adaptive importance sampling to accelerate training of a neural probabilistic language model." IEEE Transactions on Neural Networks 19.4 (2008), pp. 713-722.
[6] K. Beyer, J. Goldstein, R. Ramakrishnan, and U. Shaft. "When is "nearest neighbor" meaningful?" In: Database Theory (ICDT'99). Springer, 1999, pp. 217-235.
[7] C. Biemann, S. Roos, and K. Weihe. "Quantifying Semantics using Complex Network Analysis." In: Proceedings of COLING 2012. Mumbai, India, Dec. 2012, pp. 263-278.
[8] V. D. Blondel, J.-L. Guillaume, R. Lambiotte, and E. Lefebvre. "Fast unfolding of communities in large networks." Journal of Statistical Mechanics: Theory and Experiment 2008.10 (2008), P10008.
[9] R. F. i. Cancho and R. V. Solé. "The small world of human language." Proceedings of the Royal Society of London, Series B: Biological Sciences 268.1482 (2001), pp. 2261-2265.
[10] Y. Chen, B. Perozzi, R. Al-Rfou', and S. Skiena. "The Expressive Power of Word Embeddings." In: ICML 2013 Workshop on Deep Learning for Audio, Speech, and Language Processing. Atlanta, USA, 2013.
[11] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. "Natural language processing (almost) from scratch." The Journal of Machine Learning Research 12 (2011), pp. 2493-2537.
[12] D. Eppstein, M. S. Paterson, and F. F. Yao. "On nearest-neighbor graphs." Discrete & Computational Geometry 17.3 (1997), pp. 263-282.
[13] X. Glorot, A. Bordes, and Y. Bengio. "Domain Adaptation for Large-Scale Sentiment Classification: A Deep Learning Approach." In: ICML, vol. 27, June 2011, pp. 97-110.
[14] G. E. Hinton. "Learning distributed representations of concepts." In: Proceedings of the Eighth Annual Conference of the Cognitive Science Society. Amherst, MA, 1986, pp. 1-12.
[15] G. E. Hinton and R. R. Salakhutdinov. "Reducing the dimensionality of data with neural networks." Science 313.5786 (2006), pp. 504-507.
[16] T. Mikolov, K. Chen, G. Corrado, and J. Dean. "Efficient Estimation of Word Representations in Vector Space." arXiv preprint arXiv:1301.3781 (2013).
[17] T. Mikolov, S. Kombrink, L. Burget, J. Cernocky, and S. Khudanpur. "Extensions of recurrent neural network language model." In: ICASSP 2011. IEEE, 2011, pp. 5528-5531.
[18] T. Mikolov, W.-t. Yih, and G. Zweig. "Linguistic regularities in continuous space word representations." In: Proceedings of NAACL-HLT 2013, pp. 746-751.
[19] A. Mnih and G. E. Hinton. "A scalable hierarchical distributed language model." In: Advances in Neural Information Processing Systems, 2008, pp. 1081-1088.
[20] F. Morin and Y. Bengio. "Hierarchical probabilistic neural network language model." In: Proceedings of the International Workshop on Artificial Intelligence and Statistics, 2005, pp. 246-252.
[21] A. E. Motter, A. P. S. de Moura, Y.-C. Lai, and P. Dasgupta. "Topology of the conceptual network of language." Physical Review E 65.6 (June 2002), p. 065102.
[22] R. Al-Rfou', B. Perozzi, and S. Skiena. "Polyglot: Distributed Word Representations for Multilingual NLP." In: Proceedings of the Seventeenth Conference on Computational Natural Language Learning. Sofia, Bulgaria, Aug. 2013, pp. 183-192.
[23] D. Rumelhart, G. Hinton, and R. Williams. "Learning internal representations by back propagation." In: Parallel Distributed Processing: Exploration in the Microstructure of Cognition 1 (1986).
[24] H. Schwenk, A. Rousseau, and M. Attik. "Large, pruned or continuous space language models on a GPU for statistical machine translation." In: Proceedings of the NAACL-HLT 2012 Workshop: Will We Ever Really Replace the N-gram Model? On the Future of Language Modeling for HLT, 2012, pp. 11-19.
[25] M. Sigman and G. A. Cecchi. "Global organization of the Wordnet lexicon." Proceedings of the National Academy of Sciences 99.3 (2002), pp. 1742-1747.
[26] J. Véronis. "HyperLex: lexical cartography for information retrieval." Computer Speech & Language 18.3 (2004), pp. 223-252.
| [] |
[
"Joint Chinese Word Segmentation and Span-based Constituency Parsing",
"Joint Chinese Word Segmentation and Span-based Constituency Parsing"
] | [
"Zhicheng Wang \nSchool of Computer Science\nEngineering Sun Yat-sen University Guangzhou\nChina\n",
"Tianyu Shi \nSchool of Computer Science\nEngineering Sun Yat-sen University Guangzhou\nChina\n",
"Cong Liu liucong3@mail.sysu.edu.cn \nSchool of Computer Science\nEngineering Sun Yat-sen University Guangzhou\nChina\n"
] | [
"School of Computer Science\nEngineering Sun Yat-sen University Guangzhou\nChina",
"School of Computer Science\nEngineering Sun Yat-sen University Guangzhou\nChina",
"School of Computer Science\nEngineering Sun Yat-sen University Guangzhou\nChina"
] | [] | In constituency parsing, span-based decoding is an important direction. However, for Chinese sentences, because of their linguistic characteristics, it is necessary to utilize other models to perform word segmentation first, which introduces a series of uncertainties and generally leads to errors in the computation of the constituency tree afterward. This work proposes a method for joint Chinese word segmentation and Span-based Constituency Parsing by adding extra labels to individual Chinese characters on the parse trees. Through experiments, the proposed algorithm outperforms the recent models for joint segmentation and constituency parsing on CTB 5.1. | 10.48550/arxiv.2211.01638 | [
"https://export.arxiv.org/pdf/2211.01638v2.pdf"
] | 253,265,256 | 2211.01638 | a4c1d4e94672574f07e60a8625564caecd0391e9 |
Joint Chinese Word Segmentation and Span-based Constituency Parsing
Zhicheng Wang
School of Computer Science
Engineering Sun Yat-sen University Guangzhou
China
Tianyu Shi
School of Computer Science
Engineering Sun Yat-sen University Guangzhou
China
Cong Liu liucong3@mail.sysu.edu.cn
School of Computer Science
Engineering Sun Yat-sen University Guangzhou
China
Joint Chinese Word Segmentation and Span-based Constituency Parsing
Constituency parsing · Chinese Word Segmentation
In constituency parsing, span-based decoding is an important direction. However, for Chinese sentences, because of their linguistic characteristics, it is necessary to utilize other models to perform word segmentation first, which introduces a series of uncertainties and generally leads to errors in the computation of the constituency tree afterward. This work proposes a method for joint Chinese word segmentation and Span-based Constituency Parsing by adding extra labels to individual Chinese characters on the parse trees. Through experiments, the proposed algorithm outperforms the recent models for joint segmentation and constituency parsing on CTB 5.1.
Introduction
In natural language processing, constituency parsing is fundamental: it recognizes the phrase structure and syntactic tree of a sentence. Constituency parsing is useful for a variety of downstream tasks, such as translation and sentiment analysis. A neural parser often consists of an encoding module and a decoding module. The encoding module obtains the context representation of each word in a sentence. With the rapid development of representation learning, encoding models have gradually evolved from LSTMs to Transformers [1], which have stronger representation capability. In terms of decoding, there are also many different types of decoding algorithms, such as transition-based decoding [2,3], span-based decoding [4,13], and sequence-to-sequence decoding [5,6].
Much of the previous work has focused on improving encoders, e.g., using more semantic and contextual information to improve performance [9]. In the decoding stage, span-based decoders are popular. Stern et al. [4] score labels and splits separately, and compute the parse tree with the highest score bottom-up through a dynamic programming algorithm. In addition, they provide a computationally efficient greedy top-down inference algorithm based on recursive partitioning of the input. However, in contrast to English, where punctuation marks and spaces between words serve as natural constituency markers, Chinese sentences must first undergo word segmentation using external models or algorithms, due to differences in linguistic characteristics. Using such additional models or methods introduces a number of uncertainties; for instance, incorrect word segmentation will typically cause errors when computing the constituency tree afterward.
In this work, we parse at the character level to avoid relying on external segmentation tools, which results in joint Chinese word segmentation and constituency parsing. In prior work, Xin et al. [10] extend the label set with POS tags for n-ary tree parsing models. We take this one step further and apply it to binary tree parsing. We first label each individual Chinese character with the label "@1" and then transform the tree into Chomsky normal form (CNF). We further use the label "@2" to denote nodes that are generated when binarizing the subtree of each word with more than two characters.
Our approach surpasses a number of joint-task models for span-based parsing on the Chinese Penn Treebank. For the joint tasks, the F1-measures of Chinese word segmentation and constituency parsing of our decoder are 99.05 and 91.94, respectively.
Related Work
Early Models for Span-based Decoding
Early constituency parsing methods are mainly based on grammar and statistics, such as probabilistic context-free grammar (PCFG). On this basis, the widely used CKY decoding algorithm was developed, which is essentially a dynamic programming algorithm. Then, Collins [11] extends probabilistic context-free grammars and proposes a generative, lexicalised, probabilistic parsing model. After that, Matsuzaki et al. [12] define a generative probabilistic model of parse trees with latent non-terminal splitting annotations. For a long time, such generative models dominated constituency parsing.
In recent years, span-based parsers were presented, which used log-linear or neural scoring potentials to parameterize a tree-structured dynamic program for maximization or marginalization [14,15]. As one of the most influential works, Stern et al. [4] present a minimal neural model for constituency parsing based on independent scoring of labels and spans, and achieve state-of-the-art performance. Kitaev and Klein [13] further improve the encoder with factored self-attention, which disentangles the propagation of contextual information and positional information in the transformer layers. On this basis, Mrini et al. [9] propose the Label Attention Layer, which uses extra labels to encode task-related knowledge, similar to the more recently proposed prompting technique [20].
The proposed parser adopts the encoder from previous work [9].
Joint Chinese Word Segmentation and Constituency Parsing
Unlike English, Chinese sentences consist of single characters without segmentation. For Chinese constituency parsing, sentences are first segmented using external segmenters before being fed into the model [16]. Prior work attempts to combine the two tasks. Qian and Liu [19] train the two models separately and incorporate them during decoding. Zhang et al. [17] extend the notion of phrase-structure trees by annotating the internal structures of words: they label each individual Chinese character and add structural information to the label to improve parsing performance. Xin et al. [10] extend the label set with POS tags for n-ary tree parsing models.
In this work, we label each individual Chinese character with "@1" and sub-words with "@2", so we can perform word segmentation and span labeling for parsing at the same time. An example parse tree is shown in Figure 1a, which is the model input. We start by adding the extra label "@1" to each individual Chinese character, as shown in Figure 1b. Individual Chinese characters are treated as words and words as phrases. In order to decode with a CKY-like algorithm, we transform the tree into Chomsky normal form (CNF), as shown in Figure 1c. Following previous work [8], consecutive unary productions are merged into a single node. For words with more than two Chinese characters, we introduce another label, "@2", during binarization. The CNF tree is converted back into the n-ary tree after decoding.
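A sketch of this transformation with nltk.Tree may be helpful. The "@1"/"@2" label conventions follow the paper, but the function itself, including the word-internal left-binarization, is our illustration (the paper's Figure 1c determines the actual tree shape), not the authors' code:

```python
from nltk import Tree

def to_char_level_cnf(tree):
    """Mark single characters with "@1", binarize multi-character words with "@2",
    then convert the whole tree to Chomsky normal form."""
    def expand(t):
        if all(isinstance(c, str) for c in t):      # a POS node: its leaves form a word
            chars = [Tree("@1", [ch]) for ch in "".join(t)]
            while len(chars) > 2:                   # word-internal binarization with "@2"
                chars = [Tree("@2", chars[:2])] + chars[2:]
            return Tree(t.label(), chars)
        return Tree(t.label(), [expand(c) for c in t])

    t = expand(tree)
    t.collapse_unary(collapsePOS=True)   # merge consecutive unary productions
    t.chomsky_normal_form()              # binarize the remaining phrase structure
    return t
```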
Model
Preliminaries
Encoding
A sentence is denoted by $x = \{x_1, x_2, \ldots, x_n\}$, where $x_i$ is the $i$-th word and $n$ is the sentence length.
In the encoding stage, our goal is to obtain the score of each span $(i, j)$, which will be used in the decoding stage to obtain the best parse tree. In the embedding layer, we use the pretrained language model Chinese BERT to generate the contextual embeddings. We denote by $e_i$ the embedding vector of the $i$-th word in the sentence:

$$e = \mathrm{BERT}(x), \quad e = \{e_1, e_2, \ldots, e_n\} \qquad (1)$$
In the encoding layer, the Transformer [1] is selected for extracting the contextual features, denoted by $g_i$. Following previous work, we also stack a Label Attention Layer (LAL) [9] on top of the Transformer layers. The final contextual embedding is denoted by $h_i$, whose dimension is 1024.
$$g = \mathrm{Transformer}(e) \qquad (2)$$
$$h = \mathrm{LAL}(g) \qquad (3)$$
In the scoring phase, we convert the word representations into span representations, denoted by $H(i, j)$. We use a two-layer MLP to calculate the scores of span $(i, j)$ for different labels from $H(i, j)$:
$$s_{span}(i, j, \cdot) = \mathrm{MLP}(H(i, j)) \qquad (4)$$
where $s_{span}(i, j, l)$ denotes the score of assigning label $l$ to span $(i, j)$.
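As a PyTorch sketch of Eq. (4): the MLP sizes below follow the hyper-parameters in Table 1, but the boundary-difference span feature is an assumption on our part, since the paper does not spell out how $H(i, j)$ is formed from the word representations:

```python
import torch
import torch.nn as nn

class SpanScorer(nn.Module):
    """Two-layer MLP producing s_span(i, j, .) for every span of a sentence."""
    def __init__(self, dim=1024, hidden=250, num_labels=300, dropout=0.2):
        super().__init__()                   # num_labels is a placeholder value
        self.mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(hidden, num_labels),
        )

    def forward(self, h):                    # h: (n, dim) word representations
        # H[i, j] = h_j - h_i as a simple span feature -- an assumption, see above.
        H = h.unsqueeze(0) - h.unsqueeze(1)  # shape (n, n, dim)
        return self.mlp(H)                   # s_span: (n, n, num_labels)
```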
Decoding

Let $s_{tree}(i, j, l)$ denote the best score of span $(i, j)$ for the label $l$, which is calculated with a bottom-up CKY-style dynamic programming algorithm. In the base case, each span $(i, j)$ contains a single word, and $s_{tree}(i, j, l)$ is given directly by the encoder's span score, since such a tree span contains a single node:

$$s_{tree}(i, j, l) = s_{span}(i, j, l) \qquad (5)$$
For a span $(i, j)$ covering more than one word, let $k$ ($i < k < j$) be a split point between its left and right children, $l$ be the label of the span, and $l_1$ and $l_2$ be the labels of the left and right children. The tree score is calculated by summing the scores of the constituent tree's subtrees, and the optimal parse tree is the parse tree with the highest tree score:
$$s_{tree}(i, j, l) = \max_{i<k<j} \left[ s_{span}(i, j, l) + s_{tree}(i, k, l_1) + s_{tree}(k, j, l_2) \right] \qquad (6)$$
During decoding, we generate the optimal tree on each span for each label in a top-down order. For the root span, we obtain the label $l_{best} = \arg\max_l s_{tree}(i, j, l)$. Then, we trace back the optimal split applied at each node to construct the optimal tree in top-down order. The whole process is shown in Figure 2.
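A compact NumPy sketch of the whole procedure: the bottom-up recursion of Eqs. (5)-(6) followed by the top-down backtrace. It assumes fencepost indexing (span (i, j) covers words i..j-1), maximizes over child labels jointly so the best split is shared across parent labels, and omits batching for brevity:

```python
import numpy as np

def cky_decode(s_span):
    """s_span: (n+1, n+1, L) label scores for spans (i, j), 0 <= i < j <= n."""
    n = s_span.shape[0] - 1
    s_tree = np.full_like(s_span, -np.inf)
    back = {}
    for i in range(n):                                  # base case, Eq. (5)
        s_tree[i, i + 1] = s_span[i, i + 1]
    for width in range(2, n + 1):                       # Eq. (6), bottom-up
        for i in range(n - width + 1):
            j = i + width
            best = max(range(i + 1, j),
                       key=lambda k: s_tree[i, k].max() + s_tree[k, j].max())
            child = s_tree[i, best].max() + s_tree[best, j].max()
            s_tree[i, j] = s_span[i, j] + child
            back[(i, j)] = best

    def build(i, j):                                    # top-down backtrace
        label = int(s_tree[i, j].argmax())
        if j - i == 1:
            return (label, i, j)                        # leaf span over one word
        k = back[(i, j)]
        return (label, build(i, k), build(k, j))

    return build(0, n)
```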
Training Loss
We use two kinds of training losses. In the first 10 epochs, the label loss is used to make the model converge quickly. Its calculation does not require decoding the optimal tree; it is simply the sum of the cross-entropy between the label distribution of each span and its ground-truth label.
$$\mathrm{Loss}_{label} = \sum_{(i,j) \in T} \mathrm{CrossEntropyLoss}(s_{span}(i, j), l_{i,j}) \qquad (7)$$
For the remaining epochs, we use the tree loss, which is defined as the hinge loss between the sum of the label scores of the spans in the predicted tree and that in the gold tree.
$$\mathrm{Loss}_{tree} = \sum_{(i,j) \in T_{pred}} s_{span}(i, j, l_{i,j}) + 1 - \sum_{(i,j) \in T_{gold}} s_{span}(i, j, l_{i,j}) \qquad (8)$$
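The two objectives can be sketched as follows. The span containers are our assumptions, and we clamp the margin at zero, which is standard for hinge losses even though Eq. (8) does not show it explicitly:

```python
import torch
import torch.nn.functional as F

def label_loss(s_span, gold_spans):
    """Eq. (7): cross-entropy between each gold span's score vector and its label.
    gold_spans: list of (i, j, label) triples from the gold tree."""
    scores = torch.stack([s_span[i, j] for (i, j, _) in gold_spans])
    labels = torch.tensor([l for (_, _, l) in gold_spans])
    return F.cross_entropy(scores, labels)

def tree_loss(s_span, pred_spans, gold_spans):
    """Eq. (8): hinge loss between the predicted and gold trees' span scores."""
    pred = sum(s_span[i, j, l] for (i, j, l) in pred_spans)
    gold = sum(s_span[i, j, l] for (i, j, l) in gold_spans)
    return torch.clamp(pred + 1.0 - gold, min=0.0)
```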
Experiments
Experimental Setup
We evaluate our model for constituency parsing on Chinese Treebank 5.1 [7], which consists of 17,544/352/348 examples in its training/validation/testing splits, respectively. Each example is a parse tree with internal nodes associated with labels and words associated with tags. We follow previous work [3] and adopt the same pre-processing. Following Zhang et al. [8], POS tags are removed and not used as input features in either training or testing on CTB 5.1. We employ the EVALB tool to calculate the standard precision, recall, and F1-measure as evaluation metrics. Models are trained solely on the training data and evaluated on the test set (not including validation data). Table 1 lists the hyper-parameters used in the implementation. The hyper-parameter settings for the Transformer and Label Attention Layer are the same as in previous work [9] and are therefore not listed. The model is implemented in PyTorch and trained on a single GTX TITAN. We employ the pre-trained Chinese BERT to compare with previous works.
Speed Analysis
We use the test set of CTB 5.1 to measure parsing speed. To reduce randomness, we conduct 10 runs and average the results, which are shown in Table 3. The average processing speed is 50 sentences per second on a single RTX 3090. We compare with Xin et al. [10] as the baseline, which is also run on a single RTX 3090. Our model performs better, and its speed is 2.5 times that of the baseline.
Conclusion
In this paper, we add extra labels for individual Chinese characters on the parse trees for joint Chinese word segmentation and constituency parsing. Experiments on CTB 5.1 yield several promising results. The proposed framework performs better than earlier work, with F1 improvements of 0.13 and 0.10 points on the joint tasks of word segmentation and constituency parsing, respectively. Additionally, our model's computational speed is higher than that of earlier work.
Fig. 1: Illustration of how we convert a parse tree into a binarized character-level tree.
Fig. 2: A bottom-up dynamic programming calculation. Different colored lines represent different splits. The best splits are recorded during the calculation.
Cong Liu is the corresponding author.
Parameter       Value
learning rate   10^-5
decay factor    0.5
max decay       10
decay patience  3
batch size      250
MLP layers      2
MLP hidden      250
dropout         0.2

Table 1: Hyper-parameters.

4.2 Performance

Table 2 shows the overall performance on the joint task on the test set. We use the work of Xin et al. [10] as the baseline, which also uses Chinese BERT as the pre-trained model. It can be observed that the F1-measures of Chinese word segmentation and constituency parsing of our proposed framework are 99.05 and 91.94, which outperform the baseline model.
Model              Seg-F1  Par-F1
Qian and Liu [19]  97.96   82.85
Wang et al. [18]   97.86   83.42
Zhang et al. [17]  97.84   84.43
Xin et al. [10]    98.92   91.84
Ours               99.05   91.94

Table 2: Joint-task performance on the test set of CTB 5.1.
Model            Sents/sec
Xin et al. [10]  20
Ours             50

Table 3: Speed comparison on the CTB 5.1 test set.
References

[1] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. "Attention is all you need." Advances in Neural Information Processing Systems, vol. 30, 2017.
[2] T. Watanabe and E. Sumita. "Transition-based neural constituent parsing." In: Proceedings of ACL-IJCNLP 2015 (Volume 1: Long Papers), pp. 1169-1179.
[3] J. Liu and Y. Zhang. "In-order transition-based constituent parsing." Transactions of the Association for Computational Linguistics 5 (2017), pp. 413-424.
[4] M. Stern, J. Andreas, and D. Klein. "A minimal span-based neural constituency parser." In: Proceedings of ACL 2017 (Volume 1: Long Papers). Vancouver, Canada, Jul. 2017, pp. 818-827.
[5] Y. Shen, Z. Lin, A. P. Jacob, A. Sordoni, A. Courville, and Y. Bengio. "Straight to the tree: Constituency parsing with neural syntactic distance." arXiv preprint arXiv:1806.04168, 2018.
[6] C. Gómez-Rodríguez and D. Vilares. "Constituent parsing as sequence labeling." arXiv preprint arXiv:1810.08994, 2018.
[7] N. Xue, F. Xia, F.-D. Chiou, and M. Palmer. "The Penn Chinese Treebank: Phrase structure annotation of a large corpus." Natural Language Engineering 11.2 (2005), pp. 207-238.
[8] Y. Zhang, H. Zhou, and Z. Li. "Fast and accurate neural CRF constituency parsing." arXiv preprint arXiv:2008.03736, 2020.
[9] K. Mrini, F. Dernoncourt, Q. Tran, T. Bui, W. Chang, and N. Nakashole. "Rethinking self-attention: Towards interpretability in neural parsing." arXiv preprint arXiv:1911.03875, 2019.
[10] X. Xin, J. Li, and Z. Tan. "N-ary constituent tree parsing with recursive semi-Markov model." In: Proceedings of ACL-IJCNLP 2021 (Volume 1: Long Papers), pp. 2631-2642.
[11] M. Collins. "Three generative, lexicalised models for statistical parsing." arXiv preprint cmp-lg/9706022, 1997.
[12] T. Matsuzaki, Y. Miyao, and J. Tsujii. "Probabilistic CFG with latent annotations." In: Proceedings of ACL 2005, pp. 75-82.
[13] N. Kitaev and D. Klein. "Constituency parsing with a self-attentive encoder." arXiv preprint arXiv:1805.01052, 2018.
[14] J. R. Finkel, A. Kleeman, and C. D. Manning. "Efficient, feature-based, conditional random field parsing." In: Proceedings of ACL-08: HLT, 2008, pp. 959-967.
[15] G. Durrett and D. Klein. "Neural CRF parsing." arXiv preprint arXiv:1507.03641, 2015.
[16] M. Wang, K. Sagae, and T. Mitamura. "A fast, accurate deterministic parser for Chinese." In: Proceedings of COLING-ACL 2006, pp. 425-432.
[17] M. Zhang, Y. Zhang, W. Che, and T. Liu. "Chinese parsing exploiting characters." In: Proceedings of ACL 2013 (Volume 1: Long Papers), pp. 125-134.
[18] Z. Wang, C. Zong, and N. Xue. "A lattice-based framework for joint Chinese word segmentation, POS tagging and parsing." In: Proceedings of ACL 2013 (Volume 2: Short Papers), pp. 623-627.
[19] X. Qian and Y. Liu. "Joint Chinese word segmentation, POS tagging and parsing." In: Proceedings of EMNLP-CoNLL 2012, pp. 501-511.
[20] P. Liu, W. Yuan, J. Fu, Z. Jiang, H. Hayashi, and G. Neubig. "Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing." arXiv preprint arXiv:2107.13586, 2021.
| [] |
[
"Training Vision-Language Models with Less Bimodal Supervision",
"Training Vision-Language Models with Less Bimodal Supervision",
"Training Vision-Language Models with Less Bimodal Supervision",
"Training Vision-Language Models with Less Bimodal Supervision"
] | [
"Elad Segal elad.segal@gmail.com \nBlavatnik School of Computer Science\nTel Aviv University\n\n",
"Ben Bogin ben.bogin@cs.tau.ac.il \nBlavatnik School of Computer Science\nTel Aviv University\n\n",
"Jonathan Berant joberant@cs.tau.ac.il \nBlavatnik School of Computer Science\nTel Aviv University\n\n",
"Elad Segal elad.segal@gmail.com \nBlavatnik School of Computer Science\nTel Aviv University\n\n",
"Ben Bogin ben.bogin@cs.tau.ac.il \nBlavatnik School of Computer Science\nTel Aviv University\n\n",
"Jonathan Berant joberant@cs.tau.ac.il \nBlavatnik School of Computer Science\nTel Aviv University\n\n"
] | [
"Blavatnik School of Computer Science\nTel Aviv University\n",
"Blavatnik School of Computer Science\nTel Aviv University\n",
"Blavatnik School of Computer Science\nTel Aviv University\n",
"Blavatnik School of Computer Science\nTel Aviv University\n",
"Blavatnik School of Computer Science\nTel Aviv University\n",
"Blavatnik School of Computer Science\nTel Aviv University\n"
Standard practice in pretraining multimodal models, such as vision-language models, is to rely on pairs of aligned inputs from both modalities, for example, aligned image-text pairs. However, such pairs can be difficult to obtain in low-resource settings and for some modality pairs (e.g., structured tables and images). In this work, we investigate the extent to which we can reduce the reliance on such parallel data, which we term bimodal supervision, and use models that are pretrained on each modality independently. We experiment with a high-performing vision-language model, and analyze the effect of bimodal supervision on three vision-language tasks. We find that on simpler tasks, such as VQAv2 and GQA, one can eliminate bimodal supervision completely, suffering only a minor loss in performance. Conversely, for NLVR2, which requires more complex reasoning, training without bimodal supervision leads to random performance. Nevertheless, using only 5% of the bimodal data (142K images along with their captions), or leveraging weak supervision in the form of a list of machine-generated labels for each image, leads to only a moderate degradation compared to using 3M image-text pairs: 74%→∼70%.
"https://export.arxiv.org/pdf/2211.00262v1.pdf"
] | 253,244,224 | 2211.00262 | d246e6efed3b4d49e7c995ae465a0ead708335a6 |
Training Vision-Language Models with Less Bimodal Supervision
Elad Segal elad.segal@gmail.com
Blavatnik School of Computer Science
Tel Aviv University
Ben Bogin ben.bogin@cs.tau.ac.il
Blavatnik School of Computer Science
Tel Aviv University
Jonathan Berant joberant@cs.tau.ac.il
Blavatnik School of Computer Science
Tel Aviv University
Training Vision-Language Models with Less Bimodal Supervision
Automated Knowledge Base Construction (2022) Conference paper
Standard practice in pretraining multimodal models, such as vision-language models, is to rely on pairs of aligned inputs from both modalities, for example, aligned image-text pairs. However, such pairs can be difficult to obtain in low-resource settings and for some modality pairs (e.g., structured tables and images). In this work, we investigate the extent to which we can reduce the reliance on such parallel data, which we term bimodal supervision, and use models that are pretrained on each modality independently. We experiment with a high-performing vision-language model, and analyze the effect of bimodal supervision on three vision-language tasks. We find that on simpler tasks, such as VQAv2 and GQA, one can eliminate bimodal supervision completely, suffering only a minor loss in performance. Conversely, for NLVR2, which requires more complex reasoning, training without bimodal supervision leads to random performance. Nevertheless, using only 5% of the bimodal data (142K images along with their captions), or leveraging weak supervision in the form of a list of machine-generated labels for each image, leads to only a moderate degradation compared to using 3M image-text pairs: 74%→∼70%.
Introduction
Pretraining models on large amounts of raw data using self-supervision has revolutionized machine learning, and is now standard practice across a wide range of modalities [Liu et al., 2019, Raffel et al., 2020, Dosovitskiy et al., 2021, Liu et al., 2021, Herzig et al., 2020, Schneider et al., 2019, Baevski et al., 2022]. While typically pretrained models are trained on data from a single modality (unimodal data), the success of pretraining has spread to the bimodal setup, where models are trained on pairs of inputs, each from a different modality (e.g., text and audio, Li et al., 2021a). Most notably, vision-language models, such as LXMERT [Tan and Bansal, 2019], ViLT, METER [Dou et al., 2022], CLIP [Radford et al., 2021], and others [Li et al., 2019, Lu et al., 2019, Li et al., 2021b] have been pretrained on manually or automatically collected parallel data that consists of aligned image-text pairs.
While effective, pretraining on bimodal data comes at a cost. First, gathering high-quality pairs can be challenging, especially in low-resource languages and domains, or for modality pairs where parallel data is scarce. Second, expanding this approach to more than two modalities (as in, e.g., MultimodalQA, Talmor et al., 2021) is challenging. Last, pretraining is computationally expensive [Strubell et al., 2019, Bommasani et al., 2021], and thus relying on pretraining for all modality pairs is inefficient.

Figure 1: The effect of unimodal and bimodal pretraining on downstream performance after finetuning. In VQAv2 and GQA, pretraining on unimodal data alone (without image-text pairs) is competitive with models pretrained on image-text pairs. On NLVR2, bimodal supervision is necessary, but one can reach reasonable performance using only 5% of the image-text pairs or training on machine-generated object labels. Random initialization leads to poor performance on all tasks.
Given these shortcomings, a natural question is how far can we get with models pretrained on unimodal data only (unimodal models), such as BERT [Devlin et al., 2019] and ViT [Dosovitskiy et al., 2021], to reduce or obviate the need for bimodal pretraining. Can we align unimodal representations without resorting to pretraining over millions of input pairs? While past work [Dou et al., 2022, Li et al., 2021b, Zhai et al., 2022] used unimodal models as an initialization point before bimodal pretraining, it did not investigate its effect on the amount of necessary bimodal data.
In this work, we investigate to what extent we can reduce the burden of bimodal pretraining and finetune models on vision-language applications starting with models that were unimodally pretrained. We choose a high-performing architecture [Dou et al., 2022]: a transformer image encoder and a transformer text encoder, which pass their representations through additional transformer layers that capture the interaction between the image and the text, before performing a final classification task.
We test performance on visual question answering and visual reasoning tasks in the following setups: (a) randomly initialized image and text encoders, (b) unimodally-pretrained image and text encoders, and (c) unimodally-pretrained image and text encoders that are then pretrained with bimodal supervision. We test different sources for bimodal pretraining, which require different amounts of human effort: (a) automatically harvested image-caption pairs (Conceptual Captions, Sharma et al., 2018), (b) images paired with machine-generated object labels (CCIL, Ng et al., 2021), (c) manually annotated image-object pairs (ImageNet-1K, Russakovsky et al., 2015), and (d) image-question-answer triples from visual question answering tasks. We note that due to computational constraints the size of our pretraining corpus is smaller compared to those used by industry-based researchers [Li et al., 2022, Radford et al., 2021, Jia et al., 2021].
We find (Figure 1) that on some tasks, models that do not use any bimodal supervision are only slightly worse than models that are pretrained on large amounts of image-text pairs: 70.7→69.5 on VQAv2, and 56.1→53.6 on GQA. However, for a more complex reasoning task, such as NLVR2, bimodal supervision is crucial. Nevertheless, we show that one can dramatically reduce the number of bimodal image-text pairs and still obtain reasonable performance, either by using only 5% of the pairs (74.3→70.2) or through machine-generated object labels (74.3→68.0). Our code is available at https://github.com/eladsegal/less-bimodal-sup.

Figure 2: Left: Architecture overview: an image encoder and a text encoder followed by a few transformer fusion layers, capturing interaction between modalities through cross-attention. Center: We pretrain the VL encoder from bimodal supervision by taking contextualized representations of the image (h^img) and text (h^txt) and applying the image-text matching (ITM) and masked language modeling (MLM) loss functions. Right: We finetune the VL encoder on downstream classification tasks by concatenating the image and text representations and passing them through an MLP classifier.
Overview
We provide an overview of the experimental settings explored in this work. As our architecture-of-choice, we leverage one that has been shown to perform well across multiple tasks [Dou et al., 2022], namely, a Vision-Language (VL) encoder, where a unimodal image encoder creates image representations, a unimodal text encoder creates text representations, and these two representations are passed through a few transformer [Vaswani et al., 2017] layers that capture cross-modal interactions (Figure 2, Left).
We experiment with three initializations of the image and text encoders. First, we use random initialization as a baseline. Second, we initialize from pretrained unimodal models (RoBERTa and ViT; Liu et al., 2019, Dosovitskiy et al., 2021), which can potentially reduce the amount of bimodal pretraining. Last, we pretrain the entire VL encoder with bimodal supervision (Figure 2, Center), and compare different data sources for pretraining, each requiring different amounts of human effort.
In each experiment we finetune and evaluate the VL encoder on downstream VL applications (Figure 2, Right), focusing on classification tasks (visual question answering and visual reasoning).
Data
We now describe the datasets used during bimodal pretraining and finetuning. For downstream applications, we put an emphasis on tasks that require reasoning over image(s) and text. Table 1 provides an example from each dataset, and Appendix A provides key statistics and details on the composition of the training sets.
Table 1: Examples from all datasets used in this work (images omitted here).
ImageNet: Class label: printer
Conceptual Captions: Caption: snail on a branch isolated on white background
Conceptual Captions Image Labels: Computer-generated labels: room, interior design, furniture, blue, living room, green, property, turquoise, home, floor, yellow, table, building, wall, house
VQAv2: Question: How many chairs can you count? Answer: 2
GQA: Question: What vegetable is to the left of the bag? Answer: cauliflower
NLVR2: Sentence: The sink in one of the images is set into a brown wood hanging counter. Label: false
Pretraining Datasets
ImageNet-1K [Russakovsky et al., 2015] is a human-annotated dataset that consists of over 1.2M images, divided into 1,000 classes that are mapped to meaningful concepts according to the WordNet hierarchy [Fellbaum, 1998]. Each concept is described by one or more language phrases, and accompanied by ∼1,000 images that illustrate it. We consider ImageNet-1K a source of lightweight bimodal supervision that is relatively cheap to obtain, as images are paired with text describing a single concept rather than a full sentence.
Conceptual Captions (CC) [Sharma et al., 2018] is a programmatically-generated dataset of image-text pairs that consists of 3.3M examples. Prior work has demonstrated that CC is an effective resource for vision-language pretraining [Li et al., 2021b, Lu et al., 2019, Hendricks et al., 2021]. We use CC as a primary source of bimodal supervision, since: (a) it does not involve manual annotations, (b) it is small enough to be used by resource-constrained researchers, and (c) its images are from a different origin than the downstream tasks. Therefore, it provides a suitable test bed for estimating models' ability to generalize to new images.
Conceptual Captions Image Labels (CCIL) [Ng et al., 2021] is a subset of 2M images from CC that contains machine-generated labels using the Google Cloud image labelling API. While labels are cheap since they are automatically-generated, the API was presumably trained on large amounts of bimodal data. Nevertheless, we examine pretraining on images paired with sets of labels to investigate whether this provides a sufficiently rich source of bimodal supervision despite lacking natural language sentences. Past work indeed showed that VL pretraining benefits from masking object labels [Bitton et al., 2021].
Downstream Tasks
VQAv2 [Goyal et al., 2017] is a human-authored visual question answering (VQA) dataset that consists of 1.1M natural language questions with 10 short answers per question over 204K images from COCO [Lin et al., 2014]. It is standard to treat VQAv2 as a classification task, by only keeping questions with the most common answers (3,129 classes) [Anderson et al., 2018, Tan and Bansal, 2019, Zhai et al., 2022].
GQA [Hudson and Manning, 2019] (balanced) is a VQA dataset whose public version contains 1.1M questions over 83K images from Visual Genome [Krishna et al., 2017]. Unlike VQAv2, questions are created programmatically from scene graphs created by human annotators. Using scene graphs allows GQA to generate questions that test various reasoning skills such as comparisons, logical inference, spatial reasoning, etc.
NLVR2 [Suhr et al., 2019] is a benchmark for testing models' ability to reason over text and images. The dataset contains 107K examples, where each example contains an English sentence and two web images (see Table 1). The goal is to determine whether the sentence is true or false in the context of the pair of images, a binary classification task.
Method
Our goal is to develop a classifier f : X × I → C that given an utterance x and an image i predicts a class c ∈ C.
Architecture
We use a VL architecture, adapted from Dou et al. [2022]. The tokens of the utterance x = (x_0, x_1, ..., x_n) are fed into a transformer text encoder, where x_0 is the special symbol CLS_txt. Similarly, the image is broken into patches i = (i_0, i_1, ..., i_m), where i_0 is a special symbol CLS_img, which are fed into a transformer image encoder. The text and image encoders compute contextualized representations (ĥ^txt_0, ..., ĥ^txt_n) and (ĥ^img_0, ..., ĥ^img_m), which are then linearly projected with projection matrices W^txt_proj ∈ R^{d_txt×d} and W^img_proj ∈ R^{d_img×d}, where d_txt and d_img are the hidden state dimensions of the text and image encoders, respectively. The projected representations of each modality are then passed through transformer fusion layers, which include both a self-attention sublayer and a cross-attention sublayer. Namely, each modality performs cross-attention on the other modality to fuse information from its representations, capturing interaction between the modalities. Overall, the VL encoder outputs the image and text contextualized representations h^img = (h^img_0, ..., h^img_m) and h^txt = (h^txt_0, ..., h^txt_n). An overview of our architecture is given in Figure 2, Left.
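To make the fusion computation concrete, the following PyTorch sketch shows one fusion layer. It is an illustrative reconstruction under our own naming, with layer norms elided, not the authors' exact implementation.

import torch
import torch.nn as nn

class FusionLayer(nn.Module):
    # One cross-modal layer: self-attention within each modality, then
    # cross-attention onto the other modality, then a feed-forward sublayer.
    def __init__(self, d=768, n_heads=12):
        super().__init__()
        self.self_txt = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.self_img = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.cross_txt = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.cross_img = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.ffn_txt = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))
        self.ffn_img = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))

    def forward(self, t, v):  # t: (B, n+1, d) text; v: (B, m+1, d) image
        # Self-attention sublayer (residual connections kept, norms elided).
        t = t + self.self_txt(t, t, t, need_weights=False)[0]
        v = v + self.self_img(v, v, v, need_weights=False)[0]
        # Cross-attention sublayer: text queries image tokens and vice versa.
        t2 = t + self.cross_txt(t, v, v, need_weights=False)[0]
        v2 = v + self.cross_img(v, t, t, need_weights=False)[0]
        # Feed-forward sublayer.
        return t2 + self.ffn_txt(t2), v2 + self.ffn_img(v2)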
All model parameters are jointly trained by defining loss functions over classification heads, which we describe next. Since some model parameters are initialized from a pretrained model, while others are randomly initialized, we use a higher learning rate for randomly initialized weights compared to pretrained weights, similar to Dou et al. [2022].
Pretraining Objectives
For pretraining, we use two objectives: masked language modeling (MLM) [Devlin et al., 2019, Tan and Bansal, 2019], and image-text matching (ITM) [Tan and Bansal, 2019], which are the most common objectives for VL pretraining and lead to state-of-the-art performance [Dou et al., 2022]. During training, we sum the ITM loss and the MLM loss for each training instance.
In MLM, given a masked token x_i, the goal is to maximize the probability of the gold token given the representation h^txt_i, using cross-entropy loss. In ITM, given an image-text pair (x, i), we concatenate the special CLS representations h^img_0 and h^txt_0, and use a sigmoid layer to predict whether the image matches the text or not. We train with binary cross-entropy loss.
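As a sketch of how the two heads could be wired up (our own minimal reconstruction; the head shapes and names are assumptions, not the paper's code):

import torch
import torch.nn as nn
import torch.nn.functional as F

d, vocab = 768, 50265  # RoBERTa-Base hidden size and vocabulary size
mlm_head = nn.Linear(d, vocab)   # token prediction from h^txt_i
itm_head = nn.Linear(2 * d, 1)   # match score from [h^img_0; h^txt_0]

def pretraining_loss(h_txt, h_img, mlm_labels, itm_labels):
    # MLM: cross-entropy over the vocabulary; positions that were not
    # masked carry the ignore label -100.
    mlm_loss = F.cross_entropy(
        mlm_head(h_txt).view(-1, vocab), mlm_labels.view(-1), ignore_index=-100)
    # ITM: binary classification on the concatenated CLS representations.
    cls_pair = torch.cat([h_img[:, 0], h_txt[:, 0]], dim=-1)
    itm_loss = F.binary_cross_entropy_with_logits(
        itm_head(cls_pair).squeeze(-1), itm_labels.float())
    # The two objectives are summed for each training instance.
    return mlm_loss + itm_loss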
When pretraining on Conceptual Captions, we use the same masking scheme employed by Dou et al. [2022], that is, randomly masking 15% of the tokens. For ImageNet, we are given an image and a text label and mask all of its tokens. For CCIL, we are given an image and a list of text labels, concatenated with commas as separators, ordered by their machine-generated confidence scores. We then mask all tokens of a randomly-sampled label.
In ITM, in 50% of the examples, given a positive pair (x, i), we substitute the true image with a random one and label it as a negative example.
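The following sketch illustrates both the per-source masking scheme and the ITM negative sampling described in the last two paragraphs; tokenization is elided and all names are ours, not the paper's code.

import random

MASK = "<mask>"

def mask_cc(tokens):
    # Conceptual Captions: mask 15% of the tokens at random.
    return [MASK if random.random() < 0.15 else t for t in tokens]

def mask_imagenet(tokens):
    # ImageNet: the text is a single label; mask all of its tokens.
    return [MASK for _ in tokens]

def mask_ccil(labels):
    # CCIL: labels are concatenated with commas, ordered by confidence;
    # all tokens of one randomly sampled label are masked.
    j = random.randrange(len(labels))
    return ", ".join(MASK if i == j else lab for i, lab in enumerate(labels))

def itm_example(text, image, image_pool):
    # In 50% of the examples, substitute the true image with a random one
    # and label the pair as negative.
    if random.random() < 0.5:
        return text, random.choice(image_pool), 0
    return text, image, 1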
Finetuning
Since the downstream applications in §3.2 can all be framed as classification tasks, we add a classification head to finetune the VL encoder. The classification head is a two-layer MLP, following prior work. Specifically, we take as input the concatenation of all the image and text CLS representations, i.e., [h^img_0; h^txt_0], and use the MLP to map them to |C| logits based on the number of task classes. The objective during training is to maximize the probability of the correct class(es), and we use standard cross-entropy loss. At inference time, we return the top-scoring class for all downstream tasks.
In NLVR2, where each example has two images, we consider each example as two image-text pairs, duplicating the text, and pass them separately through the VL encoder (dubbed 'the pair setup' in Chen et al. [2020]). We then pass four CLS representations (two for the images, two for the text) to the MLP to obtain the prediction, as sketched below.
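A minimal sketch of this head and of the NLVR2 pair setup (dimensions and names are illustrative assumptions):

import torch
import torch.nn as nn

class ClassifierHead(nn.Module):
    # Two-layer MLP over the concatenated CLS representations.
    def __init__(self, d=768, n_cls=2, num_classes=3129):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_cls * d, n_cls * d), nn.GELU(),
            nn.Linear(n_cls * d, num_classes))

    def forward(self, cls_reps):  # list of (B, d) tensors
        return self.mlp(torch.cat(cls_reps, dim=-1))

# VQAv2/GQA: two CLS vectors per example, [h^img_0; h^txt_0].
# NLVR2: the sentence is duplicated and paired with each image, the VL
# encoder is run twice, and the four resulting CLS vectors are classified:
#   head = ClassifierHead(n_cls=4, num_classes=2)
#   logits = head([img_cls_a, txt_cls_a, img_cls_b, txt_cls_b])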
Experiments
Experimental Setup We use ViT-Base [Dosovitskiy et al., 2021] as the image encoder, pretrained and finetuned on ImageNet-21K at a resolution of 224x224 with a patch size of 16x16. We use RoBERTa-Base [Liu et al., 2019] as the text encoder. For the cross-modal transformer, we use only two layers to save computational resources, as previous work [Lu et al., 2019, Hendricks et al., 2021], as well as our own preliminary findings, has shown that the effect of depth is small after finetuning.
We run pretraining (§4.2) for a maximum of 7,400 steps, and finetune each downstream task for 10 epochs. We specify batch sizes and learning rates for each case in Appendix B.1.
The evaluation score for VQAv2 is VQA score, and accuracy for GQA and NLVR2. Each result for VQAv2 and GQA is a 3-run average on the test-dev split, and for NLVR2 a 10-run average on the public test split.
Limitations Our work is performed within a limited compute budget. Therefore, we choose our largest pretraining dataset to be CC although there are datasets orders of magnitude larger. Compared to other work, we use images in a lower resolution, which has been shown to decrease performance [Dou et al., 2022]. Also, Dou et al. [2022] showed that better image encoders can significantly improve performance even before bimodal pretraining, but we did not experiment with different text and image encoders nor with larger models. Additionally, even though further pretraining in some setups results in small performance improvements, we decided the computational cost was unjustified. Bugliarello et al. [2021] showed pretraining variance exists when training on CC, but we were only able to pretrain once in each setup, due to the high computational costs. All of the above means that our work is self-contained, but cannot be directly compared in numbers to other works.
Main Results
Table 2 shows the results of finetuning on all downstream tasks for different initializations of the image and text encoders.
In addition to finetuning a model that is initialized with ViT and RoBERTa ('Unimodallypretrained'), and in order to verify the importance of unimodal pretraining, we finetune our model when the image encoder, text encoder, or both encoders are randomly initialized. We find that pretraining the vision model is essential for good performance, and observe a smaller drop in performance when the text encoder is randomly initialized, similar to Zhai et al. [2022].
Comparing the unimodally-pretrained model to one that was further pretrained on CC ('Bimodally-pretrained with CC'), we see that for VQAv2 the gap is only 1 point, and for GQA it is just 2.5 points. However, on the more challenging NLVR2, which requires complex reasoning operations, bimodal pretraining is essential, and the model achieves random performance without it. Nevertheless, training with a weaker form of supervision, namely, a list of machine-generated object labels from CCIL, is sufficient for non-random and reasonably high performance on NLVR2 (but has no effect on VQAv2 and GQA).

Effect of CC Size on Pretraining

Table 2 showed that bimodal pretraining is essential for obtaining non-random results on NLVR2. A natural question is whether this can be obtained with fewer pretraining examples. To this end, we pretrain on different fractions of CC and present the results after finetuning in Table 8 (in the Appendix) and Figure 3. Surprisingly, even when using only ∼1% of CC (30K examples), performance on NLVR2 is far from random: 67.3. When using 5% of the data, performance is only moderately lower than when using CC in its entirety: 70.2 vs. 74.3. When using 25% of the data for pretraining, performance on all three datasets is less than two points lower than when using 100%, showing that the amount of bimodal supervision can indeed be considerably reduced with only a small hit in performance.
The aforementioned results were obtained by finetuning on all of the downstream data per task. However, an interesting variant to consider is a low-resource setting where we have only some of the downstream data -what is the importance of bimodal pretraining then? Table 9 (in the Appendix) and Figure 4 show for VQAv2 and GQA that when less data is used for finetuning, the benefit of pretraining with 5% or more of CC is greater than the benefit observed when 100% of the downstream data is used for finetuning. For NLVR2, we see that pretraining is still very helpful even with 100% of the downstream data. The reason for the difference might be that VQAv2 and GQA are much larger than NLVR2.
Pretraining with ImageNet Labels
We have seen in §5.1 that image-caption pairs are useful for pretraining VL models. Here, we investigate if a weaker source of language supervision, namely image labels only, suffices for aligning text and vision representations. Specifically, we pair each ImageNet image with its label, treating it as a caption, and pretrain with MLM and ITM as described in §4.2.
We observe no difference in results compared to unimodally-pretrained models (Table 10 in the Appendix) -performance remains random for NLVR2, and similar for VQAv2 and GQA. This suggests that ImageNet labels do not provide adequate signal for VL pretraining.
Pretraining with CCIL
One hypothesis for the lack of improvement when pretraining with ImageNet is that a single label per image is too limiting, since images typically contain many objects. To test this, we pretrain with CCIL, where each image is paired with machine-generated labels, providing a richer image representation. We pretrain with MLM and ITM as described in §4.2.
While pretraining on CCIL does not improve performance on VQAv2 and GQA, it leads to dramatic improvement on NLVR2, reaching an average accuracy of 68.0±0.7 and a maximum accuracy of 68.9. This shows that providing a set of object labels lets the model better align image and text representations. Table 3 further validates this by showing results when restricting the maximal number of labels per image. We observe that having multiple labels per image is crucial, as performance is roughly random when using a single label. Using 3 labels is already sufficient for bootstrapping the model, and performance is barely lower compared to using all 15 labels.
Transfer Learning
Finally, we test whether a model finetuned on a source downstream task (VQAv2 and GQA) can improve performance on a target task, i.e., in a transfer learning setup, where we vary the amount of annotated data in the source task.
Table 11 (in the Appendix) and Figure 5 (left) show results when VQAv2 is the source task and GQA and NLVR2 are the target tasks. VQAv2 appears to be an effective source of bimodal supervision for both tasks -when using all of VQAv2, performance on GQA is even slightly higher compared to pretraining on CC data, and 3 points lower on NLVR2 (74.3→71.1). Nevertheless, the amount of data in the source task is important, and performance on NLVR2 is much lower when using 5%-25% of the data.
Table 12 (in the Appendix) and Figure 5 (right) show results when GQA is the source task and VQAv2 and NLVR2 are the target tasks. We observe that VQAv2 is a better source task compared to GQA: GQA does not improve performance on VQAv2, and its effect on NLVR2 is much more moderate. A possible explanation is that VQAv2 has natural language questions, while questions in GQA are automatically generated. Another potential factor is that VQAv2 questions typically require fewer reasoning steps than GQA questions.
Overall, in both cases we find transfer learning on downstream tasks is useful, and can even perform closely to bimodally-pretrained models.
Analysis
To better understand what data properties are important for pretraining, we train on small subsets of CC (1% of the data) and VQAv2 (5% of the data), with particular characteristics:
• Min/max length: We create subsets that minimize/maximize the average input length.
• Min/max vocabulary size: We create subsets that minimize/maximize the size of the vocabulary. To do so we use a greedy procedure (see the sketch below), where (a) we initialize an empty set of examples, and at each step (b) randomly sample a candidate set of 10K examples, and (c) choose the example that minimizes/maximizes the current vocabulary size.

Table 4: Analyzing the effect of pretraining on CC/VQAv2 subsets with particular properties. After training on each subset, we finetune on NLVR2.
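A sketch of the greedy subset-selection procedure above (our own reconstruction; `examples` are assumed to be token lists):

import random

def greedy_vocab_subset(examples, target_size, minimize=True, cand_k=10_000):
    # Grow the subset one example at a time: sample 10K candidates and pick
    # the one that adds the fewest (or most) new words to the vocabulary.
    pool = list(examples)
    chosen, vocab = [], set()
    while len(chosen) < target_size and pool:
        candidates = random.sample(pool, min(cand_k, len(pool)))
        key = lambda ex: len(set(ex) - vocab)  # newly added vocabulary
        best = min(candidates, key=key) if minimize else max(candidates, key=key)
        chosen.append(best)
        vocab |= set(best)
        pool.remove(best)
    return chosen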
Results are in Table 4. No subset is noticeably better than a random subset. For CC, results are similar across subsets. For VQAv2, while performance when minimizing length and vocabulary is better on average, the differences seem negligible given the high standard deviation.

Effect of length on pretraining Table 4 shows that pretraining on long inputs substantially hurts performance: results are reduced by at least 3 points for both CC and VQAv2. This is surprising, as one might hypothesize that longer inputs should be better since they contain more information. A possible explanation is that simple examples are necessary to bootstrap the pretraining procedure and align the text and image representations.

Effect of vocabulary size on pretraining Pretraining on a subset with higher lexical diversity should expose the model to more concepts, both in images and texts, and therefore improve its performance. While for CC this is indeed the case, for VQAv2 results for the max vocabulary size setup with 16.4K words are lower than for the min vocabulary size setup with only 0.2K words. A possible explanation is the proportion of yes/no questions in the min/max vocabulary size subsets, which is 80.7% and 44.5%, respectively: since NLVR2 is a yes/no task, training on more yes/no questions might be closer to its distribution.

Related Work

Dou et al. [2022] investigated unimodally-pretrained models, finetuning different image and text encoders on multiple VL tasks, and found this recipe efficient and performant. However, they did not consider the effects of the types and amount of bimodal supervision. Past work investigated bimodal supervision for VL models, but for models that use frozen features from an object detection model [Singh et al., 2020, Hendricks et al., 2021], which (a) cannot be adapted to unseen concepts, (b) require heavily annotated object-level data for training the object detection model [Krishna et al., 2017, Anderson et al., 2018], and (c) result in an architectural inductive bias towards objects (which is very beneficial for VQA tasks). Singh et al. [2020] compared performance between multiple pretraining datasets, varying their sizes; unlike in our NLVR2 experiments, the effect of different usage of bimodal supervision was small for all tasks. Hendricks et al. [2021] assessed the contribution of pretraining datasets from a set of standard VL pretraining datasets, but focused on zero-shot image retrieval tasks. Li et al. [2021c] and Zhou et al. [2022] also share the motivation of reducing bimodal pretraining for VL models. With some similarity to our CCIL experiments in §5.4, they avoid pretraining on collected parallel image-text data altogether by utilizing predictions of regions and tags from an object detection model to create VL-specialized training objectives.
In contrast to our setup, a current trend is to pretrain models on vast amounts of bimodal data [Radford et al., 2021, Zhai et al., 2022, Alayrac et al., 2022], and perform zero/few-shot evaluation. While remarkable results have been achieved, performance is lower than that of finetuned models pretrained on less bimodal data, which is relatively cheap to obtain.
Conclusion
A current obstacle on the road to multimodal models is the reliance on bimodal supervision. In this work, we go in the opposite direction from current trends: instead of using increasing amounts of bimodal data, we examine whether one can use less of it. We find that this is indeed the case: for simple tasks, just finetuning unimodally-pretrained models leads to performance that is similar to bimodally-pretrained models, at a much lower cost. For complex tasks, while bimodal pretraining is still necessary, its amount (100%→5%) and source quality (CC→CCIL) can be significantly reduced with only a moderate degradation in performance. We also find that models finetuned on one downstream task are useful in a transfer learning setup, achieving results close to bimodally-pretrained models.

Appendix A. Training Data

Since for some of the datasets the official training splits are not used as-is, we provide the exact details of the training data composition for each dataset, as well as key statistics for all of the datasets, in Table 5.
ImageNet-1K [Russakovsky et al., 2015] Since ImageNet classes are often too fine-grained, we manually collapse fine-grained classes into an ancestor WordNet class (see https://observablehq.com/@mbostock/imagenet-hierarchy), e.g., dog breeds are collapsed to "dog". Then, we create a balanced training set according to the updated classes of the images. Following is the list of classes we collapse sub-classes into: dog, fox, wild dog, wolf, coyote, domestic cat, bear, monkey, snake, lizard, turtle, frog, salamander, lobster, crab, beetle, butterfly, spider, rabbit, bird, fungus.

Conceptual Captions (CC) [Sharma et al., 2018] Out of the 3.3M examples in the official training set, we were able to download 2.84M examples from the provided image URLs.
Conceptual Captions Image Labels (CCIL) [Ng et al., 2021] Out of the 2M examples in the official training set, we were able to download 1.84M examples from the provided image URLs.
VQAv2 [Goyal et al., 2017] We create our training set similar to previous works on VQAv2 [Tan and Bansal, 2019, Dou et al., 2022], and use the same validation set as Tan and Bansal [2019], which was constructed from the official validation set based on 5,000 randomly chosen images.
To create the training set, we first create an answer set that contains only majority answers that occurred at least 9 times in the official training and validation sets together. Then, out of the official training and validation sets, we filter out all of the examples that do not have any answer in the created answer set. Finally, out of the remaining examples, we discard every example that appears in our validation set.
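A sketch of this filtering (field names such as `majority_answer` and `answers` are our assumptions about the data layout):

from collections import Counter

def build_vqa_training_set(examples, val_image_ids, min_count=9):
    # Answer set: majority answers occurring at least 9 times over the
    # official training and validation sets together.
    counts = Counter(ex["majority_answer"] for ex in examples)
    answer_set = {a for a, c in counts.items() if c >= min_count}
    # Keep examples with at least one in-vocabulary answer, excluding
    # every example whose image appears in our validation split.
    kept = [ex for ex in examples
            if any(a in answer_set for a in ex["answers"])
            and ex["image_id"] not in val_image_ids]
    return kept, answer_set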
GQA [Hudson and Manning, 2019] We use the official training set.
NLVR2 [Suhr et al., 2019] We use the official training set.
Appendix B. Experimental Setup
B.1 Additional Implementation Details
Image Preprocessing Both in pretraining and finetuning, we apply center crop on the image and resize it to 224x224. When training, we additionally use RandAugment [Cubuk et al., 2020] as in Kim et al. [2021], with the exclusion of color-changing strategies (Invert, Posterize, Solarize, SolarizeAdd) and the cutout strategy.
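As a rough sketch of the crop-and-resize step (torchvision's built-in RandAugment does not expose per-strategy exclusion, so the reduced augmentation policy would need a custom op list; we show only the deterministic part, under one plausible reading of the description above):

from torchvision.transforms import functional as TF

def preprocess(img):
    # Center-crop to a square over the shorter side, then resize to 224x224.
    s = min(img.size)  # PIL size is (width, height)
    img = TF.center_crop(img, [s, s])
    return TF.resize(img, [224, 224])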
Model Architecture We use the model from Dou et al. [2022], but simplify it by removing two of its components, since we did not observe a performance difference: the single-layer feedforward network applied to the [CLS] representations before they are fed to a task-specific head (e.g., ITM, MLM, classifier), and the image token type embeddings.
Pretraining We run pretraining for 7,400 steps, except when training on 1%, 5% and 10% of CC, as more training results in an increase of the validation loss. We train for 1,850 steps on 1% and 5% of CC, and 3,700 steps on 10% of CC.
The batch size is 3,840, and learning rates of 1e-4 and 5e-4 are used for the pretrained and randomly initialized weights, respectively. The learning rate is warmed up from zero during the first 10% of steps, and then linearly decays back to zero throughout the remaining steps.
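The schedule corresponds to the following simple function (a sketch):

def lr_at(step, total_steps, peak_lr, warmup_frac=0.1):
    # Linear warmup from zero over the first 10% of steps, then linear
    # decay back to zero over the remaining steps.
    warmup = max(1, int(warmup_frac * total_steps))
    if step < warmup:
        return peak_lr * step / warmup
    return peak_lr * (total_steps - step) / max(1, total_steps - warmup)

# e.g., lr_at(step, total_steps=7400, peak_lr=1e-4) for pretrained weights.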
We use 8 NVIDIA V100 GPUs, and training takes about 16 hours for 100% of CC.
Finetuning For finetuning, we use a batch size of 96 for VQAv2 and GQA, and 48 for NLVR2. We specify the learning rates for finetuning before and after bimodal pretraining in Tables 6 and 7, respectively. The learning rate is warmed up from zero during the first 10% of steps, and then linearly decays back to zero throughout the remaining steps.
We use a single NVIDIA RTX 3090 GPU, and training takes 10, 15 and 4 hours for VQAv2, GQA and NLVR2 respectively.
Weights                        VQAv2   GQA    NLVR2
Image encoder, Text encoder    2e-5    1e-5   1e-5
Cross-modal transformer        2e-4    1e-4   1e-4
Classifier head                2e-4    1e-4   1e-4
Table 6: Learning rates per weights for finetuning before bimodal pretraining for each downstream task.
Figure 3: Effect of the fraction of examples from CC on downstream task performance. Solid/dashed line - average/maximum score over seeds.

Figure 4: Effect of the fraction of examples from CC on downstream task performance when finetuning on less downstream data. Solid/dashed line - average/maximum score over seeds.

Figure 5: Effect of the fraction of examples from VQAv2 (left) and GQA (right) on downstream task performance. Solid/dashed line - average/maximum score over seeds.
Table 2: Main results for all downstream tasks.

Table 3: Performance on NLVR2 when restricting the number of labels per image during pretraining on CCIL (max. value is in the parentheses).

Table 5: Key statistics for the training datasets.
Weights                        VQAv2   GQA    NLVR2
Image encoder, Text encoder    2e-5    1e-5   1e-5
Cross-modal transformer        1e-4    1e-4   5e-5
Classifier head                1e-3    1e-4   5e-4
Table 7: Learning rates per weights for finetuning after bimodal pretraining for each downstream task.

Appendix C. Results
CC Data    VQAv2      GQA        NLVR2
0%         69.5±0.1   53.6±0.1   random
1%         69.2±0.1   53.8±0.6   67.3±0.5
5%         69.8±0.0   55.3±0.3   70.2±0.3
10%        70.1±0.1   55.5±0.2   71.2±0.4
25%        70.5±0.1   55.6±0.2   72.9±0.4
50%        70.6±0.1   56.1±0.4   73.8±0.4
100%       70.7±0.0   56.1±0.3   74.3±0.3
Table 8: Effect of the fraction of examples from CC on downstream task performance. Visualized with Fig. 3.

                     VQAv2                            GQA                              NLVR2
CC Data    10%       25%       100%       10%       25%       100%       10%       25%       100%
0%         54.4±0.0  62.1±0.6  69.5±0.1   45.6±0.5  48.0±0.1  53.6±0.1   random    random    random
1%         55.8±0.2  60.8±0.1  69.2±0.1   45.2±0.3  48.1±0.5  53.8±0.6   52.5±0.9  54.6±0.6  67.3±0.5
5%         58.1±0.2  63.7±0.1  69.8±0.0   46.3±0.4  49.1±0.1  55.3±0.3   55.6±1.5  61.0±0.8  70.2±0.3
100%       62.4±0.2  66.0±0.1  70.7±0.0   48.4±0.6  51.8±0.3  56.1±0.3   63.4±0.5  67.6±0.6  74.3±0.3
Table 9: Effect of the fraction of examples from CC on downstream task performance when finetuning on less downstream data. Visualized with Fig. 4.

                                      VQAv2      GQA        NLVR2
Unimodally-pretrained                 69.5±0.1   53.6±0.1   random
Bimodally-pretrained with ImageNet    69.3±0.0   53.5±0.2   random
Table 10: Performance on all downstream tasks, with and without ImageNet pretraining. No difference is observed.

VQAv2 Data                      GQA        NLVR2
0%                              53.6±0.1   random
5%                              54.6±0.3   56.6±3.5
10%                             55.1±0.2   61.4±1.8
25%                             55.1±0.4   68.3±0.4
50%                             55.7±0.5   70.0±0.5
100%                            56.3±0.2   71.1±0.5
Bimodally-pretrained with CC    56.1±0.3   74.3±0.3
Table 11: Effect of the fraction of examples from VQAv2 on downstream task performance. Visualized with Fig. 5 (left).

GQA Data                        VQAv2      NLVR2
0%                              69.5±0.1   random
5%                              69.1±0.1   52.3±1.5
10%                             69.3±0.1   53.3±2.3
25%                             69.2±0.1   55.5±3.2
50%                             69.3±0.1   59.5±2.4
100%                            69.4±0.1   63.1±1.0
Bimodally-pretrained with CC    70.7±0.0   74.3±0.3
Table 12: Effect of the fraction of examples from GQA on downstream task performance. Visualized with Fig. 5 (right).
Acknowledgements

This research was partially supported by The Yandex Initiative for Machine Learning, and the European Research Council (ERC) under the European Union Horizons 2020 research and innovation programme (grant ERC DELPHI 802800).

References
Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, Roman Ring, Eliza Rutherford, Serkan Cabi, Tengda Han, Zhitao Gong, Sina Samangooei, Marianne Monteiro, Jacob Menick, Sebastian Borgeaud, Andy Brock, Aida Nematzadeh, Sahand Sharifzadeh, Mikolaj Binkowski, Ricardo Barreira, Oriol Vinyals, Andrew Zisserman, and Karen Simonyan. Flamingo: a visual language model for few-shot learning. arXiv, abs/2204.14198, 2022.

Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. Bottom-up and top-down attention for image captioning and visual question answering. In CVPR, pages 6077-6086, 2018. doi: 10.1109/CVPR.2018.00636.

Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, and Michael Auli. data2vec: A general framework for self-supervised learning in speech, vision and language. arXiv, abs/2202.03555, 2022.

Yonatan Bitton, Michael Elhadad, Gabriel Stanovsky, and Roy Schwartz. Data efficient masked language modeling for vision and language. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 3013-3028, 2021. doi: 10.18653/v1/2021.findings-emnlp.259.

Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, et al. On the opportunities and risks of foundation models. arXiv, abs/2108.07258, 2021.

Emanuele Bugliarello, Ryan Cotterell, Naoaki Okazaki, and Desmond Elliott. Multimodal pretraining unmasked: A meta-analysis and a unified framework of vision-and-language BERTs. Transactions of the Association for Computational Linguistics, 9:978-994, 2021. doi: 10.1162/tacl_a_00408.

Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. UNITER: Universal image-text representation learning. In ECCV, 2020.

Ekin Dogus Cubuk, Barret Zoph, Jon Shlens, and Quoc Le. RandAugment: Practical automated data augmentation with a reduced search space. In NeurIPS, 2020.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, pages 4171-4186, 2019. doi: 10.18653/v1/N19-1423.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021.

Zi-Yi Dou, Yichong Xu, Zhe Gan, Jianfeng Wang, Shuohang Wang, Lijuan Wang, Chenguang Zhu, Nanyun Peng, Zicheng Liu, and Michael Zeng. An empirical study of training end-to-end vision-and-language transformers. In CVPR, 2022.

Christiane Fellbaum. WordNet: An Electronic Lexical Database. Bradford Books, 1998.

Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the V in VQA matter: Elevating the role of image understanding in visual question answering. In CVPR, pages 6325-6334, 2017. doi: 10.1109/CVPR.2017.670.

Lisa Anne Hendricks, John Mellor, Rosalia Schneider, Jean-Baptiste Alayrac, and Aida Nematzadeh. Decoupling the role of data, attention, and losses in multimodal transformers. Transactions of the Association for Computational Linguistics, 9:570-585, 2021. doi: 10.1162/tacl_a_00385.

Jonathan Herzig, Pawel Krzysztof Nowak, Thomas Müller, Francesco Piccinno, and Julian Eisenschlos. TaPas: Weakly supervised table parsing via pre-training. In ACL, pages 4320-4333, 2020. doi: 10.18653/v1/2020.acl-main.398.

Drew A. Hudson and Christopher D. Manning. GQA: A new dataset for real-world visual reasoning and compositional question answering. In CVPR, pages 6700-6709, 2019. doi: 10.1109/CVPR.2019.00686.

Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In ICML, 2021.

Wonjae Kim, Bokyung Son, and Ildoo Kim. ViLT: Vision-and-language transformer without convolution or region supervision. In ICML, volume 139 of Proceedings of Machine Learning Research, pages 5583-5594, 2021.

Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, Michael S. Bernstein, and Li Fei-Fei. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123(1):32-73, 2017. doi: 10.1007/s11263-016-0981-7.

Hang Li, Wenbiao Ding, Yu Kang, Tianqiao Liu, Zhongqin Wu, and Zitao Liu. CTAL: Pre-training cross-modal transformer for audio-and-language representations. In EMNLP, pages 3966-3977, 2021a. doi: 10.18653/v1/2021.emnlp-main.323.

Junnan Li, Ramprasaath R. Selvaraju, Akhilesh Deepak Gotmare, Shafiq Joty, Caiming Xiong, and Steven Hoi. Align before fuse: Vision and language representation learning with momentum distillation. In NeurIPS, 2021b.

Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. BLIP: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In ICML, 2022.

Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. VisualBERT: A simple and performant baseline for vision and language. arXiv, abs/1908.03557, 2019.

Liunian Harold Li, Haoxuan You, Zhecan Wang, Alireza Zareian, Shih-Fu Chang, and Kai-Wei Chang. Unsupervised vision-and-language pre-training without parallel images and captions. In NAACL-HLT, pages 5339-5350, 2021c. doi: 10.18653/v1/2021.naacl-main.420.

Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. arXiv, abs/1907.11692, 2019.

Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In ICCV, pages 10012-10022, 2021.

Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. ViLBERT: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In NeurIPS, pages 13-23, 2019.

Edwin G. Ng, Bo Pang, Piyush Sharma, and Radu Soricut. Understanding guided image captioning performance across domains. In CoNLL, pages 183-193, 2021. doi: 10.18653/v1/2021.conll-1.14.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, pages 8748-8763, 2021.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67, 2020.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211-252, 2015.

Steffen Schneider, Alexei Baevski, Ronan Collobert, and Michael Auli. wav2vec: Unsupervised pre-training for speech recognition. arXiv, abs/1904.05862, 2019.

Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In ACL, pages 2556-2565, 2018. doi: 10.18653/v1/P18-1238.

Amanpreet Singh, Vedanuj Goswami, and Devi Parikh. Are we pretraining it right? Digging deeper into visio-linguistic pretraining. arXiv, abs/2004.08744, 2020.

Emma Strubell, Ananya Ganesh, and Andrew McCallum. Energy and policy considerations for deep learning in NLP. In ACL, pages 3645-3650, 2019. doi: 10.18653/v1/P19-1355.

Alane Suhr, Stephanie Zhou, Ally Zhang, Iris Zhang, Huajun Bai, and Yoav Artzi. A corpus for reasoning about natural language grounded in photographs. In ACL, pages 6418-6428, 2019. doi: 10.18653/v1/P19-1644.

Alon Talmor, Ori Yoran, Amnon Catav, Dan Lahav, Yizhong Wang, Akari Asai, Gabriel Ilharco, Hannaneh Hajishirzi, and Jonathan Berant. MultimodalQA: Complex question answering over text, tables and images. In ICLR, 2021.

Hao Tan and Mohit Bansal. LXMERT: Learning cross-modality encoder representations from transformers. In EMNLP-IJCNLP, pages 5100-5111, 2019. doi: 10.18653/v1/D19-1514.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS, pages 5998-6008, 2017.

Xiaohua Zhai, Xiao Wang, Basil Mustafa, Andreas Steiner, Daniel Keysers, Alexander Kolesnikov, and Lucas Beyer. LiT: Zero-shot transfer with locked-image text tuning. In CVPR, 2022.

Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, and Jianfeng Gao. VinVL: Revisiting visual representations in vision-language models. In CVPR, pages 5579-5588, 2021.
Unsupervised vision-and-language pre-training via retrieval-based multi-granular alignment. Mingyang Zhou, Licheng Yu, Amanpreet Singh, Mengjiao Wang, Zhou Yu, Ning Zhang, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)Mingyang Zhou, Licheng Yu, Amanpreet Singh, Mengjiao Wang, Zhou Yu, and Ning Zhang. Unsupervised vision-and-language pre-training via retrieval-based multi-granular align- ment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 16485-16494, 2022.
UPainting: Unified Text-to-Image Diffusion Generation with Cross-modal Guidance
Wei Li, Xue Xu, Xinyan Xiao (xiaoxinyan@baidu.com), Jiachen Liu, Hu Yang, Guohao Li, Zhanpeng Wang, Zhifan Feng, Qiaoqiao She (sheqiaoqiao@baidu.com), Yajuan Lyu (lvyajuan@baidu.com), Hua Wu
Baidu Inc
Figure 1: Selected samples of both complex-scene and simple-scene images generated by UPainting. (a) Samples of complex scene image generation, e.g. "An epic dreamscape with fantasy architecture, vivid colors, wide angle, super highly detailed, professional digital painting"; "Legendary elegant gnome hold map and feel confuse in forest, highly detailed, global illumination, ray tracing, sharp focus"; "Beautiful village around an ancient dragon head, massive scale, realistic concept art, cinematic color scheme, dramatic lighting". (b) Samples of simple scene image generation, e.g. "Beautiful robot female with closed eyes, sci-fi, fantasy, mythology, complex, elegant, highly detailed, digital painting"; "A portrait of a pink cat with human blue eyes wearing a scarf and a top hat, high quality, painting"; "Full body portrait of a squirrel girl, blush, cute and elegant, with squirrel tail cocked to the left".

Abstract

Diffusion generative models have recently greatly improved the power of text-conditioned image generation. Existing image generation models mainly include text conditional diffusion models and cross-modal guided diffusion models, which are good at simple scene image generation and complex scene image generation respectively. In this work, we propose a simple yet effective approach, namely UPainting, to unify simple and complex scene image generation, as shown in Figure 1. Based on architecture improvements and diverse guidance schedules, UPainting effectively integrates cross-modal guidance from a pretrained image-text matching model into a text conditional diffusion model that utilizes a pretrained Transformer language model as the text encoder. Our key finding is that combining the power of a large-scale Transformer language model in understanding language with an image-text matching model in capturing cross-modal semantics and style is effective for improving the sample fidelity and image-text alignment of image generation. In this way, UPainting has a more general image generation capability, which can generate images of both simple and complex scenes more effectively. To comprehensively compare text-to-image models, we further create a more general benchmark, UniBench, with well-written Chinese and English prompts in both simple and complex scenes. We compare UPainting with recent models and find that UPainting greatly outperforms other models in terms of caption similarity and image fidelity in both simple and complex scenes [2].

* Corresponding author.

arXiv:2210.16031v3 [cs.CV] 3 Nov 2022
Introduction
Multimodal learning has made great strides by scaling models on large datasets of image-caption pairs collected from the Internet, where image-text cross-modal matching and text-to-image generation [Rombach et al., 2022, Saharia et al., 2022, Ramesh et al., 2022] are at the forefront. Text conditional image generation includes simple scene image generation, which usually depicts a specific object, and complex scene image generation, which depicts an abstract or large scene with multiple objects, as shown in Figure 1. Existing text-to-image diffusion models mainly focus on simple scene image generation, such as GLIDE [Nichol et al., 2021], DALL-E 2 [Ramesh et al., 2022], and Imagen [Saharia et al., 2022]. They mainly utilize text encoders trained on textual data or paired image-text data to capture the semantics and compositionality of natural language inputs. However, plain text encoders have difficulty capturing all aspects of complex text prompts, such as sophisticated art styles (e.g. fantasy art, cinematic composition, etc.), aesthetic features (e.g. super wide angle, bright atmosphere, dramatic lighting, etc.) and abstract descriptions (e.g. epic dreamscape, time and pressure, etc.).
Pre-trained cross-modal matching models are effective in image-text alignment, and they typically capture rich cross-modal semantics and styles from large-scale corpora. In this work, to unify simple and complex scene image generation, we propose a simple but effective approach, UPainting, which enhances a text-conditional diffusion model with cross-modal guidance. Specifically, a pretrained transformer language model encodes the textual prompts as semantic conditions for the diffusion model, and a pre-trained image-text matching model simultaneously guides the diffusion process through cross-modal matching, based on several crucial architecture improvements and diverse guidance schedules. Our key finding is that combining a transformer language model and cross-modal matching models is very effective for improving sample fidelity, aesthetics and image-text alignment in diffusion image generation. In this way, UPainting has a general image generation capability, which can generate both simple and complex scene images more effectively. Some previous works have also explored utilizing pre-trained image-text matching models (i.e. CLIP [Radford et al., 2021]) to guide unconditional or class-conditional diffusion models [Crowson, 2021a,b, Liu et al., 2021]. Without powerful text encoders to encode language prompts, these models usually struggle to generate hyper-realistic images or complex scenes with details. Combining a large-scale transformer language model and a cross-modal matching model is crucial for general text-conditional image generation. Some comparison examples are shown in Figure 12 and Figure 13.
Since existing evaluation benchmarks mainly focus on simple scene image generation, such as the COCO dataset [Lin et al., 2014] and the DrawBench [Saharia et al., 2022] dataset, we further propose a more comprehensive and challenging benchmark, UniBench, to evaluate general text-to-image generation ability. UniBench contains 200 well-written queries covering both simple scenes and complex scenes, in both Chinese and English. We compare UPainting with several recent models, including Stable Diffusion [Rombach et al., 2022] and Disco Diffusion [Crowson, 2021b], and find that UPainting greatly outperforms other methods in terms of image-text alignment and image fidelity in both simple and complex scenes. We also evaluate UPainting on the COCO dataset, and it achieves much better performance (with an FID score of 8.34) than previous work such as Stable Diffusion (at 14.24). The results further demonstrate the general image generation capability obtained by effectively combining the transformer language model and cross-modal matching models.

Figure 2: Illustration of the architecture of UPainting, which incorporates cross-modal matching models with a text-conditional diffusion model. UPainting has a general image generation capability, which can effectively generate simple scene images as well as complex scene images.
Key contributions of the paper include:
• For the first time, we systematically study the problem of text-conditional image generation for both simple and complex scenes, and propose a simple yet effective method to unify image generation for simple and complex scenes.
• We discover that effectively combining cross-modal matching models with pre-trained transformer language models can greatly improve sample fidelity and image-text alignment for diffusion image generation. This gives the model a general ability to generate images for both simple and complex scenes.
• We introduce UniBench, a comprehensive and challenging evaluation benchmark for both simple and complex scene image generation. On UniBench, our proposed method outperforms other work in both image-text alignment and image fidelity.
Method
UPainting consists of a text encoder, an image-text matching component, and a conditional diffusion model, as shown in Figure 2. The text encoder maps text prompts to a sequence of embeddings, and then the conditional diffusion model maps these embeddings to corresponding images. During the diffusion process, the image-text matching component further guides each diffusion step by cross-modal matching, in order to better capture the complex cross-modal semantics and styles. In the following subsections, we describe these components in detail.
Text-conditional Diffusion Model
Our training dataset consists of pairs (x, y) of images x and their corresponding captions y. Given a text prompt y, the text encoder encodes it into a sequence of embeddings c. Then the conditional diffusion model P (x|c) learns to produce images x conditioned on text embeddings c.
Text Encoder. Creative text-to-image generation needs powerful semantic text encoders to capture the complexity and compositionality of arbitrary natural language text inputs. Transformer-based language models (e.g., BERT [Devlin et al., 2018], GPT [Brown et al., 2020], T5 [Raffel et al., 2020], ERNIE [Sun et al., 2020]) have led to leaps in textual understanding capabilities. Imagen [Saharia et al., 2022] has shown the effectiveness of large-scale pre-trained transformer language models for encoding text in text-to-image generation. Thus, we apply a pre-trained transformer language model as our text encoder and further update it during the training of the diffusion model on large volumes of paired image-text data.

Conditional Diffusion Model. Diffusion models are a class of generative models that convert Gaussian noise into samples from a learned data distribution via an iterative denoising sampling process. In particular, sampling starts with Gaussian noise $x_T \sim \mathcal{N}(0, I)$ and produces gradually less-noisy samples $x_{T-1}, x_{T-2}, \ldots$, until reaching a final sample $x_0$. Each timestep $t$ corresponds to a certain noise level, and $x_t$ can be thought of as a mixture of a signal $x_0$ with some noise, where the signal-to-noise ratio is determined by the timestep $t$. A diffusion model learns to produce a slightly more "denoised" $x_{t-1}$ from $x_t$. Ho et al. [2020] parameterize this model as a function $\epsilon_\theta(x_t, t, c)$ which predicts the noise component of a noisy sample $x_t$. To train these models, each sample in a minibatch is produced by randomly drawing a data sample $x_0 \sim q(x_0)$, a timestep $t \sim \mathcal{U}(1, T)$, and noise $\epsilon \sim \mathcal{N}(0, I)$, which together give rise to a noised sample
$$x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon \quad \text{(Equation 9)}.$$
The training objective is a simple mean-squared error loss between the true noise and the predicted noise of the form:
$$\mathbb{E}_{x_0, t, \epsilon, c}\left[\left\| \epsilon_\theta(x_t, t, c) - \epsilon \right\|^2\right] \tag{1}$$
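To make this objective concrete, here is a minimal PyTorch-style sketch of one training step under the above formulation; the linear variance schedule and the generic `model` callable are illustrative assumptions, not the actual UPainting configuration.

```python
import torch

# Illustrative linear variance schedule (not the actual UPainting schedule).
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)  # \bar{alpha}_t for t = 1..T

def training_loss(model, x0, c):
    """One denoising training step implementing Equation (1).

    model: callable eps_theta(x_t, t, c) -> predicted noise, same shape as x_t
    x0:    clean images, shape (B, C, H, W)
    c:     text-conditioning embeddings (passed through unchanged here)
    """
    B = x0.shape[0]
    t = torch.randint(0, T, (B,))                    # t ~ U(1, T) (0-indexed here)
    eps = torch.randn_like(x0)                       # eps ~ N(0, I)
    ab = alphas_bar[t].view(B, 1, 1, 1)
    x_t = ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps   # noised sample (Equation 9)
    return ((model(x_t, t, c) - eps) ** 2).mean()    # MSE between predicted and true noise
```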
To make the diffusion process conditional on the text prompts, we augment the underlying UNet [Ho et al., 2020] backbone of the diffusion model with a cross-attention mechanism. The token sequence embeddings $c$ are mapped to the intermediate layers of the UNet via a cross-attention layer implementing

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^T}{\sqrt{d}}\right) \cdot V,$$

where the intermediate representations of the UNet act as the query $Q$, and the token sequence embeddings $c$ act as the key $K$ and value $V$.
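For illustration, a minimal single-head version of this cross-attention conditioning might look like the following sketch; the dimension arguments and the flattened-feature interface are assumptions for clarity rather than the actual UPainting layer.

```python
import torch
import torch.nn as nn

class TextCrossAttention(nn.Module):
    """Minimal single-head cross-attention: UNet features attend to text tokens."""

    def __init__(self, feat_dim: int, text_dim: int, d: int):
        super().__init__()
        self.to_q = nn.Linear(feat_dim, d, bias=False)  # queries from UNet features
        self.to_k = nn.Linear(text_dim, d, bias=False)  # keys from text embeddings c
        self.to_v = nn.Linear(text_dim, d, bias=False)  # values from text embeddings c
        self.scale = d ** -0.5

    def forward(self, feats: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, feat_dim) flattened spatial features; c: (B, L, text_dim)
        q, k, v = self.to_q(feats), self.to_k(c), self.to_v(c)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return attn @ v  # (B, N, d): text-conditioned features
```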
Classifier-free guidance [Ho and Salimans, 2022] is a widely used technique to improve sample quality while reducing diversity in conditional diffusion models, which jointly trains a single diffusion model on conditional and unconditional objectives via randomly dropping $c$ during training (e.g. with 10% probability). During sampling, the output of the model is extrapolated further in the direction of $\epsilon_\theta(x_t|c)$ and away from $\epsilon_\theta(x_t|\emptyset)$ as follows:

$$\hat{\epsilon}_\theta(x_t|c) = \epsilon_\theta(x_t|\emptyset) + s \cdot \left(\epsilon_\theta(x_t|c) - \epsilon_\theta(x_t|\emptyset)\right) \tag{2}$$
where s ≥ 1 is the guidance scale. Larger s will increase the fidelity of samples while reducing diversity. We set s = 8.0 empirically in our experiments.
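Applied at sampling time, Equation (2) amounts to two forward passes and an extrapolation, as in this minimal sketch (the `model` callable and the null-condition embedding `null_c` are placeholders):

```python
def classifier_free_eps(model, x_t, t, c, null_c, s=8.0):
    """Classifier-free guidance (Equation 2).

    Extrapolates the conditional prediction away from the unconditional one;
    s > 1 trades diversity for fidelity (s = 8.0 in the paper's experiments).
    """
    eps_uncond = model(x_t, t, null_c)  # eps_theta(x_t | empty)
    eps_cond = model(x_t, t, c)         # eps_theta(x_t | c)
    return eps_uncond + s * (eps_cond - eps_uncond)
```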
Image-Text Matching Guidance
Dhariwal and Nichol [2021] have explored utilizing a classifier $p(y|x)$ to improve a diffusion generator. In particular, they train a classifier $p_\phi(y|x_t, t)$ on noisy images $x_t$, and then use gradients $\nabla_{x_t} \log p_\phi(y|x_t, t)$ to guide the diffusion sampling process towards an arbitrary class label $y$. To apply the same idea to text-to-image diffusion models, Nichol et al. [2021] further replace the classifier with a CLIP model in classifier guidance. A CLIP model consists of two separate pieces: an image encoder $f(x)$ and a caption encoder $g(y)$. The model optimizes a contrastive cross-entropy loss that encourages a high dot-product $f(x) \cdot g(y)$ if the image $x$ is paired with the given caption $y$, or a low dot-product if the image and caption correspond to different pairs in the training data. The denoising diffusion process can be perturbed with the gradient of the dot product of the image and caption encodings with respect to the image as follows:
$$\hat{\epsilon}_\theta(x_t|c) = \epsilon_\theta(x_t|c) - \sqrt{1-\bar{\alpha}_t}\, \nabla_{x_t}\left(f(x_t) \cdot g(y)\right) \tag{3}$$
Since they feed noised samples $x_t$ to the CLIP model, they must train CLIP on noised images $x_t$ to obtain correct gradients in the reverse process. Their experiments show that classifier-free guidance yields higher quality images than CLIP guidance.
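Concretely, the gradient term in Equation (3) can be obtained with automatic differentiation, as in this sketch; `image_encoder` and the precomputed `text_emb` stand in for the matching model's two towers $f$ and $g$, and are assumed names:

```python
import torch

def matching_score_grad(image_encoder, text_emb, x):
    """Gradient of the image-text dot product f(x) . g(y) w.r.t. the image x.

    image_encoder: f(.), maps images (B, C, H, W) to embeddings (B, D)
    text_emb:      g(y), precomputed caption embeddings of shape (B, D)
    """
    x = x.detach().requires_grad_(True)
    score = (image_encoder(x) * text_emb).sum()  # batch sum of dot products
    (grad,) = torch.autograd.grad(score, x)
    return grad
```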
In this work, we show that CLIP guidance can be combined with classifier-free guidance to produce much higher quality samples. Moreover, any pre-trained image-text matching model can be utilized effectively, without training on noised images, through several crucial improvements.
Modifying the CLIP inputs. Noised samples $x_t$ are close to Gaussian noise and capture very few meaningful semantics in earlier sampling steps, especially when $t$ is near $T$. It is difficult to obtain effective guidance by directly feeding $x_t$ into an image-text matching model that is pre-trained on natural images, such as CLIP. Although further training a noise-aware CLIP helps alleviate this problem, it still has two big drawbacks: (1) training a noise-aware CLIP on noised images $x_t$ is time and computation costly, so many existing pre-trained image-text matching models cannot be utilized directly; (2) noised images $x_t$ when $t$ is near $T$ are close to pure Gaussian noise, which is difficult to align with the corresponding text to give meaningful guidance to diffusion models.

Figure 4: Comparing samples between CLIP inputs with $x_t$ and the modified $x_{in}$. The results show that feeding CLIP guidance with the modified $x_{in}$ is critical to improving sample quality.
To get more meaningful inputs to CLIP models, we instead first predict the final image

$$\hat{x}_0 = \frac{1}{\sqrt{\bar{\alpha}_t}}\left(x_t - \sqrt{1-\bar{\alpha}_t}\, \epsilon_t\right)$$

(see Equation 10), and extrapolate the noised image $x_t$ with the prediction $\hat{x}_0$, which can be formulated as follows:

$$x_{in} = \sqrt{1-\bar{\alpha}_t}\, \hat{x}_0 + \left(1 - \sqrt{1-\bar{\alpha}_t}\right) x_t \tag{4}$$
where $\sqrt{1-\bar{\alpha}_t}$ ranges from 1 to 0 as $t$ goes from $T$ to 1. Thus, in earlier steps where $t$ is close to $T$, the CLIP input $x_{in}$ is dominated by $\hat{x}_0$, and it gradually approaches $x_t$ as $t$ decreases. Our experiments show that this modification is critical for the effectiveness of CLIP guidance. Figure 4 shows some comparison examples.

Combining with classifier-free guidance. Classifier-free guidance is very effective for improving image-text alignment in text-conditional diffusion models. However, increasing the classifier-free guidance weight also damages image fidelity, producing highly saturated and unnatural images. Image-text matching models pre-trained on large volumes of paired image-text data capture rich cross-modal semantics, style and aesthetics. Combining image-text matching guidance and classifier-free guidance can simultaneously help improve sample fidelity, aesthetics and image-text alignment. Thus, the denoising diffusion process can be formulated as follows:
$$\hat{\epsilon}_\theta(x_t|c) = \epsilon_\theta(x_t|\emptyset) + s \cdot \left(\epsilon_\theta(x_t|c) - \epsilon_\theta(x_t|\emptyset)\right) - g \sqrt{1-\bar{\alpha}_t}\, \nabla_{x_t}\left(f(x_{in}) \cdot g(y)\right) \tag{5}$$
where $g \geq 0$ is the image-text matching guidance weight. Figure 5 compares the performance of different values of $g$. In our experiments, we set $g = 10.0$ to achieve better performance on text-to-image generation.
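Putting Equations (4) and (5) together, one guided denoising prediction might look like the sketch below; the callables are placeholders, and for simplicity the matching-model gradient is evaluated at `x_in` rather than differentiated all the way back through `x_t`:

```python
def upainting_style_eps(model, grad_fn, x_t, t, c, null_c,
                        alpha_bar_t, s=8.0, g=10.0):
    """Combine classifier-free and image-text matching guidance (Equations 4 and 5)."""
    eps_uncond = model(x_t, t, null_c)
    eps_cond = model(x_t, t, c)
    eps = eps_uncond + s * (eps_cond - eps_uncond)           # classifier-free term (Eq. 2)

    sqrt_1m_ab = (1.0 - alpha_bar_t) ** 0.5
    x0_hat = (x_t - sqrt_1m_ab * eps) / alpha_bar_t ** 0.5   # predicted final image (Eq. 10)
    x_in = sqrt_1m_ab * x0_hat + (1.0 - sqrt_1m_ab) * x_t    # modified matching input (Eq. 4)

    return eps - g * sqrt_1m_ab * grad_fn(x_in)              # matching-guidance term (Eq. 5)
```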
Diverse guidance schedules. As samples $x_t$ at earlier diffusion steps capture very few meaningful semantics, the image-text matching guidance should be weaker or even skipped at earlier steps, and be increased as $t$ approaches 1. We compare several settings that skip different numbers of steps, and show that skipping a few earlier diffusion steps can effectively improve the sample quality. Figure 6 compares the performance of different schedule settings. The results show that skipping a few earlier steps increases sample fidelity; however, skipping too many steps decreases both sample fidelity and alignment. Thus, in the following experiments, we skip the CLIP guidance for the first 10 steps, out of 50 DDIM steps in total.
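The skip schedule itself reduces to a small step-dependent weight, sketched here under the paper's setting of 50 DDIM steps with the first 10 skipped:

```python
def matching_guidance_weight(step, total_steps=50, skip=10, g=10.0):
    """Image-text matching guidance weight at a given DDIM step.

    Guidance is disabled for the earliest (noisiest) steps, where x_t carries
    little semantic signal, and applied with weight g for the remaining steps.
    """
    assert 0 <= step < total_steps
    return 0.0 if step < skip else g
```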
Integrating multiple image-text matching models. Image-text matching has achieved great improvements based on image-text contrastive learning. Many vision-language pre-training models have been pre-trained on different datasets with different settings, and may therefore capture different aspects of cross-modal semantics. It is intuitive to integrate multiple image-text matching models to capture richer cross-modal semantics. We denote $F$ as a set of image-text matching models. Then Equation 5 can be modified as follows:

$$\hat{\epsilon}_\theta(x_t|c) = \epsilon_\theta(x_t|\emptyset) + s \cdot \left(\epsilon_\theta(x_t|c) - \epsilon_\theta(x_t|\emptyset)\right) - \frac{g \sqrt{1-\bar{\alpha}_t}}{|F|} \sum_{f \in F} \nabla_{x_t}\left(f(x_{in}) \cdot g(y)\right) \tag{6}$$

Figure 7 shows the results of using different image-text matching models in our model. The results show that different image-text matching models have different performance on sample fidelity (i.e. FID) and caption similarity (i.e. CLIPScore), and composing them together achieves better performance by giving more comprehensive cross-modal guidance.
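A minimal sketch of the ensemble term in Equation (6), where each entry of `grad_fns` is assumed to return the score gradient of one pretrained matching model (e.g. one per CLIP variant):

```python
def ensemble_matching_grad(grad_fns, x_in):
    """Average per-model guidance gradients over a set F of matching models (Eq. 6).

    grad_fns: list of callables; each returns the gradient of f(x_in) . g(y)
              for one pretrained image-text matching model f
    """
    grads = [fn(x_in) for fn in grad_fns]
    return sum(grads) / len(grads)
```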
Training and Inference
We only need to train the text-conditional diffusion model; pre-trained image-text matching models are incorporated directly during inference. We adopt the U-Net architecture from Ho et al. [2020] for our text-to-image diffusion model. The network is conditioned on text embeddings from the text encoder in two ways: a pooled embedding vector is added to the diffusion timestep embedding, and cross attention is applied over the entire sequence of text embeddings. We train on a combination of internal Chinese datasets, with ≈ 600M image-text pairs, and the publicly available Laion dataset, with ≈ 400M image-text pairs. The Laion captions have been translated into Chinese.
During inference, any existing pre-trained image-text matching model can be incorporated into the diffusion process, such as CLIP [Radford et al., 2021], UNIMO [Li et al., 2020], and ERNIE-ViL [Yu et al., 2021]. In our experiments, we utilize CLIP to evaluate the performance of our models, as it is the most widely used in the community. As shown in Figure 7, different versions of CLIP models have different performance; we compose the ViT-B/32, ViT-B/16 and RN50 models to achieve a good trade-off between performance and efficiency. We also combine classifier-free guidance and image-text matching guidance by setting the classifier-free guidance scale s = 8.0 and the image-text matching guidance weight g = 10.0. We adopt the DDIM [Song et al., 2020] sampling approach with 50 diffusion steps in total, setting the noise to 0. During sampling, we skip the first 10 steps without the image-text matching guidance and apply it in the following 40 diffusion steps, as discussed in Subsection 2.2.
Evaluation Benchmark
COCO [Lin et al., 2014] is the standard benchmark for evaluating text-to-image generation models; its queries mainly describe simple scene images. The main automatic metrics used are
FID [Heusel et al., 2017] to measure image fidelity and CLIPScore [Hessel et al., 2021] to measure image-text alignment. We compare UPainting with existing models in both the supervised and the zero-shot settings. Consistent with previous works, we randomly sample 30K prompts from the COCO validation set as evaluation data. For the model analysis in Subsection 2.2, we randomly sample 3K prompts from COCO for efficiency and report both FID and CLIPScore. Besides COCO, several other benchmarks have also been proposed to systematically evaluate the performance of different text-to-image models, such as PaintSkills [Cho et al., 2022], DrawBench [Saharia et al., 2022] and PartiPrompts [Yu et al., 2022]. These benchmarks do not have reference images, so it is difficult to perform automatic evaluation on them. Moreover, the prompts of these benchmarks mainly target simple-scene hyper-realistic images.
UniBench. To more comprehensively compare the capabilities of different text-to-image generation models on generating both simple-scene and complex-scene images, we further propose a more general evaluation benchmark, UniBench. UniBench contains prompts for both simple-scene and complex-scene images, as shown in Figure 1 and Figure 3, all selected from DALL-E 2 [Ramesh et al., 2022], Stable Diffusion [3], Reddit [4] and online queries from YiGe [5]. UniBench contains a total of 200 prompts, including 100 simple-scene queries and 100 complex-scene queries. It provides both Chinese and English versions for fair comparison of Chinese and English models. UniBench can be used to measure model capabilities across various categories and challenge aspects, some of which are shown in Figure 8 and Figure 9.
Experiments
We evaluate UPainting in both automatic and human evaluation settings. For all our experiments, the images are randomly generated samples from UPainting without post-processing or re-ranking.
Automatic Evaluation
We use the FID score to evaluate UPainting on the COCO validation set, similar to Saharia et al. [2022] and Ramesh et al. [2022]. Consistent with previous works, we report FID-30K, for which 30K prompts are drawn randomly from the validation set, and the model samples generated on these prompts are compared with reference images from the full validation set. The captions are translated into Chinese. Although the translation from English into Chinese introduces some errors, UPainting still outperforms other recent works such as DALL-E 2 and Stable Diffusion, as shown in Table 1. Note that FID is not a perfect metric for evaluating UPainting, because the advantage of UPainting is generating both simple scene and complex scene images for complex prompts (including sophisticated art styles, aesthetic features, etc.), whereas the prompts of COCO are mainly specific descriptions of simple scene images.
Human Evaluation
UniBench does not have golden reference images, so we apply human evaluation on this benchmark. Two human raters are asked to compare UPainting with two strong baselines side by side independently: (1) Disco Diffusion [Crowson, 2021b], which is good at generating imagism images of complex scenes; and (2) Stable Diffusion [Rombach et al., 2022], one of the most powerful text-to-image diffusion models. For each prompt, the raters choose to prefer Model A, prefer Model B, or are indifferent, for both image fidelity and image-text alignment.
Comparison to Disco Diffusion. Disco Diffusion (DD) is one of the most popular openly released cross-modal guided image generation models. DD applies CLIP models to guide an unconditional diffusion model for text-conditional image generation. Similar to our model, DD guides the diffusion process iteratively through CLIP gradients. It is good at complex-scene imagism image generation, but struggles to generate details such as specific objects, actions, and attributes. Figure 10 shows the comparison results between UPainting and Disco Diffusion, which demonstrate that human raters overwhelmingly prefer UPainting in both image-text alignment and image fidelity. Especially for simple-scene prompts that require generating concrete objects, UPainting is better than DD on more than 90% of the prompts. On complex-scene prompts, UPainting can also generate images with better aesthetics and more accurate details. Some comparison examples are listed in Figure 12 and Figure 13.
Comparison to Stable Diffusion. Stable Diffusion is one of the state-of-the-art text conditional diffusion models. Figure 11 shows the comparison results between UPainting and Stable Diffusion (SD), which show that human raters much prefer UPainting over SD in both image-text alignment and image fidelity. The advantages of UPainting over SD are much larger on complex scene prompts. The images generated by UPainting generally have better caption similarity and aesthetics. Some examples are also shown in Figure 12 and Figure 13. The results demonstrate the effectiveness of combining the power of a large pre-trained transformer language model in language understanding and pre-trained image-text matching models in capturing cross-modal semantics and styles.
Related Work
Text-conditional image generation has achieved great progress in recent years. The main technique routes of text-to-image generation include three stages: GAN-based [Xu et al., 2018, Zhu et al., 2019, Tao et al., 2020, Zhang et al., 2021a], auto-regressive Transformer-based [Gafni et al., 2022, Zhang et al., 2021b, Huang et al., 2022, Ding et al., 2021], and diffusion-based [Ho et al., 2020, Ramesh et al., 2022, Saharia et al., 2022, Rombach et al., 2022]. Many earlier works trained GANs on publicly available image captioning datasets to produce text-conditional images; these works mainly focus on specific domains of image generation. Motivated by the success of GPT-3 [Brown et al., 2020], auto-regressive transformers were later trained extensively on sequences of text tokens followed by image tokens, by adapting the VQ-VAE [Van Den Oord et al., 2017] approach. The most well-known approach, DALL-E [Ramesh et al., 2021], attracted great attention from the community. Most recently, diffusion models have brought wide success to image synthesis. Nichol et al. [2021] propose the first text-conditional image diffusion model, GLIDE. Based on GLIDE, Imagen [Saharia et al., 2022] validates the effectiveness of utilizing a large pre-trained Transformer language model as the text encoder, achieving state-of-the-art FID on the COCO dataset. Instead of applying diffusion in the pixel space, Rombach et al. [2022] propose a latent diffusion model that applies diffusion in the latent space of powerful pre-trained auto-encoders, which effectively reduces the computational cost of diffusion training. These models mainly focus on hyper-realistic generation of simple scenes, and usually show low levels of detail for some complex scenes [Ramesh et al., 2022].
Image-text matching models, typically CLIP, have been extensively utilized to steer image generation models towards text conditions [Liu et al., 2021]. Based on CLIP, Ramesh et al. [2022] propose a two-stage diffusion model, DALL-E 2, where a prior generates a CLIP image embedding given a text caption, and a decoder generates an image conditioned on the image embedding. Through incorporating CLIP latents into diffusion image generation, DALL-E 2 greatly improves the diversity of generated samples. Nichol et al. [2021] propose to train a noise-aware CLIP model on noised images and guide a text-conditional diffusion model with its gradients. However, their experiments show that classifier-free guidance works more favorably than CLIP guidance for text conditional image generation. Moreover, training a noise-aware CLIP model is time and resource costly, and existing pre-trained CLIP models cannot be utilized directly. Crowson [2021a,b] uses an unnoised CLIP model to guide unconditional or class-conditional diffusion models; however, this approach mainly targets abstract scene images without concrete details. In this paper, we propose an effective model that combines a pre-trained Transformer language model and pre-trained image-text matching models, which greatly improves both caption similarity and image fidelity for both simple scene and complex scene image generation.
Conclusion
In this paper, we systematically study the problem of text-conditional image generation for both simple and complex scenes, and propose a simple yet effective method to unify them. We find that effectively combining cross-modal matching models with pre-trained transformer language models can greatly improve sample fidelity and image-text alignment for diffusion image generation, which gives the model a general ability to generate images for both simple and complex scenes. To more comprehensively compare different text-to-image generation models, we also propose a comprehensive and challenging evaluation benchmark for both simple and complex scene image generation. On this benchmark, UPainting greatly outperforms other strong models such as Stable Diffusion and Disco Diffusion, on both image-text alignment and image fidelity.
Figure 3: More samples of both complex-scene and simple-scene images generated by UPainting. (a) Samples of complex scene images; (b) samples of simple scene images.

Figure 5: Comparing the performance of different CLIP guidance weights g. We set g = 10.0 in our experiments to obtain a good balance between image-text alignment and image fidelity.

Figure 6: Comparing the performance of different skip scheduling methods. We adopt the DDIM sampling approach with 50 diffusion steps in total, setting the noise to 0.

Figure 7: Comparing the performance of utilizing different CLIP models for cross-modal guidance.

Figure 8: UPainting samples for different categories of complex-scene prompts from UniBench.

Figure 9: UPainting samples for different categories of simple-scene prompts from UniBench.

Figure 10: The comparison of user preference rates for image-text alignment and image fidelity between UPainting and Disco Diffusion (DD) on UniBench.

Figure 11: The comparison of user preference rates for image-text alignment and image fidelity between UPainting and Stable Diffusion (SD) on UniBench.

Figure 12: Selected samples of simple-scene images.

Figure 13: Selected samples of complex-scene images.
Table 1: Evaluation results on MS-COCO 256x256, FID-30K.

Model | FID Score (30K)
AttnGAN [Xu et al., 2018] | 35.49
DM-GAN [Zhu et al., 2019] | 32.64
DF-GAN [Tao et al., 2020] | 21.42
DM-GAN + CL [Ye et al., 2021] | 20.79
XMC-GAN [Zhang et al., 2021a] | 9.33
LAFITE [Zhou et al., 2021] | 8.12
Make-A-Scene [Gafni et al., 2022] | 7.55
DALL-E [Ramesh et al., 2021] | 17.89
GLIDE [Nichol et al., 2021] | 12.24
DALL-E 2 [Ramesh et al., 2022] | 10.39
Imagen [Saharia et al., 2022] | 7.27
Stable Diffusion [Rombach et al., 2022] | 14.24
UPainting (our work) | 8.34
[2] https://upainting.github.io/
[3] https://lexica.art/
[4] https://www.reddit.com/
[5] https://yige.baidu.com/
A Formulation of the Gaussian Diffusion Model

We provide a detailed formulation of the Gaussian diffusion models from Ho et al. [2020]. For a data sample $x_0 \sim q(x_0)$, the Markovian noising process gradually adds noise to $x_0$ to produce noised samples $x_1$ through $x_T$. Each step of the Markovian noising process adds Gaussian noise according to some variance schedule $\beta_t$:

$$q(x_t \mid x_{t-1}) = \mathcal{N}\left(x_t; \sqrt{1-\beta_t}\, x_{t-1}, \beta_t I\right) \tag{7}$$

For a given timestep $t$, $q(x_t \mid x_0)$ can be formulated as a Gaussian distribution with $\alpha_t = 1 - \beta_t$ and $\bar{\alpha}_t = \prod_{s=1}^{t} \alpha_s$:

$$q(x_t \mid x_0) = \mathcal{N}\left(x_t; \sqrt{\bar{\alpha}_t}\, x_0, (1-\bar{\alpha}_t) I\right) \tag{8}$$

so a noised sample can be drawn directly as

$$x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1-\bar{\alpha}_t}\, \epsilon, \quad \epsilon \sim \mathcal{N}(0, I) \tag{9}$$

Thus, for a given $t \in [1, T]$ and $x_t$, we can also approximately predict $\hat{x}_0$ by:

$$\hat{x}_0 = \frac{1}{\sqrt{\bar{\alpha}_t}}\left(x_t - \sqrt{1-\bar{\alpha}_t}\, \epsilon\right) \tag{10}$$

Based on Bayes' theorem, the posterior $q(x_{t-1} \mid x_t, x_0)$ is Gaussian with mean $\tilde{\mu}_t(x_t, x_0)$ and variance $\tilde{\beta}_t$ defined as follows:

$$\tilde{\mu}_t(x_t, x_0) = \frac{\sqrt{\bar{\alpha}_{t-1}}\, \beta_t}{1-\bar{\alpha}_t}\, x_0 + \frac{\sqrt{\alpha_t}\,(1-\bar{\alpha}_{t-1})}{1-\bar{\alpha}_t}\, x_t, \qquad \tilde{\beta}_t = \frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_t}\, \beta_t \tag{11}$$

In order to sample from the data distribution $q(x_0)$, diffusion models first sample from $q(x_T)$ and then sample reverse steps $q(x_{t-1} \mid x_t)$ until reaching $x_0$. The distribution $q(x_{t-1} \mid x_t)$ approaches a diagonal Gaussian distribution as $T \to \infty$ and correspondingly $\beta_t \to 0$, so it can be approximated by a neural network that predicts a mean $\mu_\theta$ and a diagonal covariance matrix $\Sigma_\theta$:

$$p_\theta(x_{t-1} \mid x_t) = \mathcal{N}\left(x_{t-1}; \mu_\theta(x_t, t), \Sigma_\theta(x_t, t)\right) \tag{12}$$

Instead of directly parameterizing $\mu_\theta(x_t, t)$ with a neural network, Ho et al. [2020] found a simple formulation that predicts the noise $\epsilon$ in Equation 9 by training a model $\epsilon_\theta(x_t, t)$. The simplified training objective is defined as in Equation 1. During sampling, we can derive $\mu_\theta(x_t, t)$ from $\epsilon_\theta(x_t, t)$:

$$\mu_\theta(x_t, t) = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}}\, \epsilon_\theta(x_t, t)\right) \tag{13}$$

The covariance matrix $\Sigma_\theta$ is usually fixed to a constant, choosing either $\beta_t I$ or $\tilde{\beta}_t I$, which correspond to upper and lower bounds for the true reverse-step variance.

B Comparison Examples

We list several comparison examples between UPainting, Stable Diffusion and Disco Diffusion.
References

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877-1901, 2020.

Jaemin Cho, Abhay Zala, and Mohit Bansal. Dall-eval: Probing the reasoning skills and social biases of text-to-image generative transformers. arXiv preprint arXiv:2202.04053, 2022.

Katherine Crowson. Clip guided diffusion hq 256x256. https://colab.research.google.com/drive/12a_Wrfi2_gwwAuN3VvMTwVMz9TfqctNj, 2021a.

Katherine Crowson. Clip guided diffusion 512x512, secondary model method, 2021b.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.

Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in Neural Information Processing Systems, 34:8780-8794, 2021.

Ming Ding, Zhuoyi Yang, Wenyi Hong, Wendi Zheng, Chang Zhou, Da Yin, Junyang Lin, Xu Zou, Zhou Shao, Hongxia Yang, et al. Cogview: Mastering text-to-image generation via transformers. Advances in Neural Information Processing Systems, 34:19822-19835, 2021.

Oran Gafni, Adam Polyak, Oron Ashual, Shelly Sheynin, Devi Parikh, and Yaniv Taigman. Make-a-scene: Scene-based text-to-image generation with human priors. arXiv preprint arXiv:2203.13131, 2022.

Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. Clipscore: A reference-free evaluation metric for image captioning. arXiv preprint arXiv:2104.08718, 2021.

Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in Neural Information Processing Systems, 30, 2017.

Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022.

Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840-6851, 2020.

Luyang Huang, Guocheng Niu, Jiachen Liu, Xinyan Xiao, and Hua Wu. Du-vlg: Unifying vision-and-language generation via dual sequence-to-sequence pre-training. arXiv preprint arXiv:2203.09052, 2022.

Junnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafiq Joty, Caiming Xiong, and Steven Chu Hong Hoi. Align before fuse: Vision and language representation learning with momentum distillation. Advances in Neural Information Processing Systems, 34:9694-9705, 2021.

Wei Li, Can Gao, Guocheng Niu, Xinyan Xiao, Hao Liu, Jiachen Liu, Hua Wu, and Haifeng Wang. Unimo: Towards unified-modal understanding and generation via cross-modal contrastive learning. arXiv preprint arXiv:2012.15409, 2020.

Wei Li, Can Gao, Guocheng Niu, Xinyan Xiao, Hao Liu, Jiachen Liu, Hua Wu, and Haifeng Wang. Unimo-2: End-to-end unified vision-language grounded learning. arXiv preprint arXiv:2203.09067, 2022.

Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European Conference on Computer Vision, pages 740-755. Springer, 2014.

Xihui Liu, Dong Huk Park, Samaneh Azadi, Gong Zhang, Arman Chopikyan, Yuxiao Hu, Humphrey Shi, Anna Rohrbach, and Trevor Darrell. More control for free! image synthesis with semantic diffusion guidance. arXiv preprint arXiv:2112.05744, 2021.

Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. arXiv preprint arXiv:2112.10741, 2021.

Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748-8763. PMLR, 2021.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67, 2020.

Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. In International Conference on Machine Learning, pages 8821-8831. PMLR, 2021.

Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125, 2022.

Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684-10695, 2022.

Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. Photorealistic text-to-image diffusion models with deep language understanding. arXiv preprint arXiv:2205.11487, 2022.

Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020.

Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Hao Tian, Hua Wu, and Haifeng Wang. Ernie 2.0: A continual pre-training framework for language understanding. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8968-8975, 2020.

Ming Tao, Hao Tang, Songsong Wu, Nicu Sebe, Xiao-Yuan Jing, Fei Wu, and Bingkun Bao. Df-gan: Deep fusion generative adversarial networks for text-to-image synthesis. arXiv preprint arXiv:2008.05865, 2020.

Aaron Van Den Oord, Oriol Vinyals, et al. Neural discrete representation learning. Advances in Neural Information Processing Systems, 30, 2017.

Tao Xu, Pengchuan Zhang, Qiuyuan Huang, Han Zhang, Zhe Gan, Xiaolei Huang, and Xiaodong He. Attngan: Fine-grained text to image generation with attentional generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1316-1324, 2018.

Hui Ye, Xiulong Yang, Martin Takac, Rajshekhar Sunderraman, and Shihao Ji. Improving text-to-image synthesis using contrastive learning. arXiv preprint arXiv:2107.02423, 2021.

Fei Yu, Jiji Tang, Weichong Yin, Yu Sun, Hao Tian, Hua Wu, and Haifeng Wang. Ernie-vil: Knowledge enhanced vision-language representations through scene graphs. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 3208-3216, 2021.

Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, et al. Scaling autoregressive models for content-rich text-to-image generation. arXiv preprint arXiv:2206.10789, 2022.

Han Zhang, Jing Yu Koh, Jason Baldridge, Honglak Lee, and Yinfei Yang. Cross-modal contrastive learning for text-to-image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 833-842, 2021a.

Han Zhang, Weichong Yin, Yewei Fang, Lanxin Li, Boqiang Duan, Zhihua Wu, Yu Sun, Hao Tian, Hua Wu, and Haifeng Wang. Ernie-vilg: Unified generative pre-training for bidirectional vision-language generation. arXiv preprint arXiv:2112.15283, 2021b.

Yufan Zhou, Ruiyi Zhang, Changyou Chen, Chunyuan Li, Chris Tensmeyer, Tong Yu, Jiuxiang Gu, Jinhui Xu, and Tong Sun. Lafite: Towards language-free training for text-to-image generation. arXiv preprint arXiv:2111.13792, 2021.

Minfeng Zhu, Pingbo Pan, Wei Chen, and Yi Yang. Dm-gan: Dynamic memory generative adversarial networks for text-to-image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5802-5810, 2019.
[
"A REVIEW OF ON-DEVICE FULLY NEURAL END-TO-END AUTOMATIC SPEECH RECOGNITION ALGORITHMS",
"A REVIEW OF ON-DEVICE FULLY NEURAL END-TO-END AUTOMATIC SPEECH RECOGNITION ALGORITHMS"
] | [
"Chanwoo Kim \nSamsung Research\nSeoulSouth Korea\n",
"Dhananjaya Gowda d.gowda@samsung.com \nSamsung Research\nSeoulSouth Korea\n",
"Dongsoo Lee dongsoo3.lee@samsung.com \nSamsung Research\nSeoulSouth Korea\n",
"Jiyeon Kim \nSamsung Research\nSeoulSouth Korea\n",
"Ankur Kumar ankur.k@samsung.com \nSamsung Research\nSeoulSouth Korea\n",
"Sungsoo Kim \nSamsung Research\nSeoulSouth Korea\n",
"Abhinav Garg abhinav.garg@samsung.com \nSamsung Research\nSeoulSouth Korea\n",
"Changwoo Han \nSamsung Research\nSeoulSouth Korea\n"
] | [
"Samsung Research\nSeoulSouth Korea",
"Samsung Research\nSeoulSouth Korea",
"Samsung Research\nSeoulSouth Korea",
"Samsung Research\nSeoulSouth Korea",
"Samsung Research\nSeoulSouth Korea",
"Samsung Research\nSeoulSouth Korea",
"Samsung Research\nSeoulSouth Korea",
"Samsung Research\nSeoulSouth Korea"
] | [] |

In this paper, we review various end-to-end automatic speech recognition algorithms and their optimization techniques for on-device applications. Conventional speech recognition systems comprise a large number of discrete components such as an acoustic model, a language model, a pronunciation model, a text-normalizer, an inverse-text normalizer, a decoder based on a Weighted Finite State Transducer (WFST), and so on. To obtain sufficiently high speech recognition accuracy with such conventional speech recognition systems, a very large language model (up to 100 GB) is usually needed. Hence, the corresponding WFST size becomes enormous, which prohibits their on-device implementation. Recently, fully neural network end-to-end speech recognition algorithms have been proposed. Examples include speech recognition systems based on Connectionist Temporal Classification (CTC), Recurrent Neural Network Transducer (RNN-T), Attention-based Encoder-Decoder models (AED), Monotonic Chunk-wise Attention (MoChA), transformer-based speech recognition systems, and so on. These fully neural network-based systems require much smaller memory footprints compared to conventional algorithms, therefore their on-device implementation has become feasible. In this paper, we review such end-to-end speech recognition models. We extensively discuss their structures, performance, and advantages compared to conventional algorithms. Index Terms: end-to-end speech recognition, attention-based model, recurrent neural network transducer, on-device speech recognition | 10.1109/ieeeconf51394.2020.9443456 | [
"https://arxiv.org/pdf/2012.07974v3.pdf"
] | 229,180,839 | 2012.07974 | 465474e965e322d16b649a8510797f3a83cf54a7 |
A REVIEW OF ON-DEVICE FULLY NEURAL END-TO-END AUTOMATIC SPEECH RECOGNITION ALGORITHMS
27 Aug 2021
Chanwoo Kim
Samsung Research
SeoulSouth Korea
Dhananjaya Gowda d.gowda@samsung.com
Samsung Research
SeoulSouth Korea
Dongsoo Lee dongsoo3.lee@samsung.com
Samsung Research
SeoulSouth Korea
Jiyeon Kim
Samsung Research
SeoulSouth Korea
Ankur Kumar ankur.k@samsung.com
Samsung Research
SeoulSouth Korea
Sungsoo Kim
Samsung Research
SeoulSouth Korea
Abhinav Garg abhinav.garg@samsung.com
Samsung Research
SeoulSouth Korea
Changwoo Han
Samsung Research
SeoulSouth Korea
A REVIEW OF ON-DEVICE FULLY NEURAL END-TO-END AUTOMATIC SPEECH RECOGNITION ALGORITHMS
27 Aug 2021
In this paper, we review various end-to-end automatic speech recognition algorithms and their optimization techniques for on-device applications. Conventional speech recognition systems comprise a large number of discrete components such as an acoustic model, a language model, a pronunciation model, a text-normalizer, an inverse-text normalizer, a decoder based on a Weighted Finite State Transducer (WFST), and so on. To obtain sufficiently high speech recognition accuracy with such conventional speech recognition systems, a very large language model (up to 100 GB) is usually needed. Hence, the corresponding WFST size becomes enormous, which prohibits their on-device implementation. Recently, fully neural network end-to-end speech recognition algorithms have been proposed. Examples include speech recognition systems based on Connectionist Temporal Classification (CTC), Recurrent Neural Network Transducer (RNN-T), Attention-based Encoder-Decoder models (AED), Monotonic Chunk-wise Attention (MoChA), transformer-based speech recognition systems, and so on. These fully neural network-based systems require much smaller memory footprints compared to conventional algorithms, therefore their on-device implementation has become feasible. In this paper, we review such end-to-end speech recognition models. We extensively discuss their structures, performance, and advantages compared to conventional algorithms. Index Terms: end-to-end speech recognition, attention-based model, recurrent neural network transducer, on-device speech recognition
INTRODUCTION
The advent of deep learning techniques has dramatically improved the accuracy of speech recognition models [1]. Deep learning techniques first saw success by replacing the Gaussian Mixture Model (GMM) of the Acoustic Model (AM) part of conventional speech recognition systems [2] with Feed-Forward Deep Neural Networks (FF-DNNs), and further with Recurrent Neural Networks (RNNs) such as Long Short-Term Memory (LSTM) networks [3] or Convolutional Neural Networks (CNNs). In addition to this, there have been improvements in noise robustness by using models motivated by auditory processing [4,5,6], data augmentation techniques [7,8,9], and beam-forming [10]. Thanks to these advances, voice assistant devices such as Google Home [11] and Amazon Alexa have been widely used in home environments. Nevertheless, it was not easy to run such high-performance speech recognition systems on devices, largely because of the size of the Weighted Finite State Transducer (WFST) handling the lexicon and the language model. Fortunately, all-neural end-to-end (E2E) speech recognition systems were introduced, which do not need a large WFST or an n-gram Language Model (LM) [12]. These complete end-to-end systems have started surpassing the performance of conventional WFST-based decoders with a very large training dataset [13] and a better choice of target unit such as Byte Pair Encoded (BPE) subword units.

Thanks to Samsung Electronics for funding this research. The authors are thankful to President Sebastian Seung, Executive Vice President Seunghwan Cho, and speech processing Lab members at Samsung Research.

Table 1: Size of the intermediate buffer required for streaming speech recognition [15]. L is the sequence length, D is the unit size or the dimension of each layer, and T is the filter width of convolution. Refer to Sec. 2 about how to obtain the typical values shown in this table.

Model           Memory Footprint   Typical Value   Intra-Sequence Parallelism
LSTM            O(ND)              98 KB           X
Convolution     O(TND)             245 KB          O
Self-attention  O(LND)             12 MB           O
In this paper, we provide a comprehensive review of the various components and algorithms of an end-to-end speech recognition system. In Sec. 2, we give a brief overview of the various neural building blocks of an E2E Automatic Speech Recognition (ASR) model. The most popular E2E ASR architectures are reviewed in Sec. 3. Additional techniques used to improve the performance of E2E ASR models are discussed in Sec. 4. Techniques used for compression and quantization of the all-neural E2E ASR models are covered in Sec. 5. Sec. 6 gives a summary of the paper.
NEURAL NETWORK COMPONENTS FOR END-TO-END SPEECH RECOGNITION

Fig. 1 illustrates neural network components commonly employed in end-to-end speech recognition systems. Gated RNNs such as LSTMs and Gated Recurrent Units (GRUs) [16] have been used from the early days of encoder-decoder [17] and attention-based models. The operation of an LSTM is described by the following equation:
i_{[m]} = \sigma(W_i x_{[m]} + U_i h_{[m-1]} + b_i)    (1a)
f_{[m]} = \sigma(W_f x_{[m]} + U_f h_{[m-1]} + b_f)    (1b)
o_{[m]} = \sigma(W_o x_{[m]} + U_o h_{[m-1]} + b_o)    (1c)
\tilde{c}_{[m]} = \tanh(W_c x_{[m]} + U_c h_{[m-1]} + b_c)    (1d)
c_{[m]} = i_{[m]} \odot \tilde{c}_{[m]} + f_{[m]} \odot c_{[m-1]}    (1e)
h_{[m]} = o_{[m]} \odot \tanh(c_{[m]})    (1f)
where W_(·) and U_(·) are weight matrices and b_(·) is the bias vector. i_{[m]}, f_{[m]}, and o_{[m]} are the input, forget, and output gates at time m, respectively. σ(·) is the sigmoid function and ⊙ is the Hadamard product operator. c_{[m]} and h_{[m]} are the cell and hidden states. Fig. 1a shows the structure of an LSTM described by (1). One notable advantage of gated RNNs is that they require a relatively smaller memory footprint compared to other models, as shown in Table 1. Another advantage is their streaming capability when unidirectional models (e.g., uni-directional LSTMs or GRUs) are employed. Because of these advantages, at the time of writing this paper, most of the commercially available end-to-end on-device speech recognition systems are based on LSTMs [18,19]. However, as shown in Table 1, LSTMs are at a disadvantage in terms of intra-sequence parallelism, since their computation must be done sequentially.
Various CNN-based approaches have been successfully employed in building end-to-end speech recognition systems [20,21]. These approaches are characterized by a group of filters and a nonlinear rectifier, as shown in Fig. 1b. As an example, a depthwise 1-dimensional CNN is represented by the following equations [15]:

x'_{[m],d} = \sum_{t=-(T-1)/2}^{(T-1)/2} W_{t,d} \cdot x_{[m+t],d}, \quad 0 \le d \le D-1    (2a)
x'_{[m]} = \big[ x'_{[m],0}, x'_{[m],1}, \cdots, x'_{[m],D-1} \big]^\top    (2b)
h_{[m]} = \mathrm{relu}(V x'_{[m]} + b)    (2c)

where T and D are the length of the 1-dimensional filter and the number of such filters, respectively. x_{[m],d}, 0 ≤ d ≤ D−1, is the d-th element of the input x_{[m]} at the time index m. W ∈ R^{T×D}, V ∈ R^{D×D}, and b ∈ R^{D} are trainable variables. Unlike gated RNNs, a CNN does not require the completion of computation for previous time steps to calculate the output of the current time step, which enables intra-sequence parallelism, as summarized in Table 1.
Thus, CNN-based end-to-end speech recognition systems have advantages in computational efficiency on embedded processors supporting Single Instruction Multiple Data (SIMD) by exploiting intra-sequence parallelism [15].
Recently, self-attention has also been successfully applied to speech recognition. In [14], the self-attention mechanism was implemented using scaled dot-product attention, described by the following equation:

\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left( \frac{Q K^\top}{\sqrt{d_k}} \right) V    (3)
where Q, K, and V are matrices representing a query, a key, and a value, and d_k is the dimension of the key. In a self-attention layer, all of the queries, keys, and values are the outputs of the previous layer.
In Table 1, we compare typical values of the memory footprint required for stacks of these neural network layers [15]. These stacks correspond to the neural net layers in Fig. 2a, or the encoder portion of Recurrent Neural Network-Transducer (RNN-T) [22,23] or attention-based models in Fig. 2b and Fig. 2c, respectively. In obtaining these values, we assume that the number of layers N is 6 for the LSTM case [18], and 15 for the CNN and self-attention cases. For speech recognition, L is usually a few hundred, while T is about ten or less. Based on this, we assume that the dimension D is 2048, the sequence length L is 100, and the filter length T is 5 to calculate the typical values [15].
END-TO-END SPEECH RECOGNITION ARCHITECTURES
Speech recognition is a task of finding the sequence-to-sequence mapping from an input sequence of acoustic features to an output sequence of labels. Let us denote the input and output sequences by x_{[0:M]} and y_{0:L} as shown below:

x_{[0:M]} = \big[ x_{[0]}, x_{[1]}, x_{[2]}, \cdots, x_{[M-1]} \big]    (4a)
y_{0:L} = \big[ y_0, y_1, y_2, \cdots, y_{L-1} \big]    (4b)

where M and L are the lengths of the input acoustic feature sequence and the output label sequence, respectively. The sequence notation adopted in this paper, including (4), follows the Python array slice notation. By convention, we use a bracket to represent a periodically sampled sequence such as the acoustic feature, and a subscript to represent a non-periodic sequence such as the output label. Fig. 2 shows the structures of end-to-end all-neural speech recognition systems. Even though we use a stack of LSTMs in Fig. 2, any kind of neural network component described in Sec. 2 may be employed instead of LSTMs.
Connectionist Temporal Classification (CTC)
The simplest way of implementing an end-to-end speech recognizer is using a stack of neural network layers with a Connectionist Temporal Classification (CTC) loss [24], as shown in Fig. 2a. This model defines a probability distribution over the set of output labels augmented with a special blank symbol ⟨b⟩. We define the set of all possible alignments B_{CTC}(x_{[0:M]}, y_{0:L}) as the set of all label sequences z_{[0:M]}, where z_{[m]} \in Y \cup \{\langle b \rangle\}, 0 \le m < M, such that z_{[0:M]} is identical to y_{0:L} after removing all blank symbols ⟨b⟩. Y is the set of the entire alphabet of the output labels. Under this assumption, the posterior probability of the output label sequence is given by the following equation:

P(y_{0:L} \mid x_{[0:M]}) = \sum_{z_{[0:M]} \in B_{CTC}(x_{[0:M]},\, y_{0:L})} \prod_{m=0}^{M-1} P(z_{[m]} \mid x_{[0:M]})    (5)
The CTC loss is defined by the following equation:
\mathcal{L}_{CTC} = -\mathbb{E}\big[ \log P(y_{0:L} \mid x_{[0:M]}) \big]    (6)
The parameters of a model with the CTC loss are updated using the forward-backward recursion assuming conditional independence [24], which is similar to the forward-backward algorithm used for training Hidden Markov Models (HMMs) [2].
Recurrent Neural Network-Transducer (RNN-T)
In spite of its simplicity, the model described in Sec. 3.1 has several shortcomings, including the conditional independence assumption during model training [25] and the lack of explicit feedback paths from the output label. An improved version of this model is the RNN-T model [23] shown in Fig. 2b. In this model, there is an explicit feedback path from the output label to the prediction network, which plays a role similar to an LM. The probability model of an RNN-T-based model is similar to the CTC model:
P(y_{0:L} \mid x_{[0:M]}) = \sum_{z_{[0:M+L]} \in B_{RNN\text{-}T}(x_{[0:M]},\, y_{0:L})} \prod_{u=0}^{M+L-1} P(z_{[u]} \mid x_{[0:M]})    (7)

where B_{RNN-T}(x_{[0:M]}, y_{0:L}) is the set of all possible label sequences z_{[0:M+L]}, where z_{[u]} \in Y \cup \{\langle b \rangle\}, 0 \le u < M + L, such that after removing the blank symbols ⟨b⟩ from z_{[0:M+L]}, it becomes the same as y_{0:L}. As in Sec. 3.1, Y is the set of the entire alphabet of the output labels.
Attention-based Models
Another popular end-to-end speech recognition approach employs the attention mechanism, as shown in Fig. 2c [12]. In attention-based approaches, we use the attention between the encoder and decoder hidden outputs. The equation for encoder-decoder attention is basically the same as (3):
e_{[m],l} = \mathrm{Energy}(h^{(enc)}_{[m]}, h^{(dec)}_{l-1})    (8a)
a_{[m],l} = \mathrm{softmax}(e_{[m],l})    (8b)
c_l = \sum_{m=0}^{M-1} a_{[m],l}\, h^{(enc)}_{[m]}    (8c)
where c_l is the context vector, which is used as the input to the decoder. h^{(enc)}_{[m]} and h^{(dec)}_{l-1} are hidden outputs from the encoder at time m and from the decoder at the label index l − 1, respectively. e_{[m],l} in (8a) is the energy for the input time index m and the output label index l.
Monotonic Chunkwise Attention (MoChA)-based Models
Although the attention-based approach in Sec. 3.3 has been quite successful, on-line streaming recognition with this approach has been a challenge. This is because the entire sequence of input features must be encoded before the decoder starts generating the first output label. Several variations of the attention, including Monotonic Chunkwise Attention (MoChA) [26], have been proposed to resolve this problem. In the MoChA model, there are two attention mechanisms: a hard monotonic attention followed by a soft chunkwise attention. The hard monotonic attention is employed to determine which element should be attended from a sequence of hidden encoder outputs h^{(enc)}_{[m]}. The hard monotonic attention is obtained from the hidden encoder output h^{(enc)}_{[m]} at the time index m and the hidden decoder output h^{(dec)}_{l-1} at the output label index l − 1 as follows:
e^{(mono)}_{[m],l} = \mathrm{MonotonicEnergy}(h^{(enc)}_{[m]}, h^{(dec)}_{l-1})    (9a)
a^{(mono)}_{[m],l} = \sigma(e^{(mono)}_{[m],l})    (9b)
z_{[m],l} \sim \mathrm{Bernoulli}(a^{(mono)}_{[m],l})    (9c)
where σ(·) is a logistic sigmoid function and MonotonicEnergy is the energy function defined as follows [26]:
\mathrm{MonotonicEnergy}(h^{(enc)}_{[m]}, h^{(dec)}_{l-1}) = g\, \frac{v^\top}{\lVert v \rVert} \tanh(W^{(dec)} h^{(dec)}_{l-1} + W^{(enc)} h^{(enc)}_{[m]} + b) + r    (10)
where v, W^{(dec)}, W^{(enc)}, b, g, and r are learnable variables. After finding the position to attend using (9c), a soft attention with a fixed chunk size is employed to generate the context vector that is given as an input to the decoder. We refer readers interested in the more detailed structure of MoChA to [18,26]. A block schematic of an on-device speech recognizer based on MoChA is shown in Fig. 3.
Comparison between MoChA and RNN-T based models
In this section, we compare the MoChA-based model described in Sec. 3.4 with the RNN-T-based model discussed in Sec. 3.2. For these RNN-T and MoChA models, we used the same encoder structure consisting of six LSTM layers with a unit size of 1,024. The lower three LSTM layers in the encoder are interleaved with 2:1 max-pool layers, as in [18]. A single LSTM layer with a unit size of 1,024 is employed for both the decoder of the MoChA-based model and the prediction network of the RNN-T-based model. Training was performed using an in-house training toolkit built with the Keras [27] and Tensorflow 2.3 APIs [28]. A 40-dimensional power-mel feature with a power coefficient of 1/15 [4] is used as the input feature. We prefer the power-mel feature to the more frequently used log-mel feature since the power-mel feature shows better speech recognition accuracy [29,30,31]. In Table 2, we compare the performance of MoChA and RNN-T in terms of speech recognition accuracy and latency. The model is compressed by 8-bit quantization and Low Rank Approximation (LRA) [32]. More details about the compression procedure are described in [18]. The latency was measured on a Samsung Galaxy Note 10 device. As shown in this table, the MoChA-based model shows slightly better speech recognition accuracy compared to the RNN-T-based model. However, the RNN-T-based model is noticeably better than the MoChA-based model in terms of latency and consistency of latency: the MoChA-based model shows more variation in latency than the RNN-T-based model.
APPROACHES FOR FURTHER PERFORMANCE IMPROVEMENT
Recently, various techniques have been proposed to further improve the performance of the all-neural end-to-end speech recognition systems described in Sec. 3. We will discuss some of these techniques in this section.

Fig. 3: The structure of an on-device speech recognition system based on Monotonic Chunkwise Attention (MoChA) [18].
Combination with a non-streaming model
Even though Bidirectional LSTMs (BLSTMs) perform significantly better than their unidirectional counterparts for on-device applications [18], the latency requirement usually prohibits their usage. To get the advantage of BLSTMs without significantly affecting the overall latency, we proposed a recognition system combining the streaming MoChA model with a batch full-attention model in [33]. The entire structure of this system is shown in Fig. 4. In this model, there is a shared streaming encoder consisting of uni-directional LSTMs. On one side, the streaming MoChA attention and decoder generate the streaming speech recognition result. On the other side, one backward LSTM layer on top of the shared unidirectional layers comprises a BLSTM layer, which is followed by a full-attention model. When the user finishes speaking, this full-attention model generates a low-latency batch speech recognition result. The latency impact is relatively small since there is only one backward LSTM layer in the encoder. The experimental results are summarized in Table 3. We use the same model architecture and experiment configuration as described in Sec. 3.5. As shown in this table, the proposed model in the bottom row shows a significantly better result than the MoChA-based model in the top row, with a tolerable increase in latency.

Fig. 4: Speech recognition system that performs streaming recognition followed by low-latency batch recognition for improved performance.
Shallow-fusion with language models
End-to-end speech recognition models discussed in Sec. 3 are trained using only paired speech-text data. Compared to traditional AM-LM approaches, where an LM is often trained using a much larger text corpus possibly containing billions of sentences [34], the end-to-end speech recognition model sees a much more limited number of word sequences during the training phase. To get further performance improvement, researchers have proposed various techniques for incorporating external language models, such as shallow-fusion [35], deep-fusion [36], and cold-fusion [37]. Among them, in spite of its simplicity, shallow-fusion seems to be more effective than the other approaches [38]. In shallow-fusion, the log probability from the end-to-end speech recognition model is linearly interpolated with the log probability from the language model as follows:

\log p_{sf}(y_l \mid x_{[0:m]}, \hat{y}_{0:l}) = \log p(y_l \mid x_{[0:m]}, \hat{y}_{0:l}) + \lambda \log p_{lm}(y_l \mid \hat{y}_{0:l})

where p_{lm}(y_l \mid \hat{y}_{0:l}) is the probability of predicting the label y_l from the LM, and p(y_l \mid x_{[0:m]}, \hat{y}_{0:l}) is the posterior probability obtained from the end-to-end speech recognition model. Shallow-fusion with an LM is helpful in enhancing speech recognition performance in general domains; thus, most of the state-of-the-art speech recognition results on publicly available test sets (e.g., LibriSpeech [39]) are obtained with this technique [8,29,40]. In addition, this technique is also very useful for enhancing performance in special domains like personal names, geographic names, or music names by employing domain-specific LMs. In the case of n-gram LMs, they can be easily built on-device to support personalized speech recognition [18]. For on-device command recognition, shallow-fusion with a WFST is also useful for specific domains, since a WFST contains a list of words, not just subword units [41].
Improving NER performance using a spell corrector
Even though end-to-end all-neural speech recognition systems have shown quite remarkable speech recognition accuracy with a small memory footprint, it has been frequently observed that their performance is poorer when recognizing named entities [42]. Compared to conventional speech recognizers built with a WFST containing dictionary information, an end-to-end speech recognition system does not explicitly contain a list of named entities. Therefore, the speech recognition accuracy of such systems is generally low for specialized domains handling song names, composers' names, personal names, and geographical names. Without dictionary information, spelling errors may also occur with all-neural speech recognizers. This named entity recognition issue can be somewhat relieved by applying shallow-fusion with an LM, as described in Sec. 4.2. However, a more drastic performance improvement is usually obtained by classifying the domain from the speech recognition output and applying spell correction using a list of named entities found in that domain. In [41], a multi-stage spell correction approach was proposed to handle a large list of named entities with on-device speech recognizers.
COMPRESSION
To run speech recognition models on embedded processors with limited memory, we often need to further reduce the parameter size in order to satisfy computational cost and memory footprint requirements. There are many known techniques to reduce the model size, such as quantization, factorization, distillation, and pruning. The simplest way of accomplishing compression may be applying 8-bit quantization, which is supported by TensorFlow Lite. Several commercially available speech recognition systems [18,19] have been built using this technique. In feed-forward neural networks, 2-bit or 3-bit quantization has been successfully employed [43]. However, for general-purpose processors, there are relatively small gains in going below 8-bit quantization, since Arithmetic Logic Units (ALUs) usually do not support sub-8-bit arithmetic. Another popular technique for reducing the model size of LSTMs is applying Low Rank Approximation (LRA) [44]. As an example, the DeepTwist algorithm, which is based on LRA, is employed to reduce the parameter size in our previous work [18,41]. More specifically, we were able to reduce the size of MoChA-based models from 531 MB to less than 40 MB without sacrificing performance, using 8-bit quantization and LRA. Pruning may be a good choice if the hardware supports sparse matrix arithmetic. In [45], the authors propose a three-stage pipeline of pruning, trained quantization, and Huffman coding, which together reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. In [46], the authors studied the effectiveness of different compression techniques for end-to-end speech recognition models. They conclude that pruning is the most effective with proper hardware support. In the absence of this, for small models, distillation appears to be the best choice, while factorization appears to be the best approach for larger models.
CONCLUSIONS
In this paper, we reviewed various end-to-end all-neural automatic speech recognition systems and their optimization techniques for on-device applications. On-device speech recognition has huge advantages compared to server-side recognition in terms of user privacy, operation without internet, server cost, and latency. To operate speech recognition systems on embedded processors, we need to consider several factors such as recognition accuracy, computational cost, latency, and the model size. We compared the pros and cons of different neural network components such as the Long Short-Term Memory (LSTM), the Convolutional Neural Network (CNN), and the attention mechanism. We explained and compared different end-to-end neural speech recognition architectures, such as a stack of LSTM layers with the Connectionist Temporal Classification (CTC) loss [24], the Recurrent Neural Network-Transducer (RNN-T) [22,23], attention-based models, and models based on Monotonic Chunkwise Attention (MoChA) [26]. Further improvement is achieved by combining a streaming model with a low-latency non-streaming model, by applying shallow-fusion with a Language Model (LM), and by applying spell correction using a list of named entities [47]. We also discussed several model compression techniques, including quantization, singular value decomposition, pruning, and knowledge distillation. These recent advances in all-neural end-to-end speech recognition made it possible to commercialize all-neural on-device end-to-end speech recognition systems [18,19,41].
Fig. 1: Structures of neural net components frequently used for end-to-end speech recognition: (a) LSTM, (b) CNN, and (c) Scaled Dot-Product Attention.
Fig. 2: Comparison of block diagrams of different sequence-to-sequence speech recognition approaches.
Table 2: Performance comparison between the MoChA-based model and the RNN-T-based model on the LibriSpeech test-clean evaluation set.

Model    WER      Avg. Latency
MoChA    6.88 %   225 ms
RNN-T    7.63 %   86.5 ms
Table 3: Word Error Rate (WER) comparison between a streaming model, a non-streaming model, and a streaming model combined with a single layer of backward LSTM followed by a full attention to minimize the impact on latency.

Model                                             LibriSpeech test-clean   LibriSpeech test-other
Uni-Directional Encoder + MoChA-Attention         6.88 %                   19.11 %
Bi-Directional Encoder + Full-Attention           3.62 %                   11.20 %
Shared Uni-Directional Encoder + Full-Attention   4.09 %                   12.04 %
Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. G Hinton, L Deng, D Yu, G E Dahl, A Mohamed, N Jaitly, A Senior, V Vanhoucke, P Nguyen, T Sainath, B Kingsbury, IEEE Signal Processing Magazine. 296G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. Sainath, and B. Kings- bury, "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups," IEEE Signal Processing Magazine, vol. 29, no. 6, Nov. 2012.
A tutorial on hidden markov models and selected applications in speech recognition. L R Rabiner, Proceedings of the IEEE. 772L. R. Rabiner, "A tutorial on hidden markov models and se- lected applications in speech recognition," Proceedings of the IEEE, vol. 77, no. 2, pp. 257-286, Feb 1989.
Long short-term memory. S Hochreiter, J Schmidhuber, Neural Computation. 9S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, no. 9, pp. 1735-1780, Nov. 1997.
Power-Normalized Cepstral Coefficients (PNCC) for Robust Speech Recognition. C Kim, R M Stern, IEEE/ACM Trans. Audio, Speech, Lang. Process. C. Kim and R. M. Stern, "Power-Normalized Cepstral Coef- ficients (PNCC) for Robust Speech Recognition," IEEE/ACM Trans. Audio, Speech, Lang. Process., pp. 1315-1329, July 2016.
Robust speech recognition using temporal masking and thresholding algorithma. C Kim, K Chin, M Bacchiani, R M Stern, INTERSPEECH-2014. C. Kim, K. Chin, M. Bacchiani, and R. M. Stern, "Robust speech recognition using temporal masking and thresholding algorithma," in INTERSPEECH-2014, Sept. 2014, pp. 2734- 2738.
Feature extraction for robust speech recognition using a power-law nonlinearity and power-bias subtraction. C Kim, R M Stern, INTERSPEECH-2009. C. Kim and R. M. Stern, "Feature extraction for robust speech recognition using a power-law nonlinearity and power-bias subtraction," in INTERSPEECH-2009, Sept. 2009, pp. 28-31.
Robust speech recognition using small power boosting algorithm. C Kim, K Kumar, R M Stern, IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). C. Kim, K. Kumar and R. M. Stern, "Robust speech recogni- tion using small power boosting algorithm," in IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), Dec. 2009, pp. 243-248.
SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition. D S Park, W Chan, Y Zhang, C.-C Chiu, B Zoph, E D Cubuk, Q V Le, Proc. Interspeech. InterspeechD. S. Park, W. Chan, Y. Zhang, C.-C. Chiu, B. Zoph, E. D. Cubuk, and Q. V. Le, "SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition," in Proc. Interspeech 2019, 2019, pp. 2613-2617.
Efficient implementation of the room simulator for training deep neural network acoustic models. C Kim, E Variani, A Narayanan, M Bacchiani, in INTERSPEECH. C. Kim, E. Variani, A. Narayanan, and M. Bacchiani, "Efficient implementation of the room simulator for training deep neural network acoustic models," in INTERSPEECH- 2018, Sept 2018, pp. 3028-3032.
Neural network based spectral mask estimation for acoustic beamforming. J Heymann, L Drude, R Haeb-Umbach, 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). J. Heymann, L. Drude, and R. Haeb-Umbach, "Neural network based spectral mask estimation for acoustic beamforming," in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), March 2016, pp. 196-200.
Generation of large-scale simulated utterances in virtual rooms to train deep-neural networks for far-field speech recognition in google home. C Kim, A Misra, K Chin, T Hughes, A Narayanan, T N Sainath, M Bacchiani, Proc. Interspeech. InterspeechC. Kim, A. Misra, K. Chin, T. Hughes, A. Narayanan, T. N. Sainath, and M. Bacchiani, "Generation of large-scale simulated utterances in virtual rooms to train deep-neural networks for far-field speech recognition in google home," in Proc. Interspeech 2017, 2017, pp. 379-383.
Attention-based models for speech recognition. J K Chorowski, D Bahdanau, D Serdyuk, K Cho, Y Bengio, Advances in Neural Information Processing Systems. C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. GarnettCurran Associates, Inc28J. K. Chorowski, D. Bahdanau, D. Serdyuk, K. Cho, and Y. Bengio, "Attention-based models for speech recognition," in Advances in Neural Information Processing Systems 28, C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, Eds. Curran Associates, Inc., 2015, pp. 577-585.
State-of-theart speech recognition with sequence-to-sequence models. C.-C Chiu, T N Sainath, Y Wu, R Prabhavalkar, P Nguyen, Z Chen, A Kannan, R J Weiss, K Rao, E Gonina, N Jaitly, B Li, J Chorowski, M Bacchiani, 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). C.-C. Chiu, T. N. Sainath, Y. Wu, R. Prabhavalkar, P. Nguyen, Z. Chen, A. Kannan, R. J. Weiss, K. Rao, E. Gonina, N. Jaitly, B. Li, J. Chorowski, and M. Bacchiani, "State-of-the- art speech recognition with sequence-to-sequence models," in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), April 2018, pp. 4774-4778.
Attention is all you need. A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, A N Gomez, L U Kaiser, I Polosukhin, Advances in Neural Information Processing Systems. I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. GarnettCurran Associates, Inc30A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. u. Kaiser, and I. Polosukhin, "Attention is all you need," in Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, Eds. Curran Associates, Inc., 2017, pp. 5998-6008.
Convolution-based attention model with positional encoding for streaming speech recognition on embedded devices. J Park, C Kim, W Sung, 2021 IEEE Spoken Language Technology Workshop (SLT). J. Park, C. Kim, and W. Sung, "Convolution-based attention model with positional encoding for streaming speech recogni- tion on embedded devices," in 2021 IEEE Spoken Language Technology Workshop (SLT), Jan. 2021.
Learning phrase representations using RNN encoder-decoder for statistical machine translation. K Cho, B Van Merriënboer, C Gulcehre, D Bahdanau, F Bougares, H Schwenk, Y Bengio, Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)Doha, QatarAssociation for Computational LinguisticsK. Cho, B. van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio, "Learning phrase representations using RNN encoder-decoder for statistical machine translation," in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Doha, Qatar: Association for Computational Linguistics, Oct. 2014, pp. 1724-1734.
Sequence to sequence learning with neural networks. I Sutskever, O Vinyals, Q V Le, Advances in Neural Information Processing Systems. Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K. Q. WeinbergerCurran Associates, Inc27I. Sutskever, O. Vinyals, and Q. V. Le, "Sequence to sequence learning with neural networks," in Advances in Neural Information Processing Systems, Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K. Q. Weinberger, Eds., vol. 27. Curran Associates, Inc., 2014, pp. 3104-3112.
Attention based on-device streaming speech recognition with large speech corpus. K Kim, K Lee, D Gowda, J Park, S Kim, S Jin, Y.-Y Lee, J Yeo, D Kim, S Jung, J Lee, M Han, C Kim, 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). K. Kim, K. Lee, D. Gowda, J. Park, S. Kim, S. Jin, Y.-Y. Lee, J. Yeo, D. Kim, S. Jung, J. Lee, M. Han, and C. Kim, "Atten- tion based on-device streaming speech recognition with large speech corpus," in 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), Dec. 2019, pp. 956- 963.
Streaming end-to-end speech recognition for mobile devices. Y He, T N Sainath, R Prabhavalkar, I Mcgraw, R Alvarez, D Zhao, D Rybach, A Kannan, Y Wu, R Pang, Q Liang, D Bhatia, Y Shangguan, B Li, G Pundak, K C Sim, T Bagby, S Chang, K Rao, A Gruenstein, ICASSP 2019 -2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Y. He, T. N. Sainath, R. Prabhavalkar, I. McGraw, R. Al- varez, D. Zhao, D. Rybach, A. Kannan, Y. Wu, R. Pang, Q. Liang, D. Bhatia, Y. Shangguan, B. Li, G. Pundak, K. C. Sim, T. Bagby, S. yiin Chang, K. Rao, and A. Gruenstein, "Streaming end-to-end speech recognition for mobile devices," in ICASSP 2019 -2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 2019, pp. 6381-6385.
Fully neural network based speech recognition on mobile and embedded devices. J Park, Y Boo, I Choi, S Shin, W Sung, Advances in Neural Information Processing. Systems, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. GarnettCurran Associates, Inc31630J. Park, Y. Boo, I. Choi, S. Shin, and W. Sung, "Fully neural network based speech recognition on mobile and embedded devices," in Advances in Neural Information Processing Systems, S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, Eds., vol. 31. Curran Associates, Inc., 2018, pp. 10 620-10 630.
Sequenceto-Sequence Speech Recognition with Time-Depth Separable Convolutions. A Hannun, A Lee, Q Xu, R Collobert, Proc. Interspeech. InterspeechA. Hannun, A. Lee, Q. Xu, and R. Collobert, "Sequence- to-Sequence Speech Recognition with Time-Depth Separable Convolutions," in Proc. Interspeech 2019, 2019, pp. 3785- 3789.
Sequence transduction with recurrent neural networks. A Graves, International Conference of Machine Learning (ICML) 2012 Workshop on Representation Learning. A. Graves, "Sequence transduction with recurrent neural net- works," in International Conference of Machine Learning (ICML) 2012 Workshop on Representation Learning.
Speech recognition with deep recurrent neural networks. A Graves, A Rahman Mohamed, G Hinton, 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. A. Graves, A. rahman Mohamed, and G. Hinton, "Speech recognition with deep recurrent neural networks," in 2013 IEEE International Conference on Acoustics, Speech and Sig- nal Processing, May 2013, pp. 6645-6649.
Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks. A Graves, S Fernández, F Gomez, J Schmidhuber, Proceedings of the 23rd International Conference on Machine Learning, ser. ICML '06. the 23rd International Conference on Machine Learning, ser. ICML '06New York, NY, USAACMA. Graves, S. Fernández, F. Gomez, and J. Schmidhuber, "Connectionist temporal classification: Labelling unseg- mented sequence data with recurrent neural networks," in Proceedings of the 23rd International Conference on Machine Learning, ser. ICML '06. New York, NY, USA: ACM, 2006, pp. 369-376.
Joint ctc-attention based end-to-end speech recognition using multi-task learning. S Kim, T Hori, S Watanabe, 2017 IEEE International Conference on Acoustics, Speech and Signal Processing. S. Kim, T. Hori, and S. Watanabe, "Joint ctc-attention based end-to-end speech recognition using multi-task learning," in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2017, pp. 4835-4839.
Monotonic chunkwise attention. C.-C Chiu, C Raffel, International Conference on Learning Representations. C.-C. Chiu and C. Raffel, "Monotonic chunkwise attention," in International Conference on Learning Representations, Apr. 2018.
Keras. F. Chollet et al., "Keras," 2015.
Tensorflow: A system for large-scale machine learning. M Abadi, P Barham, J Chen, Z Chen, A Davis, J Dean, M Devin, S Ghemawat, G Irving, M Isard, M Kudlur, J Levenberg, R Monga, S Moore, D G Murray, B Steiner, P Tucker, V Vasudevan, P Warden, M Wicke, Y Yu, X Zheng, 12th USENIX Symposium on Operating Systems Design and Implementation. Savannah, GAUSENIX AssociationM. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, "Tensorflow: A system for large-scale machine learning," in 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16). Savannah, GA: USENIX Association, 2016, pp. 265-283.
Improved vocal tract length perturbation for a state-of-the-art end-to-end speech recognition system. C Kim, M Shin, A Garg, D Gowda, INTERSPEECH-2019. Graz, AustriaC. Kim, M. Shin, A. Garg, and D. Gowda, "Improved vocal tract length perturbation for a state-of-the-art end-to-end speech recognition system," in INTERSPEECH-2019, Graz, Austria, Sept. 2019, pp. 739-743.
Power-law nonlinearity with maximally uniform distribution criterion for improved neural network training in automatic speech recognition. C Kim, M Kumar, K Kim, D Gowda, 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). C. Kim, M. Kumar, K. Kim, and D. Gowda, "Power-law non- linearity with maximally uniform distribution criterion for im- proved neural network training in automatic speech recogni- tion," in 2019 IEEE Automatic Speech Recognition and Un- derstanding Workshop (ASRU), Dec. 2019, pp. 988-995.
End-to-end training of a large vocabulary end-to-end speech recognition system. C Kim, S Kim, K Kim, M Kumar, J Kim, K Lee, C Han, A Garg, E Kim, M Shin, S Singh, L Heck, D Gowda, 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). C. Kim, S. Kim, K. Kim, M. Kumar, J. Kim, K. Lee, C. Han, A. Garg, E. Kim, M. Shin, S. Singh, L. Heck, and D. Gowda, "End-to-end training of a large vocabulary end-to-end speech recognition system," in 2019 IEEE Automatic Speech Recog- nition and Understanding Workshop (ASRU), Dec. 2019, pp. 562-569.
Deeptwist: Learning model compression via occasional weight distortion. D Lee, P Kapoor, B Kim, abs/1810.12823CoRR. D. Lee, P. Kapoor, and B. Kim, "Deeptwist: Learning model compression via occasional weight distortion," CoRR, vol. abs/1810.12823, 2018.
Utterance Invariant Training for Hybrid Two-Pass End-to-End Speech Recognition. D. Gowda, A. Kumar, K. Kim, H. Yang, A. Garg, S. Singh, J. Kim, M. Kumar, S. Jin, S. Singh, and C. Kim, "Utterance Invariant Training for Hybrid Two-Pass End-to-End Speech Recognition," in Proc. Interspeech 2020, 2020, pp. 2827-2831.
An analysis of incorporating an external language model into a sequence-to-sequence model. A Kannan, Y Wu, P Nguyen, T N Sainath, Z Chen, R Prabhavalkar, 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). A. Kannan, Y. Wu, P. Nguyen, T. N. Sainath, Z. Chen, and R. Prabhavalkar, "An analysis of incorporating an external language model into a sequence-to-sequence model," in 2018 IEEE International Conference on Acoustics, Speech and Sig- nal Processing (ICASSP), Apr. 2018, pp. 5824-5828.
Towards better decoding and language model integration in sequence to sequence models. J Chorowski, N Jaitly, Proc. Interspeech. InterspeechJ. Chorowski and N. Jaitly, "Towards better decoding and language model integration in sequence to sequence models," in Proc. Interspeech 2017, 2017, pp. 523-527.
On using monolingual corpora in neural machine translation. Ç Gülçehre, O Firat, K Xu, K Cho, L Barrault, H Lin, F Bougares, H Schwenk, Y Bengio, abs/1503.03535CoRR. Ç . Gülçehre, O. Firat, K. Xu, K. Cho, L. Barrault, H. Lin, F. Bougares, H. Schwenk, and Y. Bengio, "On using monolingual corpora in neural machine translation," CoRR, vol. abs/1503.03535, 2015.
Cold fusion: Training seq2seq models together with language models. A Sriram, H Jun, S Satheesh, A Coates, A. Sriram, H. Jun, S. Satheesh, and A. Coates, "Cold fu- sion: Training seq2seq models together with language mod- els," 2017.
A comparison of techniques for language model integration in encoder-decoder speech recognition. S Toshniwal, A Kannan, C Chiu, Y Wu, T N Sainath, K Livescu, 2018 IEEE Spoken Language Technology Workshop (SLT). S. Toshniwal, A. Kannan, C. Chiu, Y. Wu, T. N. Sainath, and K. Livescu, "A comparison of techniques for language model integration in encoder-decoder speech recognition," in 2018 IEEE Spoken Language Technology Workshop (SLT), 2018, pp. 369-375.
Librispeech: An asr corpus based on public domain audio books. V Panayotov, G Chen, D Povey, S Khudanpur, IEEE Int. Conf. Acoust., Speech, and Signal Processing. V. Panayotov, G. Chen, D. Povey, and S. Khudanpur, "Lib- rispeech: An asr corpus based on public domain audio books," in IEEE Int. Conf. Acoust., Speech, and Signal Processing, April 2015, pp. 5206-5210.
Conformer: Convolution-augmented Transformer for Speech Recognition. A Gulati, J Qin, C.-C Chiu, N Parmar, Y Zhang, J Yu, W Han, S Wang, Z Zhang, Y Wu, R Pang, Proc. Interspeech. InterspeechA. Gulati, J. Qin, C.-C. Chiu, N. Parmar, Y. Zhang, J. Yu, W. Han, S. Wang, Z. Zhang, Y. Wu, and R. Pang, "Conformer: Convolution-augmented Transformer for Speech Recognition," in Proc. Interspeech 2020, 2020, pp. 5036-5040.
Streaming On-Device End-to-End ASR System for Privacy-Sensitive Voice-Typing. A Garg, G P Vadisetti, D Gowda, S Jin, A Jayasimha, Y Han, J Kim, J Park, K Kim, S Kim, Y Yoon Lee, K Min, C Kim, Proc. Interspeech. InterspeechA. Garg, G. P. Vadisetti, D. Gowda, S. Jin, A. Jayasimha, Y. Han, J. Kim, J. Park, K. Kim, S. Kim, Y. yoon Lee, K. Min, and C. Kim, "Streaming On-Device End-to-End ASR System for Privacy-Sensitive Voice-Typing," in Proc. Interspeech 2020, 2020, pp. 3371-3375.
A comparison of sequence-to-sequence models for speech recognition. R Prabhavalkar, K Rao, T N Sainath, B Li, L Johnson, N Jaitly, Proc. Interspeech. InterspeechR. Prabhavalkar, K. Rao, T. N. Sainath, B. Li, L. Johnson, and N. Jaitly, "A comparison of sequence-to-sequence models for speech recognition," in Proc. Interspeech 2017, 2017, pp. 939-943.
Fixed-point feedforward deep neural network design using weights +1, 0, and -1. K Hwang, W Sung, 2014 IEEE Workshop on Signal Processing Systems (SiPS). K. Hwang and W. Sung, "Fixed-point feedforward deep neural network design using weights +1, 0, and -1," in 2014 IEEE Workshop on Signal Processing Systems (SiPS), 2014, pp. 1-6.
Personalized speech recognition on mobile devices. I Mcgraw, R Prabhavalkar, R Alvarez, M G Arenas, K Rao, D Rybach, O Alsharif, H Sak, A Gruenstein, F Beaufays, C Parada, 2016 IEEE International Conference on Acoustics, Speech and Signal Processing. I. McGraw, R. Prabhavalkar, R. Alvarez, M. G. Arenas, K. Rao, D. Rybach, O. Alsharif, H. Sak, A. Gruenstein, F. Beaufays, and C. Parada, "Personalized speech recognition on mobile devices," in 2016 IEEE International Conference on Acous- tics, Speech and Signal Processing (ICASSP), 2016, pp. 5955- 5959.
Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding. S Han, H Mao, W J Dally, 4th International Conference on Learning Representations, ICLR 2016. Bengio and Y. LeCunSan Juan, Puerto RicoConference Track ProceedingsS. Han, H. Mao, and W. J. Dally, "Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding," in 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, Y. Bengio and Y. LeCun, Eds., 2016.
Compression of end-to-end models. R Pang, T Sainath, R Prabhavalkar, S Gupta, Y Wu, S Zhang, C.-C Chiu, Proc. Interspeech. InterspeechR. Pang, T. Sainath, R. Prabhavalkar, S. Gupta, Y. Wu, S. Zhang, and C.-C. Chiu, "Compression of end-to-end models," in Proc. Interspeech 2018, 2018, pp. 27-31.
Hierarchical Multi-Stage Word-to-Grapheme Named Entity Corrector for Automatic Speech Recognition. A Garg, A Gupta, D Gowda, S Singh, C Kim, Proc. Interspeech. InterspeechA. Garg, A. Gupta, D. Gowda, S. Singh, and C. Kim, "Hierarchical Multi-Stage Word-to-Grapheme Named Entity Corrector for Automatic Speech Recognition," in Proc. Interspeech 2020, 2020, pp. 1793-1797.
| [] |
[
"CSCD-IME: Correcting Spelling Errors Generated by Pinyin IME",
"CSCD-IME: Correcting Spelling Errors Generated by Pinyin IME"
] | [
"Yong Hu \nWeChat AI\nTencent IncChina\n",
"Fandong Meng fandongmeng@tencent.com \nWeChat AI\nTencent IncChina\n",
"Jie Zhou \nWeChat AI\nTencent IncChina\n"
] | [
"WeChat AI\nTencent IncChina",
"WeChat AI\nTencent IncChina",
"WeChat AI\nTencent IncChina"
] | [] | Chinese Spelling Correction (CSC) is a task to detect and correct spelling mistakes in texts. In fact, most of Chinese input is based on pinyin input method, so the study of spelling errors in this process is more practical and valuable. However, there is still no research dedicated to this essential scenario. In this paper, we first present a Chinese Spelling Correction Dataset for errors generated by pinyin IME (CSCD-IME), including 40,000 annotated sentences from real posts of official media on Sina Weibo. Furthermore, we propose a novel method to automatically construct large-scale and high-quality pseudo data by simulating the input through pinyin IME. A series of analyses and experiments on CSCD-IME show that spelling errors produced by pinyin IME hold a particular distribution at pinyin level and semantic level and are challenging enough. Meanwhile, our proposed pseudo-data construction method can better fit this error distribution and improve the performance of CSC systems. Finally, we provide a useful guide to using pseudo data, including the data scale, the data source, and the training strategy 1 . | 10.48550/arxiv.2211.08788 | [
"https://export.arxiv.org/pdf/2211.08788v2.pdf"
] | 253,553,639 | 2211.08788 | 1945e2fe3a13cb0561269797bb035937e09bacea |
CSCD-IME: Correcting Spelling Errors Generated by Pinyin IME
Yong Hu
WeChat AI
Tencent IncChina
Fandong Meng fandongmeng@tencent.com
WeChat AI
Tencent IncChina
Jie Zhou
WeChat AI
Tencent IncChina
CSCD-IME: Correcting Spelling Errors Generated by Pinyin IME
Chinese Spelling Correction (CSC) is a task to detect and correct spelling mistakes in texts. In fact, most of Chinese input is based on pinyin input method, so the study of spelling errors in this process is more practical and valuable. However, there is still no research dedicated to this essential scenario. In this paper, we first present a Chinese Spelling Correction Dataset for errors generated by pinyin IME (CSCD-IME), including 40,000 annotated sentences from real posts of official media on Sina Weibo. Furthermore, we propose a novel method to automatically construct large-scale and high-quality pseudo data by simulating the input through pinyin IME. A series of analyses and experiments on CSCD-IME show that spelling errors produced by pinyin IME hold a particular distribution at pinyin level and semantic level and are challenging enough. Meanwhile, our proposed pseudo-data construction method can better fit this error distribution and improve the performance of CSC systems. Finally, we provide a useful guide to using pseudo data, including the data scale, the data source, and the training strategy 1 .
Introduction
Chinese spelling correction (CSC) is a task to detect and correct spelling mistakes in Chinese sentences. It has always been an important Chinese NLP task and has gradually become an indispensable sub-module for many downstream tasks, such as search query correction (Gao et al., 2010), named entity recognition (Wan et al., 2011), and machine translation (Zheng et al., 2020).
Further exploring the process of input, unlike alphabetic languages, the input of Chinese characters must rely on an input method (or input method editor, commonly abbreviated IME 2). Among all Chinese IMEs, pinyin IME 3 is the most popular one, used by more than 97% of Chinese people (Chen and Lee, 2000). We can infer that most spelling errors are made by people when they input through pinyin IME, such as typing the wrong pinyin, clicking the wrong words, etc. It will be more practical and valuable to pay attention to this scenario. Therefore, this paper focuses on spelling errors generated by pinyin IME.

Figure 1: It is hard to reproduce the sampled spelling error from SIGHAN (misspelling "错误" as "错勿") by pinyin IME, no matter what input form is used.

1 https://github.com/nghuyong/cscd-ime
2 https://en.wikipedia.org/wiki/Input_method
3 https://en.wikipedia.org/wiki/Pinyin_input_method
However, there is still no professional benchmark dataset for errors generated by pinyin IME. In this situation, previous research all uses SIGHAN13-15 (Wu et al., 2013; Yu et al., 2014; Tseng et al., 2015) as baseline datasets, but they cannot accurately evaluate the real performance of CSC systems. (1) The main reason is that errors in the SIGHAN datasets come from mistakes in essays written by teenage students (SIGHAN13) or Chinese as a foreign language (CFL) learners (SIGHAN14-15), which has a big gap with errors generated by native speakers through pinyin IME. For example, as shown in Figure 1, many errors in SIGHAN would hardly occur if people used pinyin IME, because a modern pinyin IME always has a strong language model and would not recommend very unreasonable candidate words. (2) In addition, sentences in the SIGHAN datasets are in traditional Chinese, and even after converting to Simplified Chinese, there is still a big difference in expression. (3) The data size is too small; for example, SIGHAN13 has only 1,000 test samples, which may make the evaluation results unreliable.

Therefore, in this paper, we first present a Chinese Spelling Correction Dataset for errors generated by pinyin IME (CSCD-IME). It contains 40,000 manually labeled sentences, which is ten times larger than the SIGHAN datasets. As far as we know, this is also the largest dataset for the CSC task. As shown in Figure 2, sentences in the dataset are all real posts of official media (such as Sina Sports) on Sina Weibo and are all in Simplified Chinese. We can believe that these errors are generated by editors in the process of typing through pinyin IME, which is consistent with errors made by native speakers in the real input scenario. In order to study the error distribution in depth, we design a tagging system at the pinyin and semantic levels. The comparative analysis shows that the error distribution of CSCD-IME is quite different from existing datasets, with a much higher proportion of word-level errors.
In previous research, additional pseudo data is usually used to improve the performance of CSC systems, often constructed by ASR-based methods (Wang et al., 2018) or confusion-set-based methods. However, these construction methods still have a gap with the real Chinese input scene. Therefore, we propose a novel pseudo-data construction method based on pinyin IME. Specifically, we simulate the input process through pinyin IME and add sampled noise to generate pseudo data. In this way, pseudo data can be constructed at scale and with high quality.
In the end, we experiment with several strong CSC baseline models and perform a series of analyses. Experiments show that CSCD-IME is a challenging dataset and that our proposed pseudo-data construction method can better fit the real error distribution than previous methods and improve the performance of CSC systems. We also discover how to better use pseudo data and provide a useful strategy in the appendix, covering the data scale, the data source, and the training strategy.
Related Work

CSC Dataset: CSC datasets can be divided into two categories according to the user group: CFL learners and native speakers. Among them, the former has richer data resources, including SIGHAN14-15 (Yu et al., 2014; Tseng et al., 2015), NLPCC2018 (Zhao et al., 2018), NLPTEA2020 (Rao et al., 2020a), and YACLC. Although the latter has a broader application scenario, the related data sources are fewer: only SIGHAN13 (Wu et al., 2013) and CTC2021 (Zhao et al., 2022). Moreover, these two datasets are very small, with only about 1,000 test sentences each. In addition, SIGHAN13 is a traditional Chinese dataset, and CTC2021 is a Chinese grammar correction dataset. These are all different from the real scenario of Simplified Chinese spelling error correction.
CSC Data Augmentation: In order to make up for the lack of labeled data, previous studies usually build additional pseudo data to increase performance. The mainstream method is based on the confusion set (Zhang et al., 2020); the pseudo data constructed in this way is extensive in size but relatively low in quality because of the big gap from the true error distribution. Another relatively high-quality construction method is based on ASR or OCR (Wang et al., 2018). However, this method requires additional labeled ASR or OCR data and is inconsistent with the pinyin-IME-based input scenario.
CSC Models: In recent years, BERT-style models (Devlin et al., 2018) have dominated research on the CSC task (Hong et al., 2019; Liu et al., 2022; Zhu et al., 2022; Zhang et al., 2020; Bao et al., 2020). However, due to the lack of large-scale and high-quality datasets, the performance of these models is greatly limited.
3 CSCD-IME

In this section, we show how CSCD-IME is built and study its error distribution.
3.1 Data Collection
Our goal is to build a dataset that fits the real Chinese spelling error correction scenario, where most spelling errors are generated by pinyin IME. We identified a large-scale data source with this property: LCSTS (Hu et al., 2015), a large-scale Chinese short text summarization dataset. Sentences in this dataset all come from posts of popular Chinese media on Sina Weibo, such as People's Daily. These accounts are verified on Weibo and labeled with a blue 'V', and usually have dedicated editors responsible for editing and publishing news and information. We can infer that these editors generally use pinyin IME to publish posts, so spelling errors in this editing process are generated by pinyin IME, which is consistent with our target scenario. In addition, LCSTS is a large-scale and general dataset, including over 2 million posts and covering multiple domains such as politics, economics, military, movies, and games. Therefore, we use LCSTS as our data source.
3.2 Data Selection
We split the posts in LCSTS into sentences and obtain over 8 million of them. Obviously, it is not realistic to manually label all these sentences, and most of them are completely correct, so we use an error detection model and rules to filter out the correct ones.
Detection Model: Given a source sequence $X = (x^w_1, x^w_2, \ldots, x^w_N)$, the detection model checks whether each character $w_i$ ($1 \le i \le N$) is correct or not. We use the labels 1 and 0 to mark the misspelled and the correct character, respectively. The detection model can be formalized as follows:
$$y^d = \mathrm{sigmoid}\left(W^T E(e)\right) \quad (1)$$
where $e = (e^w_1, e^w_2, \ldots, e^w_N)$ is the word embedding and $E(*)$ is the pre-trained encoder. The output $y^d = (y^d_1, y^d_2, \ldots, y^d_N)$ is a probability sequence, and $y^d_i \in (0, 1)$ is the probability that character $x^w_i$ is erroneous.

Training: We follow the successful experience of the NLPTEA2020 task (Rao et al., 2020b) and use a Chinese ELECTRA-Large discriminator model 4 (Clark et al., 2020) to initialize the detection model. Following previous research, we train the detection model on SIGHAN13-15's training data and Wang's pseudo data (Wang et al., 2018), and save the best checkpoint according to SIGHAN13-15's test data.
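To make Eq. (1) concrete, here is a minimal PyTorch sketch of such a detection head on top of a pre-trained encoder. The use of Hugging Face `transformers` and the checkpoint name are our own illustrative assumptions, not necessarily the paper's exact setup.

```python
# A sketch of the detection model in Eq. (1): a pre-trained encoder E(*)
# followed by a per-character sigmoid classifier W.
import torch
import torch.nn as nn
from transformers import AutoModel

class SpellingDetector(nn.Module):
    # encoder_name is a placeholder; the paper initializes from a
    # Chinese ELECTRA-Large discriminator (footnote 4).
    def __init__(self, encoder_name="hfl/chinese-electra-180g-large-discriminator"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)  # E(*)
        self.w = nn.Linear(self.encoder.config.hidden_size, 1)  # W

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        # y^d_i in (0, 1): probability that character i is erroneous
        return torch.sigmoid(self.w(hidden)).squeeze(-1)
```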
Filtering: We then use the trained detection model to filter out correct sentences. For an input sentence, we obtain the error probability of each character, $y^d = (y^d_1, y^d_2, \ldots, y^d_N)$. Previous studies have shown that detection models do not work well on special high-frequency characters (such as "的", "地", "得") because of the poor labeling of these characters in SIGHAN (Xu et al., 2021), and also tend to over-correct low-frequency entity words (such as person names and place names) (Zhang et al., 2020). Therefore, we use a Chinese lexical analysis tool (LAC) (Jiao et al., 2018) to detect these special and entity characters in the input sentence and divide characters into three categories: $C_{special}$, $C_{entity}$, and $C_{normal}$. We then calculate the maximum error probability within each category (if a category is empty, the maximum is 0), and a sentence is considered correct only if the maximum error probability of every category is below the corresponding threshold. This can be formalized as follows:
$$\max\{y^d_i \mid w_i \in C_{special}\} < \delta_{special}, \qquad \max\{y^d_i \mid w_i \in C_{entity}\} < \delta_{entity}, \qquad \max\{y^d_i \mid w_i \in C_{normal}\} < \delta_{normal} \quad (2)$$
where $\delta_{special}$, $\delta_{entity}$ and $\delta_{normal}$ are thresholds.

4 https://github.com/ymcui/Chinese-ELECTRA
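A minimal sketch of the category-wise filtering rule in Eq. (2) is given below. The category assignment (here a plain list) comes from LAC in the paper, and the threshold values are placeholders, not the ones actually used.

```python
# Sentence-level filtering rule of Eq. (2): a sentence is kept as correct
# only if every category's maximum error probability is below its threshold.
THRESHOLDS = {"special": 0.9, "entity": 0.9, "normal": 0.5}  # placeholders

def is_correct(probs, categories, thresholds=THRESHOLDS):
    # probs[i] is y^d_i from the detector; categories[i] is the class of w_i
    for cat, delta in thresholds.items():
        cat_probs = [p for p, c in zip(probs, categories) if c == cat]
        # the maximum over an empty category is defined as 0
        if max(cat_probs, default=0.0) >= delta:
            return False
    return True
```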
We use the above method to filter out about 91.2% of the sentences, keeping about 700,000 sentences that may contain spelling errors. We randomly select 2,000 of the filtered-out sentences to check whether they are really correct; the result shows an accuracy of 99.2%, which is in line with our expectations. From the remaining roughly 700,000 sentences, we cumulatively selected about 50,000 random sentences for manual annotation.

Table 2: The tagging system at pinyin level and semantic level.

Level | Tag | Description | Example
Pinyin | same pinyin | the pinyin of the correct character is the same as that of the wrong one | 现再(zai) -> 现在(zai); now
Pinyin | fuzzy pinyin | the pinyin of the correct character is the fuzzy pinyin of the wrong one | 现宅(zhai) -> 现在(zai); now
Pinyin | similar pinyin | the edit distance between the two pinyin is 1 | 现砸(za) -> 现在(zai); now
Pinyin | dissimilar pinyin | the edit distance between the two pinyin is greater than 1 | 现太(tai) -> 现在(zai); now
Semantic | entity word | the correct word is a named entity, such as a place name | 卢(lu)山 -> 庐(lu)山; Lushan Mountain
Semantic | normal word | the wrong word is a valid Chinese word | 开心地驴友(lv, you) -> 开心地旅游(lv, you); travel happily
Semantic | special char | the error is a Chinese high-frequency error-prone character | 开心的(de)旅游 -> 开心地(de)旅游; travel happily
Semantic | normal char | the wrong word is not a valid Chinese word or its length is 1 | 开心地旅有(you) -> 开心地旅游(you); travel happily
3.3 Data Annotation
We recruit a group of educated native speakers for data annotation, and each sentence is annotated twice by different people to ensure annotation quality. Specifically, the annotator needs to detect and correct the spelling errors in a sentence and give the final correct sentence.
To further clarify the labeling rules and reduce disagreements, sentences that may be ambiguous are discarded directly. There are three such scenarios: (1) the sentence is very unsmooth and the annotator does not understand its meaning; (2) there are multiple possible correction results for a typo in the sentence, and no unique correction with a pinyin edit distance less than 2 can be found; (3) the sentence contains complex grammatical errors. Therefore, every sentence that is not discarded during the labeling process is semantically clear and has a unique correction result.
In the end, we obtain 40,000 labeled and qualified sentences, which make up the CSCD-IME dataset. The whole dataset is randomly divided into a training set, a development set, and a test set according to the ratio 6:1:1.
3.4 Basic Statistics Analysis
As shown in Table 1, the data size of CSCD-IME is much larger than that of existing datasets. In terms of average sentence length and average number of errors per sentence, CSCD-IME is closer to SIGHAN13, with longer sentences and fewer errors, because both SIGHAN13 and CSCD-IME are datasets for native speakers, whereas SIGHAN14 and SIGHAN15 are datasets for CFL learners. As for the proportion of positive and negative samples, CSCD-IME is more balanced.
3.5 Error Distribution Analysis
To analyze the error distribution of spelling mistakes produced by pinyin IME in depth, we first design a tagging system at the pinyin level and the semantic level, and then conduct an error distribution study on CSCD-IME and previous datasets.
Tag definition: As shown in Table 2, we design four pinyin-level and four semantic-level tags. For pinyin-level tags, we define different tags according to the degree of pinyin similarity between the wrong and the correct character. In practice, we use pypinyin 5 to obtain the pinyin information and add the pinyin-level tag. Note that for fuzzy pinyin, we follow the definition of Sogou IME 6. As for semantic-level tags, they can be roughly divided into two categories: word-level tags (entity word and normal word) and character-level tags (special char and normal char). In practice, we first segment the corrected sentence with LAC (Jiao et al., 2018) and get the word corresponding to the wrong character. For a given word, we then add the semantic tag based on LAC's POS tag and an entity dictionary. Note that the special char refers to "他/她/它/的/地/得".
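The pinyin-level rules above can be implemented in a few lines of Python; the sketch below uses pypinyin and a standard edit distance. The fuzzy-pinyin pairs shown are only a small illustrative subset, not the full Sogou IME definition the paper follows.

```python
# Assign a pinyin-level tag (Table 2) to a wrong/correct character pair.
from pypinyin import lazy_pinyin

FUZZY_PAIRS = [("zh", "z"), ("ch", "c"), ("sh", "s"), ("n", "l")]  # subset

def edit_distance(a, b):
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[len(b)]

def is_fuzzy(p1, p2):
    for a, b in FUZZY_PAIRS:
        for x, y in ((a, b), (b, a)):
            if p1.startswith(x) and p2 == y + p1[len(x):]:
                return True
    return False

def pinyin_tag(wrong_char, correct_char):
    pw, pc = lazy_pinyin(wrong_char)[0], lazy_pinyin(correct_char)[0]
    if pw == pc:
        return "same pinyin"
    if is_fuzzy(pw, pc):
        return "fuzzy pinyin"
    return "similar pinyin" if edit_distance(pw, pc) == 1 else "dissimilar pinyin"

print(pinyin_tag("宅", "在"))  # fuzzy pinyin (zhai vs. zai)
```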
Pinyin-level analysis: As shown in the upper part of Figure 3, in all four datasets, same pinyin errors account for the largest proportion and dissimilar pinyin errors for the smallest. Compared with the SIGHAN datasets, this feature is more prominent in CSCD-IME, which has a higher proportion of same pinyin errors (up to 82.4%) and a lower percentage of dissimilar pinyin errors (only 2.2%). This is because pinyin IME can automatically fix pinyin typos based on the input context (Jia and Zhao, 2014).
Semantic-level analysis: As shown in the lower part of Figure 3, the proportion of special char errors in CSCD-IME and SIGHAN13 is lower than that in SIGHAN14 and SIGHAN15, which means native speakers make fewer special char errors than CFL learners. It can also be observed that the percentage of word-level errors (entity word and normal word) in CSCD-IME is significantly higher than in the SIGHAN datasets, nearly doubled. This is because when we input a whole word (Figure 1(b)), pinyin IME will generally recommend a valid word rather than a very strange "word". Note that the high proportion of word-level errors is the most important feature of CSCD-IME. It is also the biggest difference from the SIGHAN datasets and makes CSCD-IME challenging. We will conduct a deeper analysis of word-level versus character-level errors in Section 5.

6 https://pinyin.sogou.com/help.php?list=3&q=5
4 Data Augmentation
Manually labeling data for the CSC task is expensive (Wang et al., 2018), so how to construct pseudo data has always been a valuable topic. In this section, we propose a novel method to construct large-scale and high-quality pseudo data.
4.1 Data Preparation
The basic principle of pseudo-data construction is to add noise to correct sentences to generate sentences with spelling errors, so we first need to prepare completely correct sentences. In fact, this kind of text data is common on the Internet, e.g., sentences in Wikipedia or classic books, which also ensures that we can build a large-scale dataset. In practice, it is better to choose in-domain text relevant to the actual application scenario.
4.2 Noise Simulation
Given a correct sentence, we use the pinyin IME to simulate the input of this sentence and add noise in the input process to construct pseudo data.
We first sample a pinyin noise $\upsilon_{pinyin}$, a token-granularity noise $\upsilon_{token}$, and a number-of-errors noise $\upsilon_{num}$. As discussed in Section 3.5, $\upsilon_{pinyin}$ can be one of same pinyin, fuzzy pinyin, similar pinyin and dissimilar pinyin; $\upsilon_{token}$ can be one of word and character; and $\upsilon_{num}$ is a number greater than 0. For convenience, we use token to denote a Chinese character or word. The distributions of $\upsilon_{pinyin}$, $\upsilon_{token}$, and $\upsilon_{num}$ can be obtained from prior knowledge of the scene or from labeled data.
In the generation process, we first determine the number of errors to generate based on the sampled $\upsilon_{num}$. For each error, as shown in Figure 4, we randomly select a word (Figure 4(a-c)) or a character (Figure 4(d-e)) from the correct sentence based on the sampled $\upsilon_{token}$. We then type the correct text before the selected token and enter the correct (Figure 4(a)) or a wrong pinyin (Figure 4(c)) of the selected token based on the sampled $\upsilon_{pinyin}$. If the first token recommended by the pinyin IME is the correct token, we randomly select the second or the third candidate as the noise (Figure 4(b)); otherwise, we directly select the first candidate as the noise (Figure 4(a)). Finally, we replace the correct token with the noise token in the original sentence.
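The following sketch shows one way to implement this sampling loop. The helpers `to_pinyin`, `perturb_pinyin`, and `ime_candidates` are placeholders; the last stands for querying the pinyin IME's candidate list (the paper uses Google pinyin IME), and the sampling distribution is an illustrative one to be estimated from labeled data such as CSCD-IME.

```python
# Generate one noised token in a sentence, following the process above.
# Assumes the IME returns at least three candidates for any input.
import random

PINYIN_NOISE = {"same pinyin": 0.82, "fuzzy pinyin": 0.08,   # illustrative
                "similar pinyin": 0.08, "dissimilar pinyin": 0.02}

def sample(dist):
    return random.choices(list(dist), weights=list(dist.values()))[0]

def add_one_noise(tokens, to_pinyin, perturb_pinyin, ime_candidates):
    idx = random.randrange(len(tokens))        # token selected by v_token
    context = "".join(tokens[:idx])            # text typed before the token
    pinyin = to_pinyin(tokens[idx])
    if sample(PINYIN_NOISE) != "same pinyin":  # v_pinyin: type a wrong pinyin
        pinyin = perturb_pinyin(pinyin)
    cands = ime_candidates(context, pinyin)    # IME recommendation list
    if cands[0] == tokens[idx]:
        noise = random.choice(cands[1:3])      # take the 2nd or 3rd candidate
    else:
        noise = cands[0]                       # take the 1st candidate
    return tokens[:idx] + [noise] + tokens[idx + 1:]
```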
4.3 Language Model Filtering
It can be observed that the pseudo data generated by this method may in fact be a correct sentence (Figure 4(b)), so we use an n-gram language model (LM) to perform secondary filtering. Specifically, we calculate the perplexity (PPL) of the generated sentence and of the original sentence, and we consider the generated noise to be truly erroneous only when the PPL increases by a relative margin of $\delta$ after adding the noise. This can be formalized as follows:

$$\frac{\mathrm{PPL}(noised) - \mathrm{PPL}(origin)}{\mathrm{PPL}(origin)} > \delta \quad (3)$$

In practice, the value of $\delta$ is adjusted according to the selected language model. This additional step ensures that we build a high-quality dataset. Finally, based on the correct sentences from the LCSTS dataset (Section 3.2), the error distribution from the CSCD-IME dataset (Sections 3.4 and 3.5), and the Google pinyin IME 7, we build a large-scale and high-quality pseudo dataset, LCSTS-IME-2M, containing 2 million sentences.
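A minimal sketch of the LM post-filter in Eq. (3), assuming a KenLM n-gram model; the model path and the threshold value are placeholders.

```python
# Keep a noised sentence only if adding the noise raises the LM perplexity
# by a relative margin greater than delta, as in Eq. (3).
import kenlm

lm = kenlm.Model("zh_char.arpa")  # placeholder path to an n-gram LM

def keep_noised(origin, noised, delta=0.2):
    # characters are space-separated for a character-level n-gram LM
    ppl_o = lm.perplexity(" ".join(origin))
    ppl_n = lm.perplexity(" ".join(noised))
    return (ppl_n - ppl_o) / ppl_o > delta
```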
5 Experiment
In this section, we provide the benchmark of CSCD-IME and verify the effectiveness of our pseudo-data construction method.
5.1 Basic Settings
The basic experimental setup includes data, evaluation methods, and baseline models.
Data: We use CSCD-IME as the basic dataset, including the training set, development set, and test set. Note that for pseudo data, we conduct two-step training: pretraining and then finetuning, using only pseudo data in the pretraining step, and only manually labeled data in the finetuning step.
Metrics: We use the F1-scores of detection and correction at sentence level and character level as evaluation metrics. The sentence-level metrics follow the calculation in FASPell (Hong et al., 2019). As for character-level metrics, we do not follow the method in SpellGCN (Cheng et al., 2020): we compute the correction metric over all characters instead of only the correctly detected ones.

Table 3: The performance (%) of baseline models on CSCD-IME with or without different pseudo datasets. Note that "+" means pretraining on pseudo data and then finetuning on CSCD-IME's training data, and "*" means no finetuning step, evaluating directly after pretraining on pseudo data.

Table 4: The F1-score (%) of correction at character level on character-level errors and word-level errors, and the decline of the latter compared with the former.
Baselines: We choose the following three strong baseline models. BERT (Devlin et al., 2018) directly fine-tunes the standard masked language model to generate fixed-length corrections. Soft-Masked BERT (Zhang et al., 2020) utilizes a detection model to help the correction model learn the right context. PLOME (Liu et al., 2021) integrates phonetic and visual features into the pre-trained model and has an additional pretraining step on a large-scale confusion set based pseudo dataset.
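For reference, the sentence-level correction metric described above can be computed as in the sketch below. This follows the common FASPell-style convention that a true positive requires the model to change the sentence and to match the reference exactly; exact counting details vary across papers, so this is an illustrative variant rather than the paper's exact script.

```python
# Sentence-level correction precision/recall/F1.
def sentence_level_prf(sources, predictions, references):
    tp = fp = fn = 0
    for src, pred, ref in zip(sources, predictions, references):
        predicted_change, has_error = src != pred, src != ref
        if predicted_change and pred == ref:
            tp += 1                      # changed and fully correct
        else:
            if predicted_change:
                fp += 1                  # changed but wrong
            if has_error:
                fn += 1                  # erroneous sentence not fixed
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```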
5.2 CSCD-IME Benchmark
In this part, we provide basic benchmark performances on CSCD-IME in two settings: without any pseudo data and with the large-scale pseudo dataset LCSTS-IME-2M.
As shown in Table 3, if we do not add any pseudo data, the overall F1-score of correction at character level is only about 67%; when introducing 2 million pseudo sentences, the performance increases a lot, but the best result is still just over 76%. There is still much room for improvement.
We further analyze the models' performance (in the setting without any pseudo data) on different semantic tags. Note that here we only analyze errors with the same pinyin, to reduce the impact of pinyin.
As shown in Table 4, all three models perform worse on word-level errors than on character-level errors, with a maximum gap of 5%. The iteration of CSC models in recent years, for example from BERT to PLOME, is reflected in improvements on character-level errors but not on word-level errors. For word-level errors, the model needs a strong understanding of context and entities, so these errors are more difficult and require further effort. Compared with existing datasets, the biggest feature of CSCD-IME is its high proportion of word-level errors, so from this point of view, CSCD-IME is challenging.
5.3 Comparison of Pseudo Datasets
In this subsection, we compare pseudo datasets constructed by different methods. Note that PLOME already has a pretraining step on large-scale pseudo data, so this baseline is not used here. We choose the existing ASR(/OCR)-based pseudo data (Wang et al., 2018), containing over 271K sentences, as the experimental dataset, abbreviated as Wang-asr-271K. This pseudo dataset is also the most famous one and has often been used to improve performance in previous research. We further extract the correct sentences from Wang-asr-271K to construct the confusion set based pseudo dataset Wang-cs-271K and the pinyin IME based pseudo dataset Wang-ime-271K.
As shown in Table 3, our proposed pinyin IME based pseudo-data construction method is more effective than all existing methods. Compared with Wang-asr-271K and Wang-cs-271K, adding Wang-ime-271K (ours) always obtains the best F1-score on both sentence-level and character-level metrics. To observe the effect of each method more clearly, when we only use pseudo data in the training process, our method significantly outperforms the other methods. The key to the quality of pseudo data is its error distribution; as shown in Figure 6, our method better fits the real scene. More analysis is provided in Appendix A.
5.4 Impact of LM Post-Filtering
In this part, we investigate the effect of the LM filtering described in Section 4.3. We reconstruct Wang-ime-271K with different LM filtering strategies. We use the basic BERT model for this experiment and only use pseudo data in the training process so as to study the differences more clearly.
As shown in Table 5, if we do not use LM filtering, more unexpected noise is introduced; for example, the generated pseudo data may in fact be a completely correct sentence. However, when the threshold is too low, even below 0, the generated errors are more confusable, and the model has good recall but poor precision. In contrast, if the threshold is too high, the generated errors are relatively simple, so precision is better but recall is lower. Therefore, LM filtering is necessary, and a moderate threshold should be chosen.
5.5 Error Analysis
We analyze in detail the errors that none of the three baseline models can handle correctly and summarize them into the following three main types.
The over-correction of low-frequency expressions (Liu et al., 2022). Since the backbone model is BERT and its pre-training task is masked token recovery, these CSC models tend to over-correct valid low-frequency expressions, such as people's names, place names, and new Internet words, into more frequent expressions. For instance, "墨尔本矩形体育场" (Melbourne Rectangular Stadium) is a location entity and is over-corrected to "墨尔本巨形体育场" (Melbourne Mega Stadium).
The missed correction of expressions that require strong contextual reasoning ability. If the wrong word in the sentence is a high-frequency word, the model needs to combine the semantic information of the context to correct it. For example, in the sentence "几辆车停在原地争执,后面的车辆也无法进程" (since several cars parked and argued, the cars behind could not enter the city), the wrong word "进程" (process) is a common word, but it is wrong in this sentence; based on the context, it should be "进城" (enter the city).
The failed correction of grammar-related errors. The CSC task also contains a certain number of grammar-related errors, such as the use of function words, auxiliary words, and verb-object fixed collocations, and these baseline models cannot effectively handle such errors. For example, in the phrase "准确的传达情绪" (convey emotions accurately), the auxiliary word between the adverb and the verb should be "地", not "的", according to Chinese grammar rules.
6 Conclusion

In this paper, we focus on the most common mistakes in practical CSC applications, i.e., spelling errors generated by pinyin IME. We first build a new dataset, CSCD-IME, for this field, which is also the largest dataset for the CSC task. Furthermore, we propose a novel method to construct large-scale and high-quality pseudo data based on pinyin IME. A series of analyses and experiments shows that CSCD-IME has a particular error distribution and is a challenging dataset, and that our pseudo-data construction method better fits this real error distribution and is more effective than all previous methods. We hope this data resource and our findings will advance follow-up research.
A Pseudo Data Analysis
A.1 Error Distribution
As shown in Figure 6, we study the error distribution of pseudo data generated by different methods at the pinyin level and the semantic level. It can be observed that our pseudo-data construction method is the most consistent with CSCD-IME, which means that our method matches the real pinyin IME input scenario well. In contrast, the confusion set based method and the ASR based method have a big gap with the true error distribution.
A.2 Case Study
We sample more examples in Table 6. It can be seen that the confusion set based method may produce errors that are completely out of context, which is far from the real input scene. The ASR based method is better, but the constructed errors are mostly character-level errors. Meanwhile, since the ASR based method does not have an LM filtering module, the generated noise may in fact be correct, as in the third case in Table 6. In contrast, our method generates high-quality pseudo data, including both word-level and character-level errors.
B Guide to Using Pseudo Data
In this section, we study how to better use pseudo data and provide several recommendations. Note that PLOME already has a pretraining step on large-scale pseudo data, so this baseline is not used here.
B.1 How to Choose Data Source
In this part, we study the influence of the data source when building a pseudo dataset. We randomly split a sub-dataset of the same scale, LCSTS-ime-271K, from our constructed LCSTS-IME-2M, and then compare it with Wang-ime-271K. To make the comparison clear, we only use pseudo data in the training process. As shown in Table 7, compared with Wang-ime-271K, adding LCSTS-ime-271K yields a greater improvement. Since the data of LCSTS-ime-271K and CSCD-IME both come from posts on Sina Weibo, they share the same data distribution, such as language style and average length. Therefore, it is better to choose a data source that is consistent with the real scenario.
B.2 How to Choose Data Size
In this subsection, we check the effect of data size. Specifically, we randomly split LCSTS-IME-2M into sub-datasets of different scales.
As shown in Figure 5, as the size of the pseudo data gradually increases, the overall F1-score generally also increases. When the scale of pseudo data is not very large, especially from 0 to 200K, the improvement is significant. But as the scale continues to increase, the performance grows slowly or even begins to decline. Therefore, it is better to build a large-scale pseudo dataset containing at least one million sentences.
B.3 How to Choose Training Method
In previous studies (Zhang et al., 2020; Cheng et al., 2020), the pseudo data is directly trained with the labeled data together. In this subsection, we compare two training strategies: (1) directly adding the pseudo data to the training set (the popular method in previous research); (2) pretraining on pseudo data and then finetuning on labeled data (the method in our work); a minimal sketch of the two strategies is given below. We carry out experiments on different sizes of pseudo data: 200K and 2M.
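The sketch below contrasts the two strategies, assuming a generic `train(model, data)` routine; all names are placeholders.

```python
# Strategy (1): mix pseudo and labeled data into one training set.
def train_together(model, pseudo_data, labeled_data, train):
    train(model, pseudo_data + labeled_data)

# Strategy (2): pretrain on noisy pseudo data, then finetune on clean
# labeled data so the model ends up fitting the true error distribution.
def train_two_stage(model, pseudo_data, labeled_data, train):
    train(model, pseudo_data)   # pretraining
    train(model, labeled_data)  # finetuning
```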
As shown in Table 8, compared with training together, training in two stages always achieves better results, especially when the size of the pseudo data is large. This is because, compared with manually labeled data, pseudo data is noisier, and the finetuning step allows the model to better fit the true error distribution. Therefore, it is better to use the two-step training method, i.e., pretraining and then finetuning.

Figure 6: The comparison of error distribution (%) at pinyin level (above) and semantic level (below) of different pseudo datasets. If the proportion is less than 2%, the specific value is not indicated in the figure.
Figure 2: A real post of official media on Sina Weibo, where "效力于" is misspelled as "效力与".

Figure 3: The comparison of error distribution (%) at pinyin level (above) and semantic level (below). If the proportion is less than 2%, the specific value is not indicated in the figure.

Figure 4: The process of generating pseudo data by pinyin IME (the orange box indicates the selected noise).

Figure 5: The correction results (%) at character level with different sizes of pseudo data, including 0, 200K, 500K, 1M and 2M.
Table 1: Data statistics, including the size of the training set, the size of the development set, the size of the test set, the size of the whole dataset, the average number of characters per sentence, the ratio of erroneous sentences, and the average number of erroneous characters per erroneous sentence.

Dataset | train size | dev size | test size | all size | avg len. | err. ratio | avg err./sent.
SIGHAN13 | 700 | - | 1,000 | 1,700 | 60.94 | 77.11% | 1.20
SIGHAN14 | 3,437 | - | 1,062 | 4,499 | 49.66 | 86.19% | 1.52
SIGHAN15 | 2,339 | - | 1,100 | 3,439 | 31.10 | 81.82% | 1.33
CSCD-IME | 30,000 | 5,000 | 5,000 | 40,000 | 57.42 | 46.02% | 1.09
Table 5: The correction results (%) at character level for pseudo data with different LM filtering strategies.
Table 6: Pseudo data generated by different methods. Note that O means the original sentence, and cs, asr and ime denote the corresponding noised sentence from Wang-cs-271K, Wang-asr-271K and Wang-ime-271K, respectively.
Table 7: The performance (%) of baseline models on CSCD-IME with different pseudo data sources. As in Table 3, "*" means the model is evaluated directly after pretraining on pseudo data, without finetuning.

Model | Pseudo data | Sent. Det. P/R/F1 | Sent. Cor. P/R/F1 | Char. Det. P/R/F1 | Char. Cor. P/R/F1
BERT | *Wang-ime-271K | 52.41/44.33/48.04 | 46.87/39.64/42.95 | 56.07/49.72/52.70 | 49.69/44.07/46.71
BERT | *LCSTS-ime-271K | 57.13/62.09/59.51 | 51.34/55.80/53.48 | 59.89/67.29/63.38 | 53.45/60.05/56.56
soft-masked BERT | *Wang-ime-271K | 47.80/40.60/43.91 | 41.82/35.52/38.41 | 47.53/45.73/46.61 | 41.61/40.03/40.81
soft-masked BERT | *LCSTS-ime-271K | 59.49/58.36/58.92 | 53.39/52.37/52.87 | 63.54/62.94/63.23 | 56.67/56.13/56.40

Table 8: The performance (%) of baseline models on CSCD-IME with or without pretraining.

Model | Setting | Sent. Det. P/R/F1 | Sent. Cor. P/R/F1 | Char. Det. P/R/F1 | Char. Cor. P/R/F1
BERT | +w/o-prtrain-200K | 77.83/66.48/71.71 | 73.00/62.35/67.26 | 81.37/68.59/74.44 | 76.25/64.28/69.76
BERT | +w-prtrain-200K | 78.22/68.78/73.20 | 74.22/65.26/69.45 | 81.67/70.85/75.87 | 77.34/67.09/71.85
BERT | +w/o-prtrain-2M | 70.28/75.25/72.68 | 66.42/71.12/68.69 | 72.33/79.00/75.52 | 68.53/74.84/71.55
BERT | +w-prtrain-2M | 78.98/73.60/76.20 | 75.63/70.47/72.96 | 82.19/75.75/78.84 | 78.84/72.67/75.63
soft-masked BERT | +w/o-prtrain-200K | 75.90/70.30/72.99 | 72.01/66.70/69.25 | 79.27/73.81/76.44 | 74.98/69.82/72.31
soft-masked BERT | +w-prtrain-200K | 78.97/70.91/74.72 | 75.19/67.52/71.15 | 82.15/73.38/77.52 | 78.25/69.90/73.84
soft-masked BERT | +w/o-prtrain-2M | 70.90/77.77/74.18 | 67.46/73.99/70.57 | 72.59/82.44/77.20 | 68.86/78.20/73.24
soft-masked BERT | +w-prtrain-2M | 79.19/74.86/76.97 | 75.75/71.60/73.62 | 82.39/77.93/80.10 | 78.63/74.37/76.44
5 https://github.com/mozillazg/python-pinyin
7 https://en.wikipedia.org/wiki/Google_Pinyin
References

Zuyi Bao, Chen Li, and Rui Wang. 2020. Chunk-based chinese spelling check with global optimization. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2031-2040.

Zheng Chen and Kai-Fu Lee. 2000. A new statistical approach to chinese pinyin input. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, pages 241-247.

Xingyi Cheng, Weidi Xu, Kunlong Chen, Shaohua Jiang, Feng Wang, Taifeng Wang, Wei Chu, and Yuan Qi. 2020. Spellgcn: Incorporating phonological and visual similarities into language models for chinese spelling check. arXiv preprint arXiv:2004.14166.

Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

Jianfeng Gao, Xiaolong Li, Daniel Micol, Chris Quirk, and Xu Sun. 2010. A large scale ranker-based system for search query spelling correction. International Conference on Computational Linguistics.

Yuzhong Hong, Xianguo Yu, Neng He, Nan Liu, and Junhui Liu. 2019. Faspell: A fast, adaptable, simple, powerful chinese spell checker based on dae-decoder paradigm. In Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019), pages 160-169.

Baotian Hu, Qingcai Chen, and Fangze Zhu. 2015. Lcsts: A large scale chinese short text summarization dataset. arXiv preprint arXiv:1506.05865.

Li Huang, Junjie Li, Weiwei Jiang, Zhiyu Zhang, Minchuan Chen, Shaojun Wang, and Jing Xiao. 2021. Phmospell: Phonological and morphological knowledge guided chinese spelling check. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5958-5967.

Zhongye Jia and Hai Zhao. 2014. A joint graph model for pinyin-to-chinese conversion with typo correction. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1512-1523.

Zhenyu Jiao, Shuqi Sun, and Ke Sun. 2018. Chinese lexical analysis with deep bi-gru-crf network. arXiv preprint arXiv:1807.01882.

Shulin Liu, Shengkang Song, Tianchi Yue, Tao Yang, Huihui Cai, TingHao Yu, and Shengli Sun. 2022. Craspell: A contextual typo robust approach to improve chinese spelling correction. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3008-3018.

Shulin Liu, Tao Yang, Tianchi Yue, Feng Zhang, and Di Wang. 2021. Plome: Pre-training with misspelled knowledge for chinese spelling correction. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2991-3000.

Gaoqi Rao, Erhong Yang, and Baolin Zhang. 2020a. Overview of nlptea-2020 shared task for chinese grammatical error diagnosis. Proceedings of the 6th Workshop on Natural Language Processing Techniques for Educational Applications.

Gaoqi Rao, Erhong Yang, and Baolin Zhang. 2020b. Overview of nlptea-2020 shared task for chinese grammatical error diagnosis. In Proceedings of the 6th Workshop on Natural Language Processing Techniques for Educational Applications, pages 25-35.

Yuen-Hsien Tseng, Lung-Hao Lee, Li-Ping Chang, and Hsin-Hsi Chen. 2015. Introduction to sighan 2015 bake-off for chinese spelling check. CIPS-SIGHAN Joint Conference on Chinese Language Processing.

Xiaojun Wan, Liang Zong, Xiaojiang Huang, Tengfei Ma, Houping Jia, Yuqian Wu, and Jianguo Xiao. 2011. Named entity recognition in chinese news comments on the web. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 856-864.

Dingmin Wang, Yan Song, Jing Li, Jialong Han, and Haisong Zhang. 2018. A hybrid approach to automatic corpus generation for chinese spelling check. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2517-2527.

Shaolei Wang, Baoxin Wang, Jiefu Gong, Zhongyuan Wang, Xiao Hu, Xingyi Duan, Zizhuo Shen, Gang Yue, Ruiji Fu, Dayong Wu, et al. 2020. Combining resnet and transformer for chinese grammatical error diagnosis. In Proceedings of the 6th Workshop on Natural Language Processing Techniques for Educational Applications, pages 36-43.

Yingying Wang, Cunliang Kong, Liner Yang, Yijun Wang, Xiaorong Lu, Renfen Hu, Shan He, Zhenghao Liu, Yun Chen, Erhong Yang, et al. 2021. Yaclc: A chinese learner corpus with multidimensional annotation. arXiv preprint arXiv:2112.15043.

Shih-Hung Wu, Chao-Lin Liu, and Lung-Hao Lee. 2013. Chinese spelling check evaluation at sighan bake-off 2013. CIPS-SIGHAN Joint Conference on Chinese Language Processing.

Heng-Da Xu, Zhongli Li, Qingyu Zhou, Chao Li, Zizhen Wang, Yunbo Cao, Heyan Huang, and Xian-Ling Mao. 2021. Read, listen, and see: Leveraging multimodal information helps chinese spell checking. arXiv preprint arXiv:2105.12306.

Liang-Chih Yu, Lung-Hao Lee, Yuen-Hsien Tseng, and Hsin-Hsi Chen. 2014. Overview of sighan 2014 bake-off for chinese spelling check. CIPS-SIGHAN Joint Conference on Chinese Language Processing.

Ruiqing Zhang, Chao Pang, Chuanqiang Zhang, Shuohuan Wang, Zhongjun He, Yu Sun, Hua Wu, and Haifeng Wang. 2021. Correcting chinese spelling errors with phonetic pre-training. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2250-2261.

Shaohua Zhang, Haoran Huang, Jicong Liu, and Hang Li. 2020. Spelling error correction with soft-masked bert. arXiv preprint arXiv:2005.07421.

Yue Zhang, Zhenghua Li, Zuyi Bao, Jiacheng Li, Bo Zhang, Chen Li, Fei Huang, and Min Zhang. 2022. Mucgec: a multi-reference multi-source evaluation dataset for chinese grammatical error correction. arXiv preprint arXiv:2204.10994.

Honghong Zhao, Baoxin Wang, Dayong Wu, Wanxiang Che, Zhigang Chen, and Shijin Wang. 2022. Overview of ctc 2021: Chinese text correction for native speakers. arXiv preprint arXiv:2208.05681.

Yuanyuan Zhao, Nan Jiang, Weiwei Sun, and Xiaojun Wan. 2018. Overview of the nlpcc 2018 shared task: Grammatical error correction. International Conference on Natural Language Processing and Chinese Computing.

Renjie Zheng, Mingbo Ma, Baigong Zheng, Kaibo Liu, and Liang Huang. 2020. Opportunistic decoding with timely correction for simultaneous translation. arXiv preprint arXiv:2005.00675.

Chenxi Zhu, Ziqiang Ying, Boyu Zhang, and Feng Mao. 2022. Mdcspell: A multi-task detector-corrector framework for chinese spelling correction. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1244-1253.
| [
"https://github.com/nghuyong/cscd-ime",
"https://github.com/ymcui/Chinese-ELECTRA",
"https://github.com/mozillazg/python-pinyin"
] |
[
"Attention Link: An Efficient Attention-Based Low Resource Machine Translation Architecture",
"Attention Link: An Efficient Attention-Based Low Resource Machine Translation Architecture"
] | [
"Zeping Min zpm@pku.edu.cn \nSchool of Mathematical Sciences\nPeking University\nBeijingChina\n"
] | [
"School of Mathematical Sciences\nPeking University\nBeijingChina"
] | [] | Transformers have achieved great success in machine translation, but transformer-based NMT models often require millions of bilingual parallel corpus for training. In this paper, we propose a novel architecture named as attention link (AL) to help improve transformer models' performance, especially in low training resources. We theoretically demonstrate the superiority of our attention link architecture in low training resources. Besides, we have done a large number of experiments, including en-de, de-en, en-fr, en-it, it-en, en-ro translation tasks on the IWSLT14 dataset as well as real low resources scene on bn-gu and gu-ta translation tasks on the CVIT PIB dataset. All the experiment results show our attention link is powerful and can lead to a significant improvement. In addition, we achieve a 37.9 BLEU score, a new sota, on the IWSLT14 de-en task by combining our attention link and other advanced methods. | 10.48550/arxiv.2302.00340 | [
"https://export.arxiv.org/pdf/2302.00340v1.pdf"
] | 256,459,882 | 2302.00340 | 4947991080b2acf432722cbc6dfd106a425b26ce |
Attention Link: An Efficient Attention-Based Low Resource Machine Translation Architecture
Zeping Min zpm@pku.edu.cn
School of Mathematical Sciences
Peking University
Beijing, China
Index Terms-transformer, low resource, machine translation, theoretical analysis
Transformers have achieved great success in machine translation, but transformer-based NMT models often require millions of bilingual parallel sentence pairs for training. In this paper, we propose a novel architecture named attention link (AL) to help improve transformer models' performance, especially with low training resources. We theoretically demonstrate the superiority of our attention link architecture under low training resources. Besides, we have conducted a large number of experiments, including the en-de, de-en, en-fr, en-it, it-en, and en-ro translation tasks on the IWSLT14 dataset as well as a real low-resource scene on the bn-gu and gu-ta translation tasks of the CVIT PIB dataset. All the experimental results show that our attention link is powerful and can lead to a significant improvement. In addition, we achieve a 37.9 BLEU score, a new sota, on the IWSLT14 de-en task by combining our attention link with other advanced methods.
I. INTRODUCTION
The machine translation task is one of the most important natural language processing tasks. In general, there are three paradigms for machine translation: rule-based machine translation, statistical machine translation, and neural machine translation. Rule-based machine translation, e.g., [1], [2], [3], [4] and [5], is the most traditional machine translation method and often relies on fixed rules extracted by experts. Statistical machine translation, e.g., [6], [7], [8] and [9], usually works by constructing a statistical translation model through statistical analysis of a large parallel corpus and then using that model for translation. Neural machine translation (NMT) technology has made remarkable breakthroughs in recent years. Before the attention mechanism, the most commonly used neural machine translation models were RNN [10], LSTM [11] and GRU [12], and there are many works based on them, such as [13], [14], [15] and [16]. Although both RNN-based and LSTM-based models have achieved great success, they face some intractable problems; for example, both of them struggle to translate long sentences.
A. Attention based neural machine translation

[17] first tried to use the attention mechanism in the machine translation task. The design of the attention mechanism is inspired by human attention, which quickly filters out high-value information from a large amount of information. A fully attention-based NMT model was proposed in [18]. It achieves excellent results on the WMT dataset, which demonstrates the amazing potential of attention mechanisms in the neural machine translation task. Since then, various NMT models based on the attention mechanism have become the mainstream models in machine translation. For example, [19], [20], [21] and [22] try to make it easier for the transformer to handle long sentences; [23] and [24] try to make the transformer train faster and require less memory; [25] tries to train a deep transformer. There is no doubt that transformer models have achieved great success in machine translation, but transformer-based language models often require millions of bilingual parallel sentence pairs to train and often require careful tuning of hyper-parameters. It is difficult to get enough parallel corpora for some specific language pairs. In this paper, we propose an easy-to-implement and effective architecture called attention link. Theoretical and experimental analyses demonstrate the effectiveness of the attention link architecture. We believe that attention links make it easier to capture more general semantic information in sentences and are robust to noise. Furthermore, our architecture can be easily deployed in transformer-based models without adding extra parameters. Our contributions are as follows:
* We propose attention link (AL), a novel architecture to help improve transformer models' performance, especially with low training resources.
* We theoretically explain the superiority of the attention link architecture.
* A large number of experiments verify the effectiveness of the attention link. All the experimental results show that our attention link is powerful and can lead to a significant improvement, especially under low-resource translation conditions. We also achieve a new state-of-the-art on the IWSLT14 de-en dataset.
II. RELATED WORK

A. Improve through additional corpus

One way to improve the performance of neural machine translation with low resources is to use an additional corpus, including data augmentation methods and other corpora that are much easier to obtain. It is much easier to obtain a monolingual corpus of a language than a bilingual parallel corpus of a language pair. Pre-training [26] and back translation [27], [28] and [29] are two ways to use a monolingual corpus. Pre-training boosts performance by learning a good representation; for example, [30] and [31] learn the representation by predicting masked words in sentences, which has achieved great success on numerous NLP tasks.
A classic and commonly used method is back translation [27], [32]. The main idea of back translation [27] is to use a monolingual corpus to improve the performance of models. Back translation can often significantly improve machine translation performance in low-resource conditions, and there are many works on back-translation methods.
B. Improvement through model
There is also a lot of work on improving the model itself to make it more suitable for low-resource conditions.
One of the easiest ways to improve the model is to choose appropriate hyper-parameters. [33] and [34] gained performance by choosing appropriate transformer hyper-parameters. There are also works focused on improving the model itself. [35] replaced all self-attention matrices in the encoder and decoder with fixed Gaussian distributions, yet the resulting model's performance did not decrease significantly. [36] showed that fixing the attention matrix of the transformer encoder can bring up to a 3-point improvement in BLEU score in low-resource scenarios. In this work, we also focus on the attention matrix. For low-resource conditions, we propose an adaptive architecture, the attention link, from the attention matrix perspective.
III. ATTENTION LINK ARCHITECTURE
In this section, we first introduce our insight and the motivation for designing the attention link. Then we precisely describe the attention link architecture and its mathematical formulations.
A. Motivation
In general, a large part of the difficulty caused by low resources comes from the difficulty in capturing adequate information. Excessive training is more likely to lead to serious overfitting. In particular, when we try to train a transformer-based neural machine translation system under low-resource conditions, it may be difficult for the transformer model to extract semantic information; instead, it may simply memorize samples. Excessive memorization may cause the model to pay too much attention to the correlation between a small number of words (subwords) and to lack a global view. The attention matrices in the transformer model reflect the model's judgment of the correlation between words (subwords).
We start with the attention matrix to verify our speculation. Firstly, we randomly select about 10K and 30K parallel sentence pairs from the en-fr IWSLT14 dataset to simulate the low-resource condition, and train transformer models at the 10K and 30K resource levels and on the full dataset, respectively. Then we visualize the attention matrices of the transformer models trained at different resource levels on one randomly selected sentence pair in Fig. 1. We can observe that for the transformer models trained under low-resource conditions, both the self-attention matrices and the cross-attention matrices are significantly sparser. This means that they put most of their attention on a few word pairs and lack a global view. This might work well on certain sentence pairs (such as the samples memorized by the model), but the lack of a global view will cause the model to lack generalization.
Inspired by the shortcut structure in resnet [37], we propose the attention link structure. Since the attention matrix captures the relationships among words, we connect the attention matrices of neighboring layers to decrease the effects of low resources. Through attention links, the information between the attention matrices can be shared, thereby reducing the possibility of a single layer's attention matrix being disturbed by noise.
B. Transformer and attention link
We first briefly introduce the transformer model and specify some notation, and then give our attention link architecture. The transformer model [18] has been the cornerstone of many fields in recent years. The encoder and decoder of the transformer are built by stacking encoder layers and decoder layers, which are mainly composed of a Self Attn part, a Cross Attn part (in the decoder), and an FFN part. These layers map $X \in \mathbb{R}^{d \times n}$ to $X' \in \mathbb{R}^{d \times n}$. Mathematically, following the formulation in [38], we have
$$\mathrm{SelfAttn}_n(X) = \sum_{i=1}^{h} W^i_{n,O} W^i_{n,V} X \cdot S\!\left[\left(W^i_{n,K} X\right)^T W^i_{n,Q} X\right] \quad (1)$$

$$\mathrm{CrossAttn}_n(X) = \sum_{i=1}^{h} W^i_{n,O} V \cdot S\!\left[K^T W^i_{n,Q} X\right] \quad (2)$$

$$\mathrm{FFN}(X) = W_2 \cdot \mathrm{ReLU}\!\left(W_1 X + b_1 \mathbf{1}^T\right) + b_2 \mathbf{1}^T \quad (3)$$
where $\mathrm{SelfAttn}_n$ denotes the self-attention part of the nth encoder layer or the nth decoder layer, and $\mathrm{CrossAttn}_n$ denotes the cross-attention part of the nth decoder layer. The parameter shapes are $W^i_{n,O} \in \mathbb{R}^{d \times d_v}$, $W^i_{n,V} \in \mathbb{R}^{d_v \times d}$, $W^i_{n,K} \in \mathbb{R}^{d_k \times d}$, $W^i_{n,Q} \in \mathbb{R}^{d_q \times d}$, $W_2 \in \mathbb{R}^{d \times d_{hidden}}$, $W_1 \in \mathbb{R}^{d_{hidden} \times d}$, $b_2 \in \mathbb{R}^{d}$, $b_1 \in \mathbb{R}^{d_{hidden}}$. Here $d$, $d_q$, $d_k$, $d_v$, $d_{hidden}$ and $h$ are the 6 main hyper-parameters: $d$ is the text embedding dimension, $d_q$ the query dimension, $d_k$ the key dimension, $d_v$ the value dimension, $d_{hidden}$ the hidden layer dimension, and $h$ the number of attention heads. $S$ denotes the soft-max function, $K$ the key tensor of the last encoder layer, and $V$ the value tensor of the last encoder layer. $S[(W^i_{n,K}X)^T W^i_{n,Q}X]$ is the self-attention matrix of the ith head in the nth encoder or decoder layer, and $S[K^T W^i_{n,Q}X]$ is the cross-attention matrix of the ith head in the nth decoder layer.
Inspired by the shortcut structure in resnet [37], our attention link structure connects the attention matrices between adjacent layers so that information can be shared between them, which reduces the possibility of a single layer's attention matrix being disturbed by noise. Mathematically, we have
$$\mathrm{LinkedSelfAttn}_n(X) = \sum_{i=1}^{h} W^i_{n,O} W^i_{n,V} X \cdot S\!\left[\left(W^i_{n,K} X\right)^T W^i_{n,Q} X + \left(W^i_{n-1,K} X\right)^T W^i_{n-1,Q} X\right] \quad (4)$$

$$\mathrm{LinkedCrossAttn}_n(X) = \sum_{i=1}^{h} W^i_{n,O} V \cdot S\!\left[K^T W^i_{n,Q} X + K^T W^i_{n-1,Q} X\right] \quad (5)$$
where $\mathrm{LinkedSelfAttn}_n$ denotes the self-attention part of the nth encoder layer or the nth decoder layer, and $\mathrm{LinkedCrossAttn}_n$ denotes the cross-attention part of the nth decoder layer. The other notation is the same as in (1), (2), and (3).
The core point of the attention link is to make the attention matrix in each layer of the transformer model depend not only on the query and key tensors of the current layer but also on those of the previous layer, so that each layer has a larger receptive field, and at the same time to alleviate the problem that insufficient training data biases the attention matrix. Note that the attention link architecture simply replaces the $\mathrm{SelfAttn}_n$ and $\mathrm{CrossAttn}_n$ parts of the encoder and decoder layers with $\mathrm{LinkedSelfAttn}_n$ and $\mathrm{LinkedCrossAttn}_n$. It introduces no additional parameters and is easy to implement. The structure of the attention link is shown in Fig. 2.
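As a minimal sketch of how Eq. (4) can be realized, the single-head PyTorch module below also applies the previous layer's Q/K projections to the current input and adds the two pre-softmax logit matrices. Masking, dropout, residual connections, and multiple heads are omitted for brevity, and we keep the standard 1/sqrt(d_head) scaling even though Eq. (4) writes the logits unscaled; this is an illustration, not the exact fairseq patch used in the experiments.

```python
# A single-head self-attention layer with the attention link of Eq. (4).
import math
import torch
import torch.nn as nn

class LinkedSelfAttention(nn.Module):
    def __init__(self, d_model, d_head):
        super().__init__()
        self.q = nn.Linear(d_model, d_head, bias=False)  # W_{n,Q}
        self.k = nn.Linear(d_model, d_head, bias=False)  # W_{n,K}
        self.v = nn.Linear(d_model, d_head, bias=False)  # W_{n,V}
        self.o = nn.Linear(d_head, d_model, bias=False)  # W_{n,O}
        self.scale = 1.0 / math.sqrt(d_head)

    def forward(self, x, prev_qk=None):
        # x: (batch, seq, d_model); prev_qk: (W_{n-1,Q}, W_{n-1,K}) modules
        logits = torch.bmm(self.q(x), self.k(x).transpose(1, 2)) * self.scale
        if prev_qk is not None:                     # the attention link
            q_prev, k_prev = prev_qk
            logits = logits + torch.bmm(q_prev(x),
                                        k_prev(x).transpose(1, 2)) * self.scale
        attn = torch.softmax(logits, dim=-1)
        return self.o(torch.bmm(attn, self.v(x))), (self.q, self.k)

# Stacking layers: each layer hands its Q/K projections to the next one.
layers = nn.ModuleList(LinkedSelfAttention(512, 128) for _ in range(6))
x, prev = torch.randn(2, 10, 512), None
for layer in layers:
    x, prev = layer(x, prev)
```

Note that reusing the previous layer's projections adds no new parameters, which matches the paper's claim that the attention link is parameter-free.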
IV. THEORETICAL ANALYSIS OF THE ATTENTION LINK
A. Representation ability
We first illustrate that the attention link does not change the representation ability of the transformer model. Let $T_h(\theta, \cdot)$ denote the transformer model with hyper-parameters $h$ and trainable parameters $\theta$, and let $\tilde{T}_h(\tilde{\theta}, \cdot)$ denote the transformer model with attention link, hyper-parameters $h$ and trainable parameters $\tilde{\theta}$. Since the attention link structure only adds the $Q$, $K$ product of the previous layer to that of the current layer, we only need the attention matrices of the last layer of $T_h(\theta, \cdot)$ and $\tilde{T}_h(\tilde{\theta}, \cdot)$ to remain the same for both the encoder and the decoder. This can obviously be done by choosing the appropriate $Q$, $K$ of the last layer of the encoder or decoder in $\tilde{T}_h(\tilde{\theta}, \cdot)$. In summary, we have Lemma 1: for any $\theta$ there exists $\tilde{\theta}$ such that $\tilde{T}_h(\tilde{\theta}, \cdot) = T_h(\theta, \cdot)$. That is to say, the representation ability of the linked transformer is not less than that of the transformer.
B. Robustness
In Lemma 1, we explained that the transformer with attention link and the transformer model have the same representation ability. In this section, we try to explain the advantage of attention links in theory.

a) Notation: We first give the notation in Table I.

b) Analysis setting: To simplify the analysis, we make some non-general simplifications. Firstly, we ignore the effect of the pointwise linear layers and analyze just one layer of the transformer model. Besides, we assume both $x$ and $y$ are vectors of length $N$. So mathematically we have

$$T_\theta(x)(j) = \sum_{i=1}^{N} x(i)\, P(\theta, x, i, j) \quad (6)$$

$$\sum_{i=1}^{N} P(\theta, x, i, j) = 1 \quad (7)$$

where $P(\theta, x, i, j)$ is the attention matrix with parameters $\theta$ and input $x$; it plays the role of the attention matrix in the transformer model. We denote the ground truth of the parameters as $\theta^*$, so we have

$$y(j) = \sum_{i=1}^{N} x(i)\, P(\theta^*, x, i, j) \quad (8)$$
c) Vanilla transformer in low resource: Because of the low training resources, we cannot obtain the exact $\theta^*$ or $P(\theta^*, x, i, j)$. We denote the error caused by low resources in $P(\theta^*, x, i, j)$ as $\sigma(i, j)$ and assume that $\sigma(i, j) \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, \sigma_0^2)$. So mathematically the vanilla output $\tilde{y}$ is

$$\tilde{y}(j) = \frac{1}{\tilde{c}_j} \sum_{i=1}^{N} x(i)\left(P(\theta^*, x, i, j) + \sigma(i, j)\right) \quad (9)$$

where $\sigma_0^2$ is a constant and $\tilde{c}_j$ is the normalization coefficient. Since the error in $P$ is relatively small, we have $\tilde{c}_j \approx 1$. Then

$$\tilde{\Delta}(\tilde{y}(j)) = |y(j) - \tilde{y}(j)| \approx \left|\sum_{i=1}^{N} x(i)\,\sigma(i, j)\right| \le \frac{1}{2} \sum_{i=1}^{N} \left(x^2(i) + \sigma^2(i, j)\right) \quad (10)$$

$$\frac{1}{N} \sum_{j=1}^{N} \tilde{\Delta}(\tilde{y}(j)) \le \frac{1}{2N} \sum_{i=1}^{N} \sum_{j=1}^{N} x^2(i) + \frac{1}{2N} \sum_{i=1}^{N} \sum_{j=1}^{N} \sigma^2(i, j) \quad (11)$$
d) Transformer with the attention link: For the linked transformer model, because of the low training resources, the attention matrices of both the previous and the current layer also contain errors; we denote the error in the previous layer's attention matrix as $\sigma_{\text{pre}}$. So we have $\sigma_{\text{pre}}(i,j) \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, \sigma_0^2)$ and $\sigma(i,j) \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, \sigma_0^2)$.
Due to the attention link, the output $\hat{y}$ is

$$\hat{y}(j) = \frac{1}{\hat{c}_j}\sum_{i=1}^{N} x(i)\left(\tfrac{1}{2}P(\theta^{*}, x, i, j) + \tfrac{1}{2}P_{\text{pre}}(\theta^{*}, x, i, j) + \tfrac{1}{2}\sigma(i,j) + \tfrac{1}{2}\sigma_{\text{pre}}(i,j)\right) \tag{12}$$
If we define $\gamma(\theta^{*}, x, i, j)$ as

$$\gamma(\theta^{*}, x, i, j) = P_{\text{pre}}(\theta^{*}, x, i, j) - P(\theta^{*}, x, i, j) \tag{13}$$
then we have

$$\hat{y}(j) = \frac{1}{\hat{c}_j}\sum_{i=1}^{N} x(i)\left(P(\theta^{*}, x, i, j) + \tfrac{1}{2}\sigma(i,j) + \tfrac{1}{2}\sigma_{\text{pre}}(i,j) + \tfrac{1}{2}\gamma(\theta^{*}, x, i, j)\right) \tag{14}$$
Since the error in $P$ is relatively small, we also have $\hat{c}_j \approx 1$.
If the difference between the ground-truth $P$ of adjacent transformer layers is relatively small, that is, $\gamma(\theta^{*}, x, i, j) \approx 0$, then we have

$$\hat{\Delta}(\hat{y}(j)) = |y(j) - \hat{y}(j)| \approx \left|\sum_{i=1}^{N} x(i)\left(\tfrac{1}{2}\sigma(i,j) + \tfrac{1}{2}\sigma_{\text{pre}}(i,j)\right)\right| \le \frac{1}{2}\sum_{i=1}^{N}\left(x^2(i) + \left(\tfrac{1}{2}\sigma(i,j) + \tfrac{1}{2}\sigma_{\text{pre}}(i,j)\right)^2\right) \tag{15}$$
Taking the average of the elements of $\hat{y}$, we have

$$\frac{1}{N}\sum_{j=1}^{N}\hat{\Delta}(\hat{y}(j)) \le \frac{1}{2N}\sum_{i=1}^{N}\sum_{j=1}^{N} x^2(i) + \frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{N}\left(\tfrac{1}{2}\sigma(i,j) + \tfrac{1}{2}\sigma_{\text{pre}}(i,j)\right)^2 \tag{16}$$
e) The superiority of the attention link: We now compare the errors of $\tilde{y}$ and $\hat{y}$ in (11) and (16), respectively. Note that $\sigma_{\text{pre}}(i,j) \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, \sigma_0^2)$ and $\tfrac{1}{2}\sigma(i,j) + \tfrac{1}{2}\sigma_{\text{pre}}(i,j) \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, \tfrac{1}{2}\sigma_0^2)$.
According to the law of large numbers, for the error of $\tilde{y}$ in (11) we have

$$\frac{1}{N}\sum_{j=1}^{N}\sum_{i=1}^{N}\sigma^2(i,j) \approx \mathbb{E}\left[\sum_{i=1}^{N}\sigma^2(i,j)\right] = N\sigma_0^2 \tag{17}$$
and for the error of $\hat{y}$ in (16) we have

$$\frac{1}{N}\sum_{j=1}^{N}\sum_{i=1}^{N}\left(\tfrac{1}{2}\sigma(i,j) + \tfrac{1}{2}\sigma_{\text{pre}}(i,j)\right)^2 \approx \mathbb{E}\left[\sum_{i=1}^{N}\left(\tfrac{1}{2}\sigma(i,j) + \tfrac{1}{2}\sigma_{\text{pre}}(i,j)\right)^2\right] = \frac{1}{2}N\sigma_0^2 \tag{18}$$
Comparing (17) and (18), we find that the noise term in the error bound of the attention-link output $\hat{y}$ is reduced by $\tfrac{1}{2}N\sigma_0^2$, which shows the superiority of the attention link.
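The comparison between (17) and (18) is easy to verify numerically. The following short simulation (our own illustration, with arbitrarily chosen N, σ0, and trial count) estimates both expectations by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)
N, sigma0, trials = 64, 0.1, 10000   # illustrative values, not from the paper

sigma = rng.normal(0.0, sigma0, size=(trials, N))      # current-layer noise
sigma_pre = rng.normal(0.0, sigma0, size=(trials, N))  # previous-layer noise

vanilla = (sigma ** 2).sum(axis=1).mean()                      # ~ N * sigma0^2, Eq. (17)
linked = (((sigma + sigma_pre) / 2) ** 2).sum(axis=1).mean()   # ~ N * sigma0^2 / 2, Eq. (18)

print(vanilla, linked)
```

The second estimate comes out at roughly half the first, matching the ½·Nσ0² reduction derived above.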
V. EXPERIMENT
A. Setup

1) Dataset: We perform numerical experiments on six tasks of IWSLT14: en-de, de-en, en-fr, en-it, it-en, and en-ro, as well as on the bn-gu and gu-ta translation tasks of the CVIT PIB dataset.
To verify the effectiveness of the attention link structure under low-resource conditions, we first simulate low-resource scenarios by randomly extracting about 10K and 30K parallel sentence pairs from the full bilingual training splits of the en-de, de-en, en-fr, en-it, it-en, and en-ro machine translation tasks of the IWSLT14 dataset. We train the models at the 10K and 30K training resource levels and on the full training split, and evaluate on the full test split of each task.
We then experiment on the bn-gu and gu-ta translation tasks of the CVIT PIB dataset. We randomly sample about 5% of the data of each task as the test split and use the rest as the training split.
At the same time, we test the attention link structure on the full datasets of the en-de, de-en, en-fr, en-it, it-en, and en-ro machine translation tasks in IWSLT14. Besides, we combine our attention link architecture with other advanced methods and test on the IWSLT14 de-en dataset.
2) Model hyper-parameter settings: We mainly compare two structures: the transformer [18] and the transformer with the attention link. We use the standard 6-layer encoder and 6-layer decoder structure. Note that the attention link introduces no additional parameters to the model. For the sake of fairness, the hyper-parameters of the transformer [18] and the transformer with the attention link are kept the same in all experiments. We set $d = 512$, $d_q = 128$, $d_k = 128$, $d_v = 128$, $d_{\text{hidden}} = 1024$, and $h = 4$. Our numerical experiments are based on the fairseq¹ code. Both models are optimized with Adam [39] using the default fairseq settings $\beta_1 = 0.9$, $\beta_2 = 0.98$, $\epsilon = 10^{-8}$, and weight decay $10^{-4}$. Besides, we use a warmup strategy to control the learning rate, with the default fairseq setting of 4000 warmup steps and $lr = 5 \times 10^{-4}$, on the en-de, de-en, en-fr, en-it, it-en, and en-ro machine translation tasks of the IWSLT14 dataset.
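For reference, the warmup behaviour quoted above can be sketched as follows. This assumes fairseq's inverse-square-root scheduler, the usual companion of these settings; the function is our own illustration and may differ from fairseq's exact implementation in details such as the initial warmup learning rate.

```python
def inverse_sqrt_lr(step: int, base_lr: float = 5e-4, warmup: int = 4000) -> float:
    """Linear warmup to base_lr over `warmup` steps, then 1/sqrt(step) decay.

    Sketch of the commonly used inverse-sqrt schedule; exact library
    behaviour (e.g. a nonzero initial lr) may differ.
    """
    if step < warmup:
        return base_lr * step / warmup
    return base_lr * (warmup ** 0.5) / (step ** 0.5)
```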
B. Result
In Table II, we show some translation examples from the models trained at the 10K training resource level of the en-de task. The words in red are mistranslations and the words in green are the improvements of the transformer + attention link architecture.
In Tables III, IV, and V, we show the BLEU scores of the models on the test set under different resource configurations. We trained each model on a single P100 GPU for no more than 3 days per task. First, we see that the transformer with the attention link achieves better results than the transformer model alone under all training resource settings. Second, we find that, in general, the smaller the amount of resources, the greater the improvement: on average, the attention link architecture improves BLEU by 1.0 at the 10K training resource level, by 0.8 at the 30K level, and by 0.3 on the full dataset. Third, we find that the improvement brought by AL generally varies with the language pair; for example, at the 10K training resource level the improvement on the en-de task is 1.1, whereas the improvement on the it-en task is 0.6.
In Table VI, we show the results on the bn-gu and gu-ta translation tasks of the CVIT PIB dataset, which reflect that the model brings significant improvements in real low-resource scenarios.
C. Combined with other advanced models
In this section, we show that our proposed attention link architecture can be easily combined with current advanced transformer-based NMT models and can bring performance improvements. We combine the attention link with two existing advanced transformer-based models: R-drop [40] and cut-off [41]. We reach a new SOTA on the IWSLT14 de-en task with a 37.9 BLEU score after combining with the cut-off [41] model. Since the attention link introduces no new parameters or hyper-parameters, to ensure fairness we keep the same hyper-parameter settings after the combination in our experiments. The experimental results are shown in Table VII.
VI. ABLATION STUDY
To further illustrate the effectiveness of our proposed attention link, we conduct ablation experiments: adding the attention link only to the encoder module of the transformer, and only to the decoder module. We perform the ablations at the 10K training resource level on six tasks of IWSLT14: en-de, de-en, en-fr, en-it, it-en, and en-ro.
The experimental results are shown in Table VIII. We can see that adding the attention link only on the encoder side or only on the decoder side generally improves performance over the original transformer model. However, the improvement is generally not as pronounced as adding the attention link on both the encoder and decoder sides, which further illustrates the effectiveness of the attention link. Besides, we can also see that adding the attention link on the encoder module is generally more effective.
VII. CONCLUSION
In this paper, motivated by shortcut designs, we propose a new and efficient architecture named the attention link. We explain the superiority of the attention link theoretically. Furthermore, experiments show that the transformer model with the attention link achieves much better performance, and we reach a new SOTA on the IWSLT14 de-en translation task with a 37.9 BLEU score by combining the attention link with other advanced methods.
We believe that attention links make it easier to capture more general semantic information in sentences and are robust to noise. Furthermore, our architecture can be easily deployed in transformer-based models without adding extra parameters; in some sense, it is a free lunch. We hope our research provides a new idea for obtaining efficient NMT models with low resources.
As for limitations, this paper mainly studies low-resource NMT models, and the attention link we propose is built on the transformer model, so it may inherit the shortcomings of the transformer, such as the difficulty of selecting suitable optimization parameters.
Fig. 1. Attention matrices of transformer models trained with different resource levels on the en-fr translation task.

Fig. 2. Linked self-attention and linked cross-attention.

Lemma 1. For a transformer model $T_h(\theta,\cdot)$ with arbitrary parameters $\theta$, there exist parameters $\tilde{\theta}$ such that the linked transformer model $\tilde{T}_h(\tilde{\theta},\cdot)$ equals $T_h(\theta,\cdot)$.
1 https://github.com/facebookresearch/fairseq
TABLE I
NOTATION IN THE ANALYSIS

Notation    Meaning
x           input vector
y           (ground truth) output vector
ỹ           vanilla transformer output vector
ŷ           transformer with AL output vector
N           vector length
T           (simplified) transformer operation
P           attention matrix
θ           model parameters
θ*          ground-truth model parameters
σ(i, j)     error in position (i, j) of P
Δ           error of ỹ
Δ̂           error of ŷ
TABLE III
BLEU OF TRANSFORMER AND TRANSFORMER WITH ATTENTION LINK, TRAINED AT THE 10K TRAINING RESOURCE LEVEL

Task    transformer    transformer+AL
en-de   13.3           14.4
de-en   17.3           18.0
en-fr   22.9           23.9
en-it   15.1           16.4
it-en   19.2           19.8
en-ro   13.5           14.7
Avg     16.9           17.9

TABLE IV
BLEU OF TRANSFORMER AND TRANSFORMER WITH ATTENTION LINK, TRAINED AT THE 30K TRAINING RESOURCE LEVEL

Task    transformer    transformer+AL
en-de   20.7           21.6
de-en   25.3           26.1
en-fr   31.6           32.3
en-it   23.4           24.3
it-en   26.5           27.3
en-ro   20.9           21.7
Avg     24.7           25.5

TABLE V
BLEU OF TRANSFORMER AND TRANSFORMER WITH ATTENTION LINK, TRAINED ON THE FULL IWSLT14 TRAINING DATASET

Task    transformer    transformer+AL
en-de   28.6           28.7
de-en   34.4           34.7
en-fr   40.4           40.9
en-it   30.8           31.2
it-en   34.7           34.9
en-ro   28.3           29.0
Avg     32.9           33.2

TABLE VI
BLEU OF TRANSFORMER AND TRANSFORMER WITH ATTENTION LINK, TRAINED ON THE CVIT PIB TRAINING DATASET

Task    transformer    transformer+AL
bn-gu   9.1            10.0
gu-ta   10.3           10.4
Avg     9.8            10.2
TABLE II
SOME TRANSLATION EXAMPLES OF MODELS TRAINED AT THE 10K TRAINING RESOURCE LEVEL OF THE EN-DE TASK. THE WORDS IN RED ARE MISTRANSLATED AND THE WORDS IN GREEN ARE THE IMPROVEMENTS OF THE TRANSFORMER + ATTENTION LINK ARCHITECTURE.

source sentence:                 this is hard .
transformer result:              das ist schwierig .
transformer+AL result:           das ist schwer .
target sentence (ground truth):  das ist schwer .

source sentence:                 tremendously exciting .
transformer result:              enorm .
transformer+AL result:           sehr aufregend .
target sentence (ground truth):  ungeheuer aufregend .

source sentence:                 tell me about this world .
transformer result:              ich erzählen ihnen darüber nach .
transformer+AL result:           erzählen sie mir über diese welt .
target sentence (ground truth):  erzählen sie mir von dieser welt .

source sentence:                 now , what does that mean ?
transformer result:              und was bedeutet das ?
transformer+AL result:           nun , was bedeutet das ?
target sentence (ground truth):  nun , was bedeutet das ?
TABLE VII
BLEU OF COMBINING THE ATTENTION LINK WITH OTHER TRANSFORMER-BASED METHODS

model             BLEU
R-drop [40]       36.9±0.5
cut-off [41]      37.7±0.5
R-drop [40]+AL    37.3±0.5
cut-off [41]+AL   37.9±0.5

TABLE VIII
BLEU OF THE VANILLA TRANSFORMER (WO AL), ONLY AL IN THE ENCODER (ENC AL), ONLY AL IN THE DECODER (DEC AL), AND AL IN BOTH ENCODER AND DECODER (AL)

Task    wo AL   enc AL   dec AL   AL
en-de   13.3    13.8     14.2     14.4
de-en   17.3    18.1     17.7     18.0
en-fr   22.9    24.3     23.5     23.9
en-it   15.1    15.7     15.5     16.4
it-en   19.2    19.7     19.2     19.8
en-ro   13.5    14.1     14.3     14.7
Avg     16.9    17.6     17.4     17.9
REFERENCES

[1] R. Rajan, R. Sivan, R. Ravindran, and K. Soman, "Rule based machine translation from English to Malayalam," in 2009 International Conference on Advances in Computing, Control, and Telecommunication Technologies. IEEE, 2009, pp. 439-441.
[2] A. Hurskainen and J. Tiedemann, "Rule-based machine translation from English to Finnish," in Proceedings of the Second Conference on Machine Translation (WMT2017). Association for Computational Linguistics, 2017.
[3] M. A. Sghaier and M. Zrigui, "Rule-based machine translation from Tunisian dialect to Modern Standard Arabic," Procedia Computer Science, vol. 176, pp. 310-319, 2020.
[4] Y. Shiwen and B. Xiaojing, "Rule-based machine translation," in Routledge Encyclopedia of Translation Technology. Routledge, 2014, pp. 224-238.
[5] M. L. Forcada, M. Ginestí-Rosell, J. Nordfalk, J. O'Regan, S. Ortiz-Rojas, J. A. Pérez-Ortiz, F. Sánchez-Martínez, G. Ramírez-Sánchez, and F. M. Tyers, "Apertium: a free/open-source platform for rule-based machine translation," Machine Translation, pp. 127-144, 2011.
[6] M. Artetxe, G. Labaka, and E. Agirre, "Unsupervised statistical machine translation," arXiv preprint arXiv:1809.01272, 2018.
[7] F. J. Och, "Minimum error rate training in statistical machine translation," in Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, 2003, pp. 160-167.
[8] F. J. Och, C. Tillmann, and H. Ney, "Improved alignment models for statistical machine translation," in 1999 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, 1999.
[9] R. Zens, F. J. Och, and H. Ney, "Phrase-based statistical machine translation," in KI 2002: Advances in Artificial Intelligence: 25th Annual German Conference on AI, Aachen, Germany. Springer, 2002, pp. 18-32.
[10] N. Kalchbrenner and P. Blunsom, "Recurrent continuous translation models," in Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, 2013, pp. 1700-1709.
[11] I. Sutskever, O. Vinyals, and Q. V. Le, "Sequence to sequence learning with neural networks," in Advances in Neural Information Processing Systems, vol. 27, 2014.
[12] K. Cho, B. van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio, "Learning phrase representations using RNN encoder-decoder for statistical machine translation," arXiv preprint arXiv:1406.1078, 2014.
[13] D. Datta, P. E. David, D. Mittal, and A. Jain, "Neural machine translation using recurrent neural network," International Journal of Engineering and Advanced Technology, vol. 9, no. 4, pp. 1395-1400, 2020.
[14] A. Hermanto, T. B. Adji, and N. A. Setiawan, "Recurrent neural network language model for English-Indonesian machine translation: Experimental study," in 2015 International Conference on Science in Information Technology (ICSITech). IEEE, 2015, pp. 132-136.
[15] L. Jian, H. Xiang, and G. Le, "LSTM-based attentional embedding for English machine translation," Scientific Programming, vol. 2022, 2022.
[16] H. Xu, Q. Liu, J. van Genabith, D. Xiong, and M. Zhang, "Multi-head highly parallelized LSTM decoder for neural machine translation," in Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 2021, pp. 273-282.
[17] D. Bahdanau, K. Cho, and Y. Bengio, "Neural machine translation by jointly learning to align and translate," arXiv preprint arXiv:1409.0473, 2014.
[18] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is all you need," in Advances in Neural Information Processing Systems, vol. 30, 2017.
[19] I. Beltagy, M. E. Peters, and A. Cohan, "Longformer: The long-document transformer," arXiv preprint arXiv:2004.05150, 2020.
[20] C. Zhu, W. Ping, C. Xiao, M. Shoeybi, T. Goldstein, A. Anandkumar, and B. Catanzaro, "Long-short transformer: Efficient transformers for language and vision," in Advances in Neural Information Processing Systems, vol. 34, 2021.
[21] H. Zhang, Y. Gong, Y. Shen, W. Li, J. Lv, N. Duan, and W. Chen, "Poolingformer: Long document modeling with pooling attention," in International Conference on Machine Learning. PMLR, 2021, pp. 12437-12446.
[22] J. W. Rae, A. Potapenko, S. M. Jayakumar, and T. P. Lillicrap, "Compressive transformers for long-range sequence modelling," arXiv preprint arXiv:1911.05507, 2019.
[23] N. Kitaev, Ł. Kaiser, and A. Levskaya, "Reformer: The efficient transformer," arXiv preprint arXiv:2001.04451, 2020.
[24] S. Wang, B. Z. Li, M. Khabsa, H. Fang, and H. Ma, "Linformer: Self-attention with linear complexity," arXiv preprint arXiv:2006.04768, 2020.
[25] Q. Wang, B. Li, T. Xiao, J. Zhu, C. Li, D. F. Wong, and L. S. Chao, "Learning deep transformer models for machine translation," arXiv preprint arXiv:1906.01787, 2019.
[26] T. Mikolov, K. Chen, G. Corrado, and J. Dean, "Efficient estimation of word representations in vector space," arXiv preprint arXiv:1301.3781, 2013.
[27] R. Sennrich, B. Haddow, and A. Birch, "Improving neural machine translation models with monolingual data," arXiv preprint arXiv:1511.06709, 2015.
[28] S. Edunov, M. Ott, M. Auli, and D. Grangier, "Understanding back-translation at scale," arXiv preprint arXiv:1808.09381, 2018.
[29] H. Li, J. Sha, and C. Shi, "Revisiting back-translation for low-resource machine translation between Chinese and Vietnamese," IEEE Access, vol. 8, pp. 119931-119939, 2020.
[30] Y. Qi, D. S. Sachan, M. Felix, S. J. Padmanabhan, and G. Neubig, "When and why are pre-trained word embeddings useful for neural machine translation?" arXiv preprint arXiv:1804.06323, 2018.
[31] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805, 2018.
[32] V. C. D. Hoang, P. Koehn, G. Haffari, and T. Cohn, "Iterative back-translation for neural machine translation," in Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, 2018, pp. 18-24.
[33] A. Araabi and C. Monz, "Optimizing transformer for low-resource neural machine translation," arXiv preprint arXiv:2011.02266, 2020.
[34] E. Van Biljon, A. Pretorius, and J. Kreutzer, "On optimal transformer depth for low-resource language translation," arXiv preprint arXiv:2004.04418, 2020.
[35] W. You, S. Sun, and M. Iyyer, "Hard-coded Gaussian attention for neural machine translation," arXiv preprint arXiv:2005.00742, 2020.
[36] A. Raganato, Y. Scherrer, and J. Tiedemann, "Fixed encoder self-attention patterns in transformer-based machine translation," arXiv preprint arXiv:2002.10260, 2020.
[37] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770-778.
[38] C. Yun, S. Bhojanapalli, A. S. Rawat, S. J. Reddi, and S. Kumar, "Are transformers universal approximators of sequence-to-sequence functions?" arXiv preprint arXiv:1912.10077, 2019.
[39] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[40] L. Wu, J. Li, Y. Wang, Q. Meng, T. Qin, W. Chen, M. Zhang, and T.-Y. Liu, "R-drop: Regularized dropout for neural networks," in Advances in Neural Information Processing Systems, vol. 34, 2021.
[41] D. Shen, M. Zheng, Y. Shen, Y. Qu, and W. Chen, "A simple but tough-to-beat data augmentation approach for natural language understanding and generation," arXiv preprint arXiv:2009.13818, 2020.
| [
"https://github.com/facebookresearch/fairseq"
] |
[
"Post-Training Dialogue Summarization using Pseudo-Paraphrasing",
"Post-Training Dialogue Summarization using Pseudo-Paraphrasing"
] | [
"Qi Jia \nShanghai Jiao Tong University\nShanghaiChina\n",
"Yizhu Liu liuyizhu@sjtu.edu.cn2thfeng@cmbchina.com3kzhu@cs.sjtu.edu.cn \nShanghai Jiao Tong University\nShanghaiChina\n",
"Haifeng Tang \nMerchants Bank Credit Card Center\nShanghaiChina, China\n",
"Kenny Q Zhu "
] | [
"Shanghai Jiao Tong University\nShanghaiChina",
"Shanghai Jiao Tong University\nShanghaiChina",
"Merchants Bank Credit Card Center\nShanghaiChina, China"
] | [] | Previous dialogue summarization techniques adapt large language models pretrained on the narrative text by injecting dialogue-specific features into the models. These features either require additional knowledge to recognize or make the resulting models harder to tune. To bridge the format gap between dialogues and narrative summaries in dialogue summarization tasks, we propose to post-train pretrained language models (PLMs) to rephrase from dialogue to narratives. After that, the model is fine-tuned for dialogue summarization as usual. Comprehensive experiments show that our approach significantly improves vanilla PLMs on dialogue summarization and outperforms other SOTA models by the summary quality and implementation costs. | 10.18653/v1/2022.findings-naacl.125 | [
"https://arxiv.org/pdf/2204.13498v1.pdf"
] | 248,426,849 | 2204.13498 | 80c05e4a7d2a4c838e1cb697571b3ac89f8a0b53 |
Post-Training Dialogue Summarization using Pseudo-Paraphrasing
Qi Jia
Shanghai Jiao Tong University
Shanghai, China
Yizhu Liu liuyizhu@sjtu.edu.cn
Shanghai Jiao Tong University
Shanghai, China
Haifeng Tang thfeng@cmbchina.com
Merchants Bank Credit Card Center
Shanghai, China
Kenny Q Zhu kzhu@cs.sjtu.edu.cn
Post-Training Dialogue Summarization using Pseudo-Paraphrasing
Previous dialogue summarization techniques adapt large language models pretrained on the narrative text by injecting dialogue-specific features into the models. These features either require additional knowledge to recognize or make the resulting models harder to tune. To bridge the format gap between dialogues and narrative summaries in dialogue summarization tasks, we propose to post-train pretrained language models (PLMs) to rephrase from dialogue to narratives. After that, the model is fine-tuned for dialogue summarization as usual. Comprehensive experiments show that our approach significantly improves vanilla PLMs on dialogue summarization and outperforms other SOTA models by the summary quality and implementation costs.
Introduction
Dialogue summarization is a specialized summarization task that takes a series of utterances from multiple speakers in the first person as input, and outputs fluent and concise summaries in the third person, as shown in Figure 1. Different from previous monologue inputs such as news (Narayan et al., 2018) and scientific publications (Cohan et al., 2018), dialogues are usually less well-organized. They often contain complicated reference relations, inconsecutive inter-utterance dependencies, informal expressions, and so on, making dialogue summarization a more challenging task.

The most obvious characteristic of this task is the difference in format and language style between a dialogue and its narrative summary. Liu, Shi and Chen (2021b) mention that coreference resolution models trained on general narrative text underperform by about 10% on dialogue corpora, demonstrating the inherent gap between dialogue and narrative text. As a result, popular PLMs such as BART (Lewis et al., 2020) and PEGASUS (Zhang et al., 2020a), which excel on news summarization, perform mediocrely on dialogue summarization.
The most obvious characteristic of this task is the difference in the format and language styles between dialogue and its narrative summary. Liu, Shi and Chen (2021b) mentioned that coreference resolution models trained on general narrative text underperforms by about 10% on dialogue corpus, demonstrating the inherent gap between dialogue To narrow this gap, previous work on dialogue summarization mainly resort to injecting dialogue features into PLMs to enhance dialogue understanding. These features include dialogue acts (Goo and Chen, 2018), topic transitions (Chen and Yang, 2020), coreference relations (Liu et al., 2021b), discourse graphs (Chen and Yang, 2021), etc, leading to the rule-based conversion from dialogues to plain text (Ganesh and Dingliwal, 2019). However, they suffer from three weaknesses. First, collecting or extracting these features becomes an additional step in the summarization pipeline, complicating the inference procedure at runtime. Second, oracle feature labels are hard to collect and errors can propagate from wrong labels to poor summaries. Third, additional layers or more encoders are required to incorporate features into PLMs, increasing the GPU memory footprint both during training and inference.
A more natural way to bridge this gap is to give the model more dialogue-narrative pairs to train on.
Due to the scarcity of dialogue summarization data, one approach (Zhu et al., 2020) is to convert other text summarization pairs into dialogue-to-summary pairs via templates, but such work requires additional data.²
In this paper, we propose an alternative approach that doesn't use any more data than the original dialogue summarization dataset. We convert each existing data pair into many "pseudo-paraphrase" pairs between a dialogue and a narrative sentence. Then we post-train a pre-trained seq2seq language model using a prefix-guided generation (PGG) task on the augmented paraphrase dataset. After that, the post-trained model is further fine-tuned as usual for dialogue summarization. In this way, no human effort on crafting complicated rules or tuning extra hyper-parameters, no additional memory cost, and no additional training data are required. In sum, our contributions are:
• We propose a novel and effective post-training process to close the format and linguistic-style gap between dialogues and narrative texts (§ 2).
• PGG with pseudo-paraphrase pairs requires no extra training data or labeling tools for feature extraction (§ 3.2).
• Extensive experiments show that the proposed approach compares favorably with current SOTA models, with less human effort and lower computational cost (§ 3.3).
Approach
The training of a dialogue summarization model is divided into two stages: post-training and fine-tuning. The model can be any seq-to-seq PLM and remains unchanged except for its parameters, which are updated stage by stage. We elaborate on the post-training stage in the rest of this section.
Pseudo-paraphrase Dataset Construction
We construct rephrasing datasets from the dialogue summarization dataset itself. The original dialogue summarization dataset (DSum) is made up of dialogue-summary (D-S) pairs. Each dialogue D is a sequence of utterances and can be concatenated into a whole sequence:
$$D = \{U_1, U_2, \ldots, U_T\} = \{x_1, \ldots, x_n\} \tag{1}$$
2 More related work is in Appendix A.
Each turn $U_t$ has the form $[r_t: u_t]$, where $r$ is a speaker and $u$ is the actual utterance. Our goal is to create more dialogue-to-narration paraphrasing pairs. The most intuitive approach is to divide S into sentences and pair each sentence with D. We call such pairs "pseudo-paraphrases" because the output sentence (which we call p) isn't exactly a paraphrase of the whole input, but rather of part of the input.
However, doing this poses two challenges: 1) S is a coherent piece of text, and its sentences may depend on each other, so a single sentence p out of it may not stand by itself; 2) one D will be paired with several different p, and it is hard for the model to distinguish the meaning of these pairs.
Datasets    Input    Output
DSum        U1∼8     Katarina wants to rent a flat from Liz. She will come visit it today after 6 pm.
DialSent    U1∼8     Katarina wants to rent a flat from Liz.
            U1∼8     Katarina will come visit it today after 6 pm.

To solve 1), we apply coreference resolution³ on S and convert every pronoun in it to its full reference before splitting the summary S into sentences. Sentences with fewer than 3 words (e.g., "Ally agree") are discarded since they carry too little information. The set of data pairs thus created is called DialSent. An example is shown in Table 1.
To tackle 2), one obvious thought is to further split D into sets of utterances such that each set corresponds to a sentence p in the summary. However, our extensive experiments (see Appendix C) show that none of the straightforward heuristics work well for establishing such alignments, mainly because dialogue utterances are highly interdependent, so splitting operations are not optimal. Instead of changing D, we decide to use the pseudo-paraphrases directly, but introduce a prefix-guided generation task to guide the model to extract the relevant information from D.
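A rough sketch of the DialSent construction is given below. spaCy is used only for sentence splitting here, and `resolve_coreferences` is a hypothetical placeholder for whichever coreference resolver is plugged in (the footnote points to spaCy); all function names are ours.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def resolve_coreferences(summary: str) -> str:
    """Placeholder: replace each pronoun with its full reference.

    A real implementation would call an external coreference model;
    here it is left as an identity function for the sketch.
    """
    return summary

def build_dialsent_pairs(dialogue: str, summary: str, min_words: int = 3):
    """Turn one (dialogue, summary) pair into several (dialogue, sentence) pairs."""
    resolved = resolve_coreferences(summary)
    pairs = []
    for sent in nlp(resolved).sents:
        # discard sentences with fewer than 3 (non-punctuation) words
        if len([t for t in sent if not t.is_punct]) >= min_words:
            pairs.append((dialogue, sent.text.strip()))
    return pairs
```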
Prefix-guided Generation Task
Summarization for dialogues focuses on analyzing "who-did-what" storylines (Chen and Yang, 2021), and the beginning of each summary sentence is usually a different speaker, or the same speaker doing a different thing. As a result, a prefix made up of "who" or "who-did" can help select the related information from the dialogue and plan the content to be generated.

In other words, we take inspiration from content planning (Narayan et al., 2021; Wu et al., 2021). During training, the first few tokens of p are provided to the decoder as a prefix. This prefix serves as an information selection hint to the model, so it is easier to learn why that particular p should be generated. The losses are calculated between the generated tokens and the reference tokens after the prefix, as shown in Figure 2.
(Figure 2: an encoder-decoder with BOS/EOS markers; the prefix tokens of p are given to the decoder, and losses are computed only on the tokens after the prefix during training.)

Let $p = \{s_1, \ldots, s_l\}$. Our prefix-guided training task is a vanilla auto-regressive generation task minimizing the negative log-likelihood of p:
$$\mathcal{L} = -\frac{1}{l-a}\sum_{t=a}^{l} \log P(s_t \mid s_{<t}, H_d) \tag{2}$$
where $a$ is the number of prefix tokens and $H_d$ denotes the output hidden vectors of the encoder with input D.
There are various ways to determine the prefix length $a$: we can take a fixed length, a random length, or a prefix extending up to a certain linguistic feature such as NOUN, VERB, or ROOT. The exact linguistic feature to use is a dataset-dependent hyper-parameter and can be tuned on the validation set. Examples of prefix tokens are marked in Table 1.
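A minimal PyTorch sketch of the loss in Eq. (2) is shown below: the decoder is teacher-forced on the whole sentence, and the positions covered by the prefix are masked out of the loss. All tensor names are our own.

```python
import torch
import torch.nn.functional as F

def prefix_guided_loss(logits, targets, prefix_len):
    """NLL over the tokens after the prefix, as in Eq. (2).

    logits:     (batch, seq_len, vocab) decoder outputs under teacher forcing
    targets:    (batch, seq_len) gold token ids of the summary sentence p
    prefix_len: (batch,) number of known prefix tokens a for each sample
    """
    nll = F.cross_entropy(
        logits.transpose(1, 2), targets, reduction="none"
    )  # (batch, seq_len), one loss term per position
    positions = torch.arange(targets.size(1), device=targets.device)
    mask = positions.unsqueeze(0) >= prefix_len.unsqueeze(1)  # True after the prefix
    return (nll * mask).sum() / mask.sum().clamp(min=1)
```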
Evaluation
We first present the experimental setups, then conduct an ablation study to determine the proper prefix in PGG training, before our main results. More implementation details are in Appendix B.
Experimental Setup
We implement our experiments on SAMSum (Gliwa et al., 2019) and DialSumm (Chen et al., 2021), whose statistics are listed in Table 2. We compare our method with the following baselines. Lead-3 and Longest-3 are simple rule-based baselines that extract the first or the longest 3 utterances in a dialogue as the summary, respectively. PGN (See et al., 2017), Fast-Abs (Chen and Bansal, 2018), and PEGASUS (Zhang et al., 2020a) are well-known models for text summarization. BART (Lewis et al., 2020) is a general PLM and performs well after fine-tuning. CODS (Wu et al., 2021), Multi-view (Chen and Yang, 2020), and DialoBART (Feng et al., 2021b) are the SOTA models designed for dialogue summarization. We evaluate both automatically and by humans. For automatic evaluation, we use Rouge-1, 2, and L (Lin, 2004) F1-scores.⁴ Following Feng et al. (2021b), we adopt the same Rouge evaluation tool and compute the scores between reference summaries and generated summaries. For DialSumm, we use the maximum Rouge score among the references for each sample. For human evaluation, we ask three proficient English speakers to evaluate 100 random samples from SAMSum. Each original dialogue and its reference summary are shown together with the generated summaries, in a random order; showing summaries from different approaches together helps humans compare them. Following Chen and Yang (2020) and Liu et al. (2021b), each summary is scored on the scale [2, 0, −2], where 2 means concise and informative, 0 means acceptable with minor errors, and −2 means unacceptable. The final scores are averaged among annotators. We also ask the annotators to label the error types in the summaries. We consider the following 4 error types: Missing important content (Mis), Redundant content (Red), Coreference mismatch (Cor), and Reasoning error (Rea). Rea and Cor concern comparisons to the dialogue, and the other two concern comparisons to the reference. We determine the error types for each case by majority voting, and count the errors of each model.
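The maximum-over-references Rouge used for DialSumm can be sketched as follows. We use Google's rouge-score package here purely as a stand-in for illustration; the paper itself relies on the py-rouge tool mentioned in the footnote, whose exact scores may differ.

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"])

def max_rouge(hypothesis, references):
    """F1 of the best-matching reference for each Rouge variant."""
    best = {m: 0.0 for m in ("rouge1", "rouge2", "rougeL")}
    for ref in references:
        scores = scorer.score(ref, hypothesis)  # (target, prediction) order
        for m in best:
            best[m] = max(best[m], scores[m].fmeasure)
    return best
```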
Ablation Study
We conduct ablations to verify the effectiveness of post-training on DialSent with the PGG task, comparing it in Table 3 with post-training on DSum with the PGG task (DSum-PGG), on DSum with the vanilla generation task (DSum-VG), and on DialSent with the vanilla generation task (DialSent-VG). The results of DSum-VG drop, indicating that fine-tuning BART on DSum with early stopping is already enough; post-training with the same data and task leads to overfitting. DialSent-PGG performs best for two reasons. Compared with DialSent-VG, the prefix resolves the one-to-many mapping between a dialogue and the summary sentences, so that the same dialogue can lead to different generations. On the other hand, the prefix can steer the selection within a short sentence but is not strong enough to direct the content of multiple sentences. Thus, DialSent-PGG learns more cross-format paraphrasing ability and performs better. We try several choices of prefix length: (1) W/O: without any prefix. (2) Const: a constant length, set to 2 and 3 for SAMSum and DialSumm respectively, since a person's name is 1.69 ± 0.69 tokens long on average.⁵ (3) Random: set by uniform sampling from a range of numbers; we set the range to 1∼3 and 2∼4 for the two datasets, respectively. (4) Ling: using the validation set, we determined that NOUN and ROOT are the best choices for the two datasets, respectively. In this way, the number of prefix tokens is 1.90 ± 1.10 for SAMSum and 3.55 ± 1.24 for DialSumm.
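The Ling variant can be approximated with spaCy, as sketched below (our own illustration): the prefix runs up to and including the first NOUN (matched by POS tag) or the syntactic ROOT (matched by dependency label).

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def linguistic_prefix_len(sentence: str, feature: str = "NOUN") -> int:
    """Number of leading tokens up to and including the first matching token.

    feature: "NOUN" matches on the POS tag; "ROOT" matches on the dependency
    label. Falls back to the full sentence length if no match is found.
    """
    doc = nlp(sentence)
    for i, tok in enumerate(doc):
        if (feature == "ROOT" and tok.dep_ == "ROOT") or tok.pos_ == feature:
            return i + 1
    return len(doc)
```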
In Table 4, Ling performs the best among these variants, though the actual linguistic feature to use may vary from dataset to dataset. The remaining experiments are conducted with PGG-Ling.
Comparison to SOTA Models
Automatic Evaluation: Our model DialSent-PGG performs competitively against the other models on SAMSum and significantly better than its peers on DialSumm. It improves the Rouge scores over BART by about 1.5 points on both datasets, while DialoBART achieves smaller gains on DialSumm. Based on Table 2, DialSumm is a more difficult dataset with lower compression ratios. Our model performs better on samples with lower CR, i.e., more compressed samples, as shown in Figure 3, so the differences between DialSent-PGG and DialoBART are more pronounced on DialSumm. A simple case study is shown in Table 6. Multi-view suffers from the repetition problem, as it takes the dialogue as input twice with two encoders. DialoBART makes reasoning errors because it regards "William" as a keyword. DialSent-PGG instead generates a concise and correct summary. More cases are in Appendix D.
Human Evaluation: The overall human scores for BART, Multi-view, DialoBART, and DialSent-PGG are 0.35, 0.40, 0.43, and 0.55, respectively. The latter three models all improve over BART, with DialSent-PGG topping the ranks. The Fleiss Kappa among the three annotators is 0.39 (a Fleiss Kappa between 0.4 and 0.6 is considered moderate).
Multi-view performs better on content selection and DialSent-PGG performs better on reasoning and coreference understanding, while DialoBART lies in between. Fewer errors on Rea and Cor|Rea reflect that our approach successfully narrows the understanding gap. Because references are not the only good summary, high missing content doesn't mean that the generated summary is unacceptable. As a result, the model with fewer Cor|Rea errors receives higher overall score.
Implementation Costs: We compare the implementation costs of our approach and two state-of-the-art models, Multi-view and DialoBART, in Table 7. Although explicitly injecting features for dialogue understanding is effective, labels for these features are hard to collect, and the implementation costs of these approaches on a new dataset are high. Multi-view and DialoBART propose to do the labeling automatically with unsupervised algorithms or language models. However, these labeling approaches bring extra hyper-parameters which differ between datasets and need to be found by trial and error. If we directly reuse the keyword extraction ratio, similarity threshold, and topic segmentation ratio from SAMSum, the results on DialSumm are only 50.61/26.67/49.06 (Rouge-1/2/L). We searched for the best combination of hyper-parameters following their papers and needed 14 trials, while applying our approach to DialSumm needs only 4 trials.
On the other hand, injecting features increases the GPU memory requirement. With the same training parameters (max tokens = 1024, batch size = 1, gradient checkpointing = False), Multi-view with its double-encoder design runs out of memory on an RTX 2080Ti with 11G GPU memory. DialoBART occupies around 10.36G since it lengthens the dialogue with additional annotations. DialSent-PGG occupies only 9.87G during post-training, for recording the length of the prefix, and 9.65G during fine-tuning, the same as vanilla BART. In a word, our approach costs less to implement.
Conclusion
We propose to post-train dialogue summarization models to enhance their cross-format rephrasing ability, via prefix-guided generation training on dialogue-sentence pseudo-paraphrases, and obtain promising results. Creating self-supervised tasks for cross-format post-training and incorporating compatible features during downstream fine-tuning are plausible future directions.
A Related Work
Dialogue summarization and pretrained language models are discussed as follows.
Dialogue Summarization: A growing number of works have been proposed for dialogue summarization in recent years. In this work, we mainly refer to chat summarization as defined in (Feng et al., 2021a). Previous works widely explore dialogue features explicitly and feed them to the model as known labels to enhance the dialogue understanding ability of summarization models. Features including dialogue acts (Goo and Chen, 2018), topic transitions (Chen and Yang, 2020), discourse dependencies (Chen and Yang, 2021), coreference relations (Liu et al., 2021b), argument graphs (Fabbri et al., 2021), and semantic structures or slots (Lei et al., 2021; Zhao et al., 2021) are carefully designed and collected, either by transferring tools pre-trained on other corpora or by unsupervised methods with multiple hyper-parameters. These works also modify the basic transformer-based models with additional encoders (Chen and Yang, 2020) or attention layers (Chen and Yang, 2021; Liu et al., 2021b; Lei et al., 2021; Zhao et al., 2021) to utilize the injected features. Liu et al. (2021a) propose a contrastive learning approach for dialogue summarization with multiple training objectives; it also introduces a number of hyper-parameters for constructing the contrastive datasets and for balancing the objectives.
Pretrained Language Models: Previous pretrained seq-to-seq models can be divided into two categories by the format of their training data. One category is models pretrained on narrative text, such as BART (Lewis et al., 2020), PEGASUS (Zhang et al., 2020a), and T5 (Raffel et al., 2020). They use training data from Wikipedia, BookCorpus (Zhu et al., 2015), and C4 (Raffel et al., 2020), and show great potential on tasks such as translation and story ending generation. The other category is models pretrained on dialogue, such as DialoGPT (Zhang et al., 2020b) and PLATO (Bao et al., 2020). Their training data are general-domain dialogues, such as Reddit (Henderson et al., 2019) and Twitter (Cho et al., 2014), and these models work for dialogue response selection and generation tasks. All of the above models are trained to exploit language features within a single data format, with pre-training tasks such as masked token/sentence prediction and utterance permutation. Pretraining with cross-format data hasn't been researched so far. As a first step, we focus on narrowing the gap by learning to rephrase unidirectionally from dialogues to narratives.
B Implementation Details
We use BART⁷ as our basic language model. For both post-training and fine-tuning, the speakers and utterances of each dialogue are concatenated into a single sequence and truncated to the first 1024 tokens. The learning rate is set to 3e−5 with a weight decay of 0.01. The number of warmup steps is 500 and the dropout is 0.1. The model is tested on the corresponding validation set after each training epoch, and early stopping is activated if there is no improvement in the Rouge-2 F1 score. The early-stopping patience and the maximum number of training epochs are set to 3 and 10. During inference, i.e., validation and testing, the beam size is set to 4 with a length penalty of 1.0 and a no-repeat-n-gram size of 3. The minimum and maximum lengths are set according to the lengths of the reference summaries based on the statistics of each dataset, allowing free-length text generation. Besides, for inference on the validation set during the post-training stage, we also provide the first 3 tokens as the known prefix; this constant number enables a fair comparison of performance on the validation sets under different experimental settings. All of our experiments are run on an RTX 2080Ti with 11G GPU memory. We run each experiment three times and report the best results, following (Feng et al., 2021b).
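With the Hugging Face checkpoint from the footnote, the decoding configuration above corresponds roughly to the following call; the input string and the concrete min/max lengths are placeholders, since the paper sets the lengths from per-dataset reference statistics.

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

dialogue_text = "Liz: The flat is still available. Katarina: Great!"  # placeholder input

inputs = tokenizer(dialogue_text, truncation=True, max_length=1024,
                   return_tensors="pt")
summary_ids = model.generate(
    inputs["input_ids"],
    num_beams=4,
    length_penalty=1.0,
    no_repeat_ngram_size=3,
    min_length=10,   # placeholder: set from reference-length statistics
    max_length=100,  # placeholder: set from reference-length statistics
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```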
C Other Types of Paraphrase Datasets
To make the input and output carry the same amount of information, one way is to fix D as the input and convert the utterances into indirect speech as the output. Ganesh and Dingliwal (2019) restructured dialogues into text with complicated rules, which are not released and are difficult to transfer among datasets under different scenarios. Thus, we only use simple rules: we convert every utterance into [r_t says, "u_t"] and concatenate them as the output. We call this dataset DialIndirect.
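The conversion rule itself takes only a couple of lines; the tuple representation of the turns below is our own assumption.

```python
def to_indirect_speech(turns: list) -> str:
    """Concatenate [speaker says, "utterance"] segments, as in DialIndirect.

    turns: list of (speaker, utterance) string pairs.
    """
    return " ".join(f'{speaker} says, "{utterance}"' for speaker, utterance in turns)

# e.g. to_indirect_speech([("Liz", "The flat is still available.")])
# -> 'Liz says, "The flat is still available."'
```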
Another way is to fix S as the output and remove redundant utterances from D to obtain the rephrasing input. We take advantage of the idea of oracle extraction for news summarization (Zhou et al., 2018) and regard the combination of dialogue utterances with the highest Rouge scores computed against S as the input. Considering that utterances are highly interdependent, we modify the original extraction algorithm by also extracting all of the utterances lying between the extracted ones, different from the window-sized snippet selection in (Liu et al., 2021a). The datasets without and with this modification are called ExtSum and ExtSumM, respectively. A summary S can additionally be divided into sentences to construct more rephrasing pairs; performing the same extraction between D and p yields the ExtSent and ExtSentM datasets.
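The modified oracle extraction can be sketched as below: a greedy Rouge-driven selection of utterances, followed by filling in every utterance between the first and last selected ones (the ExtSumM modification). Both the greedy step and the toy `rouge_f1` helper are simplified stand-ins for the actual algorithm.

```python
def rouge_f1(candidate: str, reference: str) -> float:
    """Toy unigram-overlap F1, a stand-in for a real Rouge implementation."""
    c, r = candidate.split(), reference.split()
    overlap = len(set(c) & set(r))
    if not overlap:
        return 0.0
    p, rec = overlap / len(c), overlap / len(r)
    return 2 * p * rec / (p + rec)

def oracle_extract(utterances: list, summary: str, fill_between: bool = True):
    """Greedily pick utterances maximizing Rouge with the summary (ExtSum);
    with fill_between=True, also keep every utterance between the extracted
    ones (the ExtSumM modification)."""
    selected, best = set(), 0.0
    improved = True
    while improved:
        improved = False
        for i in range(len(utterances)):
            if i in selected:
                continue
            cand = selected | {i}
            text = " ".join(utterances[j] for j in sorted(cand))
            score = rouge_f1(text, summary)
            if score > best:
                best, selected, improved = score, cand, True
                break
    if fill_between and selected:
        lo, hi = min(selected), max(selected)
        selected = set(range(lo, hi + 1))
    return [utterances[i] for i in sorted(selected)]
```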
An example of the paraphrase pairs generated from the dialogue-summary pair in Figure 1 is shown in Table 8, and the statistics of the post-training datasets derived from SAMSum and DialSumm are shown in Table 9. We compare the performance of our two-stage approach with these rephrasing datasets against the fine-tuning-only BART in Table 10. DialIndirect performs incredibly well on SAMSum. However, if we use the converted dialogue as input and directly fine-tune the original BART, the results are only 50.91/28.51/50.25 for Rouge-1/2/L, which shows that, with the post-training stage, the model learns the relationships between speakers and utterances, and the boundaries of utterances, better than with a direct transformation of the dialogue inputs. This rule-based transformation falls behind the BART baseline on DialSumm. More complicated rules might lead to better results, but such labored work is not what we are after.
The extraction-based methods fall behind the others. The modification to the algorithm tends to bring more noise than useful information to the input, as its results drop the most. Besides, splitting the summary into sentences doesn't improve the results here. In a word, such hard extractions hurt the intricate discourse and coreference relations among utterances and are not suitable for cross-format data construction.

The case in Table 11 is a dialogue between three speakers from SAMSum. The labeled dialogues, extracted directly from Multi-view's and DialoBART's released datasets, are shown in Table 12. The "|" labels for Multi-view mark the topic transitions and the stage transitions of the same dialogue, respectively. We can see that the topic segments produced by Multi-view BART are reasonable; however, such linear segmentation is not quite suitable for this dialogue, since the first and third topics are the same. The "|" in DialoBART just marks the end of each utterance; DialoBART failed to label any topic transitions or redundant utterances.
Compared to the reference summary, the summary generated by BART loses the information about Greg's suggestion, and DialoBART loses the information about the medical insurance even though it recognized "medical insurance" as a keyword. Multi-view reasons incorrectly about who will call Linda. Our model generates a more condensed summary covering the same key points as the reference, with the original dialogue as input.
Another case from DialSumm, between two speakers, is shown in Table 13. BART incorrectly recognizes "him" in the second utterance as "#Person1#". DialoBART regards the man as "#Person1#'s friend", which isn't mentioned in the original dialogue. Our model, DialSent-PGG, generates a more accurate summary.
Figure 1: An example from the SAMSum dataset.

Figure 2: An illustration of our approach. BOS and EOS stand for the beginning and end of the sequence.
Figure 3: Comparison of the models on samples with different CR. The x-axis represents the ranges of CR (%).

Figure 4: Error analysis on SAMSum.
Table 1: Example pseudo-paraphrase pairs generated from the example in Figure 1. One pair in DSum becomes two pairs in DialSent. The prefix tokens determined by the linguistic features NOUN and ROOT are underlined and italicized, respectively.
Table 2: Statistics of the dialogue summarization datasets. IW, OW, and CR represent the number of input words, the number of output words, and the compression ratio (OW/IW), respectively.
Table 3: Ablations on DialSent with the PGG task.
Table 4: Ablations on prefix designs for PGG.
Table 5: Dialogue summarization results compared with the baselines. † marks the models implemented by ourselves. Underlined scores are statistically significantly better than BART with p < 0.05 based on a t-test.
Table 6: A case from SAMSum. Errors are in italic.

Multi-view:    ... is still angry and still angry.
DialoBART:     William and Emilia are still angry.
DialSent-PGG:  Emilia is still angry.
Table 7: The upper bound of the GPU memory footprint (Mem), the number of newly introduced hyper-parameters (#HP), the number of trials (#Tri), and the total training steps (#St) for implementing the different models.
Table 8: An illustration of the post-training pairs generated from the example in Figure 1. ExtSent and ExtSentM get the same training pairs in this case.
Table 9: Statistics of the constructed datasets. IW and OW refer to the number of words in the input and output of the corresponding dataset. DSum and DialSent are listed for easier comparison.

SAMSum
Datasets      Train/Val      IW       OW       CR
DialIndirect  14,731/818     124.10   157.41   1.31
ExtSum        14,731/818     31.23    23.44    0.94
ExtSumM       14,731/818     66.09    23.44    0.69
ExtSent       29,757/1,654   31.05    11.93    0.68
ExtSentM      29,757/1,654   46.45    11.93    0.60
DSum          14,731/818     124.10   23.44    0.25
DialSent      29,757/1,654   149.93   11.93    0.13

DialSumm
Datasets      Train/Val      IW       OW       CR
DialIndirect  12,460/500     187.52   215.30   1.16
ExtSum        12,460/500     44.43    30.02    0.84
ExtSumM       12,460/500     94.32    31.02    0.61
ExtSent       22,407/840     39.27    17.78    0.65
ExtSentM      22,407/840     61.17    17.78    0.56
DSum          12,460/500     187.52   31.02    0.18
DialSent      22,407/840     214.00   17.78    0.10
Table 10: Comparison among the different post-training approaches and the fine-tuning-only BART baseline on dialogue summarization.
DialSent with the PGG task outperforms the other methods and BART consistently across datasets, while DSum with PGG performs almost the same as BART. If we use the DialSent data to augment the original DSum during fine-tuning, the results on SAMSum are 44.61/22.81/44.15 for Rouge-1/2/L respectively, showing that the data in the two datasets are not compatible; our approach is thus different from data augmentation. Overall, post-training with the cross-format rephrasing intuition does help with dialogue summarization.

D Case Studies

We show more cases as follows.

Dialogue:
Kate: Hey, do you know if our medical insurance covers hospital costs?
Greg: Hm, it depends
Mel: What happened dear?
Kate: I broke my arm and they're sending me to the hospital :/
Greg: Call Linda or ask someone at the reception, they should be able to tell you what kind of package you have
Kate: thnx

Reference: Kate broke her arm and she's going to the hospital. She'd like to know whether her medical insurance covers hospital costs. Greg suggests her to call Linda or ask someone at the reception about it.

BART: Kate broke her arm and they're sending her to the hospital. Greg doesn't know if their medical insurance covers hospital costs. (53.33/37.93/53.19)

Multi-view: Kate broke her arm and they're sending her to the hospital. Greg will call Linda or ask someone at the reception to find out if their insurance covers hospital costs. (67.64/51.52/56.15)

DialoBART: Kate broke her arm and they're sending her to the hospital. Greg advises her to call Linda or ask someone at the reception. (65.57/50.85/67.62)

DialSent-PGG: Kate broke her arm and they're sending her to the hospital. Greg advises her to call Linda or ask someone at the reception if their insurance covers hospital costs. (71.64/55.38/62.39)
Table 11: A case from SAMSum. Names are in bold and unfaithful content is in italic. Rouge-1/2/L scores (%) are in parentheses.
Multi-view (Topic): Kate: Hey, do you know if our medical insurance covers hospital costs? Greg: Hm, it depends | Mel: What happened dear? Kate: I broke my arm and they're sending me to the hospital :/ | Greg: Call Linda or ask someone at the reception, they should be able to tell you what kind of package you have Kate: thnx |

Multi-view (Stage): | Kate: Hey, do you know if our medical insurance covers hospital costs? Greg: Hm, it depends Mel: What happened dear? | Kate: I broke my arm and they're sending me to the hospital :/ | Greg: Call Linda or ask someone at the reception, they should be able to tell you what kind of package you have Kate: thnx

DialoBART: Kate : Hey , do you know if our medical insurance covers hospital costs ? | Greg : Hm , it depends | Mel : What happened dear ? | Kate : I broke my arm and they're sending me to the hospital | Greg : Call Linda or ask someone at the reception , they should be able to tell you what kind of package you have | Kate : thnx #KEY# Mel Kate Greg Hey do you know if our medical insurance covers hospital costs happened dear Linda reception package
Table 12: Modified inputs by Multi-view and DialoBART.

"friends" isn't mentioned in the original dialogue. Our model, DialSent-PGG, generates a more accurate summary.

Dialogue: #Person1#: Like a cat on hot bricks, as you might say. I don't believe you are listening at all. #Person2#: Sorry, I just worried about him. You know, he should be here an hour ago. #Person1#: Don't worry him, he has been grown up and I think he can take himself very well. #Person2#: But he still does not come back. #Person1#: Maybe he is on the way home now.

Reference-1:

BART: #Person2# is worried about #Person1# because he hasn't come back from work. (43.48/28.57/50.01)
DialoBART: #Person2# is worried about #Person1#'s friend who hasn't come back. (45.45/30.00/51.87)
DialSent-PGG: #Person2# is worried about a boy who hasn't come back. (47.62/42.11/53.90)
Table 13: A case from DialSumm.
We use https://spacy.io/.
https://pypi.org/project/py-rouge/
DialSumm normalizes speaker names into "#Person1#", resulting in more tokens.
https://huggingface.co/facebook/bart-large
Acknowledgement

This research is partially supported by NSFC Grant No. 91646205, and SJTU-CMBCC Joint Research Scheme.
Cultural Shift or Linguistic Drift? Comparing Two Computational Measures of Semantic Change

William L. Hamilton, Jure Leskovec, and Dan Jurafsky (jurafsky@stanford.edu)
Department of Computer Science, Stanford University, Stanford, CA 94305

In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, Texas, November 1-5, 2016. Association for Computational Linguistics.
DOI: 10.18653/v1/d16-1229 | arXiv: 1606.02821 | PDF: https://www.aclweb.org/anthology/D16-1229.pdf

Abstract

Words shift in meaning for many reasons, including cultural factors like new technologies and regular linguistic processes like subjectification. Understanding the evolution of language and culture requires disentangling these underlying causes. Here we show how two different distributional measures can be used to detect two different types of semantic change. The first measure, which has been used in many previous works, analyzes global shifts in a word's distributional semantics; it is sensitive to changes due to regular processes of linguistic drift, such as the semantic generalization of promise ("I promise." → "It promised to be exciting."). The second measure, which we develop here, focuses on local changes to a word's nearest semantic neighbors; it is more sensitive to cultural shifts, such as the change in the meaning of cell ("prison cell" → "cell phone"). Comparing measurements made by these two methods allows researchers to determine whether changes are more cultural or linguistic in nature, a distinction that is essential for work in the digital humanities and historical linguistics.
Introduction
Distributional methods of embedding words in vector spaces according to their co-occurrence statistics are a promising new tool for diachronic semantics (Gulordava and Baroni, 2011; Jatowt and Duh, 2014; Kulkarni et al., 2014; Xu and Kemp, 2015; Hamilton et al., 2016). Previous work, however, does not consider the underlying causes of semantic change or how to disentangle different types of change.
We show how two computational measures can be used to distinguish between semantic changes caused by cultural shifts (e.g., technological advancements) and those caused by more regular processes of semantic change (e.g., grammaticalization or subjectification). This distinction is essential for research on linguistic and cultural evolution. Detecting cultural shifts in language use is crucial to computational studies of history and other digital humanities projects. By contrast, for advancing historical linguistics, cultural shifts amount to noise and only the more regular shifts matter.
Our work builds on two intuitions: that distributional models can highlight syntagmatic versus paradigmatic relations with neighboring words (Schutze and Pedersen, 1993) and that nouns are more likely to undergo changes due to irregular cultural shifts while verbs more readily participate in regular processes of semantic change (Gentner and France, 1988;Traugott and Dasher, 2001). We use this noun vs. verb mapping as a proxy to compare our two measures' sensitivities to cultural vs. linguistic shifts. Sensitivity to nominal shifts indicates a propensity to capture irregular cultural shifts in language, such as those due to technological advancements (Traugott and Dasher, 2001). Sensitivity to shifts in verbs (and other predicates) indicates a propensity to capture regular processes of linguistic drift (Gentner and France, 1988;Kintsch, 2000;Traugott and Dasher, 2001).
The first measure we analyze is based upon changes to a word's local semantic neighborhood;
we show that it is more sensitive to changes in the nominal domain and captures changes due to unpredictable cultural shifts. Our second measure relies on a more traditional global notion of change; we show that it better captures changes, like those in verbs, that are the result of regular linguistic drift. Our analysis relies on a large-scale statistical study of six historical corpora in multiple languages, along with case-studies that illustrate the fine-grained differences between the two measures.
Methods
We use the diachronic word2vec embeddings constructed in our previous work (Hamilton et al., 2016) to measure how word meanings change between consecutive decades. 1 In these representations each word $w_i$ has a vector representation $\mathbf{w}_i^{(t)}$ (Turney and Pantel, 2010) at each time point, which captures its co-occurrence statistics for that time period. The vectors are constructed using the skip-gram with negative sampling (SGNS) algorithm (Mikolov et al., 2013) and post-processed to align the semantic spaces between years. Measuring the distance between word vectors for consecutive decades allows us to compute the rate at which the different words change in meaning (Gulordava and Baroni, 2011).
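This alignment post-processing can be implemented as an orthogonal Procrustes rotation of one decade's embedding matrix onto the next, which is the approach taken in Hamilton et al. (2016). Below is a minimal numpy sketch, assuming the rows of both matrices index the same vocabulary:

```python
import numpy as np

def align_embeddings(A, B):
    """Rotate embedding matrix A (decade t) onto B (decade t+1).

    A, B: (vocab_size, dim) arrays over a shared vocabulary.
    Returns A @ R, where R is the orthogonal matrix minimising the
    Frobenius norm ||A R - B|| (the orthogonal Procrustes solution).
    """
    u, _, vt = np.linalg.svd(A.T @ B)
    return A @ (u @ vt)
```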
We analyzed the decades from 1800 to 1990 using vectors derived from the Google N-gram datasets (Lin et al., 2012) that have large amounts of historical text (English, French, German, and English Fiction). We also used vectors derived from the Corpus of Historical American English (COHA), which is smaller than Google N-grams but was carefully constructed to be genre balanced and contains word lemmas as well as surface forms (Davies, 2010). We examined all decades from 1850 through 2000 using the COHA dataset and used the part-of-speech tags provided with the corpora.
Measuring semantic change
We examine two different ways to measure semantic change (Figure 1).
Global measure
The first measure analyzes global shifts in a word's vector semantics and is identical to the measure used in most previous works (Gulordava and Baroni, 2011;Jatowt and Duh, 2014;Kim et al., 2014;Hamilton et al., 2016). We simply take a word's vectors for two consecutive decades and measure the cosine distance between them, i.e.
$$d_G(\mathbf{w}_i^{(t)}, \mathbf{w}_i^{(t+1)}) = \text{cos-dist}(\mathbf{w}_i^{(t)}, \mathbf{w}_i^{(t+1)}). \qquad (1)$$

Figure 2: The global measure is more sensitive to semantic changes in verbs while the local neighborhood measure is more sensitive to noun changes. Examining how much nouns change relative to verbs (using coefficients from mixed-model regressions) reveals that the two measures are sensitive to different types of semantic change. Across all languages, the local neighborhood measure always assigns relatively higher rates of change to nouns (i.e., the right/green bars are lower than the left/blue bars for all pairs), though the results vary by language (e.g., French has high noun change-rates overall). 95% confidence intervals are shown. (The plot shows the (verb - noun) difference in change rates for English (All), English (Fic.), German, French, COHA (word), and COHA (lemma), with one bar per measure.)
Local neighborhood measure
The second measure is based on the intuition that only a word's nearest semantic neighbors are relevant. For this measure, we first find word $w_i$'s set of $k$ nearest neighbors (according to cosine similarity) within each decade, which we denote by the ordered set $N_k(\mathbf{w}_i^{(t)})$. Next, to measure the change between decades $t$ and $t+1$, we compute a "second-order" similarity vector for $\mathbf{w}_i^{(t)}$ from these neighbor sets, with entries defined as

$$s^{(t)}(j) = \text{cos-sim}(\mathbf{w}_i^{(t)}, \mathbf{w}_j^{(t)}) \quad \forall w_j \in N_k(\mathbf{w}_i^{(t)}) \cup N_k(\mathbf{w}_i^{(t+1)}), \qquad (2)$$
and we compute an analogous vector $s^{(t+1)}$ for $\mathbf{w}_i^{(t+1)}$. This vector contains the cosine similarity of $w_i$ and the vectors of all of $w_i$'s nearest semantic neighbors in the time periods $t$ and $t+1$. Working with variants of these second-order vectors has been a popular approach in many recent works, though most of these works define these vectors against the full vocabulary and not just a word's nearest neighbors (del Prado Martin and Brendel, 2016; Eger and Mehler, 2016; Rodda et al., 2016).
Finally, we compute the local neighborhood change as
$$d_L(\mathbf{w}_i^{(t)}, \mathbf{w}_i^{(t+1)}) = \text{cos-dist}(s_i^{(t)}, s_i^{(t+1)}). \qquad (3)$$
This measures the extent to which $w_i$'s similarity with its nearest neighbors has changed. The local neighborhood measure defined in (3) captures strong shifts in a word's paradigmatic relations but is less sensitive to global shifts in syntagmatic contexts (Schutze and Pedersen, 1993). We used $k = 25$ in all experiments (though we found the results to be consistent for $k \in [10, 50]$).
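Both measures reduce to cosine distances over appropriately chosen vectors. The following numpy sketch of Equations (1)-(3) is illustrative only; the dictionary-based neighbor search is far slower than the vectorised operations one would use on a real vocabulary:

```python
import numpy as np

def cos_sim(a, b):
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def global_change(w_t, w_t1):
    """Equation (1): cosine distance between a word's aligned
    vectors in decades t and t+1."""
    return 1.0 - cos_sim(w_t, w_t1)

def local_change(word, emb_t, emb_t1, k=25):
    """Equations (2)-(3): local neighborhood measure d_L.

    emb_t, emb_t1: dicts mapping words to vectors in decades t and
    t+1; the same vocabulary is assumed in both decades.
    """
    def neighbors(emb):
        sims = {w: cos_sim(emb[word], v) for w, v in emb.items() if w != word}
        return sorted(sims, key=sims.get, reverse=True)[:k]

    # Union of the k nearest neighbors from both decades (Eq. 2).
    union = sorted(set(neighbors(emb_t)) | set(neighbors(emb_t1)))
    s_t = np.array([cos_sim(emb_t[word], emb_t[w]) for w in union])
    s_t1 = np.array([cos_sim(emb_t1[word], emb_t1[w]) for w in union])
    # Equation (3): cosine distance between the second-order vectors.
    return 1.0 - cos_sim(s_t, s_t1)
```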
Statistical methodology
To test whether nouns or verbs change more according to our two measures of change, we build on our previous work and used a linear mixed model approach (Hamilton et al., 2016). This approach amounts to a linear regression where the model also includes "random" effects to account for the fact that the measurements for individual words will be correlated across time (McCulloch and Neuhaus, 2001). We ran two regressions per dataset: one with the global $d_G$ values as the dependent variables (DVs) and one with the local neighborhood $d_L$ values. In both cases we examined the change between all consecutive decades and normalized the DVs to zero mean and unit variance. We examined nouns/verbs within the top-10000 words by frequency rank and removed all words that occurred <500 times in the smaller COHA dataset. The independent variables are word frequency, the decade of the change (represented categorically), and a variable indicating whether a word is a noun or a verb (proper nouns are excluded, as in Hamilton et al., 2016). 2
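Such a regression can be expressed, for example, with the linear mixed-effects models in statsmodels. The sketch below assumes a data frame with one row per word and decade transition; the column names are illustrative:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: 'change' (normalized d_G or d_L value), 'log_freq',
# 'decade' (categorical), 'is_noun' (0/1 indicator), and 'word'.
df = pd.read_csv("change_scores.csv")

# A random intercept per word accounts for the fact that repeated
# measurements of the same word are correlated across decades.
model = smf.mixedlm("change ~ log_freq + C(decade) + is_noun",
                    data=df, groups=df["word"])
result = model.fit()
print(result.summary())
```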
Results
Our results show that the two seemingly related measures actually result in drastically different notions of semantic change.
Nouns vs. verbs
The local neighborhood measure assigns far higher rates of semantic change to nouns across all languages and datasets while the opposite is true for the global distance measure, which tends to assign higher rates of change to verbs (Figure 2). We focused on verbs vs. nouns since they are the two major parts-of-speech and previous research has shown that verbs are more semantically mutable than nouns and thus more likely to undergo linguistic drift (Gentner and France, 1988), while nouns are far more likely to change due to cultural shifts like new technologies (Traugott and Dasher, 2001). However, some well-known regular linguistic shifts include rarer parts of speech like adverbs (included in our case studies below). Thus we also confirmed that the differences shown in Figure 2 also hold when adverbs and adjectives are included along with the verbs. This modified analysis showed analogous significant trends, which fits with previous research arguing that adverbial and adjectival modifiers are also often the target of regular linguistic changes (Traugott and Dasher, 2001).
The results of this large-scale regression analysis show that the local measure is more sensitive to changes in the nominal domain, a domain in which change is known to be driven by cultural factors. In contrast, the global measure is more sensitive to changes in verbs, along with adjectives and adverbs, which are known to be the targets of many regular processes of linguistic change (Traugott and Dasher, 2001; Hopper and Traugott, 2003).
Case studies
We examined six case-study words grouped into two sets. These case studies show that three examples of well-attested regular linguistic shifts (set A) changed more according to the global measure, while three well-known examples of cultural changes (set B) changed more according to the local neighborhood measure. Table 2 lists these words with some representative historical contexts (Davies, 2010). Set A contains three words that underwent attested regular linguistic shifts detailed in Traugott and Dasher (2001): actually, must, and promise. These three words represent three different types of regular linguistic shifts: actually is a case of subjectification (detailed in Figure 1); must shifted from a deontic/obligation usage ("you must do X") to an epistemic one ("X must be the case"), exemplifying a regular pattern of change common to many modal verbs; and promise represents the class of shifting "performative speech acts" that undergo rich changes due to their pragmatic uses and subjectification (Traugott and Dasher, 2001). The contexts listed in Table 2 exemplify these shifts.
Set B contains three words that were selected because they underwent well-known cultural shifts over the last 150 years: gay, virus, and cell. These words gained new meanings due to uses in community-specific vernacular (gay) or technological advances (virus, cell). The cultural shifts underlying these changes in usage -- e.g., the development of the mobile "cell phone" -- were unpredictable in the sense that they were not the result of regularities in human linguistic systems. Figure 3 shows how much the meaning of these words changed from the 1850s to the 1990s according to the two different measures on the English Google data. We see that the words in set A changed more when measurements were made using the global measure, while the opposite holds for set B.
Discussion
Our results show that our novel local neighborhood measure of semantic change is more sensitive to changes in nouns, while the global measure is more sensitive to changes in verbs. This mapping aligns with the traditional distinction between irregular cultural shifts in nominals and more regular cases of linguistic drift (Traugott and Dasher, 2001) and is further reinforced by our six case studies.
This finding emphasizes that researchers must develop and use measures of semantic change that are tuned to specific tasks. For example, a cultural change-point detection framework would be more successful using our local neighborhood measure, while an empirical study of grammaticalization would be better off using the traditional global distance approach. Comparing measurements made by these two approaches also allows researchers to assess the extent to which semantic changes are linguistic or cultural in nature.
Figure 1: Two different measures of semantic change. With the global measure of change, we measure how far a word has moved in semantic space between two time-periods. This measure is sensitive to subtle shifts in usage and also global effects due to the entire semantic space shifting. For example, this captures how actually underwent subjectification during the 20th century, shifting from uses in objective statements about the world ("actually did try") to subjective statements of attitude ("I actually agree"; see Traugott and Dasher, 2001 for details). In contrast, with the local neighborhood measure of change, we measure changes in a word's nearest neighbors, which captures drastic shifts in core meaning, such as gay's shift in meaning over the 20th century.
Figure 3: The global measure captures classic examples of linguistic drift while the local measure captures example cultural shifts. Examining the semantic distance between the 1850s and 1990s shows that the global measure is more sensitive to regular shifts (and vice-versa for the local measure). The plot shows the difference between the measurements made by the two methods (global - local change) for each word, grouping actually, must, and promise as regular linguistic shifts, and gay, virus, and cell as irregular cultural shifts.
Table 1: Number of nouns and verbs tested in each dataset.

Table 2: Example case-studies of semantic change. The first three words are examples of regular linguistic shifts, while the latter three are examples of words that shifted due to exogenous cultural factors. Contexts are from the COHA data (Davies, 2010).

| Word | 1850s context | 1990s context |
| actually | "...dinners which you have actually eaten." | "With that, I actually agree." |
| must | "O, George, we must have faith." | "Which you must have heard ten years ago..." |
| promise | "I promise to pay you..." | "...the day promised to be lovely." |
| gay | "Gay bridals and other merry-makings of men." | "...the result of gay rights demonstrations." |
| virus | "This young man is...infected with the virus." | "...a rapidly spreading computer virus." |
| cell | "The door of a gloomy cell..." | "They really need their cell phones." |
http://nlp.stanford.edu/projects/histwords/. This URL also links to detailed dataset descriptions and the code needed to replicate the experiments in this paper.
Acknowledgements

The authors thank C. Manning, V. Prabhakaran, S. Kumar, and our anonymous reviewers for their helpful comments. This research has been supported in part by NSF CNS-1010921, IIS-1149837, IIS-1514268, NIH BD2K, ARO MURI, DARPA XDATA, DARPA SIMPLEX, Stanford Data Science Initiative, SAP Stanford Graduate Fellowship, NSERC PGS-D, Boeing, Lightspeed, and Volkswagen.
References

Mark Davies. 2010. The Corpus of Historical American English: 400 million words, 1810-2009. http://corpus.byu.edu/coha/.

Fermin Moscoso del Prado Martin and Christian Brendel. 2016. Case and Cause in Icelandic: Reconstructing Causal Networks of Cascaded Language Changes. In Proc. ACL.

Steffen Eger and Alexander Mehler. 2016. On the Linearity of Semantic Change: Investigating Meaning Variation via Dynamic Graph Models. In Proc. ACL.

Dedre Gentner and Ilene M. France. 1988. The verb mutability effect: Studies of the combinatorial semantics of nouns and verbs. Lexical ambiguity resolution: Perspectives from psycholinguistics, neuropsychology, and artificial intelligence, pages 343-382.

Kristina Gulordava and Marco Baroni. 2011. A distributional similarity approach to the detection of semantic change in the Google Books Ngram corpus. In Proc. GEMS 2011 Workshop on Geometrical Models of Natural Language Semantics, pages 67-71. Association for Computational Linguistics.

William L. Hamilton, Jure Leskovec, and Dan Jurafsky. 2016. Diachronic Word Embeddings Reveal Statistical Laws of Semantic Change. In Proc. ACL.

Paul J. Hopper and Elizabeth Closs Traugott. 2003. Grammaticalization. Cambridge University Press, Cambridge, UK.

Adam Jatowt and Kevin Duh. 2014. A framework for analyzing semantic change of words across time. In Proc. 14th ACM/IEEE-CS Conf. on Digital Libraries, pages 229-238. IEEE Press.

Yoon Kim, Yi-I Chiu, Kentaro Hanaki, Darshan Hegde, and Slav Petrov. 2014. Temporal analysis of language through neural language models. arXiv preprint arXiv:1405.3515.

Walter Kintsch. 2000. Metaphor comprehension: A computational theory. Psychon. Bull. Rev., 7(2):257-266.

Vivek Kulkarni, Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2014. Statistically significant detection of linguistic change. In Proc. 24th WWW Conf., pages 625-635.

Yuri Lin, Jean-Baptiste Michel, Erez Lieberman Aiden, Jon Orwant, Will Brockman, and Slav Petrov. 2012. Syntactic annotations for the Google Books Ngram corpus. In Proc. ACL System Demonstrations.

Charles E. McCulloch and John M. Neuhaus. 2001. Generalized linear mixed models. Wiley-Interscience, Hoboken, NJ.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In NIPS.

Martina Rodda, Marco Senaldi, and Alessandro Lenci. 2016. Panta rei: Tracking Semantic Change with Distributional Semantics in Ancient Greek. In Italian Conference of Computational Linguistics.

Hinrich Schutze and Jan Pedersen. 1993. A vector model for syntagmatic and paradigmatic relatedness. In Proc. 9th Annu. Conf. of the UW Centre for the New OED and Text Research, pages 104-113. Citeseer.

Elizabeth Closs Traugott and Richard B. Dasher. 2001. Regularity in Semantic Change. Cambridge University Press, Cambridge, UK.

Peter D. Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. J. Artif. Intell. Res., 37(1):141-188.

Yang Xu and Charles Kemp. 2015. A computational evaluation of two laws of semantic change. In Proc. 37th Annu. Conf. Cogn. Sci. Soc.
Multimodal Machine Translation through Visuals and Speech

Umut Sulubacak, Ozan Caglayan, Stig-Arne Grönroos, Aku Rouhe, Desmond Elliott, Lucia Specia, and Jörg Tiedemann
DOI: 10.1007/s10590-020-09250-0 | arXiv: 1911.12798 | PDF: https://arxiv.org/pdf/1911.12798v1.pdf

Abstract

Multimodal machine translation involves drawing information from more than one modality, based on the assumption that the additional modalities will contain useful alternative views of the input data. The most prominent tasks in this area are spoken language translation, image-guided translation, and video-guided translation, which exploit audio and visual modalities, respectively. These tasks are distinguished from their monolingual counterparts of speech recognition, image captioning, and video captioning by the requirement of models to generate outputs in a different language. This survey reviews the major data resources for these tasks, the evaluation campaigns concentrated around them, the state of the art in end-to-end and pipeline approaches, and also the challenges in performance evaluation. The paper concludes with a discussion of directions for future research in these areas: the need for more expansive and challenging datasets, for targeted evaluations of model performance, and for multimodality in both the input and output space.

1 Introduction

An overview is given in Figure 1, outlining the major tasks of spoken language translation (SLT) (Akiba et al, 2004), image-guided translation (IGT) (Elliott et al, 2015; Specia et al, 2016), and video-guided translation (VGT) (Sanabria et al, 2018; Wang et al, 2019b).

Figure 1: Multimodal machine translation tasks: image-guided translation (IGT), video-guided translation (VGT), and spoken language translation (SLT), shown in contrast to unimodal translation tasks, such as text-based machine translation (MT) and speech-to-speech translation (S2S), and multimodal NLP tasks that do not involve translation, such as automatic speech recognition (ASR), image captioning (IC), and video description (VD).
Today, the rising interest in MMT is largely driven by the state-of-the-art performance and the architectural flexibility of neural sequence-to-sequence models (Sutskever et al, 2014; Bahdanau et al, 2015; Vaswani et al, 2017). This flexibility, which is due to the end-to-end nature of these approaches, has the potential of bringing the vision, speech and language processing communities back together. From a historical point of view, however, there was already a great deal of interest in doing machine translation (MT) with non-text modalities, even before the arrival of successful statistical machine translation models. Among the earliest attempts is the Automatic Interpreting Telephony Research project (Morimoto, 1990), a 1986 proposal that aimed at implementing a pipeline of automatic speech recognition, rule-based machine translation, and speech synthesis, making up a full speech-to-speech translation system. Further research has led to several other speech-to-speech translation systems (Lavie et al, 1997; Takezawa et al, 1998; Wahlster, 2000).
In contrast, the use of visual modality in translation has not attracted comparable interest until recently. At present, there is a variety of multimodal task formulations including some form of machine translation, involving image captions, instructional text with photographs, video recordings of sign language, subtitles for videos (and especially movies), and descriptions of video scenes. As a consequence, modern multimodal MT studies dealing with visual (or audiovisual) information are becoming as prominent as those tackling audio. We believe that multimodal MT is a better reflection of how humans acquire and process language, with many theoretical advantages in language grounding over text-based MT as well as the potential for new practical applications like cross-modal cross-lingual information retrieval (Gella et al, 2017;Kádár et al, 2018).
In the following, we will provide a detailed description of MMT tasks and approaches that have been proposed in the past. Section 2 contains an overview of the tasks of spoken language translation, image-guided translation and videoguided translation. Section 3 reviews the methods and caveats of evaluating MT performance, and discusses prominent evaluation campaigns, while Section 4 contains an overview of major datasets that can be used as training or test corpora. Section 5 discusses the state-of-the-art models and approaches in MMT, especially focusing on image-guided translation and spoken language translation. Section 6 outlines fruitful directions of future research in multimodal MT.
2 Tasks
While our definition of multimodal machine translation excludes both cross-modal conversion tasks with no crosslinguality (e.g. automatic speech recognition and video description), and machine translation tasks within a single modality (e.g. text-to-text and speech-to-speech translation), it is still general enough to accommodate a fair variety of tasks. Some of these tasks such as spoken language translation (SLT) and continuous sign language recognition (CSLR) meet the criteria because their source and target languages are, by definition, expressed through different modes. Other tasks like image-guided translation (IGT) and video-guided translation (VGT) are included on the grounds that they complement the source language with related visuals that constitute an extra modality. In some cases, a wellestablished multimodal machine translation task can be characterised by methodological constraints (e.g. simultaneous interpretation), or by domain and semantics (e.g. video description translation).
We observe that a shared modality composition is the foremost prerequisite that dictates the applicability of data, approaches and methodologies across multimodal translation tasks. For this reason, further in this article, we classify the studies we have surveyed according to the modality composition involved. We also restrict the scope of our discussions to the more well-recognised cases that involve audio and/or visual data in addition to text. In the following subsections, we explain our use of the terms spoken language translation, image-guided translation, and video-guided translation, and provide further discussions for each of these tasks.
2.1 Spoken language translation
Spoken language translation (SLT), also known as speech-to-text translation or automatic speech translation, comprises the translation of speech in a source language to text in a target language. As such, it differs from conventional MT in the source-side modality. The need to simultaneously perform both modality conversion and translation means that systems must learn a complex input-output mapping, which poses a significant challenge. The SLT task has been shaped by a number of influential early works (e.g. Vidal, 1997;Ney, 1999), and championed by the speech translation tasks of the IWSLT evaluation campaign since 2004 (see Section 3.2.2).
Traditionally, SLT was addressed by a pipeline approach (see Section 5 for more details), effectively separating multimodal MT into modality conversion followed by unimodal MT. More recently, end-to-end systems have been proposed, often based on NMT architectures, where the source language audio sequence is directly converted to the target language text sequence (Weiss et al, 2017; Bérard et al, 2018). Despite the short time during which end-to-end approaches have been developed, they have been rapidly closing the gap with the dominant paradigm of pipeline systems. The current state of end-to-end systems is discussed further in Section 5.2.3.
2.2 Image-guided translation
Image-guided translation can be defined as a contextual grounding task, where, given a set of images and associated documents, the aim is to enhance the translation of the documents by leveraging their semantic correspondence to the images. Resolving ambiguities through visual cues is one of the main motivating forces behind this task.
A well-known realisation of IGT is image caption translation, where the correspondence is related to sentences being the descriptions of the images. Initial attempts at image caption translation were mostly pipeline approaches: Elliott et al (2015) proposed a pipeline of visually conditioned neural language models, while Hitschler et al (2016) approached the problem from a multimodal retrieval and reranking perspective. With the introduction of the WMT multimodal translation shared task (see Section 3.2.1), IGT attracted a lot more attention from the research community. Today, the prominent approaches rely on visually conditioning end-to-end neural MT systems with visual features extracted from state-of-the-art pretrained CNNs.
Although the utility of the visual modality has recently been disputed under specific dataset and task conditions (Elliott, 2018), using images when translating captions is theoretically very advantageous for handling grammatical characteristics (e.g. noun genders) when translating between dissimilar languages, and for resolving translational ambiguities. Moreover, recent work shows how state-of-the-art models become capable of leveraging the visual signal when source captions are deliberately deteriorated in a simulated low-resource scenario. We discuss the current state of the art and the predominant approaches in IGT in Section 5.1.
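To illustrate the feature extraction step, global visual features can be obtained from a pretrained CNN and projected to the dimensionality of the translation model. The PyTorch/torchvision sketch below is one possible realisation, not a fixed recipe; the projection size of 512 and the way the feature would be consumed by the NMT decoder are assumptions:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained ResNet-50 with the classification head removed, so the
# forward pass yields a 2048-d global feature per image. (Recent
# torchvision versions use the weights= argument instead of pretrained=.)
resnet = models.resnet50(pretrained=True)
feature_extractor = torch.nn.Sequential(*list(resnet.children())[:-1]).eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    feat = feature_extractor(image).flatten(1)  # shape: (1, 2048)

# Project to the hidden size of the NMT model, e.g. to initialise the
# decoder state or to be mixed into the attention mechanism.
project = torch.nn.Linear(2048, 512)
visual_context = torch.tanh(project(feat))
```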
2.3 Video-guided translation
We posit the task of video-guided translation (VGT) as a multimodal machine translation task similar to image-guided translation, but tackling video clips (and potentially audio clips as well) rather than static images associated with the textual input. Within video-guided translation, there can be variants depending on the textual content. The source text can be transcripts of speech from the video, which would be typically segmented as standard subtitles, or a textual description of the visual scene or an action demonstrated in the clip, often created for visually impaired people. As such, video-guided translation can be subject to particular challenges from both SLT (time-variant audiovisual input) and IGT (indirect correspondence between source modalities). On the other hand, these similarities could also indicate that it might be possible to adapt or reuse approaches from both of those areas to bootstrap VGT systems.
One major challenge hindering progress in video-guided translation is the relative scarcity of datasets. While a large collection such as the OpenSubtitles corpus 1 (Lison and Tiedemann, 2016) can provide access to a considerable amount of parallel subtitles, there is no attached audiovisual content since the corresponding movies are not freely available. Recent efforts to compile freely accessible data for video-guided translation, like the How2 (Sanabria et al, 2018) and
VaTeX datasets (both described in Section 4.3) have started to alleviate this bottleneck. Although there has been decidedly little time to observe the full impact of such initiatives, we hope that they will inspire further research in video-guided translation.
3 Evaluation
Evaluating the performance of a machine translation system is a difficult and controversial problem. Typically, there are numerous ways of translating even a single sentence which would be acceptably produced by human translators (or systems), and it is often unclear which one is (or which ones are) good or better, and in what respect, given that the pertinent evaluation criteria are multi-dimensional, context-dependent, and highly subjective (see for example Chesterman and Wagner, 2002;Drugan, 2013). Traditionally, human analysis of translation quality has often been divided into the evaluation of adequacy (semantic transfer from source language) and fluency (grammatical soundness of target language) (Doherty, 2017). While this separation is considered somewhat artificial, it was created to make evaluation simpler and to allow comparison of translation systems in more specific terms. In practice, systems that are good at one criterion tend to be good at the other, and a lot of the more recent evaluation campaigns have focused on directly ranking systems for general quality rather than scoring individual systems on these criteria (relative ranking), or scoring systems for general quality instead (direct assessment).
Since human evaluation comes with considerable monetary and time costs (Castilho et al, 2018), evaluation efforts have converged on devising automatic metrics in recent years, which typically operate by comparing the output of a translation system against one or more human translations. While a number of metrics have been proposed over the last two decades, they are mostly based on statistics computed between the translation hypothesis and one or more references. Procuring reference translations in itself entails some costs, and any metrics and approaches that require multiple references to work well may therefore not be feasible for common use. Further in this section, we discuss the details of some of the dominant evaluation metrics as well as the most well-known shared tasks of multimodal MT that serve as standard evaluation settings to facilitate research.
3.1 Metrics
Among the various MT evaluation metrics in the literature, the most commonly used ones are BLEU (Papineni et al, 2001), METEOR (Lavie and Agarwal, 2007; Denkowski and Lavie, 2014) and TER (Snover et al, 2006). To summarise them briefly, BLEU is based on an aggregate precision measure of n-gram matches between the reference(s) and the machine translation, and penalises translations that are too short. METEOR accounts for and gives partial credit to stem, synonym, and paraphrase matches, and considers both precision and recall with configurable weights for both criteria. TER is a variant of word-level edit distance between the machine translation and the reference, with an added operation for shifting one or more adjacent words. BLEU is by far the most commonly used automatic evaluation metric, despite its relative simplicity. Most quantitative comparisons of machine translation systems are reported using only BLEU scores. METEOR has been shown to correlate better with human judgements (especially for adequacy) due to both its flexibility in string matching and its better balance between precision and recall, but its dependency on linguistic resources makes it less applicable in the general case. Both BLEU and METEOR, much like the majority of other evaluation metrics developed so far, are reference-based metrics. These metrics are inadvertently heavily biased towards the translation styles that they see in the reference data, and end up penalising any alternative phrasing that might be equally correct (Fomicheva and Specia, 2016).
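As a concrete reference point, corpus-level BLEU and TER of the kind discussed here can be computed with the sacrebleu package. The sketch below is illustrative; METEOR is not included in sacrebleu and would require a separate tool:

```python
import sacrebleu  # pip install sacrebleu

hypotheses = ["the cat sat on the mat ."]
references = [["a cat was sitting on the mat ."]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
ter = sacrebleu.corpus_ter(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}, TER = {ter.score:.2f}")
```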
Human evaluation is the optimal choice when a trustworthy measure of translation quality is needed and resources to perform it are available. The usual strategies for human evaluation are fluency and adequacy rankings, direct assessment (DA) (Graham et al, 2013), and post-editing evaluation (PE) (Snover et al, 2006). Fluency and adequacy rankings are conventionally between 1-5, while DA is a general scale between 0-100 indicating how "good" the translation is, either with respect to the original sentence in the source language (DA-src), or the ground truth translation in the target language (DA-ref). On the other hand, in PE, human annotators are asked to correct translations by changing the words and the ordering as little as possible, and the rest of the evaluation is based on an automatic edit distance measure between the original and post-edited translations, or other metrics such as post-editing time and keystrokes. For pragmatic reasons, these human evaluation methods are typically crowdsourced to non-expert annotators to reduce costs. While this may still result in consistent evaluation scores if multiple crowd annotators are considered, it is a well-accepted fact that professional translators capture more details and are generally better judges than non-expert speakers (Bentivogli et al, 2018).
The problems recognised even in human evaluation methods substantiate the notion that no metric is perfect. In fact, evaluation methods are an active research subject in their own right (Ma et al, 2018, 2019). However, there is currently little research on developing evaluation approaches specifically tailored to multimodal translation. Fully-automatic evaluation is typically text-based, while methods that go beyond the text rely on manually annotated resources, and could rather be considered semi-automatic. One such method is multimodal lexical translation (MLT), which is a measure of translation accuracy for a set of ambiguous words given their textual context and an associated image that allows visual disambiguation. Even in human evaluation there are only a few examples where the evaluation is multimodal, such as the addition of images in the evaluation of image caption translations via direct assessment (Barrault et al, 2018), or via qualitative comparisons of post-editing. Having consistent methods to evaluate how well translation systems take multimodal data into account would make it possible to identify bottlenecks and facilitate future development. One possible promising direction is the work of Madhyastha et al (2019) for image captioning evaluation, where the content of the image is directly taken into account via the matching of detected objects in the image and concepts in the generated caption.
3.2 Shared tasks
A great deal of research into developing natural language processing systems is made in preparation for shared tasks under academic conferences and workshops, and the relatively new subject of multimodal machine translation is not an exception. These shared tasks lay out a specific experimental setting for which participants submit their own systems, often developed using the training data provided by the campaign. Currently, there are not many datasets encompassing both multiple languages and multiple modalities that are also of sufficiently high quality and large size, and available for research purposes. However, multilingual datasets that augment text with only speech or only images are somewhat less rare than those with videos, given their utility for tasks such as automatic speech recognition and image captioning.
Adding parallel text data in other languages enables such datasets to be used for spoken language translation and image-guided translation, both of which are represented in shared tasks organised by the machine translation community. The Conference on Machine Translation (WMT) ran three shared tasks for image caption translation from 2016-2018, and the International Workshop on Spoken Language Translation (IWSLT) has led an annual evaluation campaign on speech translation since 2004.
3.2.1 Image-guided translation: WMT multimodal translation task
The Conference on Machine Translation (WMT) has organised multimodal translation shared tasks annually since the first event in 2016. In the first shared task, participants were given images and an English caption for each image as input, and were required to generate a translated caption in German. The second shared task had a similar experimental setup, but added French to the list of target languages, and new test sets. The third shared task in 2018 added Czech as a third possible target language, and another new test set. This last task 2 also had a secondary track which only had Czech on the target side, but allowed the use of English, French and German captions together along with the image in a multisource translation setting.
The WMT multimodal translation shared tasks evaluate the performances of submitted systems on several test sets at once, including the Ambiguous COCO test set , which incorporates image captions that contain ambiguous verbs (see Section 4.1). The translations generated by the submitted systems are scored by the METEOR, BLEU, and TER metrics. In addition, all participants are required to devote resources to manually scoring translations in a blind fashion. This scoring is done by direct assessment using the original source captions and the image as references. During the assessment, ground truth translations are shuffled into the outputs from the submissions, and scored just like them. This establishes an approximate reference score for the ground truth, and the submitted systems are analysed in relation to this.
3.2.2 Spoken language translation: IWSLT evaluation campaign
The spoken language translation tasks have been held as part of the annual IWSLT evaluation campaign since Akiba et al (2004). Following the earlier C-STAR evaluations, the aim of the campaign is to investigate newly-developing translation technologies as well as methodologies for evaluating them. The first years of the campaign were based on a basic travel expression corpus developed by C-STAR to facilitate standard evaluation, containing basic tourist utterances (e.g. "Where is the restroom?") and their transcripts. The corpus was eventually extended with more samples (from a few thousand to tens of thousands) and more languages (from Japanese and English, to Arabic, Chinese, French, German, Italian, Korean, and Turkish). Each year also had a new challenge theme, such as robustness of spoken language translation, spontaneous (as opposed to scripted) speech, and dialogue translation, introducing corresponding data sections (e.g. running dialogues) as well as sub-tasks (e.g. translating from noisy ASR output) to facilitate the challenges. Starting with Paul et al (2010), the campaign adopted TED talks as their primary training data, and eventually shifted away from the tourism domain towards lecture transcripts.
Until Cettolo et al (2016), the evaluation campaign had three main tracks: automatic speech recognition, text-based machine translation, and spoken language translation. While these tasks involve different sources and diverging methodologies, they converge on text output. The organisers have made considerable effort to use several automatic metrics at once to evaluate participating systems, and to analyse the outputs from these metrics. Traditionally, there has also been human evaluation on the most successful systems for each track according to the automatic metrics. These assessments have been used to investigate which automatic metrics correlate with which human assessments to what extent, and to pick out and discuss drawbacks in evaluation methodologies.
Additional tasks such as dialogue translation (Cettolo et al, 2017) and low-resource spoken language translation (Niehues et al, 2018) were reintroduced to the IWSLT evaluation campaign from 2017 on, as TED data and machine translation literature both grew richer. Niehues et al (2019) introduced a new audiovisual spoken language translation task, leveraging the How2 corpus (Sanabria et al, 2018). In this task, video is included as an additional input modality, for the general case of subtitling audiovisual content.
4 Datasets
Text-based machine translation has recently enjoyed widespread success with the adoption of deep learning model architectures. The success of these data-driven systems rely heavily on the factor of data availability. An implication of this for multimodal MT is the need for large datasets in order to keep up with the data-driven state-of-the-art methodologies. Unfortunately, due to its simultaneous requirement of multimodality and multilinguality in data, multimodal MT is subject to an especially restrictive bottleneck. Datasets that are sufficiently large for training multimodal MT models are only available for a handful of languages and domain-specific tasks. The limitations imposed by this are increasingly well-recognised, as evidenced by the fact that most major datasets intended for multimodal MT were released relatively recently. Some of these datasets are outlined in Table 1, and explained in more detail in the subsections to follow.
4.1 Image-guided translation datasets
IAPR TC-12
The International Association for Pattern Recognition (IAPR) TC-12 benchmark dataset was created for the cross-language image retrieval track of the CLEF evaluation campaign (ImageCLEF 2006). The benchmark is structurally similar to the multilingual image caption datasets commonly used by contemporary image-guided translation systems. IAPR TC-12 contains 20,000 images from a collection of photos of landmarks taken in various countries, provided by a travel organisation. Each image was originally annotated with German descriptions, and later translated to English. These descriptions are composed of phrases that describe the visual contents of the photo following strict linguistic patterns, as shown in Figure 2. The dataset also contains light annotations such as titles and locations in English, German, and Spanish.
Flickr8k
Released in 2010, the Flickr8k dataset (Rashtchian et al, 2010) has been one of the most widely-used multimodal corpora. Originally intended as a high-quality training corpus for automatic image captioning, the dataset comprises a set of 8,092 images extracted from the Flickr website, each with 5 crowdsourced captions in English that describe the image. Flickr8k has shorter captions compared to IAPR TC-12, focusing on the most salient objects or actions, rather than complete descriptions. As the dataset has been a popular and useful resource, it has been further extended with captions in other languages such as Chinese and Turkish (Unal et al, 2016). However, as these captions were independently crowdsourced, they are not translations of each other, which makes them less effective for MMT.

Table 1 (spoken language translation portion): datasets usable for SLT.

| Dataset | Size | Segments | Languages |
| (Post et al, 2013) | 38h audio | 171k segments | en, es |
| MSLT (Federmann and Lewis, 2017) | 4.5-10h audio | 7k-18k segments | de, en, fr, ja, zh |
| IWSLT '18 (Niehues et al, 2018) | 1,565 audio clips | 171k segments | de, en |
| LibriSpeech (Kocabiyikoglu et al, 2018) | 236h audio | 131k segments | en, fr |
| MuST-C (Di Gangi et al, 2019a) | 385-504h audio | 211k-280k segments | 10 languages |
| MaSS (Boito et al, 2019) | 18.5-23h audio | 8.2k segments | 8 languages |
Flickr30k / Multi30k

The Flickr30k dataset (Young et al, 2014) was released in 2014 as a larger dataset following in the footsteps of Flickr8k. Collected using the same crowdsourcing approach for independent captions as its predecessor, Flickr30k contains 31,783 photos depicting common scenes, events, and actions, each annotated with 5 independent English captions. Multi30k was initially released as a bilingual subset of Flickr30k captions, providing German translations for 1 out of the 5 English captions per image, with the aim of stimulating multimodal and multilingual research. In addition, the study collected 5 independent German captions for each image. The WMT multimodal translation tasks later introduced French and Czech extensions to Multi30k, making it a staple dataset for image-guided translation, and further expanding the set's utility to cutting-edge subtasks such as multisource training. An example from this dataset can be seen in Figure 2.
WMT test sets

The past three years of multimodal shared tasks at WMT each came with a designated test set for the task (Barrault et al, 2018). Totalling 3,017 images in the same domain as the Flickr sets (including Multi30k), these sets are too small to be used for training purposes, but could smoothly blend in with the other Flickr sets to expand their size. So far, test sets from the previous shared tasks (each containing roughly 1,000 images with captions) have been allowed for validation and internal evaluation. In parallel with the language expansion of Multi30k, the test set from 2016 contains only English and German captions, and the one from 2017 contains only English, German, and French. The 2018 test set contains English, German, French, and Czech captions that are not publicly available, though systems can be evaluated against it using an online server. 3
MS COCO Captions
Introduced in 2015, the MS COCO Captions dataset offers caption annotations for a subset of roughly 123,000 images from the large-scale object detection and segmentation training corpus MS COCO (Microsoft Common Objects in Context) (Lin et al, 2014b). Each image in this dataset is associated with up to 5 independently annotated English captions, with a total of 616,767 captions. Though originally a monolingual dataset, the dataset's large size makes it useful for data augmentation methods for image-guided translation, as demonstrated in Grönroos et al (2018). There has also been some effort to add other languages to COCO. A small subset with only 461 captions containing ambiguous verbs was released as a test set for the WMT 2017 multimodal machine translation shared task, called Ambiguous COCO, and is available in all target languages of the task. The YJ Captions dataset (Miyazaki and Shimizu, 2016) and the STAIR Captions dataset (Yoshikawa et al, 2017) comprise, respectively, 132k and 820k crowdsourced Japanese captions for COCO images. However, these are not parallel to the original English captions, as they were independently annotated.

Figure 2: Example annotations from IAPR TC-12 (top) and Multi30k (bottom).

IAPR TC-12:
EN: the courtyard of an orange, two-storey building with a footpath to a swimming pool in the shape of an eight and small palm trees to the left and right;
DE: der Innenhof eines zweistöckigen, orangen Gebäudes mit einem Weg zu einem achterförmigen Schwimmbecken und kleine Palmen rechts und links davon;

Multi30k:
EN: Mexican women in decorative white dresses perform a dance as part of a parade.
DE: Mexikanische Frauen in hübschen weißen Kleidern führen im Rahmen eines Umzugs einen Tanz auf.
FR: Les femmes mexicaines en robes blanches décorées dansent dans le cadre d'un défilé.
CS: Součástí průvodu jsou mexičanky tančící v bílých ozdobných šatech.
Spoken language translation datasets
The TED corpus TED is a nonprofit organisation that hosts talks on various topics, constituting a rich resource of spoken language produced by a variety of speakers in English. Video recordings of all TED talks are made available through the TED website 4, as well as transcripts with translations in up to 116 languages. While the talks comprise a rich resource for language processing, the original transcripts are divided into arbitrary segments formatted like subtitles, which makes it difficult to obtain an accurate sentence-level parallel segmentation for use in translation systems. While resegmentation is possible with heuristic approaches, it comes with the additional challenge of aligning the new segments to the audiovisual content, and to each other in the source and target languages. The Web Inventory of Transcribed and Translated Talks (WIT 3) (Cettolo et al, 2012) is a resource with the aim of facilitating the use of the TED corpus in MT. The initiative distributes transcripts organised in XML files through their website 5, as well as tools to process them in order to extract parallel sentences. Currently, WIT 3 covers 2,086 talks in 109 languages, containing anywhere between 3 and 575k segments per language in raw transcripts, and is continually growing.
Since 2011, the annual speech translation tracks of the IWSLT evaluation campaign (see Section 3.2.2) have used datasets compiled from WIT 3. While each of these sets contains a high-quality selection of English transcripts aligned with the audio and with the target languages featured each year, they are not useful for training SLT systems due to their small sizes. As part of the 2018 campaign, the organisers released a large-scale English-German corpus (Niehues et al, 2018) containing 1,565 talks with 170,965 segments automatically aligned based on time overlap, which allows end-to-end training of SLT models. The MuST-C dataset (Di Gangi et al, 2019a) is a more recent effort to compile a massively multilingual dataset from TED data, spanning 10 languages (English aligned with Czech, Dutch, French, German, Italian, Portuguese, Romanian, Russian, and Spanish translations), using a rigorous alignment process that produces more reliable timestamps than those of the IWSLT '18 dataset. The dataset contains a large amount of data for each target language, corresponding to a selection of English speech ranging from 385 hours for Portuguese to 504 hours for Spanish.
LibriSpeech
The original LibriSpeech corpus (Panayotov et al, 2015) is a collection of 982 hours of read English speech derived from LibriVox audiobooks, automatically aligned to the corresponding texts available from the Gutenberg project for the purpose of training ASR systems. Kocabiyikoglu et al (2018) augment this dataset for use in training SLT systems by aligning chapters from LibriSpeech with their French equivalents through a multi-stage automatic alignment process. The result is a parallel corpus of spoken English to textual French, consisting of 1,408 chapters from 247 books, totalling 236 hours of English speech and approximately 131k text segments.
MSLT The Microsoft Speech Language Translation (MSLT) corpus (Federmann and Lewis, 2016) consists of bilingual conversations on Skype, together with transcriptions and translations. For each bilingual speaker pair, there is one conversation where the first speaker uses their native language and the second speaker uses English, and another with the roles reversed. In a first phase, the transcripts were annotated for disfluencies, noise, and code switching. In a second phase, the transcripts were cleaned, punctuated, and recased. The corpus contains 7 to 8 hours of speech for each of English, German, and French. The English speech was translated into both German and French, while the German and French speech was translated only into English. Federmann and Lewis (2017) repeat the process with Japanese and Chinese, expanding the dataset with 10 hours of Japanese and 4.5 hours of Chinese speech.
Fisher & Callhome
Post et al (2013) extend the Fisher 6 and Callhome 7 datasets of transcribed Spanish speech, developed by the Linguistic Data Consortium, with English translations. The original Fisher dataset contains about 160 hours of telephone conversations in various dialects of Spanish between strangers, while the Callhome dataset contains 20 hours of telephone conversations between relatives and friends. The translations were collected from non-professional translators on the crowdsourcing platform Mechanical Turk. Fisher & Callhome is distributed with predesignated development and test splits, a part of which contains four reference translations for each transcript segment. The corpus also includes ASR lattices that facilitate the training of strong specialised ASR models, allowing pipeline SLT studies to focus on the MT component. As the largest SLT corpus available at the time of its release, the Fisher & Callhome corpus has been widely used, and remains relevant for SLT today.
MaSS
The Multilingual corpus of Sentence-aligned Spoken utterances (MaSS) (Boito et al, 2019) is a multilingual corpus of read bible verses and chapter names from the New Testament. It is fully multi-parallel across 8 languages (Basque, English, Finnish, French, Hungarian, Romanian, Russian, and Spanish), comprising 56 language pairs in total. The multi-parallel content makes this dataset suitable for training SLT systems for language pairs not including English, unlike other multilingual datasets such as MuST-C. The data is aligned on the level of verses, rather than sentences. In rare cases, the audio for some verses is missing for some languages. MaSS contains a total of 8,130 eight-way parallel text segments, corresponding to anywhere between 18.5 and 23 hours of speech per language.
Video-guided translation datasets
The QED corpus The QCRI Educational Domain (QED) Corpus (Guzman et al, 2013;Abdelali et al, 2014), formerly known as the QCRI AMARA Corpus, is a large-scale collection of multilingual video subtitles. The corpus contains publicly available videos scraped from massive online open courses (MOOCs), spanning a wide range of subjects. The latest v1.4 release comprises a selection of 23.1k videos in 20 languages (Arabic, Bulgarian, Traditional and Simplified Chinese, Czech, Danish, Dutch, English, French, German, Hindi, Italian, Japanese, Korean, Polish, Portuguese, Russian, Spanish, Thai, and Turkish), subtitled in the collaborative Amara environment 8 (Jansen et al, 2014) by volunteers. A sizeable portion of the videos has parallel subtitles in multiple languages, varying in size from 8k segments (for Hindi-Russian) to 335k segments (for English-Spanish). Of these, about 75% of the parallel segments align perfectly in the original data, while the rest were automatically aligned using heuristic algorithms. An alpha v2.0 of the QED corpus is currently underway, scheduled to appear in the OPUS repository (Tiedemann, 2012), containing a large amount of (noisy) re-crawled subtitles.
The How2 dataset
The How2 dataset (Sanabria et al, 2018) is a collection of 79,114 clips with an average length of 90 seconds, containing around 2,000 hours of instructional YouTube videos in English, spanning a variety of topics. The dataset is intended as a resource for several multimodal tasks, such as multimodal ASR, multimodal summarisation, spoken language translation, and video-guided translation. To establish cross-modal associations, the videos in the dataset were annotated with word-level alignments to ground truth English subtitles. There are also English descriptions of each video written by the users who uploaded the videos, added to the dataset as metadata corresponding to video-level summaries. For the purpose of multimodal translation, a 300-hour subset of How2 that covers 22 different topics is available with crowdsourced Portuguese translations. This dataset has also recently been used for multimodal machine translation (Sanabria et al, 2018; Wu et al, 2019b). An example from this dataset can be seen in Figure 3.

VaTeX

The Video and TeXt (VaTeX) dataset (Wang et al, 2019b) is a bilingual collection of video descriptions, built on a subset of 41,250 video clips from the action classification benchmark DeepMind Kinetics-600 (Kay et al, 2017; Carreira et al, 2018). Each clip runs for about 10 seconds, showing one of 600 human activities. VaTeX adds 10 Chinese and 10 English crowdsourced captions describing each video, half of which are independent annotations and the other half Chinese-English parallel sentences. With low-approval samples removed, the released version of the dataset contains 206,345 translation pairs in total. VaTeX is intended to facilitate research in multilingual video captioning and video-guided machine translation, and the authors keep a blind test set reserved for use in evaluation campaigns. The rest of the dataset is divided into training (26k videos), validation (3k videos), and public test (6k videos) splits. The training and validation splits also have public action labels. An example from VaTeX is shown in Figure 3.

Fig. 3: Examples from the How2 and VaTeX datasets. EN: I'm very close to the green but I didn't get it on the green so now I'm in this grass bunker. PT: Eu estou muito perto do green, mas eu não pus a bola no green, então agora estou neste bunker de grama. EN: A person dressed as a teddy bear stands in a bouncy house and then falls over. ZH: 一个打扮成泰迪熊的人站在充气房上, 然后摔倒了。
Models and Approaches
This section discusses the state-of-the-art models proposed to solve the multimodal machine translation (MMT) tasks introduced in Section 2. For some MMT tasks, the traditional approach is to put together a pipeline which divides the task into several sub-tasks, and cascades different modules to handle each of them. For instance, in the case of spoken language translation (SLT), this pipeline would first convert the input speech into text with an automatic speech recognition module (modality conversion), and then redirect the output to a text-based MT module. This is in contrast to end-to-end models, where the source language is encoded into an intermediate representation and decoded directly into the target language. Pipeline systems are less vulnerable to insufficient training data than data-driven end-to-end systems, since each component can be pretrained in isolation on abundant sub-task resources. However, they carry the risk of error propagation between stages and ignore the cross-modal transfer of implicit semantics. As an example of the latter, consider two languages which emphasise words via prosody and via specific word order, respectively. Translating the transcript alone would make it impossible to express the emphasis through word order in the target sentence, as the prosodic cue would already be lost at the transcription stage. Nevertheless, both pipeline and end-to-end approaches rely heavily on the sequence-to-sequence learning framework on account of its flexibility and good performance across tasks. In the following, we describe this framework in detail.
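To make the contrast concrete, the minimal sketch below shows the two data flows side by side. The callables `asr_model`, `mt_model`, and `slt_model` are hypothetical stand-ins for trained models, not an actual API; the point is only where the transcript appears in each path.

```python
# Illustrative contrast between pipeline and end-to-end SLT interfaces.
# `asr_model`, `mt_model` and `slt_model` are hypothetical trained models.

def pipeline_slt(audio, asr_model, mt_model):
    """Cascade: speech -> source text -> target text.

    Errors made by the ASR module propagate into the MT module, and
    paralinguistic cues (e.g. prosody) are lost at the transcript stage.
    """
    transcript = asr_model(audio)   # modality conversion (speech -> text)
    return mt_model(transcript)     # text-based translation

def end_to_end_slt(audio, slt_model):
    """Direct mapping: a single model encodes the speech into an
    intermediate representation and decodes target text from it."""
    return slt_model(audio)         # no explicit transcript is produced
```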
General purpose sequence-to-sequence learning is inspired by the pioneering works in unimodal neural machine translation (NMT). The state of the art in unimodal MT was dominated by statistical machine translation (SMT) methodologies (Koehn, 2009) for at least two decades, until the field drastically moved towards NMT techniques around 2015. Inspired by the successful use of deep neural networks in language modelling (Bengio et al, 2003; Mikolov et al, 2010) and automatic speech recognition (Graves et al, 2013), there has been a plethora of NMT studies featuring different neural architectures and learning methods. These architectures often rely on continuous word vector representations to encode various kinds of linguistic information in a common vector space, thereby eliminating the need for hand-crafted linguistic features. One of the first NMT studies, by Kalchbrenner and Blunsom (2013), combined recurrent language modelling (Mikolov et al, 2010) and convolutional neural networks (CNNs) to improve the performance of SMT systems through rescoring. Later on, the application of recurrent architectures, such as bidirectional RNNs (Schuster and Paliwal, 1997), LSTMs (Hochreiter and Schmidhuber, 1997; Graves and Schmidhuber, 2005), and GRUs (Chung et al, 2014), introduced further diversity into the field, eventually leading to the fundamental encoder-decoder architecture (Cho et al, 2014; Sutskever et al, 2014). These more advanced neural units were not as susceptible to the problems initially perceived in NMT, dealing naturally with variable-length sequences and having clear computational advantages as well as superior performance. However, the difficulty of learning long-range dependencies in translation sequences (e.g. grammatical agreement in very long sentences) remained an issue until the introduction of the attention mechanism (Bahdanau et al, 2015). The attention mechanism addressed this issue by simultaneously learning to align translation units and to translate, supplying a context window with the relevant input units at each decoding step, i.e. for each generated word in the target language (Figure 4). The performance of the NMT systems that followed came close to, and soon surpassed, that of the state-of-the-art SMT systems. Successful non-recurrent alternatives have also been proposed, such as convolutional encoders and decoders with attention (Gehring et al, 2017), and the fully-connected deep transformers which employ the idea of self-attention in addition to the default cross-attention mechanism (Vaswani et al, 2017). The main motivation behind these is to allow for efficient parallel training across multiple processing units, and to prevent learning difficulties such as vanishing gradients.
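As an illustration of the mechanism described above, the following is a minimal sketch of additive (Bahdanau-style) attention in PyTorch. All dimensions and names are our own and are not tied to any particular system.

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Minimal Bahdanau-style attention: score each encoder state against
    the current decoder state, then return an attention-weighted context."""

    def __init__(self, enc_dim, dec_dim, attn_dim):
        super().__init__()
        self.w_enc = nn.Linear(enc_dim, attn_dim, bias=False)
        self.w_dec = nn.Linear(dec_dim, attn_dim, bias=False)
        self.v = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, enc_states, dec_state):
        # enc_states: (batch, src_len, enc_dim); dec_state: (batch, dec_dim)
        scores = self.v(torch.tanh(
            self.w_enc(enc_states) + self.w_dec(dec_state).unsqueeze(1)
        )).squeeze(-1)                         # (batch, src_len)
        alpha = torch.softmax(scores, dim=-1)  # soft alignment weights
        context = torch.bmm(alpha.unsqueeze(1), enc_states).squeeze(1)
        return context, alpha                  # context: (batch, enc_dim)

# Toy usage: a batch of 2 sentences with 7 encoder states each.
attn = AdditiveAttention(enc_dim=256, dec_dim=512, attn_dim=128)
context, weights = attn(torch.randn(2, 7, 256), torch.randn(2, 512))
```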
Lastly, we would like to mention some major open-source toolkits which contribute vastly to the state of the art in machine translation by allowing fast prototyping of new approaches as well as the extension of existing ones to new tasks and paradigms: Moses (Koehn et al, 2007) for SMT, and FairSeq (Ott et al, 2019), Lingvo (Shen et al, 2019), Marian (Junczys-Dowmunt et al, 2018), Nematus, NeuralMonkey (Helcl et al, 2018a), nmtpytorch (Caglayan et al, 2017b), OpenNMT (Klein et al, 2017), Sockeye (Hieber et al, 2017) and Tensor2Tensor (Vaswani et al, 2018) for NMT.
Image-guided translation
In this section, we present the state-of-the-art models for the image-guided translation (IGT) task. We first discuss the visual feature extraction process, continue with reviews of the two main end-to-end neural approaches, and finally briefly cover retrieval and reranking methods.

Visual feature extraction

With the advent of deep learning in computer vision tasks such as image classification or object detection (Russakovsky et al, 2015), it has been shown that the learned representations transfer very well into vision-to-language tasks such as image captioning (Xu et al, 2015). Therefore, the majority of IGT approaches rely on features extracted from state-of-the-art CNNs (Simonyan and Zisserman, 2015; Ioffe and Szegedy, 2015; He et al, 2016) trained for the ImageNet (Deng et al, 2009) image classification task, where the output of the network is a distribution over 1000 object categories. These features usually come in two flavours (Figure 5): (i) spatial features, which are feature maps V ∈ R^{W×H×C} extracted from specific convolutional layers, and (ii) a pooled feature vector v ∈ R^C, which is the outcome of applying a projection or pooling layer on top of the spatial features. The main difference between these is that the former is dense and preserves spatial information, while the latter is a compact, spatially-unaware representation. An even more compact representation is the vector of posterior class probabilities (v ∈ R^K) extracted from the output layer of a pretrained CNN, with K denoting the size of the task-specific label set (for ImageNet, K is 1000). Finally, it is also possible to obtain a set of pooled feature vectors (or local features) from salient regions of a given image, with regions predicted by object detection CNNs (Girshick et al, 2014).
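As a concrete (and deliberately simplified) sketch of how these feature flavours can be obtained, the snippet below uses a pretrained ResNet-50 from torchvision; the layer slicing is specific to this implementation, and the exact weight-loading argument varies across torchvision versions.

```python
import torch
import torchvision.models as models

# Sketch: extracting spatial features, a pooled vector, and class
# posteriors from a pretrained ResNet-50 (torchvision implementation).
cnn = models.resnet50(pretrained=True).eval()

# Everything up to the final pooling/classification layers yields the
# spatial feature maps (PyTorch orders them as C x H x W).
spatial_extractor = torch.nn.Sequential(*list(cnn.children())[:-2])

with torch.no_grad():
    image = torch.randn(1, 3, 224, 224)          # a normalised RGB image
    V = spatial_extractor(image)                 # (1, 2048, 7, 7) feature maps
    v_pooled = V.mean(dim=(2, 3))                # (1, 2048) pooled vector
    v_posterior = torch.softmax(cnn(image), -1)  # (1, 1000) class posteriors
```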
Sequence-to-sequence grounding with pooled features
The simplest and most intuitive way of visually conditioning a sequence-to-sequence model is to employ pooled features so that they interact with various components of the architecture. These approaches are mostly inspired by the early works in neural image captioning (Kiros et al, 2014; Mao et al, 2015; Vinyals et al, 2015), and are categorised in Figure 6 with respect to their entry points.
The very first attempt at neural image-guided translation comes from Elliott et al (2015), who formulate the problem as a semantic transfer from a source language model to a target language model, within an encoder-decoder framework without attention. They propose to initialise the hidden state(s) of the source language model (LM), the target LM, or both, using pretrained VGG features (Simonyan and Zisserman, 2015). Later initialisation variants are applied to attentive NMT models: Calixto et al (2016) and Libovický et al (2016) experiment with recurrent decoder initialisation, while Ma et al (2017) initialise both the encoder and the decoder with features from a state-of-the-art ResNet (He et al, 2016). Madhyastha et al (2017) explore the expressiveness of the posterior probability vector as a visual representation, rather than the pooled features from the penultimate layer of a CNN.
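The sketch below illustrates the decoder-initialisation idea in PyTorch: a pooled CNN feature vector is projected into the decoder's hidden space and used as the initial recurrent state. Module names and sizes are our own illustration, not those of any cited system.

```python
import torch
import torch.nn as nn

class VisuallyInitialisedDecoder(nn.Module):
    """Sketch: project a pooled visual feature into the hidden space of a
    GRU decoder and use it as the initial hidden state."""

    def __init__(self, feat_dim=2048, emb_dim=128, hid_dim=256, vocab=10000):
        super().__init__()
        self.init_proj = nn.Linear(feat_dim, hid_dim)  # visual -> hidden
        self.embed = nn.Embedding(vocab, emb_dim)
        self.gru = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab)

    def forward(self, target_tokens, pooled_feats):
        # pooled_feats: (batch, feat_dim) from a pretrained CNN
        h0 = torch.tanh(self.init_proj(pooled_feats)).unsqueeze(0)
        states, _ = self.gru(self.embed(target_tokens), h0)
        return self.out(states)                  # (batch, tgt_len, vocab)

decoder = VisuallyInitialisedDecoder()
logits = decoder(torch.randint(0, 10000, (2, 5)), torch.randn(2, 2048))
```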
Huang et al (2016) take a different approach and enrich the source sentence representation with visual information by projecting the feature vector into the source language embedding space and then adding it to the beginning or the end of the embedding sequence. This allows the attention mechanism in the decoder to attend to a mixed-modality source representation instead of a purely textual one. Instead of the conventional ImageNet-extracted features, they make use of local features from RCNN (Girshick et al, 2014) to represent explicit visual semantics related to salient objects. In another model, referred to as Parallel-RCNN, they build five different source embedding sequences, each enriched with a visual feature vector extracted from a different salient region of the image. A shared LSTM encodes these five sequences and average-pools them to obtain the final source representation. Follow-up work revisits the idea of source enrichment, extending it by simultaneously appending and prepending the projected visual features to the embedding sequence, and combining it with encoder and/or decoder initialisation. Caglayan et al (2017a) explore different source and target interaction methods, such as the element-wise multiplication between the visual features and the source/target word embeddings. Delbrouck and Dupont (2018) add another recurrent layer within the decoder in their DeepGRU model, conditioned on the visual features and the bottom-layer hidden state. Both recurrent layers simultaneously decide on the output probability distribution by additively fusing their respective unnormalised logits.
As for transformer-based architectures, Grönroos et al (2018) revisit the source enrichment by adding the visual feature vector to the beginning of the embedding sequence (Huang et al, 2016). They also experiment with modulating the output probability distribution through a time-dependent visual decoder gate. More interestingly, they explore different pooled visual representations such as scene-type associations (Xiao et al, 2010), action-type associations (Yao et al, 2011), and object features from Mask R-CNN (He et al, 2017).
Multi-task learning. Training an end-to-end neural model to perform multiple tasks at once can improve the model's task-specific performance by forcing it to exploit commonalities across the tasks involved (Caruana, 1997;Dong et al, 2015;Luong et al, 2015). The Imagination architecture, initially proposed by Elliott and Kádár (2017) and later integrated into transformer-based NMTs by Helcl et al (2018b), attempts to leverage the benefits of multi-tasking by proposing a one-to-many framework which shares the sentence encoder between the translation task and an auxiliary visual reconstruction task. Besides the usual cross-entropy translation objective, the model weights are also optimised through a margin-based loss which minimises the distance between the ground-truth visual feature vector and the one predicted from the sentence encoding. The visual features are only used at training time and are not needed when generating translations. Zhou et al (2018) further extends the Imagination network by incorporating an attention 9 over source sentence encodings, with the query vector being the visual features. In this approach, the auxiliary margin-based loss is modified so that the output of the attention layer is considered a reconstruction of the pooled feature vector.
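A rough sketch of such an auxiliary objective is given below: the visual vector predicted from the sentence encoding should be closer to the feature of its own image than to the features of the other images in the batch. This batch-contrastive margin formulation is our own simplification of the idea, not the exact loss of the cited papers.

```python
import torch
import torch.nn.functional as F

def visual_reconstruction_margin_loss(pred_feats, true_feats, margin=0.1):
    """Sketch of an Imagination-style auxiliary loss: hinge on the cosine
    similarity between each predicted visual vector and every image
    feature in the batch, using the other images as negatives."""
    pred = F.normalize(pred_feats, dim=-1)   # (batch, feat_dim)
    true = F.normalize(true_feats, dim=-1)
    sims = pred @ true.t()                   # pairwise cosine similarities
    pos = sims.diag().unsqueeze(1)           # similarity to the own image
    hinge = (margin - pos + sims).clamp(min=0)
    mask = 1.0 - torch.eye(sims.size(0))     # zero out the positive pairs
    return (hinge * mask).mean()

# total loss = translation cross-entropy + this auxiliary margin loss
aux_loss = visual_reconstruction_margin_loss(torch.randn(4, 2048),
                                             torch.randn(4, 2048))
```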
Other approaches. All grounding approaches covered so far rely on the maximum-likelihood estimation (MLE) principle for the sequence transduction task, i.e. they try to maximise the log-probability of target sentences given the source sentences. Zheng et al (2018) extends MLE with a fine-tuning step, where they use reinforcement learning to find the model parameters which directly maximise the translation metric BLEU. In terms of multimodality, they simply initialise the decoder with pooled features. Toyama et al (2016), Calixto et al (2018) and Delbrouck and Dupont (2019) cast the problem as a latent variable model and resort to techniques such as variational inference and generative adversarial networks (GANs). Finally, Nakayama and Nishida (2017) approach the problem from a zero-resource perspective: they encode {source caption, image} pairs into a multimodal vectorial space using a max-margin loss. In a second step, they train the decoder using {target caption, image} pairs. Specifically, they do a forward pass with the image as input and obtain the multimodal embedding, from which the recurrent decoder is trained to generate the target caption as usual. The image encoder is a pretrained VGG CNN. The zero-resource aspect comes from the fact that the two sets of pairs do not overlap, i.e. the approach does not require a parallel IGT corpus.

Visual attention

Inspired by the previous success of visual attention in image captioning, attentive approaches explore how to efficiently integrate a visual attention (approach A in Figure 6) over the spatial features, alongside the language attention in NMTs. The most interesting research questions about visual attention are as follows: where to apply the visual attention, what kind of parameter sharing should be preferred, and how to fuse the output of the language and visual attention layers. Caglayan et al (2016a) and Calixto et al (2016) are the first works to tackle these questions, through a visual attention which uses the hidden state of the decoder as the query into the set of W × H spatial features. Their implementation is quite similar to the language attention, which results in two modality-specific contexts that should be fused before the output layer of the network. One notable difference is that Caglayan et al (2016a) experiment with a single multimodal attention layer shared across modalities, while Calixto et al (2016) keep the attention layers separate. Later on, Caglayan et al (2016b) evaluate both shared and separate attentions with additive and concatenative fusion, and discover that proper feature normalisation is crucial for their recurrent approaches. Delbrouck and Dupont (2017a) propose a different fusion operation based on compact bilinear pooling (Fukui et al, 2016), to efficiently realise the computationally expensive outer product. Unlike additive and concatenative fusion, the outer product ensures that each dimension of the language context vector interacts with each dimension of the visual context vector and vice versa. Follow-up studies extend the decoder-based visual attention approach in different ways: one line of work reimplements the gating mechanism to rescale the magnitude of the visual information before the fusion, while Libovický and Helcl (2017) introduce the hierarchical attention, which replaces the concatenative fusion with a new attention layer that dynamically weighs the modality-specific context vectors. Finally, Arslan et al (2018) and Libovický et al (2018) introduce the same idea into Transformer-based (Vaswani et al, 2017) architectures.

Besides revisiting the hierarchical attention, Libovický et al (2018) also introduce parallel and serial variants. The former is quite similar to Arslan et al (2018) and simply performs additive fusion, while the latter first applies the language attention, which produces the query vector for the subsequent visual attention. Ive et al (2019) extend Libovický et al (2018) with a 2-stage decoding process where visual features are only used in the second stage, through a visual cross-modal attention. They also experiment with another model where the attention is applied over the embeddings of object labels detected from the images.
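To make the fusion question concrete, the snippet below sketches a hierarchical fusion step in the spirit of Libovický and Helcl (2017): given one context vector per modality, a second attention weighs the two with the decoder state. This is our minimal formulation, not their exact implementation.

```python
import torch
import torch.nn as nn

class HierarchicalFusion(nn.Module):
    """Sketch: weigh modality-specific context vectors (one from the
    language attention, one from the visual attention) with a second,
    decoder-state-conditioned attention, instead of concatenating them."""

    def __init__(self, ctx_dim, dec_dim):
        super().__init__()
        self.proj_ctx = nn.Linear(ctx_dim, dec_dim, bias=False)

    def forward(self, txt_ctx, img_ctx, dec_state):
        # txt_ctx, img_ctx: (batch, ctx_dim); dec_state: (batch, dec_dim)
        contexts = torch.stack([txt_ctx, img_ctx], dim=1)  # (batch, 2, ctx)
        keys = self.proj_ctx(contexts)                     # (batch, 2, dec)
        scores = (keys * dec_state.unsqueeze(1)).sum(-1)   # (batch, 2)
        beta = torch.softmax(scores, dim=-1)               # modality weights
        return (beta.unsqueeze(-1) * contexts).sum(dim=1)  # fused context

fusion = HierarchicalFusion(ctx_dim=256, dec_dim=512)
fused = fusion(torch.randn(2, 256), torch.randn(2, 256), torch.randn(2, 512))
```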
In contrast to the decoder-based visual attention, encoder-based approaches are relatively less explored. In one such approach, Delbrouck and Dupont (2017b) propose conditional batch normalisation, a technique to modulate the batch normalisation layer (Ioffe and Szegedy, 2015) of a ResNet. Specifically, they condition the mean and the variance of the batch normalisation layer on the source sentence representation for informed feature extraction. In the same work, Delbrouck and Dupont (2017b) also propose to apply an early visual attention inside the encoder, yielding inherently multimodal source encodings, on top of which the usual language attention is applied by the decoder.
Reranking and Retrieval based approaches
The most typical pipeline for MT is to obtain an n-best list of translation candidates from an arbitrary MT system and select the best candidate amongst them after reranking with respect to an aggregated score. This score is often a combination of several models that are able to quantitatively assess translation-related qualities of a candidate sentence, such as adequacy or fluency. Each model is assigned a coefficient, and an optimisation step is executed to find the best set of coefficients that maximise the translation performance on a held-out test set (Och, 2003). The challenge for the IGT task is how to incorporate the visual modality into this pipeline in order to assign a better rank to visually plausible translations. To this end, Caglayan et al (2016a) combine a feed-forward language model (Bengio et al, 2003; Schwenk et al, 2006) and a recurrent NMT to rerank the translation candidates obtained from an SMT system. The language model is special in the sense that it is conditioned not only on n-gram contexts but also on the pooled visual feature vector. In contrast, Shah et al (2016) conjecture that the posterior class probabilities may be more expressive than a pooled representation for reranking, and treat each probability v_i as an independent score for which a coefficient is learned. In a recent work, Lala et al (2018) demonstrate that for the Multi30k dataset, better translations are available inside an n-best list obtained from a text-only NMT model, allowing up to 10 points of absolute improvement in METEOR score. They propose the multimodal lexical translation (MLT) model, where they rerank the n-best list with scores assigned by a multimodal word sense disambiguation system based on pooled features.
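The reranking step itself is simple; the sketch below shows it with invented feature names and hand-set coefficients, whereas in practice the coefficients are tuned on held-out data (Och, 2003).

```python
# Sketch of linear n-best reranking. The feature names and weights are
# purely illustrative; real systems tune the coefficients, e.g. with MERT.

def rerank(nbest, weights):
    """nbest: list of (hypothesis, {feature_name: score}) pairs."""
    def aggregate(features):
        return sum(weights[name] * score for name, score in features.items())
    return max(nbest, key=lambda candidate: aggregate(candidate[1]))

candidates = [
    ("ein Hund läuft im Park", {"mt": -2.1, "lm": -3.0, "visual_lm": -2.4}),
    ("ein Hund rennt im Park", {"mt": -2.3, "lm": -2.8, "visual_lm": -1.9}),
]
weights = {"mt": 1.0, "lm": 0.5, "visual_lm": 0.8}
best_hypothesis, _ = rerank(candidates, weights)
```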
Another line of work considers the task as a joint retrieval and reranking problem. Hitschler et al (2016) construct a multimodal/cross-lingual retrieval pipeline to rerank SMT translation candidates. Specifically, they leverage a large corpus of target {caption, image} pairs, and retrieve a set of pairs similar to the translation candidates and the associated image. The visual similarity is computed using the Euclidean distance in the pooled CNN feature space. The initial translation candidates are then reranked with respect to their relevance (based on inverse document frequency) to the retrieved captions. Zhang et al (2017) also employ a combined framework of retrieval and reranking. For a given {caption, image} pair, they first retrieve a set of similar training images. The target captions associated with these images are considered as candidate translations. They learn a multimodal word alignment between source and candidate words and select the most probable target word for each source word. An n-best list from their SMT is reranked using a bidirectional NMT trained on the aforementioned source/target word sequences. Finally, Duselis et al (2017) and Gwinnup et al (2018) propose a pure retrieval system without any reranking involved. For a given image, they first obtain a set of candidate captions from a pretrained image captioning system. Two distinct neural encoders are used to encode the source and the candidate captions, respectively. A mapping is then learned from the hidden space of the source encoder to the target one, allowing the retrieval of the candidate caption which minimises the distance with respect to the source caption representation.

Comparison of approaches

Table 2: Automatic scores of state-of-the-art IGT methods on Multi30k English→German test2016. The table is clustered (and sorted by METEOR) across years for constrained systems, followed by unconstrained ones. Systems marked with (†) are re-evaluated with tokenised sentences; a second marker denotes the use of visual features other than ImageNet CNNs. The gains and losses are with respect to the MT baselines reported in the papers. The types refer to Figure 6.

Table 2 presents BLEU and METEOR scores on the English→German test2016 set of the Multi30k dataset, as this is the test set that most studies report against. When possible, we annotate each score with the associated gain or loss with respect to the underlying unimodal MT baseline reported in the respective papers. The results concentrate around constrained systems, which only allow the use of the parallel Multi30k corpus during training. A few studies experiment with using external resources (Helcl and Libovický, 2017; Elliott and Kádár, 2017; Grönroos et al, 2018) for pretraining the MT system and then fine-tuning it on Multi30k, or directly training the system on the combination of Multi30k and the external resource. Two such unconstrained systems are also reported. At first glance, the automatic results reveal that (i) initially, neural systems were not able to surpass the SMT systems, (ii) the use of external resources is beneficial to boost the underlying baseline performance, which further manifests itself as a boost in the multimodal scores, and (iii) careful tuning allows RNN-based models to reach and even surpass Transformer-based models. From a multimodal perspective, the results are not very conclusive, as there does not seem to be a single architecture, feature type, or integration type that brings consistent improvements.
Elliott (2018) attempted to answer the question of how effectively state-of-the-art models integrate information from the visual modality, and concluded that when models were adversarially challenged with wrong images at test time, the quality of the produced translations was not affected as much as one would expect. Later work showed how these seemingly insensitive architectures start to rely significantly on the visual modality once words are systematically removed from the source sentences during training and testing. We believe that this latter finding may also be connected to the fact that better baselines benefit less from the visual modality (Table 2), i.e. sub-optimal architectures may leverage the visual information more than well-trained NMT models. In fact, even the choice of vocabulary size may simulate systematic word removal, if a significant portion of the source vocabulary is mapped to unknown tokens. The same experimental pipeline also paved the way for assessing the particular strengths of some of the covered IGT approaches, and showed that the use of spatial features through visual attention is superior to initialising the encoders and the decoders with pooled features.
Lastly, if we take a look at the human evaluation rankings conducted throughout the WMT shared tasks, we see that in 2018 the top three ranks for English→German and English→French were occupied by two unconstrained ensembles (Grönroos et al, 2018; Helcl et al, 2018b), the MLT reranking (Lala et al, 2018), and the DeepGRU (Delbrouck and Dupont, 2018) systems. In 2017, the multiplicative interaction (Caglayan et al, 2017a), unimodal NMT reranking, unconstrained Imagination (Elliott and Kádár, 2017), encoder enrichment, and hierarchical attention systems were ranked in the top three, again for both language pairs.
Spoken language translation
In spoken language translation, the non-text modality is the source language audio, which is translated into target language text. While source language transcripts may be available for training, at translation time the speech is typically the only input modality. We begin this section with a brief introduction to speech-specific feature extraction (Section 5.2.1). Section 5.2.2 reviews the current state of the art for the traditional pipeline methods, and finally, Section 5.2.3 covers the end-to-end methods, which have seen rapid development in recent years.
Feature extraction
Even though many deep learning applications use raw input data, it is still common to use somewhat engineered features in speech applications. The raw audio waveform consists of thousands of samples per second, and thus one-sample-at-a-time processing would be computationally very expensive. Instead, a spectrogram representation is computed. It shows the signal activity at different frequencies, as a function of time. The frequency content is computed over frames of suitable length. The frame length trades off time and frequency precision: longer frames capture finer spectral (i.e. frequency) detail, but also describe a longer segment of time, which can be problematic as certain speech events (e.g. the stop consonants p, t) can have a very short duration.
Next, a Mel-scale filterbank is applied to each frame, and the logarithm of each filter's output is computed. This leads to log Mel-filterbank features. The filterbank operation reduces the number of dimensions. However, these operations are also perceptually motivated: the filterbank mirrors the masking of frequencies close to each other in the ear, the Mel scale relates frequency to perceived pitch, and the logarithm reflects the relation of perceived loudness to signal activity (Pulkki and Karjalainen, 2015).
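As a concrete sketch, the snippet below computes log Mel-filterbank features with librosa. The 25 ms frames, 10 ms hop, and 80 Mel bands are common choices rather than values prescribed by any of the systems discussed here, and the input file name is hypothetical.

```python
import numpy as np
import librosa

# Sketch of standard log Mel-filterbank feature extraction.
waveform, sr = librosa.load("utterance.wav", sr=16000)  # hypothetical file

mel_spec = librosa.feature.melspectrogram(
    y=waveform, sr=sr,
    n_fft=int(0.025 * sr),       # 25 ms analysis frames
    hop_length=int(0.010 * sr),  # 10 ms frame shift
    n_mels=80,                   # size of the Mel-scale filterbank
)
log_mel = np.log(mel_spec + 1e-10)  # log compression (perceived loudness)
# log_mel has shape (80, n_frames): one 80-dimensional vector per frame.
```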
Continued efforts in learning deep representations from raw samples exist, with some success (Sainath et al, 2015). However, log Mel-filterbank vectors as input to deep neural network models (Mohamed et al, 2012) remain the standard choice. Additional, more complex features may be used to aid robustness to speaker variability (Saon et al, 2013) or recognition in tonal languages (Ghahremani et al, 2014).
State of the art in pipeline methods
Pipeline approaches in SLT chain together separate ASR and MT modules, and these naturally follow progress in their respective fields. A popular ASR system architecture is an HMM-DNN hybrid acoustic model (Yu and Li, 2017), followed by an n-gram language model in the first decoding pass, and a neural language model for rescoring. This type of HMM-based ASR is essentially pipeline ASR. In addition, end-to-end ASR methods have recently gained popularity. In particular, encoder-decoder architectures with attention have been successful, although on standard publicly available datasets HMM-based models still narrowly outperform end-to-end ones (Lüscher et al, 2019). Chiu et al (2018) show that encoder-decoder with attention ASR can outperform HMM-based models on a very large (12,500h) proprietary dataset. Another common end-to-end ASR method is Connectionist Temporal Classification (CTC) (e.g. Li et al (2019)).

Table 3: SLT formulated as Bayesian search, for translation y, source language transcript z, source language speech x, and set of all possible transcripts Z. Z̃(x) denotes the subset of transcripts considered for input x.

End-to-end search: argmax_y P(y|x)
General pipeline search: argmax_y Σ_{z ∈ Z̃(x)} P(y|z) P(z|x)
Pure serial pipeline: Z̃(x) = {argmax_z P(z|x)}
Loosely coupled pipeline: Z̃(x) ⊂ Z
Tightly coupled pipeline: Z̃(x) = Z

Wang et al (2018c) and Liu et al (2018) placed first and second, respectively, in the IWSLT 2018 evaluation campaign. Both apply similar pipeline architectures: a system combination of multiple different HMM-DNN acoustic models and LSTM rescoring for ASR, followed by a system combination of multiple Transformer NMT models for translation. Liu et al (2018) additionally use an encoder-decoder with attention ASR model to improve the system combination ASR results, although individually the end-to-end model is clearly outperformed by the HMM-DNN models. Wang et al (2018c) use an additional target-to-source NMT system for rescoring to improve adequacy. The systems also differ in their interfacing strategies between ASR and MT.
In the latest IWSLT evaluation campaign in 2019, end-to-end SLT models were encouraged. However, the best performance was still achieved with a pipeline SLT approach, in which Pham et al (2019) use end-to-end ASR and a Transformer NMT model. In the ASR module, an LSTM-based approach outperforms a Transformer model, though combining both in an ensemble proved beneficial. Weiss et al (2017) and Pino et al (2019) also report competitive results using end-to-end ASR, with Pino et al (2019) surpassing the state of the art in SLT. End-to-end ASR has attracted attention in SLT because it allows for parameter transfer in end-to-end SLT (e.g. Bérard et al (2018); see Figure 8).
Challenges in pipeline SLT Research in pipeline SLT has specifically focused on the interface between ASR and MT. There is a clear mismatch between MT training data and ASR output, caused by the ASR noise characteristics (i.e. transcription errors) and by the dissimilarity of ASR output to written text, due to the lack of capitalisation and punctuation and the disfluencies (e.g. repetitions and hesitations) which naturally occur in speech. Federico (2014, 2015) and Ruiz et al (2017) quantify the effect of ASR errors on MT. In a linear mixed-effects model, the amount of WER added on top of gold standard transcripts has a direct effect on the TER increase; the results do not vary over different ASR systems. Minor localised ASR errors can result in longer-distance errors or duplication of content words in NMT. Homophonic substitution error spans (e.g. anatomy → and that to me) are shown to account for a significant portion of ASR errors and to have a large impact on translation quality. With regard to noise robustness, it is noted that the utterances which were best translated by phrase-based MT had a higher average WER than the utterances which were best translated by NMT. In general, NMT has been established as particularly sensitive to noisy inputs (Belinkov and Bisk, 2018; Cheng et al, 2018).
One approach to address the mismatch is training the MT system on noisy, ASR-like input. Peitz et al (2012) use an additional phrase-table trained on ASR outputs of the SLT corpus. Tsvetkov et al (2014) augment a phrase-table with plausible ASR misrecognitions. These errors are synthesised by mapping each phrase to phones via a pronunciation dictionary, and randomly applying heuristic phone-level edit operations. Sperber et al (2017b) first train an NMT system on reference transcripts, and then fine-tune it on noisy transcripts. The noise is sampled from a uniform distribution over insertions, deletions and substitutions, with optional unigram weighting for the substitutions and insertions. Additionally, a deletion-only noise is used. Smaller amounts of noise are shown to improve SLT results, but increasing noise levels to actual test-time ASR levels (rather high, at 40%) only degrades performance. Increased noise is noted to produce shorter outputs, which in turn are punished by the BLEU brevity penalty. A precision-recall tradeoff is observed: the system can either drop uncertain inputs (better precision) or try to guess translations (better recall). Fine-tuning with deletion-only noise biases the system to produce longer outputs, which is shown to counteract the effect of noisy inputs producing shorter outputs. Pham et al (2019) use the data augmentation method SwitchOut (Wang et al, 2018b) to make their NMT models more robust to ASR errors. During training, SwitchOut randomly replaces words in both the source and the target sentences.
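A minimal sketch of such noise injection is shown below, loosely following the uniform insertion/deletion/substitution scheme of Sperber et al (2017b); the rates, vocabulary, and sampling details are our own illustration.

```python
import random

def add_asr_like_noise(tokens, vocab, noise_rate=0.1, rng=random):
    """Sketch: each position may trigger a deletion, an insertion, or a
    substitution with equal probability, producing ASR-like noisy input."""
    noisy = []
    for token in tokens:
        if rng.random() < noise_rate:
            op = rng.choice(["delete", "insert", "substitute"])
            if op == "delete":
                continue                         # drop the token entirely
            if op == "insert":
                noisy.append(rng.choice(vocab))  # spurious extra token
            noisy.append(rng.choice(vocab) if op == "substitute" else token)
        else:
            noisy.append(token)
    return noisy

noisy = add_asr_like_noise("the interface between asr and mt".split(),
                           vocab=["the", "a", "an", "of"], noise_rate=0.3)
```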
Another approach to cope with the mismatch is to transform the ASR output into written text. Wang et al (2018c) apply Transformer-based punctuation restoration and heuristic rules which remove disfluencies and transform written-out numbers and quantities into numerals. Liu et al (2018) experiment with NMT-based transformations in both directions: producing ASR-like text from written text for training the translation system, or producing written text from ASR-like text as a test-time bridge between ASR and translation. Transforming the MT training data into an ASR-like format consistently outperforms inverse normalisation of the ASR output, though both are beneficial in the final system combination.
Long audio streams typically need to be segmented into pieces of manageable length using voice activity detection (Ramirez et al, 2007) or more elaborate speaker diarisation methods (Anguera et al, 2012). These methods may not produce clean sentence boundaries, which is a clear problem in MT, as the boundaries can cut between actual sentences. Liu et al (2018) alleviate the problem by applying an LSTM-based resegmenter after the ASR system. Pham et al (2019) combine resegmentation, casing restoration, and punctuation restoration into a single ASR post-processing task, and apply an NMT model to it.
Coupling between ASR and MT The SLT search is often described in Bayesian terms, as shown in Table 3. Generally, pipeline search is based on the assumption that P(y|z, x) = P(y|z), i.e. given the source language transcript, the translation does not depend on the speech. It is still possible to take the uncertainty of the transcription into account under this conditional independence assumption, but it rules out the use of paralinguistic cues, e.g. prosody. In pure serial pipeline search, first the 1-best ASR result is decoded, and then only this 1-best result is translated. The hard choice in 1-best decoding is especially susceptible to error propagation. Early work in SLT found consistent improvements with loosely coupled search, where a rich representation carrying the ASR uncertainty, such as an N-best list or a word lattice, is used in translation. Tightly coupled search, i.e. joint decoding, is also possible, although its application is limited by excessive computational demands. In tightly coupled search, the translation model also influences which ASR hypotheses are searched further. This was done by representing both the ASR and the phrase-based MT search spaces as Weighted Finite State Transducers (WFST) (Matusov et al, 2006; Zhou, 2013). Osamura et al (2018) implement a type of loose coupling by using the softmax posterior distribution from the ASR module as the input for NMT. Loose coupling via lattice inputs in NMT is not straightforward. Sperber et al (2017a) implement LatticeLSTM for lattice inputs in RNN-based NMT, and find that preserving the uncertainty in the ASR output is beneficial for SLT. Zhang et al (2019) further propose a Transformer model which can use lattice inputs, and find that it outperforms both a standard Transformer and a LatticeLSTM baseline in an SLT task. However, tight coupling of NMT and ASR has not been proposed in pipeline SLT.
In addition to coupled decoding, end-to-end SLT leverages coupled training. This can avoid sub-optimisation: for phrase-based MT and HMM-GMM ASR, He et al (2011) show how optimising the ASR component purely for WER can produce worse results in SLT. He and Deng (2013) foreshadow end-to-end neural SLT systems, proposing a joint, end-to-end optimisation procedure for a pipeline of HMM-GMM ASR and phrase-based MT. In the proposed approach, the ASR and MT components are first trained separately, and then the whole pipeline is jointly optimised for sentence-level BLEU, by iteratively sampling sets of competing hypotheses from the pipeline and updating the parameters of the submodels discriminatively.
End-to-end spoken language translation
The first attempts to use end-to-end methods for SLT were published in 2016. This period saw experimentation with a wide variety of approaches, before research focus converged on sequence-to-sequence architectures. The earliest methods (Duong et al, 2016; Anastasopoulos et al, 2016; Bansal et al, 2017) were able to align source language audio to target language text, but they were not able to perform translation. The first true end-to-end SLT system is presented by Bérard et al (2016). Still a proof of concept, it was trained on BTEC French→English with synthetic audio containing a small number of speakers. Figure 7 shows the different types of training data applicable for SLT. The standard learning setup for end-to-end SLT can only train from untranscribed SLT data. The task is very challenging, as data of this type is scarce, and the representation gap between source audio and target text is large. The source transcript is useful as an intermediary representation, a stepping stone that divides the gap into two smaller ones: modality conversion and translation. Many learning setups (see Figure 8), e.g. pretraining, multi-task learning, and knowledge distillation, have been applied for exploiting the source transcripts. In early experiments, no new examples are introduced for the auxiliary task(s); only source transcript labels for the SLT examples are added. Later, the same learning setups have been applied to exploit the more abundant auxiliary ASR and MT data.
An important milestone towards parity with pipeline approaches was to achieve better translation quality when both the end-to-end system and the pipeline system are trained on the same SLT data. This milestone was reached by Weiss et al (2017), training on the 163h Fisher & Callhome Spanish→English dataset. However, this condition is unrealistically constrained, as pipeline methods are naturally capable of exploiting the more abundant paired ASR and MT data. When the constraint is lifted, pipeline methods improve to a level that is difficult or impossible to reach on small amounts of source audio-translated text data. The effective use of auxiliary data was a key insight going forward towards achieving parity with pipeline approaches.

Fig. 7: Four types of data that can be used to train SLT systems. Untranscribed SLT is the minimal type of data for end-to-end systems. Adding source text transcripts completes the triple. The source text is an intermediate representation which divides the SLT mapping into a modality conversion and a translation. Two types of auxiliary data, ASR and MT data, form adjacent pairs in the triple, leaving one of the ends empty. The auxiliary data can be used as is for pretraining or multi-task learning, or it can be completed into synthetic triples using external TTS or MT systems.

Figure 8 shows learning setups that have been applied for exploiting source transcripts and auxiliary data. Weiss et al (2017) use a multi-task learning procedure with ASR as the auxiliary task, training only on transcribed SLT data. In multi-task learning (Caruana, 1997), multiple tasks are trained in parallel, with some network components shared between the tasks. Bérard et al (2018) compare pretraining (sequential transfer) with multi-task learning (parallel transfer), finding very little difference between the two. In pretraining, some of the parameters from a network trained to perform an auxiliary task are used to initialise parameters in the network for the main task. Their system is trained only on transcribed SLT data, with two auxiliary tasks: pretraining the encoder and the decoder with ASR and textual MT, respectively. Stoian et al (2019) compare the effects of pretraining on auxiliary ASR datasets of different languages and sizes, concluding that the WER of the ASR system is more predictive of the final translation quality than language relatedness. Anastasopoulos and Chiang (2018) blur the line between pipeline and end-to-end approaches by using a multi-task learning setup with two-step decoding. First, the source transcript is decoded using the ASR decoder. A second SLT decoder then attends to both the speech input and the hidden states of the ASR decoder. While the system is trained end-to-end, the two-step decoding is still necessary at translation time. The system is trained only on transcribed SLT data. Liu et al (2019) focus on exploiting source transcripts by means of knowledge distillation. They train the student SLT model to match the output probabilities of a text-only MT teacher model, finding that knowledge distillation is better than pretraining. Inaguma et al (2019b) also see substantial improvements from knowledge distillation when adding auxiliary textual parallel data.
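As a rough illustration of sequential transfer, the sketch below initialises the speech encoder of a toy SLT network from an equally toy ASR model with an identical encoder; all module shapes, and the idea of sharing only the encoder, are illustrative rather than taken from any cited system.

```python
import torch
import torch.nn as nn

class ToySLTModel(nn.Module):
    """Toy end-to-end SLT network: a speech encoder over log-Mel frames
    and a recurrent decoder over target subwords."""

    def __init__(self):
        super().__init__()
        self.speech_encoder = nn.LSTM(80, 256, batch_first=True)
        self.decoder = nn.LSTM(256, 256, batch_first=True)
        self.out = nn.Linear(256, 8000)

slt = ToySLTModel()
asr = ToySLTModel()  # stand-in for a model already trained on an ASR task

# Sequential transfer: copy the pretrained ASR encoder weights, then
# fine-tune `slt` end-to-end on (speech, translation) pairs as usual.
slt.speech_encoder.load_state_dict(asr.speech_encoder.state_dict())
```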
Wang et al (2019a) introduce the Tandem Connectionist Encoding Network (TCEN), which allows neural network components to be pretrained while minimising both the number of parameters not transferred from the pretraining phase, and the mismatch of components between pretraining and fine-tuning. The final network consists of four components: an ASR encoder, an MT encoder, an MT attention, and an MT decoder. The ASR encoder is pretrained with a Connectionist Temporal Classification objective, which does not require a separate ASR decoder that would go to waste after pretraining. The last three components can be pretrained with a textual MT task. Jia et al (2019) show that augmenting auxiliary data is more effective than multi-task learning. MT data is augmented with synthesised speech, while ASR data is augmented with synthetic target text by forward translation using a text-only MT system (see Figure 7). These kinds of synthetic data augmentation are conceptually similar to the highly successful practice of using backtranslation (Sennrich et al, 2016a) to exploit monolingual data in textual MT. With both pretraining and multi-task learning, their end-to-end system slightly outperforms the pipeline; adding synthetic data substantially outperforms the pipeline. The systems are both trained on exceptionally large proprietary corpora: ca. 1,300h of translated speech and 49,000h of transcribed speech. Controversially, the system is also evaluated on a proprietary test set. The speech encoder is divided into two parts, of which only the first is pretrained on an ASR auxiliary task. The entire decoder is pretrained on the text MT task.

Fig. 8: Learning setups for end-to-end SLT. The standard framework uses untranscribed SLT data. Auxiliary data can be exploited in different ways, such as pretraining the encoder through ASR, pretraining the decoder through MT, knowledge distillation, or multi-task learning. The optional link in multi-task learning results in 2-step decoding. TCEN combines multiple types of pretraining.

Pino et al (2019) evaluate several pretraining and data augmentation approaches. They use TTS to synthesise source audio for parallel text data, finding that the effect depends on the quality and quantity of the synthetic data. Using textual MT to synthesise target text from ASR data is clearly beneficial. Pretraining the speech encoder on an ASR task is useful for the lower-resourced English→Romanian, but not for English→French. Pretraining on ASR is not a good substitute for using textual MT to augment the ASR data, but does speed up convergence of the SLT model. Using a combination of a VGG Transformer speech encoder and decoder, they very nearly reach parity with a strong pipeline system. Bansal et al (2019) apply cross-lingual pretraining on high-resource ASR to improve low-resource SLT. They use a small Mboshi→French SLT corpus without source transcripts; as Mboshi has no official orthography, transcripts may be difficult to collect. Pretraining the speech encoder using a completely unrelated high-resource language, English, effectively allows the model to account for acoustic variability, such as speaker and channel differences. Di Gangi et al (2019c) train a one-to-many multilingual system to translate from English to all 8 target languages of the MuST-C corpus, with an additional task pair for English ASR. Prepending a target language tag to the input (Johnson et al, 2017) is not effective in multilingual SLT, resulting in many acceptable translations into the wrong language. Better results are achieved with a stronger language signal using merge, a language-dependent shifting operation. Inaguma et al (2019a) train multilingual models for {en, es} → {en, fr, de} SLT. They achieve better results with the multilingual models than with bilingual ones, including pipeline methods for some test sets.
Noise-based data augmentation methods have also been applied to the speech audio. Bahar et al (2019) and Di Gangi et al (2019) apply spectral augmentation (SpecAugment), which randomly masks blocks of features that are consecutive in time and/or frequency.
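A sketch of this masking on a (frames, bands) log-Mel matrix is given below; the number and maximum size of the masks are illustrative hyperparameters, not the values used in the cited work.

```python
import numpy as np

def spec_augment(features, n_time_masks=2, n_freq_masks=2,
                 max_t=30, max_f=13, rng=np.random):
    """Sketch of SpecAugment-style masking: zero out random blocks of
    consecutive time steps and of consecutive frequency channels."""
    feats = features.copy()
    n_frames, n_bands = feats.shape
    for _ in range(n_time_masks):
        t = rng.randint(0, max_t + 1)
        t0 = rng.randint(0, max(1, n_frames - t))
        feats[t0:t0 + t, :] = 0.0       # mask a block of time steps
    for _ in range(n_freq_masks):
        f = rng.randint(0, max_f + 1)
        f0 = rng.randint(0, max(1, n_bands - f))
        feats[:, f0:f0 + f] = 0.0       # mask a block of frequency bands
    return feats

augmented = spec_augment(np.random.randn(500, 80))  # 500 frames, 80 bands
```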
End-to-end SLT architectures
There is a large variety of architectures that have been applied to end-to-end SLT, with no clear favourite having emerged. However, recent architectures all follow some type of sequence-to-sequence design that makes use of attention mechanisms.
Two varieties of LSTM layers have been used: standard bi-LSTM (e.g. Jia et al, 2019) and pyramidal bi-LSTM (e.g. Duong et al, 2016; Bérard et al, 2016; Bahar et al, 2019). The pyramidal construction of the encoder downsamples the long speech input sequence, making subsequent bi-LSTM layers and the attention mechanism faster, and alignment easier. Bérard et al (2016) use convolutional attention, finding it to be particularly useful with long input sequences. Following Weiss et al (2017), Bérard et al (2018) move away from the pyramidal bi-LSTM encoder architecture to convolution followed by bi-LSTM. The prepended convolutional layers perform the downsampling of the audio signal, making the pyramidal construction unnecessary. Transformers have also been used in many SLT systems. Liu et al (2019) propose an architecture in which all encoders and decoders are standard Transformer encoders and decoders, respectively. Pino et al (2019) further prepend VGG-style convolutional blocks to Transformer encoders and decoders, in order to replace the positional embedding layer of the standard Transformer architecture and to downsample the signal. Di Gangi et al (2019c) use a speech encoder which begins with stacks of convolutional layers interleaved with 2D self-attention (Dong et al, 2018), followed by a stack of Transformer layers. Salesky et al (2019) revisit the network-in-network (Lin et al, 2014a) architecture to achieve downsampling: parameters are shared spatially in a similar way to a CNN, but a full multi-layer perceptron network is applied to each window.
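The pyramidal construction can be sketched in a few lines: each layer concatenates adjacent frames before the recurrence, halving the sequence length. The dimensions below are illustrative.

```python
import torch
import torch.nn as nn

class PyramidalBiLSTMLayer(nn.Module):
    """Sketch of one pyramidal bi-LSTM layer: merge every pair of
    consecutive frames so that subsequent layers and the attention
    operate over a sequence of half the length."""

    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lstm = nn.LSTM(2 * in_dim, hid_dim, bidirectional=True,
                            batch_first=True)

    def forward(self, x):                  # x: (batch, T, in_dim)
        b, t, d = x.shape
        x = x[:, : t - t % 2, :]           # drop a trailing odd frame
        x = x.reshape(b, t // 2, 2 * d)    # concatenate adjacent frames
        out, _ = self.lstm(x)              # (batch, T // 2, 2 * hid_dim)
        return out

layer = PyramidalBiLSTMLayer(in_dim=80, hid_dim=128)
downsampled = layer(torch.randn(4, 1001, 80))  # -> shape (4, 500, 256)
```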
Convolutional Neural Networks are used in many SLT architectures, but only in combination with an LSTM or a Transformer, not in isolation. The combined CNN-LSTM architecture is popular in end-to-end ASR (Watanabe et al, 2018). The CNN is well suited for reducing the time scale to something manageable and for modelling short-range dependencies. The appended LSTM or Transformer is useful for encoding the semantic information for translation. The CNNs used in SLT are typically 2D convolutions (parameter sharing across both time and frequency). Time Delay Neural Networks (TDNN) are still popular in ASR, but have not, to the best of our knowledge, been used in end-to-end SLT. TDNNs can be seen as a 1D convolution, only sharing parameters across time. The VGG (Simonyan and Zisserman, 2015) architecture of CNNs is used in SLT, but ResNet (He et al, 2016) is not.
Comparison of architectures. In SLT, the choice between LSTM and Transformer architectures does not seem to be a settled matter: recent papers use both. Both architectures are powerful enough when stacked into sufficiently deep networks. Pino et al (2019) present a result in favour of the Transformer, as they only reach parity with their pipeline using Transformers, but not LSTMs. Inaguma et al (2019b) find that Transformers consistently outperform LSTMs in their experiments. A downside of the LSTM is slow training on the very long sequences encountered in speech translation. While the Transformer parallelises to a larger extent, making training fast, it is not immune to long sequences, as the self-attention is quadratic in memory with respect to the sequence length. The Transformer also lacks explicit modelling of short-range dependencies, as its self-attention learns dependencies of any range with equal difficulty. Di Gangi et al (2019b) attempt to augment the Transformer to alleviate some of these shortcomings.
Decoding units. In textual NMT, subword-level decoders have become the standard choice (Sennrich et al, 2016b). Most end-to-end SLT systems, in contrast, use character-level decoders. Word-level decoding is rare; Bansal et al (2018), focusing on a low-computation setting, use it to shorten the output sequences. Some well-performing recent systems use subword units (Liu et al, 2019; Jia et al, 2019; Pino et al, 2019; Bansal et al, 2019), although Wang et al (2019a) find characters to work better than subwords in their system.
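For readers unfamiliar with subword units, the sketch below shows how a subword vocabulary can be trained and applied with the sentencepiece library, one common choice among several; the corpus file name and the example output are hypothetical.

```python
import sentencepiece as spm

# Train a small BPE model on a plain-text corpus (hypothetical file name).
spm.SentencePieceTrainer.train(
    input="train.txt", model_prefix="bpe", vocab_size=8000, model_type="bpe"
)

sp = spm.SentencePieceProcessor(model_file="bpe.model")
print(sp.encode("spoken language translation", out_type=str))
# e.g. ['▁spoke', 'n', '▁language', '▁transl', 'ation'] -- the actual
# pieces depend on the training corpus
```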
Has parity with pipeline approaches been reached? Recent results (Jia et al, 2019; Pino et al, 2019) show that on certain tasks with large enough high-quality datasets, end-to-end systems can reach the same or even better performance than pipeline systems. In low-resource settings, end-to-end systems do not perform as well: in the IWSLT 2019 evaluation campaign, the pipeline system of Schneider and Waibel (2019) clearly outperformed all end-to-end submissions. Sperber et al (2019) find that current methods do not use auxiliary data effectively enough. The amount of transcribed SLT data is critical: when the dataset containing all three of source audio, source text and target text is sufficiently large, end-to-end methods outperform pipeline methods, while in lower-resource settings where the amount of SLT data is insufficient, pipeline methods are better. Table 4 shows results on the English→French Augmented LibriSpeech test set, which is one of the most competed test sets for SLT, particularly end-to-end SLT. It shows the rapid increase in performance during the last two years, and the importance of maximally exploiting available training data.
Future Directions
The previous sections provide a detailed overview of resources, definitions of various kinds of multimodal MT, and the extensive work that has been devoted to develop models for the different tasks. However, multimodal MT is still in its infancy. This is especially the case for truly end-to-end models, which have only appeared in recent years. Future work should explore more realistic settings that go beyond restricted domains and rather artificial problems such as visually-guided image caption translation.
Datasets and resources
Image-guided translation has, thus far, been studied with small-scale datasets, and there is a need for larger-scale datasets that bring the resources for this task closer to the size of image captioning and machine translation datasets (Tiedemann, 2012). Larger-scale datasets have started to appear for video-guided translation (Sanabria et al, 2018; Wang et al, 2019b). Spoken language translation datasets (Kocabiyikoglu et al, 2018; Niehues et al, 2018) are smaller than standard automatic speech recognition datasets. A common challenge in multimodal translation is the need for cross-lingually aligned resources, which are expensive to collect, or can result in a small dataset of clean examples (Kocabiyikoglu et al, 2018). Future work will obviously benefit from larger datasets; however, researchers should further explore the role of data augmentation strategies (Jia et al, 2019) in both spoken language translation and visually-guided translation.
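As a concrete example of one augmentation strategy applicable to speech inputs, the sketch below implements SpecAugment-style time and frequency masking (the technique used for SLT by Bahar et al, 2019); the mask sizes are illustrative defaults, not values from any cited system.

```python
import numpy as np

def spec_augment(features, num_time_masks=2, max_t=40,
                 num_freq_masks=2, max_f=15, rng=None):
    """SpecAugment-style masking sketch: zero out random time and frequency
    stripes of a (time, freq) feature matrix. Mask sizes are illustrative."""
    rng = rng or np.random.default_rng()
    x = features.copy()
    n_time, n_freq = x.shape
    for _ in range(num_time_masks):
        width = int(rng.integers(0, max_t + 1))
        start = int(rng.integers(0, max(n_time - width, 1)))
        x[start:start + width, :] = 0.0
    for _ in range(num_freq_masks):
        width = int(rng.integers(0, max_f + 1))
        start = int(rng.integers(0, max(n_freq - width, 1)))
        x[:, start:start + width] = 0.0
    return x

# e.g. a 10-second utterance as 1000 frames of 80-dim filterbank features
augmented = spec_augment(np.random.randn(1000, 80))
```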
Evaluation and "verification"
A significant challenge in image-guided translation has been to demonstrate that a model definitively improves translation with image guidance. This has resulted in more focused evaluation datasets that test noun sense disambiguation (Lala and Specia, 2018) and verb sense disambiguation (Gella et al, 2019). In addition to new evaluations, researchers are focusing their efforts on determining whether image-guided translation models are sensitive to perturbations in the inputs. Elliott (2018) showed that the translations of some trained models are not affected when guided by incongruent images (i.e. the translation models were not guided by the image that the source language sentence describes, but instead by a randomly selected image; see Section 5.1.5 for more details); later work demonstrated that training models with masked tokens increases the sensitivity of models to incongruent image guidance; and, more recently, Dutta Chowdhury and Elliott (2019) showed that trained models are more sensitive to textual perturbations than to incongruent image guidance. Overall, there is a need for more focused evaluations, especially in a wider variety of language pairs, and for models to be explicitly evaluated in these more challenging conditions. Future research on visually-guided translation should also ensure that new models are actually using the visual guidance in the translation process.
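The incongruent-decoding protocol can be summarised in a few lines. The sketch below, in the spirit of Elliott (2018), assumes hypothetical `model.translate` and `corpus_bleu` interfaces and is meant only to convey the procedure, not any particular implementation.

```python
import random

def incongruent_evaluation(model, sentences, images, references, corpus_bleu):
    """Sketch of incongruent decoding: compare translation quality with
    congruent vs. randomly shuffled image guidance. `model.translate` and
    `corpus_bleu` are hypothetical interfaces."""
    shuffled = images[:]
    random.shuffle(shuffled)  # break the sentence-image correspondence
    congruent = [model.translate(s, img) for s, img in zip(sentences, images)]
    incongruent = [model.translate(s, img) for s, img in zip(sentences, shuffled)]
    # A model that truly uses the image should degrade under incongruent guidance.
    return corpus_bleu(congruent, references), corpus_bleu(incongruent, references)
```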
In spoken language translation, this line of research into focused evaluations might involve digging into the cases where a good transcript is not enough to disambiguate the translation. One such case is translating into a language where the speaker's gender matters, such as French or Arabic (Elaraby et al, 2018). End-to-end SLT systems have the potential to use non-linguistic information from the speech signal to tackle these challenges, but it is currently unknown to what extent they are able to do so.
Shared tasks
In addition to stimulating research interest, shared task evaluation campaigns enable easier comparison of results by encouraging the use of standardised data conditions. The choice of data condition can be made with many aims in mind. To set up a race for state-of-the-art results using any and all available resources, it is enough to define a common test set; for this goal, any additional restrictions are unnecessary or even detrimental. For example, the GLUE natural language understanding benchmark (Wang et al, 2018a) takes this approach.
On the other hand, if the goal is to achieve as fair a comparison between architectures as possible, then strict limitations on the training data are required as well. Most evaluation campaigns choose this approach. However, it is far from trivial to select an appropriate set of data types to include in the condition. In many tasks, the use of auxiliary or synthetic data has proved vitally useful, e.g. exploiting monolingual data in textual MT through backtranslation (Sennrich et al, 2016a). In spoken language translation, the use of auxiliary data has prompted some discussion of when end-to-end systems can be considered to have reached parity with pipeline systems. To answer this question through a fair comparison, both types of systems should be evaluated under standardised data conditions.
Multimodality and new tasks
Most previous work on multimodal translation emphasises multimodal inputs and unimodal outputs, mainly text. Improved intelligent systems and interactive artificial agents will require the integration of speech synthesis, as well as a better integration of visual signals into generated communication. In addition to multimodal outputs, there should be a stronger emphasis on real-time language processing and translation. This new emphasis would also result in a closer integration of spoken language translation and visually-guided translation models.
In SLT, the visual modality could contribute both complementary and disambiguating information. In addition, visual speech recognition, automatic lip reading in particular (e.g. Chung et al, 2017), could aid SLT, for example by improving robustness to audio noise. The How2 dataset should enable a flurry of research in the nascent field of audio-visual SLT. Wu et al (2019a) present exploratory first results: BLEU improvements over the best non-visual baseline are not found, although the visual modality improves results when comparing between models using cascaded deliberation.
In zero-shot translation, a multilingual model is used for translating between a language pair that was not included in the parallel training data (Firat et al, 2016; Johnson et al, 2017). For example, for zero-shot French→Chinese translation, the training data contains pairs with French as the source language and pairs with Chinese as the target language, but no parallel French→Chinese data. Considering the ongoing research into multilingual models in multimodal translation (e.g. Inaguma et al, 2019a), and the fact that multimodal translation training data of sufficient size is available for a very limited number of language pairs, we expect an interest in zero-shot multimodal language translation in the future.
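A minimal sketch of the usual mechanism behind zero-shot translation, the artificial target-language token of Johnson et al (2017), follows; the token format and the example sentences are illustrative.

```python
# Sketch of the target-language-token trick of Johnson et al (2017): a single
# multilingual model is trained on pairs whose source side is prefixed with
# the desired target language, so unseen pairs can be requested at test time.
def tag_source(src_sentence: str, target_lang: str) -> str:
    return f"<2{target_lang}> {src_sentence}"

train_pairs = [
    (tag_source("Bonjour le monde", "en"), "Hello world"),  # fr->en seen
    (tag_source("Hello world", "zh"), "你好，世界"),          # en->zh seen
]
# Zero-shot request at test time: fr->zh was never seen during training,
# but the model can be asked for Chinese output from a French source.
zero_shot_input = tag_source("Bonjour le monde", "zh")
```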
Conclusions
Multimodal machine translation provides an exciting framework for further developing grounded cross-lingual natural language understanding, combining work in NLP, computer vision and speech processing. This paper provides a thorough survey of the current state of the art in the field, focusing on the specific tasks and benchmarks that drive the research. The survey details the essential language, vision and speech resources that are available to researchers, and discusses the models and learning approaches in the extensive literature on the various multimodal translation paradigms. Combining these different paradigms into truly multimodal end-to-end models of natural cross-lingual communication will be the goal of future developments, given the foundations laid out in this survey.
Fig. 1: Prominent examples of multimodal translation tasks, such as image-guided translation (IGT), video-guided translation (VGT) and spoken language translation (SLT).
Fig. 2: Contrasting examples from IAPR TC-12 image descriptions (top) and Multi30k image captions (bottom).
Fig. 3: Examples from How2 video subtitles (top) and VaTeX video descriptions (bottom), retrieved and adapted from Sanabria et al (2018) and Wang et al (2019b), respectively.
Fig. 4: A simplified view of the encoder-decoder architecture with attention: an English sentence is first encoded into a latent space, from which an attentive decoder sequentially generates the German sentence. The dashed recurrent connections are replaced by self-attention in fully-connected architectures such as Transformers (Vaswani et al, 2017).
Fig. 5: An overview of two common types of visual features extracted from CNNs.
Fig. 6: A broad visualisation of the state of the art in image-guided translation.
Table 1: Summary statistics from the most prominent multimodal machine translation datasets. We report image captions per language, and audio clips and segments per language pair.

Dataset                            | Media             | Text             | Languages
-----------------------------------|-------------------|------------------|---------------
IAPR TC-12 (Grubinger et al, 2006) | 20k images        | 20k captions     | de, en
Flickr8k (Rashtchian et al, 2010)  | 8k images         | 41k captions     | en, tr, zh
Flickr30k (Young et al, 2014)      | 30k images        | 158k captions    | de, en
Multi30k (Elliott et al, 2016)     | 30k images        | 30k captions     | cs, de, en, fr
QED (Abdelali et al, 2014)         | 23.1k video clips | 8k-335k segments | 20 languages
How2 (Sanabria et al, 2018)        | 13k video clips   | 189k segments    | en, pt
VaTeX (Wang et al, 2019b)          | 41k video clips   | 206k segments    | en, zh
WIT3 (Cettolo et al, 2012)         | 2,086 audio clips | 3-575k segments  | 109 languages
Fisher & Callhome                  |                   |                  |
Table 4: BLEU scores for SLT methods on English→French Augmented LibriSpeech/test. All systems are end-to-end, except for the pipeline system marked with a dagger (†).

Approach               | BLEU ↑ | SLT (h) | ASR (h) | MT (sent) | Description
-----------------------|--------|---------|---------|-----------|--------------------------------------------
Bérard et al (2018)    | 13.4   | 100     |         |           | CNN+LSTM. Multi-task.
Di Gangi et al (2019b) | 13.8   | 236     |         |           | CNN+Transformer.
Bahar et al (2019)     | 17.0   | 100     | 130     | 95k       | Pyramidal LSTM. Pretraining, augmentation.
Liu et al (2019)       | 17.0   | 100     |         |           | Transformer. Knowledge distillation.
Inaguma et al (2019a)  | 17.3   | 472     |         |           | CNN+LSTM. Multilingual.
Pino et al (2019)      | 21.7   | 100     | 902     | 29M       | CNN+Transformer. Pretraining, augmentation.
Pino et al (2019) †    | 21.8   | 100     | 902     | 29M       | End-to-end ASR. CNN+LSTM.
Footnotes:
1. Derived from https://www.opensubtitles.com/
2. The multimodal translation task was not held in WMT 2019.
3. https://competitions.codalab.org/competitions/19917
4. http://www.ted.com/talks
5. http://wit3.fbk.eu
6. Speech: https://catalog.ldc.upenn.edu/LDC2010S01, Transcripts: https://catalog.ldc.upenn.edu/LDC2010T04
7. Speech: https://catalog.ldc.upenn.edu/LDC96S35, Transcripts: https://catalog.ldc.upenn.edu/LDC2010T04
8. https://amara.org/
9. It should be noted that the attention here is over the source language encodings, and hence not a visual/spatial attention.
Acknowledgments

This study has been supported by the MeMAD project, funded by the European Union's Horizon 2020 research and innovation programme (grant agreement № 780069), the FoTran and MultiMT projects, funded by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreements № 771113 and № 678017 respectively), and the MMVC project, funded by the Newton Fund Institutional Links grant programme (grant ID 352343575). We would also like to thank Maarit Koponen for her valuable feedback and her help in establishing our discussions of machine translation evaluation.
References

Abdelali A, Guzman F, Sajjad H, Vogel S (2014) The AMARA Corpus: Building parallel language resources for the educational domain. In: Proceedings of the 9th International Conference on Language Resources and Evaluation (LREC), Reykjavík, Iceland, pp 1856-1862
Akiba Y, Federico M, Kando N, Nakaiwa H, Paul M, Tsujii J (2004) Overview of the IWSLT 2004 evaluation campaign. In: Proceedings of the 2004 International Workshop on Spoken Language Translation, Kyoto, Japan
Anastasopoulos A, Chiang D (2018) Tied multitask learning for neural speech translation. In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), Association for Computational Linguistics, New Orleans, Louisiana, pp 82-91
Anastasopoulos A, Chiang D, Duong L (2016) An unsupervised probability model for speech-to-translation alignment of low-resource languages. In: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, Austin, Texas, pp 1255-1263
Anguera X, Bozonnet S, Evans N, Fredouille C, Friedland G, Vinyals O (2012) Speaker diarization: A review of recent research. IEEE Transactions on Audio, Speech, and Language Processing 20(2):356-370
Antol S, Agrawal A, Lu J, Mitchell M, Batra D, Lawrence Zitnick C, Parikh D (2015) VQA: Visual Question Answering. In: Proceedings of the IEEE International Conference on Computer Vision, pp 2425-2433
Arslan HS, Fishel M, Anbarjafari G (2018) Doubly attentive transformer machine translation. Computing Research Repository arXiv:1807.11605
Bahar P, Zeyer A, Schlüter R, Ney H (2019) On using SpecAugment for end-to-end speech translation. In: Proceedings of the 16th International Workshop on Spoken Language Translation
Bahdanau D, Cho K, Bengio Y (2015) Neural Machine Translation by Jointly Learning to Align and Translate. In: Proceedings of the 3rd International Conference on Learning Representations, San Diego, CA, USA
Baltrušaitis T, Ahuja C, Morency LP (2017) Multimodal Machine Learning: A Survey and Taxonomy. Computing Research Repository arXiv:1705.09406
Bansal S, Kamper H, Lopez A, Goldwater S (2017) Towards speech-to-text translation without speech recognition. In: Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, Association for Computational Linguistics, Valencia, Spain, pp 474-479
Bansal S, Kamper H, Livescu K, Lopez A, Goldwater S (2018) Low-resource speech-to-text translation. In: Interspeech 2018, pp 1298-1302
Bansal S, Kamper H, Livescu K, Lopez A, Goldwater S (2019) Pre-training on high-resource speech recognition improves low-resource speech-to-text translation. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Association for Computational Linguistics, Minneapolis, Minnesota, pp 58-68
Barrault L, Bougares F, Specia L, Lala C, Elliott D, Frank S (2018) Findings of the Third Shared Task on Multimodal Machine Translation. In: Proceedings of the Third Conference on Machine Translation, Volume 2: Shared Task Papers, Association for Computational Linguistics, Brussels, Belgium, pp 308-327
Belinkov Y, Bisk Y (2018) Synthetic and natural noise both break neural machine translation. In: International Conference on Learning Representations
Bengio Y, Ducharme R, Vincent P, Jauvin C (2003) A neural probabilistic language model. Journal of Machine Learning Research 3(Feb):1137-1155
Bentivogli L, Cettolo M, Federico M, Federmann C (2018) Machine Translation Human Evaluation: An investigation of evaluation based on Post-Editing and its relation with Direct Assessment. In: Proceedings of the 2018 International Workshop on Spoken Language Translation, Bruges, Belgium, pp 62-69
Bérard A, Pietquin O, Servan C, Besacier L (2016) Listen and translate: A proof of concept for end-to-end speech-to-text translation. In: NIPS 2016 End-to-end Learning for Speech and Audio Processing Workshop
Bérard A, Besacier L, Kocabiyikoglu AC, Pietquin O (2018) End-to-end automatic speech translation of audiobooks. In: International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE
Bernardi R, Cakici R, Elliott D, Erdem A, Erdem E, Ikizler-Cinbis N, Keller F, Muscat A, Plank B (2016) Automatic description generation from images: A survey of models, datasets, and evaluation measures. Journal of Artificial Intelligence Research 55:409-442
Boito MZ, Havard WN, Garnerin M, Ferrand ÉL, Besacier L (2019) MaSS: A large and clean multilingual corpus of sentence-aligned spoken utterances extracted from the Bible. Computing Research Repository arXiv:1907.12895
Caglayan O (2019) Multimodal Machine Translation. PhD thesis, Université du Maine
Caglayan O, Aransa W, Wang Y, Masana M, García-Martínez M, Bougares F, Barrault L, van de Weijer J (2016a) Does Multimodality Help Human and Machine for Translation and Image Captioning? In: Proceedings of the First Conference on Machine Translation, Association for Computational Linguistics, Berlin, Germany, pp 627-633
Caglayan O, Barrault L, Bougares F (2016b) Multimodal Attention for Neural Machine Translation. Computing Research Repository arXiv:1609.03976
Caglayan O, Aransa W, Bardet A, García-Martínez M, Bougares F, Barrault L, Masana M, Herranz L, van de Weijer J (2017a) LIUM-CVC Submissions for WMT17 Multimodal Translation Task. In: Proceedings of the Second Conference on Machine Translation, Association for Computational Linguistics, Copenhagen, Denmark, pp 432-439
Caglayan O, García-Martínez M, Bardet A, Aransa W, Bougares F, Barrault L (2017b) NMTPY: A flexible toolkit for advanced neural machine translation systems. Prague Bulletin of Mathematical Linguistics 109:15-28
Caglayan O, Bardet A, Bougares F, Barrault L, Wang K, Masana M, Herranz L, van de Weijer J (2018) LIUM-CVC submissions for WMT18 multimodal translation task. In: Proceedings of the Third Conference on Machine Translation, Association for Computational Linguistics, Brussels, Belgium, pp 603-608
Caglayan O, Madhyastha P, Specia L, Barrault L (2019) Probing the need for visual context in multimodal machine translation. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Association for Computational Linguistics, Minneapolis, Minnesota, pp 4159-4170
Calixto I, Liu Q (2017) Incorporating global visual features into attention-based neural machine translation. In: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, Copenhagen, Denmark, pp 992-1003
Calixto I, Elliott D, Frank S (2016) DCU-UvA multimodal MT system report. In: Proceedings of the First Conference on Machine Translation, Association for Computational Linguistics, Berlin, Germany, pp 634-638
Calixto I, Liu Q, Campbell N (2017) Doubly-attentive decoder for multi-modal neural machine translation. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Association for Computational Linguistics, pp 1913-1924
Calixto I, Rios M, Aziz W (2018) Latent visual cues for neural machine translation. Computing Research Repository arXiv:1811.00357
Carreira J, Noland E, Banki-Horvath A, Hillier C, Zisserman A (2018) A short note about Kinetics-600. Computing Research Repository arXiv:1808.01340
Caruana R (1997) Multitask learning. Machine Learning 28(1):41-75
Castilho S, Doherty S, Gaspari F, Moorkens J (2018) Approaches to human and machine translation quality assessment. In: Translation Quality Assessment: From Principles to Practice, Machine Translation: Technologies and Applications, Springer International Publishing, pp 9-38
Cettolo M, Girardi C, Federico M (2012) WIT3: Web Inventory of Transcribed and Translated Talks. In: Proceedings of the 16th Conference of the European Association for Machine Translation, Trento, Italy, pp 261-268
Cettolo M, Niehues J, Stüker S, Bentivogli L, Cattoni R, Federico M (2016) The IWSLT 2016 evaluation campaign. In: Proceedings of the 2016 International Workshop on Spoken Language Translation
Cettolo M, Federico M, Bentivogli L, Niehues J, Stüker S, Sudoh K, Yoshino K, Federmann C (2017) Overview of the IWSLT 2017 evaluation campaign. In: Proceedings of the 2017 International Workshop on Spoken Language Translation, Tokyo, Japan, pp 2-14
Chen X, Fang H, Lin TY, Vedantam R, Gupta S, Dollar P, Zitnick CL (2015) Microsoft COCO Captions: Data Collection and Evaluation Server. Computing Research Repository arXiv:1504.00325
Cheng Y, Tu Z, Meng F, Zhai J, Liu Y (2018) Towards robust neural machine translation. In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Association for Computational Linguistics, Melbourne, Australia, pp 1756-1766
Chesterman A, Wagner E (2002) Can Theory Help Translators? A Dialogue Between the Ivory Tower and the Wordface. Routledge
Chiu C, Sainath TN, Wu Y, Prabhavalkar R, Nguyen P, Chen Z, Kannan A, Weiss RJ, Rao K, Gonina E, Jaitly N, Li B, Chorowski J, Bacchiani M (2018) State-of-the-art speech recognition with sequence-to-sequence models. In: 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp 4774-4778
Cho K, van Merrienboer B, Bahdanau D, Bengio Y (2014) On the Properties of Neural Machine Translation: Encoder-Decoder Approaches. Computing Research Repository arXiv:1409.1259
Chung J, Gulcehre C, Cho K, Bengio Y (2014) Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling. Computing Research Repository arXiv:1412.3555
Chung JS, Senior A, Vinyals O, Zisserman A (2017) Lip reading sentences in the wild. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp 3444-3453
Clough P, Grubinger M, Deselaers T, Hanbury A, Müller H (2006) Overview of the ImageCLEF 2006 photographic retrieval and object annotation tasks. In: Proceedings of the 7th International Conference on Cross-Language Evaluation Forum (CLEF), Springer, pp 579-594
Delbrouck J, Dupont S (2017a) Multimodal compact bilinear pooling for multimodal neural machine translation. Computing Research Repository arXiv:1703.08084
Delbrouck JB, Dupont S (2017b) Modulating and attending the source image during encoding improves multimodal translation. Computing Research Repository arXiv:1712.03449
Delbrouck JB, Dupont S (2018) UMONS Submission for WMT18 Multimodal Translation Task. In: Proceedings of the Third Conference on Machine Translation, Volume 2: Shared Task Papers, Association for Computational Linguistics, Brussels, Belgium, pp 643-647
Delbrouck JB, Dupont S (2019) Adversarial reconstruction for multi-modal machine translation. Computing Research Repository arXiv:1910.02766
Deng J, Dong W, Socher R, Li LJ, Li K, Fei-Fei L (2009) ImageNet: A large-scale hierarchical image database. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, pp 248-255
Denkowski M, Lavie A (2014) Meteor Universal: Language specific translation evaluation for any target language. In: Proceedings of the Ninth Workshop on Statistical Machine Translation, Association for Computational Linguistics, pp 376-380
Di Gangi M, Negri M, Nguyen VN, Tebbifakhr A, Turchi M (2019) Data augmentation for end-to-end speech translation: FBK@IWSLT '19. In: Proceedings of the 16th International Workshop on Spoken Language Translation
Di Gangi MA, Cattoni R, Bentivogli L, Negri M, Turchi M (2019a) MuST-C: a Multilingual Speech Translation Corpus. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Association for Computational Linguistics, Minneapolis, Minnesota, pp 2012-2017
Di Gangi MA, Negri M, Turchi M (2019b) Adapting transformer to end-to-end spoken language translation. In: INTERSPEECH 2019, International Speech Communication Association (ISCA), pp 1133-1137
Di Gangi MA, Negri M, Turchi M (2019c) One-to-many multilingual end-to-end speech translation. In: 2019 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU)
Doherty S (2017) Issues in human and automatic translation quality assessment. In: Kenny D (ed) Human Issues in Translation Technology: The IATIS Yearbook, Routledge, pp 131-148
Dong D, Wu H, He W, Yu D, Wang H (2015) Multi-task learning for multiple language translation. In: Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Association for Computational Linguistics, Beijing, China, pp 1723-1732
Dong L, Xu S, Xu B (2018) Speech-Transformer: a no-recurrence sequence-to-sequence model for speech recognition. In: 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, pp 5884-5888
Drugan J (2013) Quality in Professional Translation: Assessment and Improvement. Continuum Advances in Translation, Bloomsbury Academic
Duong L, Anastasopoulos A, Chiang D, Bird S, Cohn T (2016) An attentional model for speech translation without transcription. In: Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Association for Computational Linguistics, San Diego, California, pp 949-959
Duselis J, Hutt M, Gwinnup J, Davis J, Sandvick J (2017) The AFRL-OSU WMT17 multimodal translation system: An image processing approach. In: Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers, Association for Computational Linguistics, Copenhagen, Denmark, pp 445-449
Dutta Chowdhury K, Elliott D (2019) Understanding the effect of textual adversaries in multimodal machine translation. In: Proceedings of the Beyond Vision and LANguage: inTEgrating Real-world kNowledge (LANTERN), Association for Computational Linguistics, Hong Kong, China, pp 35-40
Elaraby M, Tawfik AY, Khaled M, Hassan H, Osama A (2018) Gender aware spoken language translation applied to English-Arabic. In: 2018 2nd International Conference on Natural Language and Speech Processing (ICNLSP), IEEE, pp 1-6
Elliott D (2018) Adversarial evaluation of multimodal machine translation. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, pp 2974-2978
Elliott D, Kádár Á (2017) Imagination improves multimodal translation. In: Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Asian Federation of Natural Language Processing, Taipei, Taiwan, pp 130-141
Elliott D, Frank S, Hasler E (2015) Multi-language image description with neural sequence models. Computing Research Repository arXiv:1510.04709
Elliott D, Frank S, Sima'an K, Specia L (2016) Multi30k: Multilingual English-German Image Descriptions. In: Proceedings of the 5th Workshop on Vision and Language, Association for Computational Linguistics, Berlin, Germany, pp 70-74
Elliott D, Frank S, Barrault L, Bougares F, Specia L (2017) Findings of the Second Shared Task on Multimodal Machine Translation and Multilingual Image Description. In: Proceedings of the Second Conference on Machine Translation, Association for Computational Linguistics, Copenhagen, Denmark, pp 215-233
Federmann C, Lewis WD (2016) Microsoft Speech Language Translation (MSLT) Corpus: The IWSLT 2016 release for English, French and German. In: Proceedings of the 13th International Workshop on Spoken Language Translation (IWSLT), Seattle, USA
Federmann C, Lewis WD (2017) The Microsoft Speech Translation (MSLT) Corpus for Chinese and Japanese: Conversational test data for machine translation and speech recognition. In: Proceedings of the Machine Translation Summit XVI (MT Summit), Nagoya, Japan, pp 72-85
Firat O, Sankaran B, Al-onaizan Y, Yarman Vural FT, Cho K (2016) Zero-resource translation with multi-lingual neural machine translation. In: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, Austin, Texas, pp 268-277
Fomicheva M, Specia L (2016) Reference bias in monolingual machine translation evaluation. In: 54th Annual Meeting of the Association for Computational Linguistics, Berlin, Germany, ACL, pp 77-82
Frank S, Elliott D, Specia L (2018) Assessing multilingual multimodal image description: Studies of native speaker preferences and translator choices. Natural Language Engineering 24(03):393-413
Fukui A, Park DH, Yang D, Rohrbach A, Darrell T, Rohrbach M (2016) Multimodal compact bilinear pooling for visual question answering and visual grounding. In: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, Austin, Texas, pp 457-468
Gehring J, Auli M, Grangier D, Yarats D, Dauphin YN (2017) Convolutional Sequence to Sequence Learning. In: Proceedings of the 34th International Conference on Machine Learning - Volume 70, JMLR.org, ICML'17, pp 1243-1252
Gella S, Sennrich R, Keller F, Lapata M (2017) Image pivoting for learning multilingual multimodal representations. In: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, pp 2839-2845
Gella S, Elliott D, Keller F (2019) Cross-lingual visual verb sense disambiguation. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Association for Computational Linguistics, Minneapolis, Minnesota, pp 1998-2004
Ghahremani P, BabaAli B, Povey D, Riedhammer K, Trmal J, Khudanpur S (2014) A pitch extraction algorithm tuned for automatic speech recognition. In: 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp 2494-2498
Girshick R, Donahue J, Darrell T, Malik J (2014) Rich feature hierarchies for accurate object detection and semantic segmentation. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Graham Y, Baldwin T, Moffat A, Zobel J (2013) Continuous measurement scales in human evaluation of machine translation. In: Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, Association for Computational Linguistics, Sofia, Bulgaria, pp 33-41
Graves A, Schmidhuber J (2005) Framewise phoneme classification with bidirectional LSTM networks. In: Proceedings of the 2005 IEEE International Joint Conference on Neural Networks, IEEE, Montreal, Que., Canada, vol 4, pp 2047-2052
Graves A, Mohamed Ar, Hinton G (2013) Speech recognition with deep recurrent neural networks. In: 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, IEEE, Vancouver, BC, Canada, pp 6645-6649
Grubinger M, Clough P, Müller H, Deselaers T (2006) The IAPR TC-12 Benchmark: A New Evaluation Resource for Visual Information Systems. In: Proceedings of the OntoImage Workshop on Language Resources for Content-based Image Retrieval, Genoa, Italy, pp 13-23
Grönroos SA, Huet B, Kurimo M, Laaksonen J, Merialdo B, Pham P, Sjöberg M, Sulubacak U, Tiedemann J, Troncy R, Vázquez R (2018) The MeMAD submission to the WMT18 multimodal translation task. In: Proceedings of the Third Conference on Machine Translation, Volume 2: Shared Task Papers, Association for Computational Linguistics, Brussels, Belgium, pp 609-617
Guzman F, Sajjad H, Vogel S, Abdelali A (2013) The AMARA Corpus: Building resources for translating the web's educational content. In: Proceedings of the 10th International Workshop on Spoken Language Translation (IWSLT), Heidelberg, Germany
Gwinnup J, Sandvick J, Hutt M, Erdmann G, Duselis J, Davis J (2018) The AFRL-Ohio State WMT18 multimodal system: Combining visual with traditional. In: Proceedings of the Third Conference on Machine Translation, Association for Computational Linguistics, Brussels, Belgium, pp 618-621
He K, Xiangyu Z, Shaoqing R, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 770-778
He K, Gkioxari G, Dollár P, Girshick R (2017) Mask R-CNN. In: 2017 IEEE International Conference on Computer Vision (ICCV), pp 2980-2988
He X, Deng L (2013) Speech-centric information processing: An optimization-oriented approach. Proceedings of the IEEE 101(5):1116-1135
He X, Deng L, Acero A (2011) Why word error rate is not a good metric for speech recognizer training for the speech translation task? In: 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp 5632-5635
Helcl J, Libovický J (2017) CUNI System for the WMT17 Multimodal Translation Task. In: Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers, Association for Computational Linguistics, Copenhagen, Denmark, pp 450-457
Helcl J, Libovický J, Kocmi T, Musil T, Cífka O, Variš D, Bojar O (2018a) Neural Monkey: The current state and beyond. In: Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Papers), Association for Machine Translation in the Americas, Boston, MA, pp 168-176
Helcl J, Libovický J, Varis D (2018b) CUNI System for the WMT18 Multimodal Translation Task. In: Proceedings of the Third Conference on Machine Translation, Volume 2: Shared Task Papers, Association for Computational Linguistics, Brussels, Belgium, pp 622-629
Hieber F, Domhan T, Denkowski M, Vilar D, Sokolov A, Clifton A, Post M (2017) Sockeye: A Toolkit for Neural Machine Translation. Computing Research Repository arXiv:1712.05690
Hitschler J, Schamoni S, Riezler S (2016) Multimodal pivots for image caption translation. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Association for Computational Linguistics, Berlin, Germany, pp 2399-2409
Hochreiter S, Schmidhuber J (1997) Long Short-Term Memory. Neural Computation 9(8):1735-1780
Huang PY, Liu F, Shiang SR, Oh J, Dyer C (2016) Attention-based Multimodal Neural Machine Translation. In: Proceedings of the 1st Conference on Machine Translation, Association for Computational Linguistics, Berlin, Germany, vol 2, pp 639-645
Inaguma H, Duh K, Kawahara T, Watanabe S (2019a) Multilingual end-to-end speech translation. In: 2019 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU)
Inaguma H, Kiyono S, Soplin NEY, Suzuki J, Duh K, Watanabe S (2019b) ESPnet How2 speech translation system for IWSLT 2019: Pre-training, knowledge distillation, and going deeper. In: Proceedings of the 16th International Workshop on Spoken Language Translation
Batch normalization: Accelerating deep network training by reducing internal covariate shift. S Ioffe, C Szegedy, Proceedings of The 32nd International Conference on Machine Learning. The 32nd International Conference on Machine LearningIoffe S, Szegedy C (2015) Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: Proceedings of The 32nd International Conference on Machine Learning, pp 448-456
Distilling translations with visual awareness. J Ive, P Madhyastha, L Specia, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsIve J, Madhyastha P, Specia L (2019) Distilling translations with visual awareness. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, Florence, Italy, pp 6525-6538
AMARA: A sustainable, global solution for accessibility, powered by communities of volunteers. D Jansen, A Alcala, F Guzman, Universal Access in Human-Computer Interaction. Design for All and Accessibility Practice. SpringerJansen D, Alcala A, Guzman F (2014) AMARA: A sustainable, global solution for accessibility, powered by commu- nities of volunteers. In: Universal Access in Human-Computer Interaction. Design for All and Accessibility Practice, Springer, pp 401-411
Leveraging weakly supervised data to improve end-to-end speech-to-text translation. Y Jia, M Johnson, W Macherey, R J Weiss, Y Cao, C C Chiu, Ari N Laurenzo, S Wu, Y , ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEEJia Y, Johnson M, Macherey W, Weiss RJ, Cao Y, Chiu CC, Ari N, Laurenzo S, Wu Y (2019) Leveraging weakly super- vised data to improve end-to-end speech-to-text translation. In: ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, pp 7180-7184
Google's Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation. M Johnson, M Schuster, Q V Le, M Krikun, Y Wu, Z Chen, N Thorat, F Viégas, M Wattenberg, G Corrado, M Hughes, J Dean, Transactions of the Association for Computational Linguistics. 5Johnson M, Schuster M, Le QV, Krikun M, Wu Y, Chen Z, Thorat N, Viégas F, Wattenberg M, Corrado G, Hughes M, Dean J (2017) Google's Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation. Transac- tions of the Association for Computational Linguistics 5:339-351
Marian: Fast neural machine translation in C++. M Junczys-Dowmunt, R Grundkiewicz, T Dwojak, H Hoang, K Heafield, T Neckermann, F Seide, U Germann, A F Aji, N Bogoychev, Aft Martins, Proceedings of ACL 2018, System Demonstrations. ACL 2018, System DemonstrationsMelbourne, AustraliaAssociation for Computational LinguisticsJunczys-Dowmunt M, Grundkiewicz R, Dwojak T, Hoang H, Heafield K, Neckermann T, Seide F, Germann U, Aji AF, Bogoychev N, Martins AFT, Birch A (2018) Marian: Fast neural machine translation in C++. In: Proceedings of ACL 2018, System Demonstrations, Association for Computational Linguistics, Melbourne, Australia, pp 116-121
Lessons learned in multilingual grounded language learning. Elliott D Kádárá, M A Côté, G Chrupa La, A Alishahi, Proceedings of the 22nd Conference on Computational Natural Language Learning. the 22nd Conference on Computational Natural Language LearningBrussels, BelgiumAssociation for Computational LinguisticsKádárÁ, Elliott D, Côté MA, Chrupa la G, Alishahi A (2018) Lessons learned in multilingual grounded language learning. In: Proceedings of the 22nd Conference on Computational Natural Language Learning, Association for Computational Linguistics, Brussels, Belgium, pp 402-412
Visual question answering: Datasets, algorithms, and future challenges. K Kafle, C Kanan, Computer Vision and Image Understanding. 163Kafle K, Kanan C (2017) Visual question answering: Datasets, algorithms, and future challenges. Computer Vision and Image Understanding 163:3-20
Recurrent Continuous Translation Models. N Kalchbrenner, P Blunsom, Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. the 2013 Conference on Empirical Methods in Natural Language ProcessingSeattle, Washington, USAAssociation for Computational LinguisticsKalchbrenner N, Blunsom P (2013) Recurrent Continuous Translation Models. In: Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, Seattle, Washington, USA, pp 1700-1709
The Kinetics human action video dataset. W Kay, J Carreira, K Simonyan, B Zhang, C Hillier, S Vijayanarasimhan, F Viola, T Green, T Back, P Natsev, M Suleyman, A Zisserman, arXiv:1705.06950Computing Research RepositoryKay W, Carreira J, Simonyan K, Zhang B, Hillier C, Vijayanarasimhan S, Viola F, Green T, Back T, Natsev P, Suleyman M, Zisserman A (2017) The Kinetics human action video dataset. Computing Research Repository arXiv:1705.06950
Multimodal Neural Language Models. R Kiros, R Salakhutdinov, R Zemel, Proceedings of the 31st International Conference on Machine Learning. the 31st International Conference on Machine LearningKiros R, Salakhutdinov R, Zemel R (2014) Multimodal Neural Language Models. In: Proceedings of the 31st Interna- tional Conference on Machine Learning
OpenNMT: Open-Source Toolkit for Neural Machine Translation. G Klein, Y Kim, Y Deng, J Senellart, A Rush, Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. the 55th Annual Meeting of the Association for Computational LinguisticsVancouver, CanadaAssociation for Computational LinguisticsKlein G, Kim Y, Deng Y, Senellart J, Rush A (2017) OpenNMT: Open-Source Toolkit for Neural Machine Transla- tion. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, Vancouver, Canada, pp 67-72
Augmenting Librispeech with French Translations: A Multimodal Corpus for Direct Speech Translation Evaluation. A C Kocabiyikoglu, L Besacier, O Kraif, Proceedings of the 11th Conference on Language Resources and Evaluation (LREC). the 11th Conference on Language Resources and Evaluation (LREC)European Language Resources Association (ELRAKocabiyikoglu AC, Besacier L, Kraif O (2018) Augmenting Librispeech with French Translations: A Multimodal Cor- pus for Direct Speech Translation Evaluation. In: Proceedings of the 11th Conference on Language Resources and Evaluation (LREC), European Language Resources Association (ELRA)
Moses: open source toolkit for statistical machine translation. P Koehn, P ; Koehn, R Zens, C Dyer, O Bojar, A Constantin, E Herbst, H Hoang, A Birch, C Callison-Burch, M Federico, N Bertoldi, B Cowan, W Shen, C Moran, Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions -ACL '07. the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions -ACL '07Prague, Czech Republic Lala C, Specia L; Miyazaki, JapanAssociation for Computational LinguisticsProceedings of the 11th Conference on Language Resources and EvaluationKoehn P (2009) Statistical machine translation. Cambridge University Press Koehn P, Zens R, Dyer C, Bojar O, Constantin A, Herbst E, Hoang H, Birch A, Callison-Burch C, Federico M, Bertoldi N, Cowan B, Shen W, Moran C (2007) Moses: open source toolkit for statistical machine translation. In: Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions -ACL '07, Association for Computational Linguistics, Prague, Czech Republic Lala C, Specia L (2018) Multimodal Lexical Translation. In: Proceedings of the 11th Conference on Language Resources and Evaluation, Miyazaki, Japan
Sheffield submissions for WMT18 multimodal translation shared task. C Lala, P S Madhyastha, C Scarton, L Specia, Proceedings of the Third Conference on Machine Translation. the Third Conference on Machine TranslationBelgium, BrusselsAssociation for Computational LinguisticsLala C, Madhyastha PS, Scarton C, Specia L (2018) Sheffield submissions for WMT18 multimodal translation shared task. In: Proceedings of the Third Conference on Machine Translation, Association for Computational Linguistics, Belgium, Brussels, pp 630-637
Meteor: an automatic metric for MT evaluation with high levels of correlation with human judgments. A Lavie, A Agarwal, Proceedings of the Second Workshop on Statistical Machine Translation -StatMT '07. the Second Workshop on Statistical Machine Translation -StatMT '07Prague, Czech RepublicAssociation for Computational LinguisticsLavie A, Agarwal A (2007) Meteor: an automatic metric for MT evaluation with high levels of correlation with human judgments. In: Proceedings of the Second Workshop on Statistical Machine Translation -StatMT '07, Association for Computational Linguistics, Prague, Czech Republic, pp 228-231
JANUS-III: Speech-tospeech translation in multiple languages. A Lavie, A Waibel, L Levin, M Finke, D Gates, M Gavalda, T Zeppenfeld, P Zhan, 1997 IEEE International Conference on Acoustics, Speech, and Signal Processing. Munich, GermanyIEEE Comput. Soc. Press1Lavie A, Waibel A, Levin L, Finke M, Gates D, Gavalda M, Zeppenfeld T, Zhan P (1997) JANUS-III: Speech-to- speech translation in multiple languages. In: 1997 IEEE International Conference on Acoustics, Speech, and Signal Processing, IEEE Comput. Soc. Press, Munich, Germany, vol 1, pp 99-102
Jasper: An End-to-End Convolutional Neural Acoustic Model. J Li, V Lavrukhin, B Ginsburg, R Leary, O Kuchaiev, J M Cohen, H Nguyen, R T Gadde, Proc. Interspeech. InterspeechLi J, Lavrukhin V, Ginsburg B, Leary R, Kuchaiev O, Cohen JM, Nguyen H, Gadde RT (2019) Jasper: An End-to-End Convolutional Neural Acoustic Model. In: Proc. Interspeech 2019, pp 71-75
Adding Chinese Captions to Images. X Li, W Lan, J Dong, H Liu, Proceedings of the 2016 ACM on International Conference on Multimedia Retrieval -ICMR '16. the 2016 ACM on International Conference on Multimedia Retrieval -ICMR '16New York, New York, USAACM PressLi X, Lan W, Dong J, Liu H (2016) Adding Chinese Captions to Images. In: Proceedings of the 2016 ACM on Interna- tional Conference on Multimedia Retrieval -ICMR '16, ACM Press, New York, New York, USA, pp 271-275
Attention strategies for multi-source sequence-to-sequence learning. J Libovický, J Helcl, Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. the 55th Annual Meeting of the Association for Computational LinguisticsAssociation for Computational Linguistics2Short Papers)Libovický J, Helcl J (2017) Attention strategies for multi-source sequence-to-sequence learning. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Association for Computational Linguistics, pp 196-202
CUNI system for WMT16 automatic post-editing and multimodal translation tasks. J Libovický, J Helcl, M Tlustý, O Bojar, P Pecina, Proceedings of the First Conference on Machine Translation. the First Conference on Machine TranslationBerlin, GermanyAssociation for Computational Linguistics2Shared Task PapersLibovický J, Helcl J, Tlustý M, Bojar O, Pecina P (2016) CUNI system for WMT16 automatic post-editing and multimodal translation tasks. In: Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, Association for Computational Linguistics, Berlin, Germany, pp 646-654
Input combination strategies for multi-source transformer decoder. J Libovický, J Helcl, D Mareček, Proceedings of the Third Conference on Machine Translation: Research Papers. the Third Conference on Machine Translation: Research PapersBelgium, BrusselsAssociation for Computational LinguisticsLibovický J, Helcl J, Mareček D (2018) Input combination strategies for multi-source transformer decoder. In: Proceed- ings of the Third Conference on Machine Translation: Research Papers, Association for Computational Linguistics, Belgium, Brussels, pp 253-260
Multimodality in Machine Translation. J Libovický, PrahaCharles University, Faculty of Mathematics and Physics, Institute of Formal and Applied LinguisticsPhD thesisLibovický J (2019) Multimodality in Machine Translation. PhD thesis, Charles University, Faculty of Mathematics and Physics, Institute of Formal and Applied Linguistics, Praha
Network in network. M Lin, Q Chen, S Yan, International Conference on Learning Representations. ICLRLin M, Chen Q, Yan S (2014a) Network in network. In: International Conference on Learning Representations (ICLR)
Microsoft COCO: Common Objects in Context. T Y Lin, M Maire, S Belongie, J Hays, P Perona, D Ramanan, P Dollár, C L Zitnick, Proceedings of the 13th European Conference on Computer Vision. Fleet D, Pajdla T, Schiele B, Tuytelaars Tthe 13th European Conference on Computer VisionZurich, SwitzerlandSpringer International Publishing8693Lin TY, Maire M, Belongie S, Hays J, Perona P, Ramanan D, Dollár P, Zitnick CL (2014b) Microsoft COCO: Common Objects in Context. In: Fleet D, Pajdla T, Schiele B, Tuytelaars T (eds) Proceedings of the 13th European Conference on Computer Vision, Springer International Publishing, Zurich, Switzerland, vol 8693, pp 740-755
Deep learning for acoustic modeling in parametric speech generation: A systematic review of existing techniques and future trends. Z H Ling, S Y Kang, H Zen, A Senior, M Schuster, X J Qian, H M Meng, L Deng, IEEE Signal Processing Magazine. 332Ling ZH, Kang SY, Zen H, Senior A, Schuster M, Qian XJ, Meng HM, Deng L (2015) Deep learning for acoustic modeling in parametric speech generation: A systematic review of existing techniques and future trends. IEEE Signal Processing Magazine 3(32):35-52
OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. P Lison, J ; Tiedemann, Ncc, K Choukri, T Declerck, S Goggi, M Grobelnik, B Maegaard, J Mariani, H Mazo, A Moreno, Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016). Odijk J, Piperidis Sthe Tenth International Conference on Language Resources and Evaluation (LREC 2016)Paris, FranceLison P, Tiedemann J (2016) OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In: Chair) NCC, Choukri K, Declerck T, Goggi S, Grobelnik M, Maegaard B, Mariani J, Mazo H, Moreno A, Odijk J, Piperidis S (eds) Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), European Language Resources Association (ELRA), Paris, France
The ustc-nel speech translation system at iwslt. D Liu, J Liu, W Guo, S Xiong, Z Ma, R Song, C Wu, Q Liu, Proceedings of the 15th International Workshop on Spoken Language Translation. the 15th International Workshop on Spoken Language TranslationLiu D, Liu J, Guo W, Xiong S, Ma Z, Song R, Wu C, Liu Q (2018) The ustc-nel speech translation system at iwslt 2018. In: Proceedings of the 15th International Workshop on Spoken Language Translation, pp 70-75
End-to-end speech translation with knowledge distillation. Y Liu, H Xiong, Z He, J Zhang, H Wu, H Wang, C Zong, InterspeechLiu Y, Xiong H, He Z, Zhang J, Wu H, Wang H, Zong C (2019) End-to-end speech translation with knowledge distillation. In: Interspeech 2019
Multi-task sequence to sequence learning. M T Luong, Q V Le, I Sutskever, O Vinyals, L Kaiser, arXiv:1511.06114Computing Research RepositoryLuong MT, Le QV, Sutskever I, Vinyals O, Kaiser L (2015) Multi-task sequence to sequence learning. Computing Research Repository arXiv:1511.06114
RWTH ASR Systems for LibriSpeech: Hybrid vs Attention. C Lüscher, E Beck, K Irie, M Kitza, W Michel, A Zeyer, R Schlüter, H Ney, Proc. Interspeech. InterspeechLüscher C, Beck E, Irie K, Kitza M, Michel W, Zeyer A, Schlüter R, Ney H (2019) RWTH ASR Systems for LibriSpeech: Hybrid vs Attention. In: Proc. Interspeech 2019, pp 231-235
OSU multimodal machine translation system report. M Ma, D Li, K Zhao, L Huang, Proceedings of the Second Conference on Machine Translation. the Second Conference on Machine TranslationCopenhagen, DenmarkAssociation for Computational Linguistics2Shared Task PapersMa M, Li D, Zhao K, Huang L (2017) OSU multimodal machine translation system report. In: Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers, Association for Computational Linguistics, Copenhagen, Denmark, pp 465-469
Results of the WMT18 Metrics Shared Task: Both characters and embeddings achieve good performance. Q Ma, O Bojar, Y Graham, Proceedings of the Third Conference on Machine Translation. the Third Conference on Machine TranslationBelgium, BrusselsAssociation for Computational Linguistics2Shared Task PapersMa Q, Bojar O, Graham Y (2018) Results of the WMT18 Metrics Shared Task: Both characters and embeddings achieve good performance. In: Proceedings of the Third Conference on Machine Translation, Volume 2: Shared Task Papers, Association for Computational Linguistics, Belgium, Brussels, pp 682-701
Results of the WMT19 metrics shared task: Segment-level and strong MT systems pose big challenges. Q Ma, J Wei, O Bojar, Y Graham, Proceedings of the 4th Conference on Machine Translation (WMT). the 4th Conference on Machine Translation (WMT)Florence, ItalyAssociation for Computational LinguisticsMa Q, Wei J, Bojar O, Graham Y (2019) Results of the WMT19 metrics shared task: Segment-level and strong MT systems pose big challenges. In: Proceedings of the 4th Conference on Machine Translation (WMT), Association for Computational Linguistics, Florence, Italy, pp 62-90
VIFIDEL: Evaluating the visual fidelity of image descriptions. P Madhyastha, J Wang, L Specia, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyMadhyastha P, Wang J, Specia L (2019) VIFIDEL: Evaluating the visual fidelity of image descriptions. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, pp 6539-6550
Sheffield MultiMT: Using Object Posterior Predictions for Multimodal Machine Translation. P S Madhyastha, J Wang, L Specia, Proceedings of the Second Conference on Machine Translation. the Second Conference on Machine TranslationCopenhagen, DenmarkAssociation for Computational Linguistics2Shared Task PapersMadhyastha PS, Wang J, Specia L (2017) Sheffield MultiMT: Using Object Posterior Predictions for Multimodal Machine Translation. In: Proceedings of the Second Conference on Machine Translation, Volume 2: Shared Task Papers, Association for Computational Linguistics, Copenhagen, Denmark, pp 470-476
Deep captioning with multimodal recurrent neural networks (m-rnn). J Mao, W Xu, Y Yang, J Wang, Z Huang, A Yuille, International Conference on Learning Representations. ICLRMao J, Xu W, Yang Y, Wang J, Huang Z, Yuille A (2015) Deep captioning with multimodal recurrent neural networks (m-rnn). In: International Conference on Learning Representations (ICLR)
Integrating speech recognition and machine translation: Where do we stand?. E Matusov, S Kanthak, H Ney, 2006 IEEE International Conference on Acoustics Speech and Signal Processing Proceedings. 5Matusov E, Kanthak S, Ney H (2006) Integrating speech recognition and machine translation: Where do we stand? In: 2006 IEEE International Conference on Acoustics Speech and Signal Processing Proceedings, vol 5, pp V-V
Recurrent neural network based language model. T Mikolov, M Karafiát, L Burget, J Cernocký, S Khudanpur, INTERSPEECH, ISCAMikolov T, Karafiát M, Burget L, Cernocký J, Khudanpur S (2010) Recurrent neural network based language model. In: Kobayashi T, Hirose K, Nakamura S (eds) INTERSPEECH, ISCA, pp 1045-1048
Cross-lingual image caption generation. T Miyazaki, N Shimizu, Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL). the 54th Annual Meeting of the Association for Computational Linguistics (ACL)Berlin, GermanyAssociation for Computational LinguisticsMiyazaki T, Shimizu N (2016) Cross-lingual image caption generation. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), Association for Computational Linguistics, Berlin, Germany, pp 1780-1790
A Mogadala, M Kalimuthu, D Klakow, arXiv:1907.09358Trends in Integration of Vision and Language Research: A Survey of Tasks, Datasets, and Methods. Computing Research Repository. Mogadala A, Kalimuthu M, Klakow D (2019) Trends in Integration of Vision and Language Research: A Survey of Tasks, Datasets, and Methods. Computing Research Repository arXiv:1907.09358
Understanding how deep belief networks perform acoustic modelling. A Mohamed, G Hinton, G Penn, 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Mohamed A, Hinton G, Penn G (2012) Understanding how deep belief networks perform acoustic modelling. In: 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp 4273-4276
Automatic interpreting telephony research at ATR. T Morimoto, Proceedings of a Workshop on Machine Translation. a Workshop on Machine TranslationMorimoto T (1990) Automatic interpreting telephony research at ATR. In: Proceedings of a Workshop on Machine Translation, UMIST
Zero-resource machine translation by multimodal encoder-decoder network with multimedia pivot. H Nakayama, N Nishida, Machine Translation. 311-2Nakayama H, Nishida N (2017) Zero-resource machine translation by multimodal encoder-decoder network with mul- timedia pivot. Machine Translation 31(1-2):49-64
Speech translation: Coupling of recognition and translation. H Ney, Proceedings of the 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing. the 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing1Ney H (1999) Speech translation: Coupling of recognition and translation. In: Proceedings of the 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), IEEE, vol 1, pp 517-520
The IWSLT 2018 Evaluation Campaign. J Niehues, R Cattoni, S Stüker, M Cettolo, M Turchi, M Federico, Proceedings of the 2018 International Workshop on Spoken Language Translation. the 2018 International Workshop on Spoken Language TranslationBruges, BelgiumNiehues J, Cattoni R, Stüker S, Cettolo M, Turchi M, Federico M (2018) The IWSLT 2018 Evaluation Campaign. In: Proceedings of the 2018 International Workshop on Spoken Language Translation, Bruges, Belgium
The IWSLT 2019 evaluation campaign. J Niehues, R Cattoni, S Stüker, M Negri, M Turchi, T L Ha, E Salesky, R Sanabria, L Barrault, L Specia, M Federico, Proceedings of the 16th International Workshop on Spoken Language Translation. the 16th International Workshop on Spoken Language TranslationIWSLTNiehues J, Cattoni R, Stüker S, Negri M, Turchi M, Ha TL, Salesky E, Sanabria R, Barrault L, Specia L, Federico M (2019) The IWSLT 2019 evaluation campaign. In: Proceedings of the 16th International Workshop on Spoken Language Translation (IWSLT)
Minimum Error Rate Training in Statistical Machine Translation. F J Och, Proceedings of the 41st Annual Meeting on Association for Computational Linguistics. the 41st Annual Meeting on Association for Computational LinguisticsStroudsburg, PA, USAAssociation for Computational Linguistics1Och FJ (2003) Minimum Error Rate Training in Statistical Machine Translation. In: Proceedings of the 41st An- nual Meeting on Association for Computational Linguistics -Volume 1, Association for Computational Linguistics, Stroudsburg, PA, USA, ACL '03, pp 160-167
Using spoken word posterior features in neural machine translation. K Osamura, T Kano, S Sakti, K Sudoh, S Nakamura, Proceedings of the 15th International Workshop on Spoken Language Translation. the 15th International Workshop on Spoken Language TranslationOsamura K, Kano T, Sakti S, Sudoh K, Nakamura S (2018) Using spoken word posterior features in neural machine translation. In: Proceedings of the 15th International Workshop on Spoken Language Translation, pp 189-195
fairseq: A fast, extensible toolkit for sequence modeling. M Ott, S Edunov, A Baevski, A Fan, S Gross, N Ng, D Grangier, M Auli, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations). the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)Minneapolis, MinnesotaAssociation for Computational LinguisticsOtt M, Edunov S, Baevski A, Fan A, Gross S, Ng N, Grangier D, Auli M (2019) fairseq: A fast, extensible toolkit for sequence modeling. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), Association for Computational Linguistics, Minneapolis, Minnesota, pp 48-53
Librispeech: an ASR corpus based on public domain audio books. V Panayotov, G Chen, D Povey, S Khudanpur, Acoustics, Speech and Signal Processing. IEEE2015 IEEE International Conference onPanayotov V, Chen G, Povey D, Khudanpur S (2015) Librispeech: an ASR corpus based on public domain audio books. In: Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, IEEE, pp 5206-5210
BLEU: a method for automatic evaluation of machine translation. K Papineni, S Roukos, T Ward, W J Zhu, Proceedings of the 40th Annual Meeting on Association for Computational Linguistics -ACL '02. the 40th Annual Meeting on Association for Computational Linguistics -ACL '02Philadelphia, PennsylvaniaAssociation for Computational LinguisticsPapineni K, Roukos S, Ward T, Zhu WJ (2001) BLEU: a method for automatic evaluation of machine translation. In: Proceedings of the 40th Annual Meeting on Association for Computational Linguistics -ACL '02, Association for Computational Linguistics, Philadelphia, Pennsylvania
Overview of the IWSLT 2010 evaluation campaign. M Paul, M Federico, S Stüker, Proceedings of the 2010 International Workshop on Spoken Language Translation. the 2010 International Workshop on Spoken Language TranslationPaul M, Federico M, Stüker S (2010) Overview of the IWSLT 2010 evaluation campaign. In: Proceedings of the 2010 International Workshop on Spoken Language Translation
Spoken language translation using automatically transcribed text in training. S Peitz, S Wiesler, M Nußbaum-Thom, H Ney, Proceedings of the 9th International Workshop on Spoken Language Translation. the 9th International Workshop on Spoken Language TranslationPeitz S, Wiesler S, Nußbaum-Thom M, Ney H (2012) Spoken language translation using automatically transcribed text in training. In: Proceedings of the 9th International Workshop on Spoken Language Translation, pp 276-283
Harnessing indirect training data for end-to-end automatic speech translation: Tricks of the trade. N Q Pham, T S Nguyen, T L Ha, J Hussain, F Schneider, J Niehues, S Stüker, A Waibel, J Pino, L Puzon, J Gu, X Ma, A D Mccarthy, D Gopinath, Proceedings of the 16th International Workshop on Spoken Language Translation. the 16th International Workshop on Spoken Language TranslationProceedings of the 16th International Workshop on Spoken Language Translation. IWSLTPham NQ, Nguyen TS, Ha TL, Hussain J, Schneider F, Niehues J, Stüker S, Waibel A (2019) The iwslt 2019 kit speech translation system. In: Proceedings of the 16th International Workshop on Spoken Language Translation Pino J, Puzon L, Gu J, Ma X, McCarthy AD, Gopinath D (2019) Harnessing indirect training data for end-to-end automatic speech translation: Tricks of the trade. In: Proceedings of the 16th International Workshop on Spoken Language Translation (IWSLT)
Improved speech-to-text translation with the Fisher and Callhome Spanish-English speech translation corpus. M Post, G Kumar, A Lopez, D Karakos, C Callison-Burch, S Khudanpur, Proceedings of the 10th International Workshop on Spoken Language Translation (IWSLT). the 10th International Workshop on Spoken Language Translation (IWSLT)Heidelberg, GermanyPost M, Kumar G, Lopez A, Karakos D, Callison-Burch C, Khudanpur S (2013) Improved speech-to-text translation with the Fisher and Callhome Spanish-English speech translation corpus. In: Proceedings of the 10th International Workshop on Spoken Language Translation (IWSLT), Heidelberg, Germany
Communication acoustics: an introduction to speech, audio and psychoacoustics. V Pulkki, M Karjalainen, John Wiley & SonsPulkki V, Karjalainen M (2015) Communication acoustics: an introduction to speech, audio and psychoacoustics. John Wiley & Sons
Linking people in videos with "their" names using coreference resolution. V Ramanathan, A Joulin, P Liang, L Fei-Fei, European conference on computer vision. SpringerRamanathan V, Joulin A, Liang P, Fei-Fei L (2014) Linking people in videos with "their" names using coreference resolution. In: European conference on computer vision, Springer, pp 95-110
Voice activity detection. fundamentals and speech recognition system robustness. J Ramirez, J M Gorriz, J C Segura, Robust SpeechIntechOpen, RijekaRamirez J, Gorriz JM, Segura JC (2007) Voice activity detection. fundamentals and speech recognition system robust- ness. In: Grimm M, Kroschel K (eds) Robust Speech, IntechOpen, Rijeka, chap 1
Collecting image annotations using Amazon's Mechanical Turk. C Rashtchian, P Young, M Hodosh, J Hockenmaier, Proceedings of the Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk. the Workshop on Creating Speech and Language Data with Amazon's Mechanical TurkAssociation for Computational LinguisticsRashtchian C, Young P, Hodosh M, Hockenmaier J (2010) Collecting image annotations using Amazon's Mechanical Turk. In: Proceedings of the Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk, Association for Computational Linguistics, pp 139-147
Assessing the impact of speech recognition errors on machine translation quality. N Ruiz, M Federico, AMTA 2014: proceedings of the eleventh conference of the Association for Machine Translation in the Americas. Vancouver, BCRuiz N, Federico M (2014) Assessing the impact of speech recognition errors on machine translation quality. AMTA 2014: proceedings of the eleventh conference of the Association for Machine Translation in the Americas, Vancouver, BC pp 261-274
Phonetically-oriented word error alignment for speech recognition error analysis in speech translation. N Ruiz, M Federico, 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU). Ruiz N, Federico M (2015) Phonetically-oriented word error alignment for speech recognition error analysis in speech translation. In: 2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), pp 296-302
Assessing the tolerance of neural machine translation systems against speech recognition errors. N Ruiz, Mad Gangi, N Bertoldi, M Federico, Proc. Interspeech. InterspeechRuiz N, Gangi MAD, Bertoldi N, Federico M (2017) Assessing the tolerance of neural machine translation systems against speech recognition errors. In: Proc. Interspeech 2017, pp 2635-2639
ImageNet Large Scale Visual Recognition Challenge. O Russakovsky, J Deng, H Su, J Krause, S Satheesh, S Ma, Z Huang, A Karpathy, A Khosla, M Bernstein, A C Berg, L Fei-Fei, International Journal of Computer Vision (IJCV). 1153Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, Huang Z, Karpathy A, Khosla A, Bernstein M, Berg AC, Fei-Fei L (2015) ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV) 115(3):211-252
Fluent translations from disfluent speech in end-to-end speech translation. T N Sainath, R J Weiss, A Senior, K W Wilson, O Vinyals, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesMinneapolis, MinnesotaAssociation for Computational Linguistics116th Annual Conference of the International Speech Communication Association Salesky E, Sperber M, Waibel ASainath TN, Weiss RJ, Senior A, Wilson KW, Vinyals O (2015) Learning the speech front-end with raw waveform cldnns. In: 16th Annual Conference of the International Speech Communication Association Salesky E, Sperber M, Waibel A (2019) Fluent translations from disfluent speech in end-to-end speech translation. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Association for Computational Linguistics, Minneapolis, Minnesota, pp 2786-2792
How2: A large-scale dataset for multimodal language understanding. R Sanabria, O Caglayan, S Palaskar, D Elliott, L Barrault, L Specia, F Metze, Proceedings of the Workshop on Visually Grounded Interaction and Language. the Workshop on Visually Grounded Interaction and LanguageSanabria R, Caglayan O, Palaskar S, Elliott D, Barrault L, Specia L, Metze F (2018) How2: A large-scale dataset for multimodal language understanding. In: Proceedings of the Workshop on Visually Grounded Interaction and Language (NeurIPS 2018)
Speaker adaptation of neural network acoustic models using i-vectors. G Saon, H Soltau, D Nahamoo, M Picheny, IEEE Workshop on Automatic Speech Recognition and Understanding. Saon G, Soltau H, Nahamoo D, Picheny M (2013) Speaker adaptation of neural network acoustic models using i-vectors. In: 2013 IEEE Workshop on Automatic Speech Recognition and Understanding, pp 55-59
KIT's submission to the IWSLT 2019 shared task on text translation. F Schneider, A Waibel, Proceedings of the 16th International Workshop on Spoken Language Translation. the 16th International Workshop on Spoken Language TranslationSchneider F, Waibel A (2019) KIT's submission to the IWSLT 2019 shared task on text translation. In: Proceedings of the 16th International Workshop on Spoken Language Translation
Bidirectional recurrent neural networks. M Schuster, K K Paliwal, IEEE Transactions on Signal Processing. 4511Schuster M, Paliwal KK (1997) Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing 45(11):2673-2681
Continuous space language models for statistical machine translation. H Schwenk, D Dechelotte, J L Gauvain, Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions. the COLING/ACL 2006 Main Conference Poster SessionsAssociation for Computational LinguisticsSchwenk H, Dechelotte D, Gauvain JL (2006) Continuous space language models for statistical machine translation. In: Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions, Association for Computational Linguistics, pp 723-730
Improving neural machine translation models with monolingual data. R Sennrich, B Haddow, A Birch, Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. the 54th Annual Meeting of the Association for Computational Linguistics1Long Papers)Sennrich R, Haddow B, Birch A (2016a) Improving neural machine translation models with monolingual data. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp 86-96
Neural machine translation of rare words with subword units. R Sennrich, B Haddow, A Birch, Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. the 54th Annual Meeting of the Association for Computational LinguisticsBerlin, GermanyAssociation for Computational Linguistics1Long Papers)Sennrich R, Haddow B, Birch A (2016b) Neural machine translation of rare words with subword units. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Association for Computational Linguistics, Berlin, Germany, pp 1715-1725
Nematus: a toolkit for neural machine translation. R Sennrich, O Firat, K Cho, A Birch-Mayne, B Haddow, J Hitschler, M Junczys-Dowmunt, S Läubli, Miceli Barone, A Mokry, J Nadejde, M , Proceedings of the EACL 2017 Software Demonstrations, Association for Computational Linguistics (ACL). the EACL 2017 Software Demonstrations, Association for Computational Linguistics (ACL)Sennrich R, Firat O, Cho K, Birch-Mayne A, Haddow B, Hitschler J, Junczys-Dowmunt M, Läubli S, Miceli Barone A, Mokry J, Nadejde M (2017) Nematus: a toolkit for neural machine translation. In: Proceedings of the EACL 2017 Software Demonstrations, Association for Computational Linguistics (ACL), pp 65-68
Shef-multimodal: Grounding machine translation on images. K Shah, J Wang, L Specia, Proceedings of the First Conference on Machine Translation. the First Conference on Machine TranslationBerlin, GermanyAssociation for Computational LinguisticsShah K, Wang J, Specia L (2016) Shef-multimodal: Grounding machine translation on images. In: Proceedings of the First Conference on Machine Translation, Association for Computational Linguistics, Berlin, Germany, pp 660-665
Lingvo: a modular and scalable framework for sequence-to-sequence modeling. J Shen, P Nguyen, Y Wu, Z Chen, arXiv:1902.08295Computing Research RepositoryShen J, Nguyen P, Wu Y, Chen Z, et al (2019) Lingvo: a modular and scalable framework for sequence-to-sequence modeling. Computing Research Repository arXiv:1902.08295
Very deep convolutional networks for large-scale image recognition. K Simonyan, A Zisserman, International Conference on Learning Representations. Simonyan K, Zisserman A (2015) Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations
A Study of Translation Edit Rate with Targeted Human Annotation. M Snover, B Dorr, R Schwartz, L Micciulla, J Makhoul, Proceedings of Association for Machine Translation in the Americas. Association for Machine Translation in the Americas6Snover M, Dorr B, Schwartz R, Micciulla L, Makhoul J (2006) A Study of Translation Edit Rate with Targeted Human Annotation. In: Proceedings of Association for Machine Translation in the Americas, 6
A Shared Task on Multimodal Machine Translation and Crosslingual Image Description. L Specia, S Frank, K Sima'an, D Elliott, Proceedings of the First Conference on Machine Translation. the First Conference on Machine TranslationBerlin, GermanyAssociation for Computational Linguistics2Shared Task PapersSpecia L, Frank S, Sima'an K, Elliott D (2016) A Shared Task on Multimodal Machine Translation and Crosslingual Image Description. In: Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, Association for Computational Linguistics, Berlin, Germany, pp 543-553
Translation quality and productivity: A study on rich morphology languages. L Specia, K Harris, F Blain, A Burchardt, V Macketanz, I Skadina, M Negri, M Turchi, Machine Translation Summit XVI. Nagoya, JapanSpecia L, Harris K, Blain F, Burchardt A, Macketanz V, Skadina I, Negri M, , Turchi M (2017) Translation quality and productivity: A study on rich morphology languages. In: Machine Translation Summit XVI, Nagoya, Japan, pp 55-71
Findings of the WMT 2018 Shared Task on Quality Estimation. L Specia, F Blain, V Logacheva, R F Astudillo, A Martins, Proceedings of the Third Conference on Machine Translation. the Third Conference on Machine TranslationBelgium, BrusselsAssociation for Computational Linguistics2Shared Task PapersSpecia L, Blain F, Logacheva V, Astudillo RF, Martins A (2018) Findings of the WMT 2018 Shared Task on Qual- ity Estimation. In: Proceedings of the Third Conference on Machine Translation, Volume 2: Shared Task Papers, Association for Computational Linguistics, Belgium, Brussels, pp 702-722
Neural lattice-to-sequence models for uncertain inputs. M Sperber, G Neubig, J Niehues, A Waibel, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. the 2017 Conference on Empirical Methods in Natural Language ProcessingCopenhagen, DenmarkAssociation for Computational LinguisticsSperber M, Neubig G, Niehues J, Waibel A (2017a) Neural lattice-to-sequence models for uncertain inputs. In: Proceed- ings of the 2017 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, Copenhagen, Denmark, pp 1380-1389
Toward robust neural machine translation for noisy input sequences. M Sperber, J Niehues, A Waibel, Proceedings of the 14th International Workshop on Spoken Language Translation. the 14th International Workshop on Spoken Language TranslationSperber M, Niehues J, Waibel A (2017b) Toward robust neural machine translation for noisy input sequences. In: Proceedings of the 14th International Workshop on Spoken Language Translation, pp 90-96
Attention-passing models for robust and data-efficient end-to-end speech translation. M Sperber, G Neubig, J Niehues, A Waibel, Transactions of the Association for Computational Linguistics. 7Sperber M, Neubig G, Niehues J, Waibel A (2019) Attention-passing models for robust and data-efficient end-to-end speech translation. Transactions of the Association for Computational Linguistics 7:313-325
The neural basis of multisensory integration in the midbrain: Its organization and maturation. B E Stein, T R Stanford, B A Rowland, Hearing Research. 2581multisensory integration in auditory and auditory-related areas of cortexStein BE, Stanford TR, Rowland BA (2009) The neural basis of multisensory integration in the midbrain: Its organi- zation and maturation. Hearing Research 258(1):4 -15, multisensory integration in auditory and auditory-related areas of cortex
Analyzing ASR pretraining for low-resource speech-to-text translation. M C Stoian, S Bansal, S Goldwater, arXiv:1910.10762Computing Research RepositoryStoian MC, Bansal S, Goldwater S (2019) Analyzing ASR pretraining for low-resource speech-to-text translation. Computing Research Repository arXiv:1910.10762
Sequence to Sequence Learning with Neural Networks. I Sutskever, O Vinyals, Q V Le, Proceedings of the 27th International Conference on Neural Information Processing Systems. the 27th International Conference on Neural Information Processing SystemsCambridge, MA, USAMIT PressSutskever I, Vinyals O, Le QV (2014) Sequence to Sequence Learning with Neural Networks. In: Proceedings of the 27th International Conference on Neural Information Processing Systems, MIT Press, Cambridge, MA, USA, NIPS'14, pp 3104-3112
A Japanese-to-English Speech Translation System: ATR-MATRIX. T Takezawa, T Morimoto, Y Sagisaka, N Campbell, H Iida, F Sugaya, A Yokoo, S Yamamoto, Fifth International Conference on Spoken Language Processing. Takezawa T, Morimoto T, Sagisaka Y, Campbell N, Iida H, Sugaya F, Yokoo A, Yamamoto S (1998) A Japanese-to- English Speech Translation System: ATR-MATRIX. In: Fifth International Conference on Spoken Language Process- ing
Neural machine translation with latent semantic of image and text. J ; Tiedemann, Ncc, K Choukri, T Declerck, M U Dogan, B Maegaard, J Mariani, A ; Moreno, Turkey Istanbul, J Toyama, M Misono, M Suzuki, K Nakayama, Y Matsuo, arXiv:1611.08459Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12). Odijk J, Piperidis Sthe Eight International Conference on Language Resources and Evaluation (LREC'12)Computing Research RepositoryParallel Data, Tools and Interfaces in OPUSTiedemann J (2012) Parallel Data, Tools and Interfaces in OPUS. In: Chair) NCC, Choukri K, Declerck T, Dogan MU, Maegaard B, Mariani J, Moreno A, Odijk J, Piperidis S (eds) Proceedings of the Eight International Conference on Language Resources and Evaluation (LREC'12), European Language Resources Association (ELRA), Istanbul, Turkey Toyama J, Misono M, Suzuki M, Nakayama K, Matsuo Y (2016) Neural machine translation with latent semantic of image and text. Computing Research Repository arXiv:1611.08459
Augmenting translation models with simulated acoustic confusions for improved spoken language translation. Y Tsvetkov, F Metze, C Dyer, Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics. the 14th Conference of the European Chapter of the Association for Computational LinguisticsGothenburg, SwedenAssociation for Computational LinguisticsTsvetkov Y, Metze F, Dyer C (2014) Augmenting translation models with simulated acoustic confusions for improved spoken language translation. In: Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, Association for Computational Linguistics, Gothenburg, Sweden, pp 616-625
Tasviret: A benchmark dataset for automatic Turkish description generation from images. M E Unal, B Citamak, S Yagcioglu, A Erdem, E Erdem, N I Cinbis, R Cakici, 24th Signal Processing and Communication Application Conference (SIU). Unal ME, Citamak B, Yagcioglu S, Erdem A, Erdem E, Cinbis NI, Cakici R (2016) Tasviret: A benchmark dataset for automatic Turkish description generation from images. In: 2016 24th Signal Processing and Communication Application Conference (SIU), pp 1977-1980
Attention is All you Need. A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, A N Gomez, L Kaiser, I Polosukhin, I Guyon, U V Luxburg, S Bengio, H Wallach, R Fergus, S Vishwanathan, R Garnett, Advances in Neural Information Processing Systems. Curran Associates, Inc30Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser L, Polosukhin I (2017) Attention is All you Need. In: Guyon I, Luxburg UV, Bengio S, Wallach H, Fergus R, Vishwanathan S, Garnett R (eds) Advances in Neural Information Processing Systems 30, Curran Associates, Inc., pp 5998-6008
Tensor2Tensor for neural machine translation. A Vaswani, S Bengio, E Brevdo, F Chollet, A Gomez, S Gouws, L Jones, L Kaiser, N Kalchbrenner, N Parmar, R Sepassi, N Shazeer, J Uszkoreit, Proceedings of the 13th Conference of the Association for Machine Translation in the Americas. the 13th Conference of the Association for Machine Translation in the AmericasBoston, MA1Association for Machine Translation in the AmericasVaswani A, Bengio S, Brevdo E, Chollet F, Gomez A, Gouws S, Jones L, Kaiser L, Kalchbrenner N, Parmar N, Sepassi R, Shazeer N, Uszkoreit J (2018) Tensor2Tensor for neural machine translation. In: Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Papers), Association for Machine Translation in the Americas, Boston, MA, pp 193-199
Finite-state speech-to-speech translation. E Vidal, Proceedings of the 1997 IEEE International Conference on Acoustics, Speech, and Signal Processing. the 1997 IEEE International Conference on Acoustics, Speech, and Signal Processing1Vidal E (1997) Finite-state speech-to-speech translation. In: Proceedings of the 1997 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), IEEE, vol 1, pp 111-114
Show and tell: A neural image caption generator. O Vinyals, A Toshev, S Bengio, D Erhan, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionIEEEVinyals O, Toshev A, Bengio S, Erhan D (2015) Show and tell: A neural image caption generator. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, IEEE, pp 3156-3164
Mobile Speech-to-Speech Translation of Spontaneous Dialogs: An Overview of the Final Verbmobil System. Wahlster, Wahlster W (SpringerBerlin Heidelberg; Berlin, Heidelberged) Verbmobil: Foundations of Speech-to-Speech TranslationWahlster W (2000) Mobile Speech-to-Speech Translation of Spontaneous Dialogs: An Overview of the Final Verbmobil System. In: Wahlster W (ed) Verbmobil: Foundations of Speech-to-Speech Translation, Springer Berlin Heidelberg, Berlin, Heidelberg, pp 3-21
GLUE: A multi-task benchmark and analysis platform for natural language understanding. A Wang, A Singh, J Michael, F Hill, O Levy, S Bowman, Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLPBrussels, BelgiumAssociation for Computational LinguisticsWang A, Singh A, Michael J, Hill F, Levy O, Bowman S (2018a) GLUE: A multi-task benchmark and analysis platform for natural language understanding. In: Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, Association for Computational Linguistics, Brussels, Belgium, pp 353-355
Bridging the gap between pre-training and fine-tuning for end-to-end speech translation. C Wang, Y Wu, S Liu, Z Yang, M Zhou, arXiv:1909.07575Computing Research RepositoryWang C, Wu Y, Liu S, Yang Z, Zhou M (2019a) Bridging the gap between pre-training and fine-tuning for end-to-end speech translation. Computing Research Repository arXiv:1909.07575
[
"CLUCDD: CONTRASTIVE DIALOGUE DISENTANGLEMENT VIA CLUSTERING",
"CLUCDD: CONTRASTIVE DIALOGUE DISENTANGLEMENT VIA CLUSTERING"
] | [
"Jingsheng Gao \nSchool of Electronic Information and Electrical Engineering\nShanghai Jiao Tong University\nChina\n",
"Zeyu Li \nSchool of Electronic Information and Electrical Engineering\nShanghai Jiao Tong University\nChina\n",
"Suncheng Xiang \nSchool of Electronic Information and Electrical Engineering\nShanghai Jiao Tong University\nChina\n",
"Ting Liu \nSchool of Electronic Information and Electrical Engineering\nShanghai Jiao Tong University\nChina\n",
"Yuzhuo Fu \nSchool of Electronic Information and Electrical Engineering\nShanghai Jiao Tong University\nChina\n"
] | [
"School of Electronic Information and Electrical Engineering\nShanghai Jiao Tong University\nChina",
"School of Electronic Information and Electrical Engineering\nShanghai Jiao Tong University\nChina",
"School of Electronic Information and Electrical Engineering\nShanghai Jiao Tong University\nChina",
"School of Electronic Information and Electrical Engineering\nShanghai Jiao Tong University\nChina",
"School of Electronic Information and Electrical Engineering\nShanghai Jiao Tong University\nChina"
] | [] | A huge number of multi-participant dialogues happen online every day, which leads to difficulty in understanding the nature of dialogue dynamics for both humans and machines. Dialogue disentanglement aims at separating an entangled dialogue into detached sessions, thus increasing the readability of long disordered dialogue. Previous studies mainly focus on message-pair classification and clustering in two-step methods, which cannot guarantee the whole clustering performance in a dialogue. To address this challenge, we propose a simple yet effective model named CluCDD, which aggregates utterances by contrastive learning. More specifically, our model pulls utterances in the same session together and pushes away utterances in different ones. Then a clustering method is adopted to generate predicted clustering labels. Comprehensive experiments conducted on the Movie Dialogue dataset and IRC dataset demonstrate that our model achieves a new state-of-the-art result 1 . | 10.48550/arxiv.2302.08146 | [
"https://export.arxiv.org/pdf/2302.08146v1.pdf"
] | 256,900,678 | 2302.08146 | 20c67328d58af2ea5159fc87c3b69ceb4cd48a52 |
CLUCDD: CONTRASTIVE DIALOGUE DISENTANGLEMENT VIA CLUSTERING
Jingsheng Gao
School of Electronic Information and Electrical Engineering
Shanghai Jiao Tong University
China
Zeyu Li
School of Electronic Information and Electrical Engineering
Shanghai Jiao Tong University
China
Suncheng Xiang
School of Electronic Information and Electrical Engineering
Shanghai Jiao Tong University
China
Ting Liu
School of Electronic Information and Electrical Engineering
Shanghai Jiao Tong University
China
Yuzhuo Fu
School of Electronic Information and Electrical Engineering
Shanghai Jiao Tong University
China
CLUCDD: CONTRASTIVE DIALOGUE DISENTANGLEMENT VIA CLUSTERING
Index Terms: Dialogue Disentanglement, Contrastive Learning, Sequential Information, Clustering, BERT
A huge number of multi-participant dialogues happen online every day, which leads to difficulty in understanding the nature of dialogue dynamics for both humans and machines. Dialogue disentanglement aims at separating an entangled dialogue into detached sessions, thus increasing the readability of long disordered dialogue. Previous studies mainly focus on message-pair classification and clustering in two-step methods, which cannot guarantee the whole clustering performance in a dialogue. To address this challenge, we propose a simple yet effective model named CluCDD, which aggregates utterances by contrastive learning. More specifically, our model pulls utterances in the same session together and pushes away utterances in different ones. Then a clustering method is adopted to generate predicted clustering labels. Comprehensive experiments conducted on the Movie Dialogue dataset and IRC dataset demonstrate that our model achieves a new state-of-the-art result 1 .
INTRODUCTION
With the rapid development of the Internet and chat apps, online group chatting has grown in popularity, generating many multi-party dialogues [1]. Such dialogues often involve many users whose messages are entangled with one another, making it difficult for a newcomer to grasp the ongoing topics quickly. As shown in Fig. 1, several sessions are randomly interleaved into one larger dialogue, and this entanglement makes it hard for new users to find a specific topic. Automatic dialogue disentanglement segments the entangled utterances into separate sessions, helping both users and machines locate a specific session quickly.
Owing to its ability to separate topics automatically, dialogue disentanglement has proved valuable for the corresponding downstream tasks [2,3,4,5]. Given the considerable variability in the content of each dialogue, a conventional classification method is not readily applicable to this task. Existing methods can be roughly divided into two categories: two-step and end-to-end. Two-step methods [6,7] first predict the reply relationships among message pairs and then apply a clustering method to build sessions from those relationships. However, these two-step methods are susceptible to noisy utterance-pair relations, resulting in poor final clustering results. End-to-end methods [8,9] were proposed to bridge the gap between the two steps and usually perform better, since the dialogue and session representations can be used to predict the clustering results directly. Liu et al. [8] proposed an end-to-end transition-based model trained in a supervised way; their E2E model classifies each utterance into an existing session or a new one. Liu et al. [9] proposed an unsupervised co-training method based on pseudo data generated from speaker labels. However, previous end-to-end methods do not aggregate the utterances of a dialogue directly; they mainly focus on classifying the relations between utterances and sessions.
Recently, Pre-trained Language Models (PrLMs) have considerably improved downstream natural language processing tasks by providing effective backbones. Building on pre-trained BERT [10], we construct an end-to-end framework, Contrastive Dialogue Disentanglement via Clustering (CluCDD). Our approach uses contrastive learning to distinguish utterances belonging to different sessions of an entangled dialogue. We first obtain the utterance representations of each dialogue from pre-trained BERT. Since an utterance is temporally coherent with the preceding utterances, we capture this sequential information with a sequential feature fusion (SFF) encoder. Because some clustering methods require the number of clusters, we also introduce a cluster head that predicts the session number. Finally, a clustering step produces the predicted sessions. Experiments on the Movie Dialogue dataset [8] and the Ubuntu IRC dataset [11] show that CluCDD outperforms existing methods. Our contributions are summarized as follows:
• We propose CluCDD, an effective framework for dialogue disentanglement. The model captures the utterances and sequential representations in dialogues.
• The contrastive training paradigm is employed to shape the feature space, and a cluster head predicts the session number to enhance the final clustering result.
• Extensive experiments demonstrate that our CluCDD is suitable for solving dialogue disentanglement and establishes the state-of-the-art on two datasets.
PROPOSED METHODOLOGY
Problem Definition
Dialogue disentanglement is a clustering task. Given a dialogue $D = \{u_1, u_2, \cdots, u_n\}$, where $u_i$ denotes the $i$-th utterance in $D$, each utterance carries a session label $l_i$, and different sessions commonly correspond to different topics or reply-to relations. The goal is to separate the dialogue $D$ into sessions $l_1, l_2, \cdots, l_k$, where session $l_i$ contains $m_i$ utterances and $\sum_{i=1}^{k} m_i = n$.
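To make the setup concrete, here is a toy instance of the task (the utterances and labels are invented for illustration):

```python
# A dialogue is an ordered list of utterances; disentanglement must
# recover the session label of each utterance.
dialogue = [
    "anyone know how to mount a usb drive?",  # session 0
    "what movie should I watch tonight?",     # session 1
    "try mounting it under /mnt first",       # session 0
    "Inception is great",                     # session 1
]
gold_sessions = [0, 1, 0, 1]  # k = 2 sessions, m_1 = m_2 = 2, n = 4
```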
Model Architecture
Utterance Encoder. BERT [10] yields strong performance across many downstream natural language processing tasks.
In CluCDD, we employ pre-trained BERT as our utterance encoder. BERT consists of twelve ($L$) blocks, where each block contains two types of sub-layers: multi-head self-attention and a fully connected feed-forward network. Let $u_i$ denote an utterance in dialogue $D$; its input is processed into $[\mathrm{CLS}, word_1, word_2, \ldots, word_m]$, where CLS is the embedding used for the pre-trained classification task:

$$u_{i,1}, u_{i,2}, \cdots, u_{i,m+1} = \mathrm{BERT}(u_i) \quad (1)$$

where $u_{i,k}$ denotes the $k$-th output embedding derived from $u_i$, and $u_{i,j} \in \mathbb{R}^{1 \times d}$ for all $j = 1, 2, \ldots, m+1$,
where $m$ is the number of words in the utterance and $d$ is the output dimension of BERT. Following previous work [12], we apply mean-pooling to obtain the average utterance embedding from BERT.

Sequential Feature Fusion (SFF). The chronological order of utterances is significant to their semantic meaning in a dialogue: the historical utterances and the overall content of the dialogue shape the meaning of a given utterance. We therefore apply a sequential feature fusion module to capture this temporal context. A fully connected (FC) layer is added after the output of BERT, so the $i$-th utterance representation $v_i$ is:

$$v_i = W(f(u_{i,1}, u_{i,2}, \cdots, u_{i,m+1})) + b \quad (2)$$
where $f$ represents the mean-pooling layer, and $W \in \mathbb{R}^{d \times d}$ and $b \in \mathbb{R}^{1 \times d}$ are parameters of the FC layer. We then add a Bi-LSTM [13] layer to incorporate contextual clues from the whole dialogue. The input of the Bi-LSTM is the $n$ utterance representations $v_1, v_2, \cdots, v_n$ of one dialogue, and the utterance representations after the Bi-LSTM are:

$$h_1, h_2, \cdots, h_n = \mathrm{Bi\text{-}LSTM}(v_1, v_2, \cdots, v_n) \quad (3)$$
Given the utterance representations enriched with sequential information, we apply a regularization step to prevent overfitting and extreme cases. This sublayer is a fully connected feed-forward network, a linear transformation followed by a ReLU activation:

$$r_1, r_2, \cdots, r_n = \Psi(\mathrm{ReLU}(W(h_1, h_2, \cdots, h_n) + b)) \quad (4)$$

where $\Psi$ denotes L2 normalization, $W \in \mathbb{R}^{d \times d}$, and $b \in \mathbb{R}^{1 \times d}$.

Cluster Head. The cluster number is a significant parameter in several clustering methods, e.g., K-means [14]. To provide it, we add an extra cluster head that predicts the session number of each dialogue. The cluster head comprises an LSTM layer and a linear layer, and it shares the same input as the SFF module. The training loss of the cluster head, $L_H$, is the cross-entropy loss:
$$L_H = \mathbb{E}_{y \sim P}[-\log P(y = k)] \quad (5)$$

where $k$ is the gold session number of each dialogue.
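A minimal PyTorch sketch of this architecture follows. It is not the authors' released code: the module sizes follow the paper's description, the backbone is assumed to be the HuggingFace `bert-base-uncased` checkpoint, and the mean pooling here ignores padding for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import BertModel

class CluCDDSketch(nn.Module):
    """Sketch: BERT utterance encoder, SFF (FC + Bi-LSTM + FFN), cluster head."""

    def __init__(self, d=768, max_sessions=14):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.fc_in = nn.Linear(d, d)                       # W, b of Eq. (2)
        self.bilstm = nn.LSTM(d, d // 2, bidirectional=True,
                              batch_first=True)            # Eq. (3)
        self.fc_out = nn.Linear(d, d)                      # W, b of Eq. (4)
        self.head_lstm = nn.LSTM(d, d, batch_first=True)   # cluster head
        self.head_fc = nn.Linear(d, max_sessions)          # session-count logits

    def forward(self, input_ids, attention_mask):
        # input_ids: (n, seq_len), the n utterances of one dialogue.
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        u = out.last_hidden_state.mean(dim=1)              # mean pooling, Eq. (1)
        v = self.fc_in(u).unsqueeze(0)                     # (1, n, d), Eq. (2)
        h, _ = self.bilstm(v)                              # Eq. (3)
        r = F.normalize(torch.relu(self.fc_out(h)),        # Eq. (4): ReLU then L2
                        p=2, dim=-1).squeeze(0)
        _, (k_state, _) = self.head_lstm(u.unsqueeze(0))   # shares the SFF input
        k_logits = self.head_fc(k_state[-1])               # used in Eq. (5)
        return r, k_logits
```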
Contrastive Learning
To maximize the consistency between positive pairs relative to negative pairs, we adopt contrastive learning for our clustering method. Let $r_1, r_2, \cdots, r_n$ be the outputs of the SFF module, let $(r_i, r_j)$ be a pair of input vectors, and let $y_{ij}$ be a binary label assigned to this pair: $y_{ij} = 0$ if $r_i$ and $r_j$ are deemed similar (same session), and $y_{ij} = 1$ if they are deemed dissimilar.
Following [15], we adopt a Euclidean distance-based loss function for better performance. The learned distance $D_W$ between $r_i$ and $r_j$ is the Euclidean distance between the outputs of $G_W$:

$$D_W^{ij} = D_W(r_i, r_j) = \|G_W(r_i) - G_W(r_j)\|_2 \quad (6)$$
Consequently, the contrastive loss function is:

$$L_C = \sum_{i=1}^{n} \sum_{j=i+1}^{n} L_c(r_i, r_j, y_{ij}) \quad (7)$$

$$L_c(r_i, r_j, y_{ij}) = (1 - y_{ij})\,L_S(D_W^{ij}) + y_{ij}\,L_D(D_W^{ij}) \quad (8)$$

where $L_S$ and $L_D$ are the partial losses that are minimized for similar and dissimilar pairs, respectively.
In our experiments, the exact loss function is:

$$L_c(r_i, r_j, y_{ij}) = (1 - y_{ij})\,\frac{1}{2}\left(D_W^{ij}\right)^2 + y_{ij}\,\frac{1}{2}\left(\max\{0,\, m - D_W^{ij}\}\right)^2 \quad (9)$$

where the margin $m > 0$ is a hyper-parameter that keeps the loss from dropping below zero.
Adding the $L_H$ of Eq. (5), the total training loss $L$ is:

$$L = L_C + \gamma L_H \quad (10)$$
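A sketch of Eqs. (6)-(10) in PyTorch, assuming `r` and `k_logits` come from a model like the sketch above and treating $G_W$ as the identity on the SFF features:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(r, labels, margin=1.0):
    """Eqs. (6)-(9): pairwise contrastive loss over one dialogue.
    r: (n, d) utterance features; labels: (n,) gold session ids.
    y_ij = 0 for same-session pairs, 1 otherwise."""
    d = torch.cdist(r, r, p=2)                                 # D_W, Eq. (6)
    y = (labels.unsqueeze(0) != labels.unsqueeze(1)).float()
    pair = (1 - y) * 0.5 * d.pow(2) \
         + y * 0.5 * torch.clamp(margin - d, min=0).pow(2)     # Eq. (9)
    mask = torch.triu(torch.ones_like(pair), diagonal=1)       # pairs with i < j
    return (pair * mask).sum()                                 # Eq. (7)

def total_loss(r, labels, k_logits, k_gold, gamma=0.1):
    # Eq. (10); k_gold is a LongTensor of shape (1,) holding the
    # gold session count of the dialogue.
    return contrastive_loss(r, labels) + gamma * F.cross_entropy(k_logits, k_gold)
```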
3. EXPERIMENTS
Dataset and Training details
We conduct experiments on two dialogue disentanglement datasets: the Movie Dialogue dataset [8] and the IRC dataset [11]. The larger Movie Dialogue dataset is collected from online movie scripts and contains 29669/2036/2010 instances for train/dev/test. The original label of each utterance in the Movie Dialogue dataset is a session label, so it can be used for end-to-end dialogue disentanglement directly; the session number of a dialogue is 2, 3, or 4. The IRC dataset is annotated from online conversations, and its labels are the reply-to relations between utterance pairs. Because it lacks direct session-label annotations for utterances, the IRC dataset was originally used for two-step dialogue disentanglement. Similar to [8], we group every 50 consecutive utterances into a dialogue. The minimum and maximum session numbers in our generated IRC dataset are 2 and 14. We adopt the Adam optimizer with an initial learning rate of 5e-4, and the hidden size of all layers is set to 768. Meanwhile, as suggested in [12], we freeze all but the last transformer layer of pre-trained BERT to speed up the training procedure and improve training stability. The $\gamma$ in Eq. (10) is set to 0.1.
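A sketch of the described setup, assuming the `CluCDDSketch` model above and the HuggingFace BertModel layout (`bert.encoder.layer`):

```python
import torch

model = CluCDDSketch()

# Freeze all BERT parameters, then unfreeze only the last transformer layer.
for p in model.bert.parameters():
    p.requires_grad = False
for p in model.bert.encoder.layer[-1].parameters():
    p.requires_grad = True

# Adam over the remaining trainable parameters, lr = 5e-4 as in the paper.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=5e-4)
```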
Comparison with the State-of-the-art
Following previous work [8,9,11] on dialogue disentanglement, five evaluation metrics are employed in our experiments to evaluate the performance of the different methods: Normalized Mutual Information (NMI), Adjusted Rand Index (ARI), Loc_3, One-to-one Overlap (1-1), and Shen F value (Shen-F). Higher scores on all of these measures imply more accurate clustering results.
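NMI and ARI are available directly in scikit-learn; Loc_3, 1-1, and Shen-F require custom implementations following the cited work. A minimal example with invented labels:

```python
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

gold = [0, 0, 1, 1, 0, 2]   # gold session ids (toy example)
pred = [1, 1, 0, 0, 1, 2]   # predicted cluster ids

# Both metrics are invariant to relabeling, so this perfect
# (merely renamed) clustering scores 1.0 on each.
print(normalized_mutual_info_score(gold, pred))  # 1.0
print(adjusted_rand_score(gold, pred))           # 1.0
```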
The results on the Movie Dialogue dataset and the Ubuntu IRC dataset are presented in Table 1, where the best results are highlighted in bold. CISIR is a two-step method proposed by [7]. BERT denotes first retrieving the relationships between utterance pairs with BERT and then applying the clustering method of [7]. E2E is an end-to-end method proposed by [8] that predicts session labels directly. CluCDD w/o SFF fine-tunes pre-trained BERT in a contrastive manner and then applies K-means with the session number k predicted by the cluster head. CluCDD w/o BL removes the Bi-LSTM layer from the SFF module. CluCDD is our full model; the clustering method used in Table 1 is also K-means.
From Table 1, we observe that CluCDD outperforms all baselines on both the Movie Dialogue dataset and the IRC dataset. Moreover, the Bi-LSTM is important for capturing the sequential information between utterances, since each utterance representation can retain the hidden state of the whole dialogue.
We attribute these improvements to three main reasons. First, a direct clustering method suits dialogue disentanglement, which is inherently a clustering problem. Second, contrastive learning pulls utterances in the same session closer and pushes utterances in different sessions further apart, which is crucial for separating an entangled dialogue into sessions. Third, CluCDD makes full use of pre-trained knowledge and exploits the utterance representations and sequential features effectively.
Ablation study
Session Number. We first conduct experiments on the Movie Dialogue dataset to investigate how the session number influences model performance. We compare the Shen F value of E2E, CluCDD w/o SFF, and CluCDD trained under different session-number settings. Since the Movie Dialogue dataset only contains dialogues with 2, 3, or 4 sessions, we split it into three subsets according to the number of sessions per dialogue. We train and evaluate the models on the different subsets, with the result shown in Fig. 3a: our CluCDD outperforms the baselines for every number of sessions.

Influence of margin. We further conduct experiments on the Movie Dialogue dataset to study how the contrastive loss margin $m$ influences our model. In Eq. (9), the margin defines a radius, and dissimilar pairs contribute to the loss only if their distance is within this radius; the margin also keeps the contrastive loss from dropping below zero. As Fig. 3b reveals, the value of the margin affects performance only mildly, showing that our model maintains its performance over a wide range of margins.
Comparison of clustering methods. To further investigate the effect of the clustering method, we compare K-means [14], the Gaussian mixture model (GMM) [16], DBSCAN [17], and Affinity Propagation (AP) [18]. K-means is the most common partition-based clustering method; it partitions $n$ samples into $k$ clusters such that each sample belongs to the cluster with the nearest center. GMM is a probabilistic model for representing subpopulations within an overall population and likewise separates $n$ samples into $k$ clusters. DBSCAN is a density-based clustering method that directly searches for connected dense regions in the feature space by estimating the density. AP does not require the number of clusters to be determined or estimated; it finds representative exemplars of the clusters.
The result is shown in Table 2. The Gaussian mixture model performs a little worse than K-means. Meanwhile, DBSCAN is not as good as the other three methods, because it relies on two parameters, the neighborhood size in terms of distance and the minimum number of points in a neighborhood, and the feature distributions vary across dialogues, so it is hard to find DBSCAN parameters that work for all of them. The satisfactory result of Affinity Propagation shows that the similarity between utterances produced by CluCDD is suitable for finding good cluster centers, even without relying on the cluster number.
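All four clustering methods are available in scikit-learn; a sketch of how they would be applied to the CluCDD features (random features stand in for the real ones here, and the parameter values are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans, DBSCAN, AffinityPropagation
from sklearn.mixture import GaussianMixture

r = np.random.randn(50, 768)    # stand-in for SFF output features
k = 3                           # session count predicted by the cluster head

labels_km  = KMeans(n_clusters=k, n_init=10).fit_predict(r)
labels_gmm = GaussianMixture(n_components=k).fit_predict(r)
labels_db  = DBSCAN(eps=0.5, min_samples=2).fit_predict(r)  # -1 marks noise
labels_ap  = AffinityPropagation().fit_predict(r)           # needs no k
```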
CONCLUSION
In this work, we introduce an effective method for dialogue disentanglement. Our method is motivated by the idea of clustering utterances through contrastive learning. Based on this idea, we propose a contrastive framework that distinguishes utterances in different sessions, employing contrastive training and a cluster head to shape the utterance feature space for the final clustering step. The encouraging experimental results demonstrate that our method outperforms previous methods by 15.7% on average on the Movie Dialogue dataset and 28.2% on the Ubuntu IRC dataset.
Fig. 1. An example of dialogue disentanglement.

Fig. 2. The architecture of our CluCDD. (a) We use BERT to encode the utterances of one dialogue; all utterances share the same parameters. (b) The utterance representations are fed into the SFF module to retrieve the sequential features. (c) We adopt a cluster head to predict the session number k. (d) We generate predicted labels for the utterances with a clustering method.

Fig. 3. (a) Shen F value on the Movie Dialogue dataset with different session numbers (left). (b) The influence of margin m in Eq. (9) (right). Zoom in for the best view.
Table 1. The results of dialogue disentanglement on the Movie Dialogue (MD) dataset and the Ubuntu IRC dataset. SFF represents the Sequential Feature Fusion module; BL represents the Bi-LSTM layer in the SFF module.

Dataset | Method          | NMI   | ARI   | Loc_3 | 1-1   | Shen-F
MD      | CISIR [7]       | 20.47 | 6.45  | -     | -     | 53.77
MD      | BERT [10]       | 25.57 | 10.97 | -     | -     | 56.91
MD      | E2E [8]         | 35.30 | 24.90 | -     | -     | 64.7
MD      | CluCDD w/o SFF  | 37.25 | 27.13 | 63.46 | 57.53 | 65.06
MD      | CluCDD w/o BL   | 38.57 | 28.95 | 64.29 | 58.14 | 65.55
MD      | CluCDD          | 40.98 | 31.45 | 67.98 | 61.75 | 67.92
IRC     | CISIR [7]       | 46.62 | 3.37  | -     | -     | 40.78
IRC     | BERT [10]       | 54.61 | 8.15  | -     | -     | 43.87
IRC     | E2E [8]         | 61.4  | 18.00 | -     | -     | 48.19
IRC     | CluCDD w/o SFF  | 54.47 | 14.68 | 61.07 | 41.52 | 49.97
IRC     | CluCDD w/o BL   | 58.83 | 18.16 | 61.15 | 44.08 | 51.62
IRC     | CluCDD          | 64.98 | 28.36 | 61.52 | 51.14 | 58.42
Table 2. Comparison of clustering methods.

Experiments      | NMI   | ARI   | Loc_3 | Shen-F
CluCDD + K-means | 40.98 | 31.45 | 67.98 | 67.92
CluCDD + GMM     | 39.52 | 29.72 | 65.99 | 67.94
CluCDD + DBSCAN  | 39.97 | 23.12 | 65.27 | 62.76
CluCDD + AP      | 46.02 | 29.06 | 68.37 | 65.62
[1] Ryan Lowe, Nissan Pow, Iulian Serban, and Joelle Pineau. The Ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems. In Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, Prague, Czech Republic, 2015, pp. 285-294.
[2] Ryan Lowe, Nissan Pow, Iulian Vlad Serban, Laurent Charlin, Chia-Wei Liu, and Joelle Pineau. Training end-to-end dialogue systems with the Ubuntu Dialogue Corpus. Dialogue & Discourse, 8(1):31-65, 2017.
[3] Qi Jia, Yizhu Liu, Siyu Ren, Kenny Zhu, and Haifeng Tang. Multi-turn response selection using dialogue dependency relations. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020, pp. 1911-1920.
[4] Jia-Chen Gu, Chongyang Tao, and Zhen-Hua Ling. Who says what to whom: A survey of multi-party conversations. In Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence (IJCAI 2022), Vienna, Austria, 2022, pp. 5486-5493.
[5] Hiroki Ouchi and Yuta Tsuboi. Addressee and response selection for multi-party conversation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, 2016, pp. 2133-2143.
[6] Shikib Mehri and Giuseppe Carenini. Chat disentanglement: Identifying semantic reply relationships with random forests and recurrent neural networks. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Taipei, Taiwan, 2017, pp. 615-623.
[7] Jyun-Yu Jiang, Francine Chen, Yan-Ying Chen, and Wei Wang. Learning to disentangle interleaved conversational threads with a siamese hierarchical network and similarity ranking. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), 2018, pp. 1812-1822.
[8] Hui Liu, Zhan Shi, Jia-Chen Gu, Quan Liu, Si Wei, and Xiaodan Zhu. End-to-end transition-based online dialogue disentanglement. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI-20), 2020, pp. 3868-3874.
[9] Hui Liu, Zhan Shi, and Xiaodan Zhu. Unsupervised conversation disentanglement through co-training. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021, pp. 2345-2356.
[10] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Minneapolis, Minnesota, 2019, pp. 4171-4186.
[11] Jonathan K. Kummerfeld, Sai R. Gouravajhala, Joseph J. Peper, Vignesh Athreya, Chulaka Gunasekara, Jatin Ganhotra, Siva Sankalp Patel, Lazaros C. Polymenakos, and Walter Lasecki. A large-scale corpus for conversation disentanglement. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Florence, Italy, 2019, pp. 3846-3856.
[12] Hanlei Zhang, Hua Xu, Ting-En Lin, and Rui Lyu. Discovering new intents with deep aligned clustering. AAAI Press, 2021, pp. 14365-14373.
[13] Alex Graves. Long short-term memory. In Supervised Sequence Labelling with Recurrent Neural Networks, pp. 37-45, 2012.
[14] J. McQueen. Some methods for classification and analysis of multivariate observations. Computer and Chemistry, vol. 4, pp. 257-272, 1967.
[15] Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality reduction by learning an invariant mapping. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), vol. 2, pp. 1735-1742, 2006.
[16] Carl Edward Rasmussen et al. The infinite Gaussian mixture model. In NIPS, Citeseer, vol. 12, pp. 554-560, 1999.
[17] Martin Ester, Hans-Peter Kriegel, Jörg Sander, Xiaowei Xu, et al. A density-based algorithm for discovering clusters in large spatial databases with noise. In KDD, vol. 96, pp. 226-231, 1996.
[18] Kaijun Wang, Junying Zhang, Dan Li, Xinna Zhang, and Tao Guo. Adaptive affinity propagation clustering. arXiv preprint arXiv:0805.1096, 2008.
| [
"https://github.com/gaojingsheng/CluCDD"
] |
[
"A Definition and a Test for Human-Level Artificial Intelligence",
"A Definition and a Test for Human-Level Artificial Intelligence"
] | [
"Deokgun Park deokgun.park@uta.edu \nComputer Science and Engineering\nUniversity of Texas\nat Arlington ArlingtonTexasUSA\n",
"Md Ashaduzzaman mdashaduzzaman.mondol@mavs.uta.edu \nComputer Science and Engineering\nUniversity of Texas\nat Arlington ArlingtonTexasUSA\n",
"Rubel Mondol \nComputer Science and Engineering\nUniversity of Texas\nat Arlington ArlingtonTexasUSA\n",
"Aishwarya Pothula aishwarya.pothula@mavs.uta.edu \nComputer Science and Engineering\nUniversity of Texas\nat Arlington ArlingtonTexasUSA\n",
"Mazharul Islam \nComputer Science and Engineering\nUniversity of Texas\nat Arlington ArlingtonTexasUSA\n"
] | [
"Computer Science and Engineering\nUniversity of Texas\nat Arlington ArlingtonTexasUSA",
"Computer Science and Engineering\nUniversity of Texas\nat Arlington ArlingtonTexasUSA",
"Computer Science and Engineering\nUniversity of Texas\nat Arlington ArlingtonTexasUSA",
"Computer Science and Engineering\nUniversity of Texas\nat Arlington ArlingtonTexasUSA",
"Computer Science and Engineering\nUniversity of Texas\nat Arlington ArlingtonTexasUSA"
] | [] | Despite the recent advances of AI research in many application-specific domains, we do not know how to build a human-level artificial intelligence (HLAI). We conjecture that learning from others' experience with language is the essential characteristic that distinguishes human intelligence from the rest. Humans can update the action-value function from a verbal description as if they had experienced the sequence of states, actions, and corresponding rewards firsthand. In this paper, we present a classification of intelligence according to how individual agents learn and propose a definition and a test for HLAI. The main idea is that language acquisition without explicit rewards can be a sufficient test for HLAI. There have been many ups and downs in artificial intelligence (AI) research, and great advances have been made in diverse applications, such as speech recognition, image recognition, game playing, and self-driving cars. Despite this, the limitation of the current state of the art is most apparent in the context of robotics. When a layperson or popular culture imagines AI, it is frequently associated with a butler robot that can provide many of the services a human butler could. The robot would converse with other humans and robots to accomplish more tasks. If someone asked for a new dish, it might search the Internet for a recipe and learn to prepare it. AI is thought of as the software part of such a robot. It would be convenient to have a specific name for this aspect of AI research, because the term AI has a much broader meaning nowadays. Alternative terms such as true AI, strong AI, or artificial general intelligence (AGI) [24] are often used, but they are not clearly defined. In this paper, we suggest naming the sub-field of AI research aiming at something like a butler robot human-level artificial intelligence (HLAI). We provide a formal definition and a test as a theoretical common ground for HLAI research. Specifically, we try to answer the following questions. • What is the verifiable or measurable difference between human intelligence and that of other animals? • What does it mean to learn with language? • How can we test whether an agent has HLAI? • How can we administer such a test practically to aid model development? Let us begin by explaining what distinguishes human-level intelligence from the rest. | null | [
"https://export.arxiv.org/pdf/2011.09410v5.pdf"
] | 236,090,258 | 2011.09410 | d4fd7a06121ba47b78ac9f0ef8c09d071c810d18 |
A Definition and a Test for Human-Level Artificial Intelligence
14 Dec 2022
Deokgun Park deokgun.park@uta.edu
Computer Science and Engineering
University of Texas
at Arlington ArlingtonTexasUSA
Md Ashaduzzaman mdashaduzzaman.mondol@mavs.uta.edu
Computer Science and Engineering
University of Texas
at Arlington ArlingtonTexasUSA
Rubel Mondol
Computer Science and Engineering
University of Texas
at Arlington ArlingtonTexasUSA
Aishwarya Pothula aishwarya.pothula@mavs.uta.edu
Computer Science and Engineering
University of Texas
at Arlington ArlingtonTexasUSA
Mazharul Islam
Computer Science and Engineering
University of Texas
at Arlington ArlingtonTexasUSA
A Definition and a Test for Human-Level Artificial Intelligence
14 Dec 2022
Despite the recent advances of AI research in many application-specific domains, we do not know how to build a human-level artificial intelligence (HLAI). We conjecture that learning from others' experience with language is the essential characteristic that distinguishes human intelligence from the rest. Humans can update the action-value function from a verbal description as if they had experienced the sequence of states, actions, and corresponding rewards firsthand. In this paper, we present a classification of intelligence according to how individual agents learn and propose a definition and a test for HLAI. The main idea is that language acquisition without explicit rewards can be a sufficient test for HLAI.

There have been many ups and downs in artificial intelligence (AI) research, and great advances have been made in diverse applications, such as speech recognition, image recognition, game playing, and self-driving cars. Despite this, the limitation of the current state of the art is most apparent in the context of robotics. When a layperson or popular culture imagines AI, it is frequently associated with a butler robot that can provide many of the services a human butler could. The robot would converse with other humans and robots to accomplish more tasks. If someone asked for a new dish, it might search the Internet for a recipe and learn to prepare it. AI is thought of as the software part of such a robot. It would be convenient to have a specific name for this aspect of AI research, because the term AI has a much broader meaning nowadays. Alternative terms such as true AI, strong AI, or artificial general intelligence (AGI) [24] are often used, but they are not clearly defined.

In this paper, we suggest naming the sub-field of AI research aiming at something like a butler robot human-level artificial intelligence (HLAI). We provide a formal definition and a test as a theoretical common ground for HLAI research. Specifically, we try to answer the following questions.

• What is the verifiable or measurable difference between human intelligence and that of other animals?
• What does it mean to learn with language?
• How can we test whether an agent has HLAI?
• How can we administer such a test practically to aid model development?

Let us begin by explaining what distinguishes human-level intelligence from the rest.
However, there are differences in intelligence between earthworms and more advanced agents such as rats and humans. A behavior policy is a function that maps a sensory input to the appropriate action. The behavior policy of an earthworm is hard-coded and updated only by evolution. In other words, it is instinct [60]: innate and unchanged by experience. For rats and humans, the behavior policy does change with experience, which is learning. In this paper, we propose levels of intelligence based on how learning is achieved in agents. Table 1 shows a summary of this idea.
Level 1 Intelligence In this categorization, earthworms have Level 1 intelligence, where no learning occurs at the individual level. Their behavior policy is a hard-coded mapping from sensory input to the corresponding action, an instinct updated only by evolution [60].
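As a toy illustration (the stimuli and actions here are invented), a Level 1 agent is just a fixed lookup table; nothing in the table changes during the agent's lifetime:

```python
# Hard-coded behavior policy ("instinct"); only evolution, i.e., editing
# this table across generations, can change it.
INSTINCT = {
    "light": "burrow",
    "moisture": "move_toward",
    "vibration": "retreat",
}

def level1_policy(stimulus: str) -> str:
    return INSTINCT.get(stimulus, "do_nothing")
```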
Level 2 Intelligence
The problem with Level 1 intelligence is that adaptation by evolution is slow. For example, after an abrupt climate change due to a meteor crash, agents with Level 1 intelligence will have difficulty adapting to the new environment in a timely manner. Furthermore, the behavior policy is encoded in the genetic code. If a species wants to adapt to various environments, such as diverse climates, every variant of the behavior policy has to be encoded genetically, which is costly. If an agent can instead update its behavior policy during its lifetime by learning new rules, such as a new type of food or shelter, this would increase inclusive fitness and reduce the amount of genetic code needed for diverse environments.
Let us call a sequence of sensory inputs (states) and agent actions an experience. A reward is a special case of sensory input given by the internal reward system conditioned on the state. We call agents with the capability of learning from experience Level 2 intelligence.
To enable learning at the individual level, at least two functional modules are required beyond those of Level 1 agents. The first is a memory to store newly developed rules. The second is a reward system to judge the merit of a state. We stated that the goal of a biological agent is to spread genes. However, a correct assessment of this goal is not possible within the lifetime of an individual agent. For example, an agent may lay eggs in a hostile environment where no descendant will survive, yet the agent cannot know this because it perishes long before that happens. Therefore, an agent with Level 2 intelligence requires a function that estimates, during its life, whether the current stimulus or state is good or bad. The reward system serves this purpose by providing a proxy for the value of the state with respect to inclusive fitness.
Figure 1: (a) The standard framework for reinforcement learning. (b) The revised relationship between the agent and the environment for Level 2 intelligence. The environment provides an observation; some of the observation is used by the reward system inside the agent. The resulting reward signal and the sensory information are fed into the control system, and new rules are added to the memory.

We point out that the environment does not provide a reward. Instead, it is the agent that produces the reward signal, which is the agent's estimate of the value of the current state. A dollar bill can be rewarding in some cultures but might not generate any reward for a tribal human who has never seen money before. As another example, when we eat three burgers for lunch, the reward for the first and the third burger will be different, even though they are the same object as far as the environment is concerned.
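A toy sketch of the Figure 1(b) loop (the numbers and rules are invented): the reward is produced inside the agent and depends on internal state, so the same observation can yield different rewards:

```python
class Level2Agent:
    """Toy Level 2 agent: the reward is computed inside the agent as a
    proxy for fitness, so the same observation (a burger) can yield
    different rewards depending on internal state (satiety)."""

    def __init__(self):
        self.satiety = 0.0
        self.memory = {}          # learned rules: observation -> value

    def reward_system(self, observation: str) -> float:
        if observation == "burger":
            r = max(0.0, 1.0 - self.satiety)   # a third burger is worth little
            self.satiety += 0.5
            return r
        return 0.0

    def step(self, observation: str) -> None:
        r = self.reward_system(observation)
        # Update a rule from this experience (simple running value estimate).
        old = self.memory.get(observation, 0.0)
        self.memory[observation] = old + 0.1 * (r - old)

agent = Level2Agent()
for _ in range(3):
    agent.step("burger")   # internal rewards shrink: 1.0, 0.5, 0.0
```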
However, this differs from the standard Markov decision process (MDP) framework for reinforcement learning, where the reward is determined by the environment. Legg and Hutter used a standard MDP framework for their formal definition of universal intelligence [39]. However, they also commented that a more accurate framework would consist of an agent, an environment, and a goal system inside the agent that interprets the state of the environment and rewards the agent appropriately.
Level 3 Intelligence Contrary to our devotion to learning (machine, supervised, unsupervised, reinforcement, self-supervised learning, and so on), most behaviors of Level 2 intelligent agents are not based on learning but on instinct.
Example 1 Consider a rabbit that has never seen a wolf before. If the rabbit tries to learn the appropriate behavior by randomly experimenting with options when it does encounter a wolf, it is too late to update its behavior policy based on the outcome of the random exploration.
Instead, the rabbit should rely on instinct, which is Level 1 intelligence. Natural environments are too hostile to use learning as the primary method of building a behavior policy. Therefore, the range of behavior policies that Level 2 intelligence can learn from direct experience is limited. Level 3 intelligence overcomes this limitation by learning from others' experiences.
Bandura pioneered social learning theory [4], and learning through observation, hence called observational learning, is found in several species, including non-human primates, invertebrates, birds, rats, and reptiles [22]. For example, if you give monkeys locked boxes that contain food, they will try to open them. When one monkey finds a manipulation that unlocks the box, other monkeys observe this pattern and imitate it to open their own boxes.
Level 4 (Human-Level) Intelligence The limitation of Level 3 intelligence, which relies on observation, is also apparent. In Example 1, the Level 2 rabbit relied on direct experience, but for a Level 3 rabbit to learn the proper behavior, it has to observe a peer rabbit being eaten by a wolf, which is also a very rare event. Therefore, even Level 3 agents cannot learn much, because they depend on the presence of example cases to observe.
However, humans are the epitome of Level 3 intelligence and the only known species using language as a tool for social learning. Verbal and written language uses sequences of abstract symbols to transfer knowledge, relieving the burdensome requirements of observational learning, such as presence at demonstrations. Thus, we can think of human-level intelligence as Level 3 intelligence with language. In humans, language is a tool for learning from others. Humans' technological achievements were possible because we can learn from others and contribute new knowledge. Isaac Newton said, "If I have seen further, it is by standing on the shoulders of Giants." Language is the invention that enabled this. Verbal language enabled knowledge transfer between people at the same place and time; written language later removed this barrier, so we no longer have to share a place and time to learn from each other.
In the following sections, we will clarify the use of language for social learning, because language has various functions and forms.
Clarifying Language Skill
We need to clarify what we mean by learning with language. For example, dolphins are known to use verbal signals to coordinate [32]. Monkeys have been taught sign language [2]. Again, as explained in the previous sections, monkeys do learn by observation and imitation [22]. But can we classify the language behavior of monkeys as human-level? Similarly, many previous works have demonstrated various aspects of language skill. Voice agents can understand spoken language and answer simple questions [34]. Agents have been trained to follow verbal commands to navigate [30,11,12,15,55]. GPT-3 by OpenAI can generate articles, one of which was published as an op-ed in The Guardian [8,25]. Some models can perform multiple language tasks, as evaluated in the GLUE benchmark or DecaNLP [64,44]. Models exhibit performance superior to humans in all categories except the Winograd Schema Challenge [41], where models perform slightly worse than humans [50]. Do these models have human-level intelligence?
Using language has many aspects. In this paper, we claim that learning from others' experience is the essential function of language that differentiates humans' use of language from that of other animals. We will explain this with a simple example and then formalize it in the context of reinforcement learning.

Example 2 Imagine you see a glass of a black, sparkling drink (cola) for the first time. You try drinking it, and it tastes good.

Now your behavior policy for the same situation has changed such that you will choose to drink it more frequently the next time you see cola. This is a change in the behavior policy induced by direct experience, and it is how agents with Level 2 intelligence learn.
How would an agent with Level 3 intelligence learn? Primates such as gorillas and chimpanzees have Level 3 intelligence, meaning they can learn from indirect experience. They could learn by observing others drink and the consequences. Or a human teacher could point to a glass of cola and make an expression that renders it attractive, as a mother would do for a baby. In terms of the sequence of experience, they see the cola object and the responses of other agents.
Learning with language means that hearing someone say, "Cola is a black, sparkling drink. I drank it, and it tasted good." should bring a similar change in your behavior policy.

Figure 2: Learning with language means that the symbolic description brings changes to the model comparable to those of direct experience.

Figure 2 shows this with the notation of the Markov decision process (MDP) [59]. Humans use language for learning, and this is what distinguishes human-level intelligence from that of other animals with language.
In this sense, we can define human-level artificial intelligence (HLAI) as follows:
Definition 1 (Human-level artificial intelligence (HLAI)) An agent has human-level artificial intelligence if there exists a sequence of symbols (a symbolic description) for every feasible experience, such that the agent can update the behavior policy equally, whether it goes through the sequence of sensory inputs and actions or it receives only the corresponding symbolic description.
We can define this more formally with the Markov decision process (MDP). Let $S$ denote the set of all states and $A$ the set of all actions. The stochastic behavior policy is given as $\pi(a|s) = p(a|s)$, where $a \in A$, $s \in S$. When the behavior policy $\pi_{old}(a|s)$ is updated with a sequence of states and actions $h$, we write the updated policy as $\pi_{new}(a|s, h)$. Given an original behavior policy, we can derive two policies $\pi(a|s, h_a)$ and $\pi(a|s, h_b)$ that are updated with two different experiences $h_a$ and $h_b$. We measure the distance $\mathrm{Dist}$ between the two policies as the expected KL divergence [53] with respect to $s$:
$$\mathrm{Dist}(h_a, h_b) = \mathbb{E}_s\left[D_{KL}\big(\pi(a|s, h_a)\,\|\,\pi(a|s, h_b)\big)\right] \quad (1)$$
Considering that $S$ can be large, we might approximate the distance over a restricted set of states $s \in S' \subseteq S$, where $S'$ is chosen to cover the relevant scenarios. Let $D$ represent the set of all sequences of states and actions that a biological agent can experience firsthand, and let $T$ represent the set of all sequences of terms in a language.
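A sketch of Eq. (1) for small tabular policies, assuming the states in $S'$ are equiprobable (the example probabilities are invented):

```python
import numpy as np

def dist(pi_a, pi_b, eps=1e-12):
    """Eq. (1): expected KL divergence between two tabular policies.
    pi_a, pi_b: arrays of shape (|S'|, |A|) with rows summing to one;
    the states in the restricted set S' are weighted uniformly."""
    kl = np.sum(pi_a * (np.log(pi_a + eps) - np.log(pi_b + eps)), axis=1)
    return kl.mean()

# Policies updated by direct experience (h_d) vs. its description (h_l):
pi_d = np.array([[0.8, 0.2], [0.5, 0.5]])   # e.g., P(drink|cola), P(avoid|cola)
pi_l = np.array([[0.75, 0.25], [0.5, 0.5]])
print(dist(pi_d, pi_l))   # small value: the description nearly suffices
```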
We might define a set of language to aid the discussion as follows.
Definition 2 (A set of language) A set of language is a set whose element is a tuple of an experience and a symbolic description, where the agent can update behavior policy equally either by going through the experience or by receiving the symbolic description.
$$L = \{(h_d, h_l) \in (D, T) \mid \mathrm{Dist}(h_d, h_l) \leq \delta\} \quad (2)$$
In the previous cola example, an element of the language set can be thought of as follows.
• The direct experience is the sequence of sensory stimuli.
• The abstract symbol sequence is "Cola is the black sparkling drink. It felt good when I drank it."
• Under the previous behavior policy, given the black sparkling drink as the state, the agent might try avoiding or drinking it with equal probability.
• Under the new behavior policy, given the same state, the agent is more likely to drink it.
Using a set of language, we can define HLAI as an agent with a language set $L_{human}$ such that:
$$\forall\, h_d \in D,\ \exists\, h_l \in T \ \text{s.t.}\ (h_d, h_l) \in L_{human} \quad (3)$$
It might lead to a philosophical debate whether the human language set is indeed unbounded. We claim that it is not bounded because it can be extended as needed. In our definition, human-level intelligence is defined by a language set covering every feasible experience, which is infinite. But this does not mean that each individual agent has to master a language set for infinitely many experiences; it is about the capability to handle open-ended problems. For example, the integers are infinite, and no human can see every feasible integer in their lifetime. But when required, humans can use any of those integers, even ones they have never seen before. As an example with language, a typical English speaker will have only a few words to describe shades of snow, while an Eskimo might have more words. But if the English speaker happens to spend ten years with Eskimo people, he might also acquire a larger language set for experiences related to snow. Or consider the sentence "He flew through the cheese holes." Even though it is unlikely that anyone has seen this sentence before or experienced what it describes, we have no difficulty understanding it or imagining some experience that would justify it.
Another example is how Scrooge, a fictional character in the novel A Christmas Carol, changes his behavior policy in response to the same verbal advice, such as the virtue of charity, only after his experience.
However, one problem with implementing a test according to this definition is ensuring that there exists a symbolic description for every feasible experience.
A Test for HLAI
There are many tests for AI. However, a challenge is finding a sufficient but tractable one. There are many tests that are sufficient but intractable, including the Turing test, the robot college student test, the kitchen test, and the AI preschool test [1]. For example, the Turing test measures whether an agent can imitate a human by communicating like one. The robot college student test asks an agent to register, take classes, and earn passing grades by doing assignments and exams. Unfortunately, such tests are seldom conducted in current research, and when they are, there is controversy about their validity [54].
There are a few limitations that make these tests impractical. First, most tests assume that the agent has already acquired language skill, but we do not know how to program an agent that can learn a language. Second, they require human participants to administer the test. While it takes a few years for a human to become a master StarCraft II player, it took 200 years of gameplay for a machine to master it [62]. Learning the equivalent of five years of human experience would take a long time when training requires human intervention. Therefore, using humans is cost-prohibitive and not scalable. Also, interactions with human participants are not reproducible for validation. Ideally, a test should require the minimum level of intelligence that can pass as human-level, and it should be cheap to run.
At the other end of the spectrum, many tests for AI are tractable but not sufficient for HLAI. While there are models with near-human or super-human performance in Atari games [52], Go [56], StarCraft II [63], classifying objects in images [28], or multi-task natural language understanding [29], no one would claim that they achieve HLAI. They are effective in proposing a subset of necessary components or mechanisms for HLAI but are not built to study a sufficient set of those mechanisms.
To find a Goldilocks middle ground between the sufficiency and tractability requirements, we propose a new test for HLAI. If a human infant is raised in an environment such as a jungle where there are no humans, he or she cannot acquire language. It is environment-limited. Also, if we take animal cubs and try to raise them like a human baby by teaching language, they cannot acquire language. It is capability-limited. Therefore, language acquisition is a function of an environment and a capability. Based on this argument, we propose the Language Acquisition Test for HLAI as follows.

Example 3 A baby will start learning a single word such as water or mom. When the baby hears these words, they bring similar effects such as seeing a cup of water or seeing mom. Even though this is a small start, the baby can continue to add vocabulary until it is fluent in the language.
Compared to other tests, it has minimal prerequisites. For example, the Turing test or the robot college student test assumes that the agent already has language skills, which is a challenging requirement for the current state of the art.
Practical Administration of the LAT
In the Language Acquisition Test, a proper environment means that there are other humans to teach language to the learning agent. A straightforward way to administer the test is to ask human participants to raise a physical robot agent like a human baby. Turing suggested this approach [61], and the Developmental Robotics community has actively pursued it in many studies [43,3,9]. However, we already discussed the limitations of human participants: the prohibitive cost and the difficulty of reproducible research.
It would be more useful if we could use a simulated environment [6]. There have been previous works using simulated environments for language acquisition, where agents get rewards by following verbal instructions in navigation [12,51,11,30,55] or by giving correct answers (question answering) [15].
What is remarkable about these works is that the agent can understand verbal instructions grounded in sensory input, thus enabling the compositionality of language. For example, suppose an agent was trained to go to a small, red box and a large, blue key during the training phase. Then, during the test phase, the agent can successfully go to a small, blue box, even though there was no such object during training. However, previous environments have the following limitations for the test of HLAI.
• Use of Rewards: Using reward signals generated by environments is sufficient for the implementation of Level 2 intelligence. However, for Level 3 intelligence, the reward is not given to the agent but is observed in other agents. Similarly, for human-level intelligence, experiencing the reward itself should be part of the verbal description. In our previous cola example, there is a part related to the explicit reward, namely that it tasted good. Previous studies tend to use an explicit reward to teach the concept of the black sparkling drink, giving the reward when the agent points or navigates to the verbal description [12,30,11,15,55]. This approach cannot be applied in our case, because we need a separate mechanism for teaching the object concept black sparkling drink and the associated reward it tasted good.
• Grounded Language and Embodied Exploration: The language symbols need to bring changes in the policy. This means that the language symbols need to be grounded in sensory input and in the actions of embodied agents. Some environments that use only text lack this grounding [46,13].
• Shallow interaction with a large number of items and vocabulary: Previous environments tend to pour a large number of items and a large vocabulary into training. However, as Smith and Slone pointed out, human infants begin by learning a lot about a few things [58]. We need to build upon basic concepts before we can learn advanced concepts.
Therefore, we claim that we need a new simulated environment for the test of HLAI to overcome these limitations.
An Environment for Language Acquisition
We have been working on the Simulated Environment for Developmental Robotics (SEDRo) for the practical test of HLAI [49]. SEDRo provides diverse experiences similar to those of human infants from the stage of a fetus to 12 months of age. In SEDRo, there is a caregiver character (mother), interactive objects in a home-like environment (e.g., toys, cribs, and walls), and the learning agent (baby). The agent interacts with the simulated environment by controlling its body muscles and receiving sensor signals according to a physics engine. The caregiver character is a virtual agent. It is manually programmed by researchers using a behavior tree, a technique commonly used in video games to make a game character behave like a human in a limited way. Interaction between the agent and the caregiver allows cognitive bootstrapping and social learning, while interactions between the agent and the surrounding objects increase gradually as the agent enters more developed stages. The caregiver character teaches language by simulating the conversation patterns of mothers. SEDRo also simulates developmental psychology experiments to evaluate the progress of the intellectual development of non-verbal agents in multiple domains such as vision, motor, and social skills. Verbal speech is approximated by sparse binary representations (SBR). Speech is encoded as a 512-dimensional binary vector, in which about 10 dimensions are randomly selected to be active for each letter of the alphabet. At each timestep, the corresponding speech signal is represented as a sequence of these vectors.
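To make this encoding concrete, the following minimal sketch shows one way such a sparse binary representation could be produced. The dimensionality (512) and sparsity (about 10 active bits) follow the description above, while the function names and the per-letter seeding scheme are our own illustrative assumptions rather than the SEDRo implementation.

import numpy as np

DIM = 512          # dimensionality of each speech vector
ACTIVE_BITS = 10   # approximate number of active bits per letter

def sbr_for_letter(letter, dim=DIM, k=ACTIVE_BITS):
    # A per-letter seed makes the mapping fixed but arbitrary, mirroring
    # the arbitrary association between signifier and signified.
    rng = np.random.default_rng(seed=ord(letter))
    vec = np.zeros(dim, dtype=np.int8)
    vec[rng.choice(dim, size=k, replace=False)] = 1
    return vec

def encode_utterance(text):
    # One sparse vector per letter, emitted as a sequence over timesteps.
    return np.stack([sbr_for_letter(c) for c in text.lower() if c.isalpha()])

signal = encode_utterance("water")   # shape (5, 512), about 10 ones per row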
SEDRo has the following novel features compared to previous works.
• Open-ended tasks without extrinsic reward: In SEDRo, there is no fixed goal for the agent, and the environment does not provide any reward. Rather than relying on the environment for rewards, the responsibility for generating rewards belongs to the agent itself. In other words, AI researchers have to manually program a reward system that generates rewards based on the current state. As an example, if an agent gets food, the sensory input from the stomach will change, and the reward system in the agent will generate a corresponding reward (see the sketch after this list).
• Human-like experience with social interaction: Some studies use environments without explicit rewards, where the agents learn with curiosity, or intrinsic reward [57,5]. However, those environments were arbitrary and non-human, such as robot arm manipulation tasks or simple games. While such simple environments are effective in unveiling a subset of the necessary mechanisms, it is difficult to answer what a sufficient set is. In SEDRo, we provide a human-infant-like experience, because human infants are the only known example of agents capable of developing human-level intelligence. However, we cannot replicate every aspect of human infants' experience, nor will we try to. There is a subset of experience that is critical for HLAI. Therefore, identifying these essential experiences and finding ways to replicate them in the simulation are two fundamental research questions. Another benefit of a human-like environment is that we can use experiments from developmental psychology to evaluate the developmental progress of non-verbal agents.
• Longitudinal development: SEDRo unfolds agent capabilities according to a curriculum similar to human babies' development. Many studies suggest that humans or agent models learn faster with constrained capabilities [37,33]. For example, in the first three months, babies are very near-sighted and do not have any mobility. This makes many visual signals stationary, and the agent can focus on low-level visual skills with its eyes. At later stages, when sight and mobility increase, babies can learn advanced skills built upon lower-level skills.
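As a minimal illustration of the first feature above, the sketch below shows an agent-internal reward system that converts changes in interoceptive state (e.g., stomach fullness) into a scalar reward. The state fields, class names, and coefficients are hypothetical choices for illustration, not part of SEDRo.

from dataclasses import dataclass

@dataclass
class Interoception:
    fullness: float   # 0.0 = starving, 1.0 = satiated
    pain: float       # 0.0 = none, 1.0 = severe

class InternalRewardSystem:
    """Generates reward inside the agent; the environment emits none."""

    def __init__(self, weights=(1.0, -2.0)):
        self.w_fullness, self.w_pain = weights
        self.prev = None

    def reward(self, state: Interoception) -> float:
        if self.prev is None:
            self.prev = state
            return 0.0
        # Reward is driven by the *change* in bodily state: eating
        # raises fullness and therefore yields a positive reward.
        r = (self.w_fullness * (state.fullness - self.prev.fullness)
             + self.w_pain * (state.pain - self.prev.pain))
        self.prev = state
        return r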
The final benchmark of whether the agent has acquired language will follow a protocol resembling the earlier cola story. We give verbal messages like "The red ball is delicious (good)" or "The blue pyramid is hot (dangerous)" and check whether the behavior policy toward the red ball or the blue pyramid has changed accordingly.
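This protocol can be operationalized as a before/after comparison of the agent's behavior policy, as in the hedged sketch below. The function names, the approach-rate measure, the threshold, and the run_episode helper are illustrative assumptions rather than the actual SEDRo benchmark code.

def approach_rate(agent, obj, episodes=100):
    # run_episode is an assumed helper returning 1 if the agent
    # approached obj during the episode and 0 otherwise.
    hits = sum(run_episode(agent, obj) for _ in range(episodes))
    return hits / episodes

def passes_language_benchmark(agent, obj, message, threshold=0.2):
    # Pass if the verbal message alone shifts the behavior policy
    # toward or away from the object.
    before = approach_rate(agent, obj)
    agent.hear(message)   # e.g., "The blue pyramid is hot (dangerous)."
    after = approach_rate(agent, obj)
    return abs(after - before) >= threshold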
Discussion
We proposed a definition and a test of HLAI. In this section, we discuss their implications for current AI research, along with the limitations of our approach and alternative options.
Agent vs Behavior
The levels of intelligence are meant to provide novel insights for artificial intelligence research, not a new taxonomy for the classification of biological agents. There are two limitations in applying this classification to biological agents. First, we do not have complete knowledge about the intelligence of other animals. It is possible that we might later discover that earthworms do learn new skills or that other animals such as dolphins have a more sophisticated use of language. In that case, we should adjust which animals belong to which level of intelligence. A second, more fundamental limitation is that biological species evolved over long time spans, and boundaries tend to be blurry. For example, we might discern mammals from non-mammals with features such as laying eggs or not. But there is the platypus, which lies on the borderline between mammals and reptiles [65]. Similarly, there can be a gray area in what constitutes social learning with language.
Furthermore, the levels of intelligence are better suited to classifying behaviors than biological agents. Higher-level intelligent agents rely on skills from the lower levels of intelligence. For example, when a baby cries when hungry or shows a stepping reflex, these behaviors are Level 1 intelligence. When babies learn to avoid things after they experience pain, that is Level 2 behavior. Finally, when they observe and imitate the caregiver's behavior with mobile phones, these behaviors are Level 3 in nature.
Language border
We claimed that humans are the only animals capable of learning with language. Let us review this with a hypothetical example of dolphins.
Example 4 Suppose a dolphin says to another dolphin, "There is a shark over the reef." Hearing this new information, the other dolphin might avoid the reef.
In this case, the message brought a change in the behavior but not in the behavior policy. In other words, dolphins have an innate behavior policy, or instinct, to avoid sharks. Hearing this information did not change that policy. Let us differentiate state information from an experience for our discussion. State information refers to information about the state in an MDP, while an experience refers to a sequence of states, actions, and rewards. The message in this example was state information because it was the same as the other dolphin seeing the shark for itself. In other words, this verbal message is replacing the state in Figure 2. In this case, we can say that the verbal behavior of dolphins is not human-level with respect to language-based learning. Greer et al. made a distinction between the emission of a previously acquired repertoire and the acquisition of a new repertoire [19]. As a counterexample, we might imagine dolphins having the following conversation.
Example 5 "I saw a fish with a shining string. I ate it. And there was a painful experience."
Hearing this message, if other dolphins avoid fishing bait, we can say these verbal behaviors are human-level intelligent behaviors. In this message, there is a sequence of states, actions, and rewards, and the behavior policy is updated with language.
Again, we do not have a complete understanding of the language skills of dolphins, and these examples are contrived. But the main purpose is to show the difference between the communication aspect and the learning aspect of language. Communication is the sharing of state information. Learning is when the behavior policy is updated with verbal messages. The language skills of advanced intelligent species such as dolphins and primates probably lie on the spectrum between these two examples.
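One hedged way to write this distinction in MDP notation (with symbols of our own choosing, not equations from the text) is the following, where g decodes a message m into a state estimate and U is the agent's policy-update rule:

s_t = g(m), \qquad \pi_{t+1} = \pi_t \qquad \text{(communication: state information only)}

m \mapsto \tau = (s_1, a_1, r_1, \ldots, s_T), \qquad \pi_{t+1} = U(\pi_t, \tau) \qquad \text{(learning: symbolic experience updates the policy)}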
As far as I know, primates cannot learn with purely abstract symbols. That is the main point of our definition of human-level intelligence. Having said that, it would not be surprising if there were a case report that such learning is indeed possible in primates, probably with some simplification or a blurry definition of what an abstract symbol is. Abstractness in language means that the association of signified and signifier is arbitrary [18]. However, there is a continuous spectrum of abstractness in symbols, from explicit pictures to simplified iconographic symbols to more abstract representations. Some writing systems such as Chinese characters still have some correspondence between symbol and meaning. Also, there is a continuous spectrum in intelligence, too. After all, primates like gorillas and chimpanzees are most similar to humans in terms of intelligence. The difference between primates and humans will be more a matter of capacity than structure. For example, in terms of computer architecture, personal computers running MS-DOS in the 1980s and computers nowadays are very similar. It is just a matter of capacities such as the size of RAM or the clock speed of CPUs. I suppose the difference between primates and humans lies in subtle things such as slightly more sophisticated control of the larynx or an increased capacity for temporal sequence processing, such as an elongated hippocampal loop. Therefore, there must be an intersection point between the decreasing abstractness of symbols and the increasing intelligence of agents.
Comparison with the Strong Story Hypothesis
Patrick Henry Winston insisted that learning with language is the essence of human-level intelligence, and he posited the Strong Story Hypothesis [66]. His table-saw anecdote (Example 6) is an example where a symbolic sequence has an effect similar to a direct experience for the update of the behavior policy. Our contribution compared to his hypothesis is to formalize mechanically what it means for humans to understand language. In previous works, such understanding was formalized as text summarization or question answering, which can be ambiguous. In contrast, we claim that the essence of language understanding lies in updating the behavior policy. The benefit of our definition is that it can be calculated mathematically using Markov decision process notation.
AGI or HLAI
The history of AI is long, and the term AI is used in a broad sense. While AI includes HLAI, it also includes the active research area of application-specific AI or machine learning. Interestingly, when the general public thinks of AI, they tend to think of HLAI, while most academic research is on application-specific AI. Strong or True AI has been used to distinguish the two, but the definition is not clear. Artificial general intelligence (AGI) is also used in a similar context. AGI emphasizes that the agent should be able to do many things, as humans do. However, doing many things in diverse contexts does not necessarily mean that agents can do what humans do. As a counterexample, a rat can jump around, gather food, mate, and raise a newborn. A virtual rodent by Merel et al. can do multiple tasks depending on the context [45]. We might say that this virtual rodent achieved AGI in the simulated environment, but this is not what AGI research targets. As another example, humans can learn a new alphabet from a foreign language, but cannot learn to read QR codes. This shows that humans also have limited general intelligence, and that biological agents have different degrees of general intelligence. But measuring generality is not clearly defined, or is computationally intractable for practical cases. In this sense, HLAI might be a better concept for AI research.
Merging instinct and learned behaviors
Instinct is an umbrella word for innate behavior policy, and there are different implementation mechanisms, including reflexes, emotions, and special-purpose structures. For example, raising the arms when tripping, sucking, crawling, and walking are examples of reflexes. A reflex relies on dedicated neural circuits. It is useful when it is acceptable for the response to be rigid or fixed and the reaction duration to be instantaneous. However, when a rabbit hears a wolf cry, the reaction needs to be flexible depending on the context, and the reaction state should be maintained over a longer time span. Emotion, using neurotransmitters or hormones, is effective in those cases, because its effect is global, meaning various areas of the brain can respond to it, and it lasts a long while before it is inactivated. Finally, the hippocampus and basal ganglia are special-purpose structures that solve particular problems, such as memory consolidation or decision among conflicting behavior plans [7].
Instinct is a shortcut that enables a reasonable behavior policy within the lifetime of an individual biological agent. Given infinite time, an agent with learning capability might learn everything that an intelligent animal can do without the help of instinct. But in reality, we saw that most behaviors are based on instinct, as in the example of the rabbit and the wolf. Another way of emphasizing the role of instinct is that primates have a few more social instincts than dogs [9,38], and humans have just a few more language instincts than primates [48]. While the volumes of the neocortex in dogs, primates, and humans are different, it plays more or less the same role in those agents. Therefore, we need to build artificial instincts to program an HLAI.
Therefore, we should add non-homogeneous, special-purpose modules to the cognitive architecture for an organic mix of innate and learned behaviors. Current state-of-the-art models tend to be more homogeneous in structure, emphasizing learning only. Again, contrary to our devotion to various forms of learning, most behaviors are based on instincts. Important questions are: "What instincts enable social interaction, knowledge learning, and language acquisition?", "How do those instincts work?", and "How can we merge instinctive behaviors and learned behaviors?" However, not all human instincts need to be replicated. Of special interest are instincts that enable human-level intelligence, such as the knowledge instinct [42], social instincts, or the language instinct [48]. Below are the instincts that we conjecture are essential for HLAI.
• Social instinct: Innate behaviors such as face recognition, eye contact, following eye gaze, and attending to caregivers are essential for social learning [9,38].
• Knowledge instinct: Intrinsic motivation, or curiosity, plays an important role in knowledge acquisition [47,57,5,26]. Prediction errors in the prefrontal cortex will generate a reward in the reward system (see the sketch after this list).
• Decision system: An artificial amygdala will determine the mode of brain operation among 1) fight or flight, 2) busy without conflicts (Type I), 3) focus (Type II), and 4) boredom.
In the boredom state, the knowledge instinct is activated. Additionally, the artificial basal ganglia resolve conflicts among multiple behavior options through reward-based learning [7]. As a concrete example, primates have a reflex to foveate to a moving object (pro-saccade). However, this reflex can be overridden by training with rewards such that 1) participants maintain their gaze on a central fixation point even though there is a moving object (fixation task), 2) participants maintain their gaze until the fixation point disappears and then foveate to the moving object (overlap task), or 3) participants maintain their gaze until the fixation point disappears, with a fixed time interval between the disappearance of the fixation point and the onset of a target in a fixed location (gap task) [31]. Monkeys can also be trained to foveate to a moving target (pro-saccade) if the fixation point is red, or to move their eyes in the opposite direction of the target (anti-saccade) if the fixation point is green [21].
• Language instinct: In addition to social instincts, language-specific instincts include babbling, attention to voice-like signals, and so on.
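As a sketch of the knowledge instinct above, intrinsic reward can be implemented as the prediction error of a learned forward model, in the spirit of [47,57,5]. The class below is a hedged illustration: the forward_model interface and the scaling factor are our own assumptions, not a specific published implementation.

import numpy as np

class CuriosityReward:
    """Intrinsic reward proportional to forward-model prediction error."""

    def __init__(self, forward_model, scale=0.1):
        self.model = forward_model   # assumed to predict the next observation
        self.scale = scale

    def reward(self, obs, action, next_obs):
        predicted = self.model.predict(obs, action)
        # Larger surprise (prediction error) yields a larger intrinsic
        # reward, so the agent is drawn toward poorly understood situations.
        error = float(np.mean((predicted - next_obs) ** 2))
        return self.scale * error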
Limitations and Alternatives of the Test
We proposed using human-like experience to teach language. The main challenge is that it is difficult to program the caregiver character to enable diverse but reasonable interaction with the random behaviors of the learning agent. If we are successful, we expect the agent to learn its first few words. Some alternatives include using a completely artificial environment that is not relevant to human experience but still requires skills in many domains. For example, emergent communication behaviors that can be thought of as language have been observed in reinforcement learning environments with multiple agents [20,10,16,23]. While we might find clues about the learning mechanism there, it might be challenging to apply them to human-robot interaction, because language is a set of arbitrary symbols shared between members [35].
Another possibility is to transform existing resources into a learning environment. Using YouTube videos to create diverse experiences is one example. However, Smith and Slone pointed out that such approaches use shallow information about a lot of things, while human infants begin by learning a lot about a few things [58]. Also, visual information from the first years consists of an egocentric view, and the allocentric view emerges after 12 months. Another aspect is that humans learn from social interaction. Infants can learn language from an in-person Chinese tutor, but they cannot learn from a recorded video of the same tutoring [36]. Therefore, we assume that an agent needs to acquire the necessary skills before it can learn from those sources.
Conclusion
In this paper, we proposed a definition of HLAI. This definition emphasizes that humans can learn from others' experiences using language. Based on this definition, we proposed a language acquisition test for HLAI. A version of this test can be approximated in a simulated environment, and we hope that other researchers can use it to facilitate research on HLAI.
Figure 3: Screenshot of the SEDRo environment. (a) shows the learning agent, which has the physical dimensions of a one-year-old human baby. The orange line between the eyes represents the eye gaze direction. The grid on the torso shows the area of the distributed touch sensors in the skin. (b) shows a caregiving agent feeding milk to the learning agent. (c) shows the visual input to the agent.
Table 1: Levels of intelligence

Level           | Features
1               | No individual learning; evolution-based refinement; Ex) earthworms
2               | Learning from direct experience; reward-based refinement; Ex) rats, dogs
3               | Learning from indirect experience; social, observation-based refinement; Ex) primates, invertebrates, birds
4 (Human-level) | Learning from symbolic experience; language-based refinement; Ex) humans
Theorem 2 (The Strong Story Hypothesis) The mechanisms that enable humans to tell, understand, and recombine stories separate human intelligence from that of other primates.

Example 6 As a friend helped me install a table saw, he said, "You should never wear gloves when you use this saw." At first, I was mystified; then it occurred to me that a glove could get caught in the blade. No further explanation was needed because I could imagine what would follow.
Sam Adams, Itmar Arel, Joscha Bach, Robert Coop, Rod Furlan, Ben Goertzel, J Storrs Hall, Alexei Samsonovich, Matthias Scheutz, Matthew Schlesinger, et al. Mapping the landscape of human-level artificial general intelligence. AI Magazine, 33(1):25-42, 2012.
Michael A Arbib, Katja Liebal, Simone Pika, Michael C Corballis, Chris Knight, David A Leavens, Dario Maestripieri, Joanne E Tanner, et al. Primate vocalization, gesture, and the evolution of human language. Current Anthropology, 49(6):1053-1076, 2008.
Minoru Asada, Koh Hosoda, Yasuo Kuniyoshi, Hiroshi Ishiguro, Toshio Inui, Yuichiro Yoshikawa, Masaki Ogino, and Chisato Yoshida. Cognitive developmental robotics: A survey. IEEE Transactions on Autonomous Mental Development, 1(1):12-34, 2009.
Albert Bandura and David C McClelland. Social Learning Theory, volume 1. Prentice Hall, Englewood Cliffs, 1977.
Marc G. Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, and Rémi Munos. Unifying count-based exploration and intrinsic motivation. In NIPS, pages 1471-1479, 2016.
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. arXiv preprint arXiv:1606.01540, 2016.
Joshua W Brown, Daniel Bullock, and Stephen Grossberg. How laminar frontal cortex and basal ganglia circuits interact to control planned and reactive saccades. Neural Networks, 17(4):471-510, 2004.
Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020.
Angelo Cangelosi and Matthew Schlesinger. Developmental Robotics: From Babies to Robots. MIT Press, 2015.
Kris Cao, Angeliki Lazaridou, Marc Lanctot, Joel Z Leibo, Karl Tuyls, and Stephen Clark. Emergent communication through negotiation. arXiv preprint arXiv:1804.03980, 2018.
Devendra Singh Chaplot, Kanthashree Mysore Sathyendra, Rama Kumar Pasumarthi, Dheeraj Rajagopal, and Ruslan Salakhutdinov. Gated-attention architectures for task-oriented language grounding. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
Howard Chen, Alane Suhr, Dipendra Misra, Noah Snavely, and Yoav Artzi. Touchdown: Natural language navigation and spatial reasoning in visual street environments. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 12538-12547, 2019.
Marc-Alexandre Côté, Ákos Kádár, Xingdi Yuan, Ben Kybartas, Tavian Barnes, Emery Fine, James Moore, Matthew Hausknecht, Layla El Asri, Mahmoud Adada, et al. TextWorld: A learning environment for text-based games. In Workshop on Computer Games, pages 41-75. Springer, 2018.
Charles Darwin. The Formation of Vegetable Mould Through the Action of Worms: With Observations on Their Habits, volume 37. Appleton, 1892.
Abhishek Das, Samyak Datta, Georgia Gkioxari, Stefan Lee, Devi Parikh, and Dhruv Batra. Embodied question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 2054-2063, 2018.
Abhishek Das, Théophile Gervet, Joshua Romoff, Dhruv Batra, Devi Parikh, Michael Rabbat, and Joelle Pineau. TarMAC: Targeted multi-agent communication. arXiv preprint arXiv:1810.11187, 2018.
Richard Dawkins. The Selfish Gene. Oxford University Press, 2016.
Ferdinand de Saussure. Course in General Linguistics. Edited by Charles Bally and Albert Sechehaye in collaboration with Albert Riedlinger; translated, with an introduction and notes, by Wade Baskin. McGraw-Hill, 1966.
R Douglas Greer, Jessica Dudek-Singer, and Grant Gautreaux. Observational learning. International Journal of Psychology, 41(6):486-499, 2006.
Tom Eccles, Yoram Bachrach, Guy Lever, Angeliki Lazaridou, and Thore Graepel. Biases for emergent communication in multi-agent reinforcement learning. In Advances in Neural Information Processing Systems, pages 13111-13121, 2019.
Stefan Everling, Michael C Dorris, Raymond M Klein, and Douglas P Munoz. Role of primate superior colliculus in preparation and execution of anti-saccades and pro-saccades. Journal of Neuroscience, 19(7):2740-2754, 1999.
Lorenzo Ferrucci, Simon Nougaret, and Aldo Genovesio. Macaque monkeys learn by observation in the ghost display condition in the object-in-place task with differential reward to the observer. Scientific Reports, 9(1):1-9, 2019.
Jakob Foerster, Ioannis Alexandros Assael, Nando de Freitas, and Shimon Whiteson. Learning to communicate with deep multi-agent reinforcement learning. In Advances in Neural Information Processing Systems, pages 2137-2145, 2016.
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. DeBERTa: Decoding-enhanced BERT with disentangled attention. arXiv preprint arXiv:2006.03654, 2020.
Karl Moritz Hermann, Felix Hill, Simon Green, Fumin Wang, Ryan Faulkner, Hubert Soyer, David Szepesvari, Wojciech Marian Czarnecki, Max Jaderberg, Denis Teplyashin, et al. Grounded language learning in a simulated 3D world. arXiv preprint arXiv:1706.06551, 2017.
Okihide Hikosaka, Masahiro Sakamoto, and Sadanari Usui. Functional properties of monkey caudate neurons. III. Activities related to expectation of target and reward. Journal of Neurophysiology, 61(4):814-832, 1989.
Vincent M Janik and Laela S Sayigh. Communication in bottlenose dolphins: 50 years of signature whistle research. Journal of Comparative Physiology A, 199(6):479-489, 2013.
Frank C Keil. Constraints on knowledge and cognitive development. Psychological Review, 88(3), 1981.
Veton Kepuska and Gamal Bohouta. Next-generation of virtual personal assistants (Microsoft Cortana, Apple Siri, Amazon Alexa and Google Home). In 2018 IEEE 8th Annual Computing and Communication Workshop and Conference (CCWC), pages 99-103. IEEE, 2018.
Satwik Kottur, José MF Moura, Stefan Lee, and Dhruv Batra. Natural language does not emerge 'naturally' in multi-agent dialog. arXiv preprint arXiv:1706.08502, 2017.
Patricia K Kuhl. Is speech learning 'gated' by the social brain? Developmental Science, 10(1):110-120, 2007.
James Law, Patricia Shaw, Kevin Earland, Michael Sheldon, and Mark Lee. A psychology based approach for longitudinal development in cognitive robotics. Frontiers in Neurorobotics, 8:1, 2014.
Mark H Lee. How to Grow a Robot: Developing Human-Friendly, Social AI. MIT Press, 2020.
Shane Legg and Marcus Hutter. Universal intelligence: A definition of machine intelligence. Minds and Machines, 17(4):391-444, 2007.
Shane Legg, Marcus Hutter, et al. A collection of definitions of intelligence. Frontiers in Artificial Intelligence and Applications, 157:17, 2007.
Hector Levesque, Ernest Davis, and Leora Morgenstern. The Winograd schema challenge. In Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning, 2012.
Mario Livio. Why?: What Makes Us Curious. Simon and Schuster, 2017.
Max Lungarella, Giorgio Metta, Rolf Pfeifer, and Giulio Sandini. Developmental robotics: a survey. Connection Science, 15(4):151-190, 2003.
Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. The natural language decathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730, 2018.
Josh Merel, Diego Aldarondo, Jesse Marshall, Yuval Tassa, Greg Wayne, and Bence Olveczky. Deep neuroethology of a virtual rodent. In International Conference on Learning Representations, 2019.
Karthik Narasimhan, Tejas Kulkarni, and Regina Barzilay. Language understanding for text-based games using deep reinforcement learning. arXiv preprint arXiv:1506.08941, 2015.
Pierre-Yves Oudeyer, Frédéric Kaplan, and Verena V Hafner. Intrinsic motivation systems for autonomous mental development. IEEE Transactions on Evolutionary Computation, 11(2):265-286, 2007.
Steven Pinker. The Language Instinct: How the Mind Creates Language. Penguin UK, 2003.
Aishwarya Pothula, Md Ashaduzzaman Rubel Mondol, Sanath Narasimhan, SM Mazharul Islam, and Deokgun Park. SEDRo: A simulated environment for developmental robotics. 2020.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67, 2020.
Manolis Savva, Abhishek Kadian, Oleksandr Maksymets, Yili Zhao, Erik Wijmans, Bhavana Jain, Julian Straub, Jia Liu, Vladlen Koltun, Jitendra Malik, et al. Habitat: A platform for embodied AI research. In Proceedings of the IEEE International Conference on Computer Vision, pages 9339-9347, 2019.
Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, et al. Mastering Atari, Go, chess and shogi by planning with a learned model. Nature, 588(7839):604-609, 2020.
John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In International Conference on Machine Learning, pages 1889-1897. PMLR, 2015.
Stuart M Shieber. Lessons from a restricted Turing test. arXiv preprint cmp-lg/9404002, 1994.
Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, and Dieter Fox. ALFRED: A benchmark for interpreting grounded instructions for everyday tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10740-10749, 2020.
David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of Go without human knowledge. Nature, 550(7676):354-359, 2017.
Satinder Singh, Andrew G Barto, and Nuttapong Chentanez. Intrinsically motivated reinforcement learning. Technical report, University of Massachusetts Amherst, Department of Computer Science, 2005.
Linda B Smith and Lauren K Slone. A developmental approach to machine learning? Frontiers in Psychology, 8:2124, 2017.
Richard S Sutton, Andrew G Barto, et al. Introduction to Reinforcement Learning, volume 135. MIT Press, Cambridge, 1998.
Niko Tinbergen. The Study of Instinct. Clarendon Press/Oxford University Press, 1951.
A M Turing. Computing machinery and intelligence. Mind, 59:433-460, 1950.
Oriol Vinyals, Igor Babuschkin, Junyoung Chung, Michael Mathieu, Max Jaderberg, Wojciech M Czarnecki, Andrew Dudzik, Aja Huang, Petko Georgiev, Richard Powell, et al. AlphaStar: Mastering the real-time strategy game StarCraft II. DeepMind Blog, page 2, 2019.
Oriol Vinyals, Igor Babuschkin, Wojciech M Czarnecki, Michaël Mathieu, Andrew Dudzik, Junyoung Chung, David H Choi, Richard Powell, Timo Ewalds, Petko Georgiev, et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature, 575(7782):350-354, 2019.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461, 2018.
Wesley C Warren, LaDeana W Hillier, Jennifer A Marshall Graves, Ewan Birney, Chris P Ponting, Frank Grützner, Katherine Belov, Webb Miller, Laura Clarke, Asif T Chinwalla, et al. Genome analysis of the platypus reveals unique signatures of evolution. Nature, 453(7192):175, 2008.
Patrick Henry Winston. The strong story hypothesis and the directed perception hypothesis. In 2011 AAAI Fall Symposium Series, 2011.
| [] |
[
"Korean-Specific Dataset for Table Question Answering",
"Korean-Specific Dataset for Table Question Answering"
] | [
"Changwook Jun cwjun@lgresearch.ai \nLG AI Research ISC\n30, Magokjungang 10-ro, Gangseo-gu07796SeoulKorea\n",
"Jooyoung Choi jooyoung.choi@lgresearch.ai \nLG AI Research ISC\n30, Magokjungang 10-ro, Gangseo-gu07796SeoulKorea\n",
"Myoseop Sim myoseop.sim@lgresearch.ai \nLG AI Research ISC\n30, Magokjungang 10-ro, Gangseo-gu07796SeoulKorea\n",
"Hyun Kim hyun101.kim@lgresearch.ai \nLG AI Research ISC\n30, Magokjungang 10-ro, Gangseo-gu07796SeoulKorea\n",
"Hansol Jang hansol.jang@lgresearch.ai \nLG AI Research ISC\n30, Magokjungang 10-ro, Gangseo-gu07796SeoulKorea\n",
"Kyungkoo Min \nLG AI Research ISC\n30, Magokjungang 10-ro, Gangseo-gu07796SeoulKorea\n"
] | [
"LG AI Research ISC\n30, Magokjungang 10-ro, Gangseo-gu07796SeoulKorea",
"LG AI Research ISC\n30, Magokjungang 10-ro, Gangseo-gu07796SeoulKorea",
"LG AI Research ISC\n30, Magokjungang 10-ro, Gangseo-gu07796SeoulKorea",
"LG AI Research ISC\n30, Magokjungang 10-ro, Gangseo-gu07796SeoulKorea",
"LG AI Research ISC\n30, Magokjungang 10-ro, Gangseo-gu07796SeoulKorea",
"LG AI Research ISC\n30, Magokjungang 10-ro, Gangseo-gu07796SeoulKorea"
] | [
"Proceedings of the 13th Conference on Language Resources and Evaluation (LREC 2022)"
] | Existing question answering systems mainly focus on dealing with text data. However, much of the data produced daily is stored in the form of tables that can be found in documents and relational databases, or on the web. To solve the task of question answering over tables, there exist many datasets for table question answering written in English, but few Korean datasets. In this paper, we demonstrate how we construct Korean-specific datasets for table question answering: Korean tabular dataset is a collection of 1.4M tables with corresponding descriptions for unsupervised pre-training language models. Korean table question answering corpus consists of 70k pairs of questions and answers created by crowd-sourced workers. Subsequently, we then build a pre-trained language model based on Transformer and fine-tune the model for table question answering with these datasets. We then report the evaluation results of our model. We make our datasets publicly available via our GitHub repository and hope that those datasets will help further studies for question answering over tables, and for the transformation of table formats. | null | [
"https://www.aclanthology.org/2022.lrec-1.657.pdf"
] | 246,015,527 | 2201.06223 | 13becc9cfee51ff8062063b70bd3636dd719e292 |
Korean-Specific Dataset for Table Question Answering
June 2022
Changwook Jun cwjun@lgresearch.ai
LG AI Research ISC
30, Magokjungang 10-ro, Gangseo-gu07796SeoulKorea
Jooyoung Choi jooyoung.choi@lgresearch.ai
LG AI Research ISC
30, Magokjungang 10-ro, Gangseo-gu07796SeoulKorea
Myoseop Sim myoseop.sim@lgresearch.ai
LG AI Research ISC
30, Magokjungang 10-ro, Gangseo-gu07796SeoulKorea
Hyun Kim hyun101.kim@lgresearch.ai
LG AI Research ISC
30, Magokjungang 10-ro, Gangseo-gu07796SeoulKorea
Hansol Jang hansol.jang@lgresearch.ai
LG AI Research ISC
30, Magokjungang 10-ro, Gangseo-gu07796SeoulKorea
Kyungkoo Min
LG AI Research ISC
30, Magokjungang 10-ro, Gangseo-gu07796SeoulKorea
Korean-Specific Dataset for Table Question Answering
Proceedings of the 13th Conference on Language Resources and Evaluation (LREC 2022)
Marseille, June 2022. European Language Resources Association (ELRA), licensed under CC-BY-NC-4.0. Page 6114. Keywords: Table question-answering, KO-TaBERT, KorWikiTQ
Existing question answering systems mainly focus on dealing with text data. However, much of the data produced daily is stored in the form of tables that can be found in documents and relational databases, or on the web. To solve the task of question answering over tables, there exist many datasets for table question answering written in English, but few Korean datasets. In this paper, we demonstrate how we construct Korean-specific datasets for table question answering: the Korean tabular dataset is a collection of 1.4M tables with corresponding descriptions for the unsupervised pre-training of language models, and the Korean table question answering corpus consists of 70k pairs of questions and answers created by crowdsourced workers. We then build a pre-trained language model based on the Transformer, fine-tune the model for table question answering with these datasets, and report the evaluation results of our model. We make our datasets publicly available via our GitHub repository and hope that they will help further studies on question answering over tables and on the transformation of table formats.
Introduction
The task of question answering is to answer given questions correctly, which requires a high level of language understanding and machine reading comprehension abilities. As pre-trained language models based on the Transformer (Vaswani et al., 2017) have brought significant performance improvements in many natural language processing tasks, there have been many studies on machine reading comprehension (MRC) and question answering (QA) tasks (Devlin et al., 2018; Lan et al., 2019; Yang et al., 2019; Yamada et al., 2020; Liu et al., 2019; Clark et al., 2020). The Stanford Question Answering Dataset (SQuAD) benchmarks (Rajpurkar et al., 2016; Rajpurkar et al., 2018), well-known machine reading comprehension benchmarks in the NLP area, involve reasoning out correct answer spans in an evidence document for a given question. Since the SQuAD datasets are composed of pairs of natural language questions and answers, and the corresponding documents are unstructured textual data, the task mainly focuses on predicting answers from plain text. Much of the world's information produced daily is stored in structured formats such as tables in databases and documents, or tables on the web. Question answering over these structured tables has generally been considered a semantic parsing task, in which a natural language question is translated to a logical form that can be executed to generate the correct answer (Pasupat and Liang, 2015; Zhong et al., 2017; Dasigi et al., 2019; Wang et al., 2019; Rubin and Berant, 2020). There have been many efforts to build semantic parsers with supervised training datasets, such as WikiSQL (Zhong et al., 2017), which consists of pairs of questions and structured query language (SQL) queries, and the Spider dataset (Yu et al., 2018), which targets the task of converting text to SQL. However, it is expensive to create such data, and there are challenges in generating logical forms. In recent years, a few studies have attempted the task of question answering over tables without generating logical forms (Herzig et al., 2020; Yin et al., 2020; Zayats et al., 2021). They introduce pre-trained language model approaches based on BERT (Devlin et al., 2018) that learn representations of natural language sentences and structured tables jointly by extending embeddings, and these models achieve strong performance on semantic parsing datasets.
In this paper, for the Korean-specific table question answering task, we present KO-TaBERT, a new approach that trains BERT-based models to jointly learn from textual and structured tabular data by converting table structures. To address this, we first create two datasets written in the Korean language: a tabular dataset containing conversion formats of around 1.4M tables extracted from Korean Wikipedia documents for pre-training language models, and a table question answering dataset for fine-tuning the models. The table question answering dataset consists of 70k pairs of questions and answers, where the questions are generated by crowdsourced workers with question difficulty in mind. Additionally, we introduce how structured tables are converted into sentence formats. The conversion formats play a crucial role in allowing models to learn table structural information effectively without changing embeddings. Second, we follow the BERT architecture (Devlin et al., 2018) to pre-train a language model with the converted strings from millions of tables, and we fine-tune models on the table question answering dataset. All resources we create in this study are released via our GitHub repository 1. We evaluate our model on the downstream task of table question answering. For performance evaluation, we create a test dataset that includes around 10k question-answer pairs related to tables in the Korean Question Answering Dataset (KorQuAD 2.0) 2, and 20% splits of the crowdsourced dataset. KO-TaBERT achieves EM 82.8% and F1 86.5% overall. Comparisons of model performance according to question difficulty and different table conversion formats are also reported in this study. We summarize our main contributions as follows:
• We construct two Korean-specific datasets: the Korean Wikipedia Tabular dataset (KorWikiTabular), consisting of 1.4M tables that are converted into sentence strings containing tabular structural information, and the Korean Wikipedia Table Questions dataset (KorWikiTQ), which includes 70k pairs of questions and answers generated according to question difficulty by paid crowdsourced workers.
• We introduce approaches to converting tabular data into sentence strings, which allow models to represent table structural properties and information.
• We present KO-TaBERT, a pre-trained language model that learns syntactic and lexical information as well as structural information from tables.
• We build table question answering models based on the pre-trained language model using our datasets. Our models are evaluated on the table-related subset of the KorQuAD 2.0 corpus, and they can be treated as baselines for future research.
Related Works
There have been many studies on reasoning out the correct answer to a given question over tables. Semantic parsing has been applied to question answering over tables, translating natural language questions into logical forms such as SQL. Several works (Hwang et al., 2019; Lyu et al., 2020; Lin et al., 2020; Cao et al., 2021) introduce approaches that leverage BERT (Devlin et al., 2018), a pre-trained language model (PLM), for the text-to-SQL task, since PLMs with deep contextualized word representations have led to noticeable improvements in many NLP challenges. Word contextualization has contributed to improving the accuracy of logical form generation, but difficulties remain in generating logical forms that obey decoding constraints. There is also the limitation that text-to-SQL approaches require table data to be stored in a database. Many recent studies have shown that supervised question answering models successfully reason over tables without generating SQL. TAPAS (Herzig et al., 2020) is a new approach to pre-training a language model that extends BERT by inserting additional embeddings to better understand tabular structure. Similarly, TURL (Deng et al., 2020) implements a structure-aware Transformer encoder to learn deep contextualised representations for relational table understanding; Masked Entity Recovery is proposed as a novel pre-training objective to capture complex semantic knowledge about entities in relational tables. TABERT (Yin et al., 2020) and TABFACT also use the BERT architecture to jointly learn contextualised representations of textual and tabular data. Unlike other approaches, TABERT encodes the subset of table content that is most relevant to a question in order to deal with large tables. In this study, we propose a BERT-based approach to learn deep contextualised representations of table structural information via the conversion of table formats for the task of table question answering.
Korean Table Question Answering Dataset
In this study, we pre-train a language model with structured data extracted from Korean Wikipedia for the task of question answering over tables, since pre-training models have shown significant improvements in many natural language understanding tasks. For pre-training input data, we generate KorWikiTabular, containing pairs of structured tabular data and texts related to the tables. We also create KorWikiTQ, a corpus of question-answer pairs with tables in which the answers are contained, for the table question answering task.
Tabular Dataset for Pre-training
We collect about 1.4M tables (T) from the Korean Wikipedia dump in order to pre-train a Transformer-based language model for tabular contextualised embeddings. Since we hypothesize that descriptions (D) in a Wikipedia article that contains a table benefit building improved representations of the table for question answering, pairs of tables and their description texts are extracted from Wikipedia. As pre-training inputs for tabular contextualization, we extract Infobox, which is formatted as a table in the top right-hand corner of Wikipedia documents to summarise the information of an article, and WikiTable, as shown in Figure 1. As description texts for a table, the article title, the first paragraph of the article, table captions if they exist, and heading and sub-heading titles of the article are considered, that is, D = {d_1, d_2, ..., d_n}. For pre-training a language model, we generate input sequences by converting the extracted infoboxes and wikitables into sentence strings. Figure 2 explains how a table format is converted into sentence strings when the table is a relational table that contains columns (T_c) or fields describing rows (T_r) of data. Each table is two-dimensional tabular data consisting of columns and rows, denoted as T = {t_(c1,r1), t_(c2,r1), ..., t_(cn,r1), ..., t_(c1,rm), t_(c2,rm), ..., t_(cn,rm)}, where n and m are the sizes of columns and rows respectively. Table column headers are identified with rows and cells, and then the columns are concatenated with corresponding cells using predefined special characters, which allows the model to learn structural relations between cells of columns and table headers. For Infobox, the table headers, mainly located in the first column, are also recognised using the <th> tags and converted into string sequences associated with relevant cells, like wikitables.
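The conversion scheme above pairs each column header with its cell values via predefined special characters. Since the paper does not spell out the exact separators, the following is a minimal Python sketch in which ":" and ";" are assumed separators for illustration only:

```python
# A minimal sketch of the header-cell linearisation described above.
# The exact separator characters used by KO-TaBERT are not given here,
# so ":" and ";" are assumptions for illustration.

def table_to_string(headers, rows, col_sep=":", cell_sep="; "):
    """Convert a relational table into one sentence string,
    pairing every cell with its column header row by row."""
    row_strings = []
    for row in rows:
        pairs = [f"{h}{col_sep} {c}" for h, c in zip(headers, row)]
        row_strings.append(cell_sep.join(pairs))
    return " ".join(row_strings)

headers = ["Team", "Pld", "Pts"]
rows = [["Portugal", "3", "6"], ["Greece", "3", "6"]]
print(table_to_string(headers, rows))
# Team: Portugal; Pld: 3; Pts: 6 Team: Greece; Pld: 3; Pts: 6
```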
Crowdsourcing for Table Question Answering Corpus

Unfortunately, there are no publicly available datasets written in Korean for the table question-answering task. Even though KorQuAD 2.0 (Lim et al., 2019) contains some questions whose answers are related to tables, they are insufficient to train models for the task. To conduct the downstream task of question answering over tables, we create a large dataset consisting of 70k pairs of questions and answers by hiring paid crowdsourced workers. We select about 20k tables that contain more than 5 and fewer than 15 rows for this crowdsourcing task. Larger tables with more than 10 columns are ignored, since the length of the sentence strings converted from such a table may exceed the maximum sequence length of 512 tokens; this is because column headers are repeatedly inserted into the strings as the number of table headers increases.

We define five question variations, extending previous work (Park et al., 2020) and considering the question difficulties for which our model can predict answers, as follows:

• Level1: Question [column-others] where [column-base] has [value]
• Level2: Question [column-others] where [column-base] has [condition]
• Level3: Question [column-base] where [column-others] has [value]
• Level4: Variation of the questions in other levels
• Level5: Question min or max in [column-base] where [column-others] has [value of numbers, dates, ranks, etc.]

Figure 3 shows question types generated with respect to these question difficulties. Our crowdsourced workers generate the five types of natural language questions for a given table. As shown in Figure 3, an example question of Level1 is KR: UEFA 유로 2004 A조에서 포르투갈팀의 승점은 몇 점인가? (EN: How many points did Portugal obtain in UEFA Euro 2004 Group A?), querying the values of other columns such as Points (Pts) when the value of the base column Team is Portugal. In this case, we intend the answer value to be located to the right of the base column. For Level3, on the contrary, the answer is located in a column to the left of the base column, as shown in the example in Figure 3.
Modeling of Table Question Answering
In this study, we pre-train a language model with the converted tabular dataset described in Section 3.1 for a better understanding of structural information as well as syntactic and lexical information, considering the downstream task of table question answering. We also fine-tune the model using the table question answering corpus. We evaluate our model, and the fine-tuning results on the task of question answering over tables are described in detail.
Pre-training Language Model
We use the Transformer (Vaswani et al., 2017) approach for pre-training a language model. Specifically, we follow the original architecture of BERT (Devlin et al., 2018), a transformer-based pre-trained language model that learns context in text using masked language modeling (MLM) and next sentence prediction (NSP) objectives for self-supervised pre-training. The BERT Base model is adopted in our experiments, and we use the same percentage of 15% for masking table cells or text segments for MLM training, but the NSP objective is not used. We build a new vocabulary of 119,547 Korean word-pieces. We generate input sequences with the special tokens [CLS] and [SEP], and description texts including relevant sentences such as the article title and heading titles are inserted between them. The tabular data converted to string sequences follows the [SEP] token. During the pre-training procedure, our model learns contextual representations jointly from unstructured natural language sentences and structured tabular data. Details of the hyperparameters that we used for pre-training KO-TaBERT are summarised in Appendix A.
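As a rough illustration of how one pre-training example might be assembled, the hedged sketch below builds the [CLS] description [SEP] table-string [SEP] layout and applies a simplified 15% token masking; the released model's tokenizer and whole-word masking details are not reproduced here:

```python
import random

# A hedged sketch of assembling one pre-training example as described
# above: description texts between [CLS] and [SEP], followed by the
# converted table string. Masking is simplified to token level.

def build_pretraining_input(description_texts, table_string, mask_prob=0.15):
    tokens = ["[CLS]"] + " ".join(description_texts).split() + ["[SEP]"]
    tokens += table_string.split() + ["[SEP]"]
    masked, labels = [], []
    for tok in tokens:
        if tok not in ("[CLS]", "[SEP]") and random.random() < mask_prob:
            masked.append("[MASK]")
            labels.append(tok)        # MLM prediction target
        else:
            masked.append(tok)
            labels.append(None)       # position is not predicted
    return masked, labels

masked, labels = build_pretraining_input(
    ["UEFA Euro 2004 Group A"], "Team: Portugal; Pts: 6")
```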
Fine-tuning Model
For fine-tuning on the table question-answering task, we prepare datasets from different sources: the table question-answering dataset created by crowdsourced workers, described in Section 3.2, and about 2k pairs of questions and answers related to tables selected from the KorQuAD 2.0 corpus. We split the selected KorQuAD 2.0 corpus and the crowdsourced dataset with a 20% ratio respectively as the test dataset for evaluation. The rest of the data is used for fine-tuning. We train a question-answering model on this dataset. Similar to SQuAD, a major extractive question-answering benchmark (Rajpurkar et al., 2016; Rajpurkar et al., 2018), the fine-tuning model aims to predict the correct span of an answer, defined by start and end boundaries, for a given question. We describe the experimental details for the downstream task in Appendix B.
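The span-prediction objective can be illustrated with a minimal PyTorch head; the hidden size of 768 is the standard BERT Base value, and the encoder itself is abstracted away, so this is a sketch rather than the authors' exact implementation:

```python
import torch
import torch.nn as nn

# A minimal sketch of a SQuAD-style extractive QA head: the encoder
# output is projected to start/end logits over the input positions.

class SpanHead(nn.Module):
    def __init__(self, hidden_size=768):
        super().__init__()
        self.qa_outputs = nn.Linear(hidden_size, 2)  # start and end

    def forward(self, sequence_output):
        # sequence_output: (batch, seq_len, hidden_size)
        logits = self.qa_outputs(sequence_output)
        start_logits, end_logits = logits.split(1, dim=-1)
        return start_logits.squeeze(-1), end_logits.squeeze(-1)

head = SpanHead()
enc = torch.randn(2, 512, 768)       # placeholder encoder output
start, end = head(enc)               # each: (2, 512)
```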
Evaluation and Results
We evaluate our model on the test dataset consisting of the subsets from the KorQuAD 2.0 and the crowdsourced table question-answering datasets, which are not used during training of the model. The test dataset consists of questions with their difficulties generated by crowdsourced workers, as described in Section 3.2. As evaluation metrics, we use Exact Match (EM), which scores whether each character of a predicted answer is exactly the same as the ground truth, and F1-score (F1), which measures token overlap between a predicted answer and the ground truth.
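For concreteness, the two metrics can be sketched as follows; the official evaluation scripts apply additional answer normalisation that is omitted here:

```python
from collections import Counter

# A sketch of the character-level EM and token-overlap F1 described
# above, without the extra normalisation of the official scripts.

def exact_match(pred, gold):
    return float(pred.strip() == gold.strip())

def f1_score(pred, gold):
    p_toks, g_toks = pred.split(), gold.split()
    common = Counter(p_toks) & Counter(g_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(p_toks)
    recall = overlap / len(g_toks)
    return 2 * precision * recall / (precision + recall)

print(exact_match("6 points", "6 points"), f1_score("6 points", "6"))
```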
Conclusions
In this paper, we introduce two new Korean-specific datasets, KorWikiTabular and KorWikiTQ, for the task of table question-answering in the Korean language, and present KO-TaBERT, a pre-trained language model for the task. In particular, we demonstrate how tabular data is converted into linearised texts containing structural information and properties. We construct a tabular dataset by extracting tables and converting them into sentence strings with tabular structural information for pre-training a language model. We also create a table question answering corpus with paid crowdsourced workers. The corpus consists of 70k pairs of questions and answers related to tables in Wikipedia articles, and those questions are generated specifically considering levels of question difficulty. We conduct experiments on the table question answering task. Our model achieves the best performance when the converted table sentence strings include rich structural features.
In future work, we aim to extend the model to complex question answering over texts and tables, generating multimodal questions to jointly handle question answering from textual and tabular sources. We hope that our datasets will support further studies on question answering over tables and on the transformation of table formats.
Figure 1: Example of Infobox and WikiTable in a Wikipedia document.

Figure 2: Converting WikiTable format into string sequences for pre-training input. The converted table strings are combined with descriptions of the Wikipedia article.

Figure 3: Examples of a natural language question set related to a table.

Table 2: Comparison of model performance according to each level of question in the crowdsourced dataset.

The performance according to the levels of questions is described in Table 2. The results show that performance decreases as the difficulty of the questions increases. In particular, the table question-answering model performs poorly on the Level5 questions, which require comprehensive machine understanding capabilities such as ordering and counting table cell values, and operations such as minimum, maximum or average over a table column.

Figure 4: Example of the new conversion approach for converting complicated structured tables, consisting of merged and multi-column headers, into sentence strings.

We conduct additional experiments with a new approach to table format conversion by mapping the cell values in the first column (called the row header) onto the first column header. As illustrated in Figure 4, column 1, accompanied by the value of cell 1, is added to the table cell values in the same row. This aims to bring important information in a table into the converted sentence strings by considering table structural characteristics. We also take account of tables with complicated structures. For converting those tables into sentence strings, we classify tables into single, multi, and merged column headings using <th> tags. Then, column headers are concatenated if table columns have multiple headers, as shown in the example of Figure 4. In this conversion approach, the length of the converted sentence strings increases since the headers are repeatedly inserted. Thus, we limit the length of a converted sentence string to 250-300 tokens for pre-training input.

Dataset source | Format | EM   | F1
KorQuAD 2.0    | v1     | 64.5 | 74.5
KorQuAD 2.0    | v2     | 69.1 | 78.4
Crowd-sourced  | v1     | 83.9 | 87.2
Crowd-sourced  | v2     | 87.2 | 91.2

Table 3: Comparison of model performance with different table parsing approaches. Format v1 is the table conversion described in Figure 2.

We compare the performance of models pre-trained with different table conversion formats in Table 3. Using the new format of table conversion, which considers structural complexity, improves the performance of models on the table question answering task. The results indicate that the new table conversion format can effectively incorporate table structural features.
1 https://github.com/LG-NLP/KorWikiTableQuestions
2 https://korquad.github.io/
Appendices

A. Pre-training Details

B. Fine-tuning Experiment

We describe the fine-tuning results of our model KO-TaBERT transferred to the task of question answering over tables in Section 4.3. For the experiments, we use a maximum sequence length of 512 and the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 5e-5. Our model is fine-tuned on 8 TPU v3 with a batch size of 32 for 3 epochs. Other hyperparameters are the same as those used for pre-training.
Cao, R., Chen, L., Chen, Z., Zhao, Y., Zhu, S., and Yu, K. (2021). LGESQL: Line graph enhanced text-to-SQL model with mixed local and non-local relations. arXiv preprint arXiv:2106.01093.
Chen, W., Wang, H., Chen, J., Zhang, Y., Wang, H., Li, S., Zhou, X., and Wang, W. Y. (2019). TabFact: A large-scale dataset for table-based fact verification. arXiv preprint arXiv:1909.02164.
Chen, W., Zha, H., Chen, Z., Xiong, W., Wang, H., and Wang, W. (2020). HybridQA: A dataset of multi-hop question answering over tabular and textual data. arXiv preprint arXiv:2004.07347.
Clark, K., Luong, M.-T., Le, Q. V., and Manning, C. D. (2020). ELECTRA: Pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555.
Dasigi, P., Gardner, M., Murty, S., Zettlemoyer, L., and Hovy, E. (2019). Iterative search for weakly supervised semantic parsing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2669-2680.
Deng, X., Sun, H., Lees, A., Wu, Y., and Yu, C. (2020). TURL: Table understanding through representation learning. arXiv preprint arXiv:2006.14806.
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Herzig, J., Nowak, P. K., Müller, T., Piccinno, F., and Eisenschlos, J. M. (2020). TAPAS: Weakly supervised table parsing via pre-training. arXiv preprint arXiv:2004.02349.
Hwang, W., Yim, J., Park, S., and Seo, M. (2019). A comprehensive exploration on WikiSQL with table-aware word contextualization. arXiv preprint arXiv:1902.01069.
Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., and Soricut, R. (2019). ALBERT: A lite BERT for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942.
Lim, S., Kim, M., and Lee, J. (2019). KorQuAD 1.0: Korean QA dataset for machine reading comprehension. arXiv preprint arXiv:1909.07005.
Lin, X. V., Socher, R., and Xiong, C. (2020). Bridging textual and tabular data for cross-domain text-to-SQL semantic parsing. arXiv preprint arXiv:2012.12627.
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. (2019). RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Lyu, Q., Chakrabarti, K., Hathi, S., Kundu, S., Zhang, J., and Chen, Z. (2020). Hybrid ranking network for text-to-SQL. arXiv preprint arXiv:2008.04759.
Park, C., Kim, M., Park, S., Lim, S., Lee, J., and Lee, C. (2020). Korean TableQA: Structured data question answering based on span prediction style with S3-Net. ETRI Journal, 42(6):899-911.
Pasupat, P. and Liang, P. (2015). Compositional semantic parsing on semi-structured tables. arXiv preprint arXiv:1508.00305.
Rajpurkar, P., Zhang, J., Lopyrev, K., and Liang, P. (2016). SQuAD: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250.
Rajpurkar, P., Jia, R., and Liang, P. (2018). Know what you don't know: Unanswerable questions for SQuAD. arXiv preprint arXiv:1806.03822.
Rubin, O. and Berant, J. (2020). SmBoP: Semi-autoregressive bottom-up semantic parsing. arXiv preprint arXiv:2010.12412.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.
Wang, B., Shin, R., Liu, X., Polozov, O., and Richardson, M. (2019). RAT-SQL: Relation-aware schema encoding and linking for text-to-SQL parsers. arXiv preprint arXiv:1911.04942.
Yamada, I., Asai, A., Shindo, H., Takeda, H., and Matsumoto, Y. (2020). LUKE: Deep contextualized entity representations with entity-aware self-attention. arXiv preprint arXiv:2010.01057.
Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R. R., and Le, Q. V. (2019). XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems, 32.
Yin, P., Neubig, G., Yih, W.-t., and Riedel, S. (2020). TaBERT: Pretraining for joint understanding of textual and tabular data. arXiv preprint arXiv:2005.08314.
Yu, T., Zhang, R., Yang, K., Yasunaga, M., Wang, D., Li, Z., Ma, J., Li, I., Yao, Q., Roman, S., et al. (2018). Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task. arXiv preprint arXiv:1809.08887.
Zayats, V., Toutanova, K., and Ostendorf, M. (2021). Representations for question answering from documents with tables and text. arXiv preprint arXiv:2101.10573.
Zhong, V., Xiong, C., and Socher, R. (2017). Seq2SQL: Generating structured queries from natural language using reinforcement learning. CoRR, abs/1709.00103.
| [
"https://github.com/LG-NLP/KorWikiTableQuestions"
] |
[
"Calibrate and Refine! A Novel and Agile Framework for ASR-error Robust Intent Detection",
"Calibrate and Refine! A Novel and Agile Framework for ASR-error Robust Intent Detection"
] | [
"Peilin Zhou zhoupalin@gmail.com \nZhejiang University\nHangzhouChina\n",
"Dading Chong \nPeking University\nShenzhenChina\n",
"Helin Wang \nPeking University\nShenzhenChina\n",
"Qingcheng Zeng qingchengzeng@outlook.com \nZhejiang University\nHangzhouChina\n"
] | [
"Zhejiang University\nHangzhouChina",
"Peking University\nShenzhenChina",
"Peking University\nShenzhenChina",
"Zhejiang University\nHangzhouChina"
] | [] | The past ten years have witnessed the rapid development of textbased intent detection, whose benchmark performances have already been taken to a remarkable level by deep learning techniques. However, automatic speech recognition (ASR) errors are inevitable in real-world applications due to the environment noise, unique speech patterns and etc, leading to sharp performance drop in state-of-the-art text-based intent detection models. Essentially, this phenomenon is caused by the semantic drift brought by ASR errors and most existing works tend to focus on designing new model structures to reduce its impact, which is at the expense of versatility and flexibility. Different from previous one-piece model, in this paper, we propose a novel and agile framework called CR-ID for ASR error robust intent detection with two plug-and-play modules, namely semantic drift calibration module (SDCM) and phonemic refinement module (PRM), which are both model-agnostic and thus could be easily integrated to any existing intent detection models without modifying their structures. Experimental results on SNIPS dataset show that, our proposed CR-ID framework achieves competitive performance and outperform all the baseline methods on ASR outputs, which verifies that CR-ID can effectively alleviate the semantic drift caused by ASR errors. | 10.21437/interspeech.2022-786 | [
"https://arxiv.org/pdf/2205.11008v1.pdf"
] | 248,987,121 | 2205.11008 | 6dbc5ed631b15606f2f62f837ba6b590b29b344e |
Calibrate and Refine! A Novel and Agile Framework for ASR-error Robust Intent Detection
Peilin Zhou zhoupalin@gmail.com
Zhejiang University
HangzhouChina
Dading Chong
Peking University
ShenzhenChina
Helin Wang
Peking University
ShenzhenChina
Qingcheng Zeng qingchengzeng@outlook.com
Zhejiang University
HangzhouChina
Calibrate and Refine! A Novel and Agile Framework for ASR-error Robust Intent Detection
Index Terms: intent detection, human-computer interaction, spoken language understanding
The past ten years have witnessed the rapid development of text-based intent detection, whose benchmark performance has already been taken to a remarkable level by deep learning techniques. However, automatic speech recognition (ASR) errors are inevitable in real-world applications due to environment noise, unique speech patterns, etc., leading to sharp performance drops in state-of-the-art text-based intent detection models. Essentially, this phenomenon is caused by the semantic drift brought about by ASR errors, and most existing works tend to focus on designing new model structures to reduce its impact, at the expense of versatility and flexibility. Different from previous one-piece models, in this paper we propose a novel and agile framework called CR-ID for ASR-error robust intent detection with two plug-and-play modules, namely a semantic drift calibration module (SDCM) and a phonemic refinement module (PRM), which are both model-agnostic and thus can be easily integrated into any existing intent detection model without modifying its structure. Experimental results on the SNIPS dataset show that our proposed CR-ID framework achieves competitive performance and outperforms all the baseline methods on ASR outputs, which verifies that CR-ID can effectively alleviate the semantic drift caused by ASR errors.
Introduction
Intent detection (ID), as one of the key tasks in spoken language understanding, aims to identify users' intents from their utterances. Driven by advances in deep learning technology, ID research has entered a stage of rapid development. Specifically, many classical methods like convolutional neural networks (CNNs) [1,2,3], recurrent neural networks (RNNs) [4,5,6], graph neural networks (GNNs) [7] and self-attention mechanisms [8,9] have been explored for this task and have obtained superb performance on benchmark datasets. Moreover, pre-trained language models [10] have also been utilized to better understand the meaning of user sentences and thus help to classify intents more accurately. Notwithstanding the favorable results of these models, they often assume that automatic speech recognition (ASR) never makes any mistakes: the training and testing of these ID models are both conducted on error-free manual transcriptions, rather than on ASR outputs.
Unfortunately, this overly idealistic setting makes it hard to deploy existing ID models in real-world applications, where ASR errors are unavoidable due to complex conditions like environment noise and diverse speaking styles or accents. As shown on the right side of Figure 1, although pre-trained language models (LMs) like BERT [11] and ELMo [12] can provide more robust representations compared with static embeddings like Word2Vec [13], they still suffer from a sharp performance drop when tested on ASR outputs. This is because the original representations of user utterances are prone to be distorted by ASR errors (as shown on the left side of Figure 1), which we name the semantic drift problem in this paper.
Recently, several studies have been introduced to mitigate this semantic drift problem. [14,15,16] proposed to remove the ASR component and extract semantics directly from speech signals in an end-to-end manner. Following this trend, [17] applied a masking strategy to audio frames and utilized large-scale unsupervised pre-training to learn acoustic representations for SLU. However, compared with pipeline-based methods, these end-to-end models are less interpretable and more data-hungry. In addition, the annotation process for audio data is usually both expensive and time-consuming, which is impractical for industrial applications. Therefore, some researchers proposed to utilize text and speech together in SLU systems. For example, [18] proposed a novel ST-BERT and designed two new cross-modal language modeling tasks to better learn the semantic representations of speech and text modalities. [19] suggested carrying out both speech and language understanding tasks during pre-training and constructed a novel speech-language joint pre-training framework for SLU. Though achieving excellent performance, these approaches still require pre-training on large-scale datasets, which are not available in some data-scarce domains.
Another branch of ASR-error robust research tries to reduce the impact of semantic drift by considering the acoustic similarity between words [20] or by directly injecting phoneme information into the modeling process [21], which has a similar motivation to ours. However, most of these works focus only on designing new model structures for specific scenarios and usually show poor compatibility with other methods. So far, designing a model that is both versatile and flexible has not been well explored in this research field.
To overcome the above-mentioned limitations, we propose a novel and agile framework called Calibration and Refinement for Intent Detection (CR-ID). Different from previous solutions, our approach decouples the semantic calibration and intent classification processes, so any existing text-based intent detection model can be incorporated into this framework and become more robust to ASR errors. Specifically, we design two plug-and-play modules to calibrate the semantic drift and to refine the calibrated representation with phonemic information, which provides useful signals for the intent classification process. Our proposed framework is detailed in Section 2, and our main contributions can be summarized as follows:
• We propose the CR-ID framework, which can effectively reduce the impact of semantic drift on existing text-based intent detection models without any structural modifications.
• We design two plug-and-play modules, namely SDCM and PRM, to calibrate both word-level and sentence-level representations of ASR outputs and to utilize phonemic information to refine and enrich the calibrated representations.
• We conduct comprehensive experiments on the SNIPS dataset, and the results show that, compared with the best baseline model, the intent accuracy and Macro-F1 score of our proposed CR-ID are increased by 1.99% and 1.86% respectively, which demonstrates the effectiveness of CR-ID in boosting the robustness of existing ID models.
The Proposed Approach
In this section, we present our CR-ID, which is able to effectively and flexibly alleviate the semantic drift problem without changing the structure of classical text-based ID models. The overall architecture of CR-ID is illustrated in Figure2
Semantic Drift Calibration Module
SDCM aims to calibrate the distorted representations of ASR outputs and minimize the negative impacts brought by semantic drift. To achieve this, inspired by the great success of pre-trained language models (PLMs) and finetuning techniques, we propose to adopt two PLM finetuning strategies, namely confusion-aware finetuning and task-adaptive finetuning, adapted from [20] and [22]. For confusion-aware finetuning, we first use both minimum edit distance (MED) and word confusion networks (WCN) to extract acoustic confusions, as introduced in [20]; due to space limitations, readers can check the details in their paper. Taking two different utterances $x_1$ and $x_2$ as an example, we use $C = \{c_1, c_2, \cdots, c_{|C|}\}$ to denote the set of all acoustic confusions, where each $c = \left(w^{x_1}_{t_1}, w^{x_2}_{t_2}\right)$ consists of two acoustically similar words $w^{x_1}_{t_1}$ and $w^{x_2}_{t_2}$. Finally, we propose a new confusion loss to minimize the mean square error (MSE) between the word-level representations and sentence-level representations generated by the pretrained language model, as follows:
$$\mathcal{L}_{ca} = \frac{1}{|C|} \sum_{c \in C} \left[ \sum_{i=0}^{1} \mathrm{MSE}\!\left(h^{x_1}_{t_1,i},\, h^{x_2}_{t_2,i}\right) + \mathrm{MSE}\!\left(h^{x_1},\, h^{x_2}\right) \right] \quad (1)$$
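Before this loss can be computed, confusion pairs have to be extracted. The following hedged sketch illustrates MED-style extraction by aligning a manual transcript with its ASR hypothesis and keeping one-to-one word substitutions; [20] may apply additional filtering (e.g. phonetic similarity) that is omitted here:

```python
import difflib

# A hedged sketch of MED-style confusion extraction: align the manual
# transcript with the ASR hypothesis and collect substituted words.

def extract_confusions(manual, asr):
    m_toks, a_toks = manual.split(), asr.split()
    matcher = difflib.SequenceMatcher(a=m_toks, b=a_toks)
    pairs = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == "replace" and (i2 - i1) == (j2 - j1):
            pairs.extend(zip(m_toks[i1:i2], a_toks[j1:j2]))
    return pairs

print(extract_confusions("play the weather song", "play the whether song"))
# [('weather', 'whether')]
```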
Task-adaptive finetuning is a widely used technique, especially when a domain mismatch problem arises, and can effectively adapt a pretrained LM from a general corpus to the target data. For example, given a pre-trained ELMo model and a sentence $x = \langle w_1, w_2, \ldots, w_{|x|} \rangle$, we can directly use the pre-training loss of ELMo as the task-adaptive loss, which can be written as:
$$\mathcal{L}_{ta} = \frac{1}{|x|} \sum_{t=1}^{|x|} \left[ -\log p\left(w_t \mid w_{<t}\right) - \log p\left(w_t \mid w_{>t}\right) \right] \quad (2)$$
where $p(w_t \mid w_{<t})$ and $p(w_t \mid w_{>t})$ denote the probabilities of $w_t$ calculated in the forward and backward directions respectively. Eventually, we jointly finetune the LM using the two above-mentioned strategies in a multi-task learning manner, and the final loss is as follows:
$$\mathcal{L} = \mathcal{L}_{ta} + \lambda \mathcal{L}_{ca} \quad (3)$$
where λ represents a balancing hyperparameter to control the contribution of each finetuning strategy.
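A minimal sketch of this joint objective is shown below, assuming the layer-wise word and sentence representations of one confusion pair are already available; the tensor shapes and the LM loss are placeholders, not the authors' exact implementation:

```python
import torch
import torch.nn.functional as F

# A minimal sketch of Eqs. (1) and (3) for a single confusion pair.
# The first dimension (2) stands for the two LM layers i = 0, 1;
# F.mse_loss averages over elements, approximating the sum in Eq. (1).

def confusion_loss(h_word_1, h_word_2, h_sent_1, h_sent_2):
    return F.mse_loss(h_word_1, h_word_2) + F.mse_loss(h_sent_1, h_sent_2)

def joint_loss(lm_loss, h_w1, h_w2, h_s1, h_s2, lam=10.0):
    # Eq. (3): task-adaptive LM loss plus weighted confusion loss
    return lm_loss + lam * confusion_loss(h_w1, h_w2, h_s1, h_s2)

lm_loss = torch.tensor(2.3)                       # placeholder L_ta
h_w1, h_w2 = torch.randn(2, 1024), torch.randn(2, 1024)
h_s1, h_s2 = torch.randn(1024), torch.randn(1024)
loss = joint_loss(lm_loss, h_w1, h_w2, h_s1, h_s2)
```

Setting lam to 10 in the usage above mirrors the value selected in Section 3.4.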
Phonemic Refinement Module
Phonemes are the smallest pronunciation units in speech, and the phoneme sequence of each word can represent its acoustic information to some extent. Therefore, we design PRM to refine and enrich the calibrated representation by injecting phonemic information into the modeling process. First, each word $w_t$ in the ASR output $x_{asr}$ is transformed into a phoneme sequence $P_t = \langle p_1, p_2, \cdots, p_{N_{w_t}} \rangle$ via a grapheme-to-phoneme (G2P) conversion algorithm [23], which relies heavily on the pronunciation dictionary. In this paper we adopt the CMU pronunciation dictionary [24] constructed by Carnegie Mellon University, which includes 39 types of phonemes and covers more than 130,000 words as well as their corresponding pronunciation information. Figure 2 also shows the process of converting the word "find" into the phoneme sequence "F,AY1,N,D". Note that vowels like "AY" carry a stress marker indicating the stress type: generally, "0" represents no stress, while "1" and "2" represent primary and secondary stress respectively, which can provide fine-grained acoustic information for intent detection. Then, each phoneme sequence is mapped into the embedding space and further encoded by a BiLSTM layer as follows:
$$H_{w_t} = \mathrm{BiLSTM}\left(\left[e_{p_1}, e_{p_2}, \ldots, e_{p_{N_{w_t}}}\right]\right) \quad (4)$$
where $e_{p_i}$ denotes the embedding of the phoneme $p_i$ and $H_{w_t}$ represents the hidden representation matrix for the word $w_t$.
In the end, average pooling is conducted on these hidden representation matrices to obtain the final acoustic embedding of the whole sentence $x_{asr}$:

$$H^{acoustic}_{x_{asr}} = \left[h_{w_1}, h_{w_2}, \ldots, h_{w_N}\right], \quad (5)$$

$$h_{w_t} = \mathrm{Average}\left(H_{w_t}\right). \quad (6)$$
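PRM can be sketched as follows using the g2p_en package, which implements the CMU-dictionary-based G2P conversion cited above; the phoneme-to-id mapping is simplified here for illustration, and the dimensions are reduced assumptions rather than the exact configuration:

```python
import torch
import torch.nn as nn
from g2p_en import G2p  # CMU-dict-based G2P; install via `pip install g2p_en`

# A hedged sketch of PRM: word -> phonemes -> embeddings -> BiLSTM,
# then average pooling per word (Eq. 6). Phonemes are hashed to ids
# here only for illustration; a real vocabulary would be used instead.

g2p = G2p()

class PRM(nn.Module):
    def __init__(self, n_phonemes=100, dim=50):
        super().__init__()
        self.emb = nn.Embedding(n_phonemes, dim)
        self.lstm = nn.LSTM(dim, dim // 2, bidirectional=True,
                            batch_first=True)

    def forward(self, phoneme_ids):              # (1, N_wt)
        h, _ = self.lstm(self.emb(phoneme_ids))  # (1, N_wt, dim)
        return h.mean(dim=1)                     # average pooling

prm = PRM()
phonemes = g2p("find")                           # ['F', 'AY1', 'N', 'D']
ids = torch.tensor([[hash(p) % 100 for p in phonemes]])
h_word = prm(ids)                                # acoustic embedding of "find"
```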
Intent Detection Module
As shown in Figure 2, the intent detection module is decoupled from the other modules, making it possible to incorporate any existing text-based intent detection model into our proposed CR-ID framework. We therefore adopt a self-attentive intent classification model inspired by [8] as the ID module. The input of the ID module is the concatenation of the calibrated word embeddings generated by SDCM and the acoustic embeddings generated by PRM. A BiLSTM is used to model the long-term dependencies in the utterance, and the self-attention mechanism is adopted to capture the key information from the calibrated and refined representations. Max pooling is utilized to obtain the final sentence-level representation, which is further fed to a softmax classifier to predict the user's intent.
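A hedged PyTorch sketch of this module is shown below. Note that nn.MultiheadAttention requires the embedding dimension to be divisible by the number of heads, so 6 heads are used here instead of the paper's 8, and the input dimension 1074 (1024-d calibrated ELMo embedding plus 50-d acoustic embedding) is our assumption:

```python
import torch
import torch.nn as nn

# A minimal sketch of the ID module: concatenated calibrated +
# acoustic embeddings -> BiLSTM -> self-attention -> max pooling ->
# linear classifier (softmax is applied inside the loss function).

class IntentDetector(nn.Module):
    def __init__(self, in_dim=1074, n_intents=7, hid=300, heads=6):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hid // 2, bidirectional=True,
                            batch_first=True)
        self.attn = nn.MultiheadAttention(hid, heads, batch_first=True)
        self.clf = nn.Linear(hid, n_intents)

    def forward(self, x):                 # x: (batch, seq_len, in_dim)
        h, _ = self.lstm(x)               # BiLSTM over the utterance
        a, _ = self.attn(h, h, h)         # self-attention on hidden states
        pooled, _ = a.max(dim=1)          # max pooling over time steps
        return self.clf(pooled)           # intent logits

model = IntentDetector()                  # SNIPS has 7 intent classes
logits = model(torch.randn(4, 20, 1074))  # (4, 7)
```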
Experiments
Dataset
In [20], the authors used three datasets, namely SNIPS, ATIS and Smartlight, for their experiments. However, both ATIS (with confusion words) and Smartlight are not publicly available because of copyright issues. Therefore, for a fair comparison with the method proposed in [20], we directly use their released version of the SNIPS dataset to conduct all the experiments. Different from the original SNIPS dataset, [20] extracted confusion words via the two strategies introduced in Section 2.1 and added them to the original dataset, which makes it convenient for researchers to reproduce their results and improve on them. Readers can check the details of this dataset at https://github.com/MiuLab/SpokenVec.
Baselines and Implementation Details
A number of ASR-error robust ID models have been proposed in the past few years. We do not compare with all of them because many previous methods are not directly comparable due to the use of different model architectures. Hence, we select SpokenVec [20] and construct several baselines that are fair to compare with (using the same information, similar architectures, etc.). Specifically, we use the intent detection module (introduced in Section 2.3) as the base model, because the self-attentive intent detection model has already achieved competitive performance on the SNIPS dataset according to [25]. We then incorporate it with different word embedding techniques as the baselines.

Static Word Embedding. We use three pre-trained static word embeddings, Word2Vec [13], GloVe [26] and FastText [27], as the embedding matrix to help encode sentences. We also use a randomly initialized embedding matrix as a comparison.

Contextual Word Embedding. We evaluate two pretrained language models, ELMo and BERT, to obtain contextual word embeddings, and each LM is evaluated with fixed and unfixed parameters respectively.

Implementation Details. For the ID base model, the dimensions of the BiLSTM and self-attention layers are all set to 300, the number of heads is set to 8, and the batch size is 64. All ID base models are trained on the manually transcribed training set for 50 epochs using the Adam optimizer with a learning rate of 3e-4, and then tested on the manual transcriptions and ASR outputs respectively. For SDCM, for a fair comparison with SpokenVec, we follow its setting and adopt ELMo as the pretrained LM. We train SDCM for 10 epochs with a batch size of 32 and an Adam learning rate of 1e-4. For PRM, the dimensions of the embedding and BiLSTM hidden layers are all set to 50, and PRM is jointly trained with the ID base model.
Results and Analysis
The overall performance of the baselines and CR-ID is summarized in Table 1. Note that, as introduced in Section 2.1 and Section 3.1, the confusion word pairs can be generated by minimum edit distance (MED) or a word confusion network (WCN); hence, for SpokenVec and our proposed CR-ID, we also report the performance variations using different confusion extraction methods in Table 1. Some observations from Table 1: when testing on the manual transcriptions, the performance scores of all methods are very close, and the methods based on contextual word embeddings are slightly better than their static counterparts. However, the performance of all baselines except SpokenVec drops sharply on the ASR output, demonstrating the necessity of reducing the negative impacts caused by the semantic drift problem. CR-ID (WCN) achieves the best performance in terms of both Accuracy and Macro-F1. Specifically, compared with the best static word embedding based baseline, the Accuracy and Macro-F1 of CR-ID (WCN) are increased by 12.39% and 11.96% respectively; compared with the best contextual word embedding based baseline, the performance is improved by 9.5% and 9% respectively; and even compared with SpokenVec, a very strong baseline, the performance gains still reach 1.99% and 1.86% respectively, demonstrating the effectiveness of our proposed CR-ID framework.
Ablation Study
In order to figure out the contribution of the different modules in our proposed CR-ID, we conduct an ablation study for each plug-and-play module, as shown in Table 2: 1) CR-ID w/o acoustic embedding, which only uses SDCM for calibration; 2) CR-ID w/o confusion-aware finetuning strategy, where only task-adaptive finetuning and PRM are retained; 3) CR-ID w/o task-adaptive finetuning strategy, where only confusion-aware finetuning and PRM are retained. We observe that all these components contribute to performance improvements when testing on the ASR outputs. Specifically, the task-adaptive finetuning strategy contributes the most to the robustness of the ID module: when this strategy is removed from CR-ID, the accuracy and Macro-F1 decrease by 9.42% and 8.69% respectively. The confusion-aware finetuning strategy takes second place; without it, the accuracy and Macro-F1 decrease by 2.94% and 2.82% respectively. Without the acoustic embedding, the accuracy and Macro-F1 decrease by 0.99% and 0.96% respectively. Therefore, the combination of SDCM and PRM can significantly improve the robustness of the ID module to ASR errors.
In addition, we also explore the effect of different distance functions in the confusion loss on the model's performance. Here we select three classical distance functions (Equations 7, 8 and 9) to substitute for the MSE in Equation 1, and the results are shown in Table 3. We observe that MSE achieves the best performance under most experimental settings, which is why we finally chose the MSE distance for the confusion-aware finetuning strategy.
$$\mathcal{L}_{cos} = \frac{1}{|C|} \sum_{c \in C} \left[ \sum_{i=0}^{1} \left( 1 - \frac{h^{x_1}_{t_1,i} \cdot h^{x_2}_{t_2,i}}{\left\| h^{x_1}_{t_1,i} \right\| \left\| h^{x_2}_{t_2,i} \right\|} \right) + \left( 1 - \frac{h^{x_1} \cdot h^{x_2}}{\left\| h^{x_1} \right\| \left\| h^{x_2} \right\|} \right) \right] \quad (7)$$

$$\mathcal{L}_{l1} = \frac{1}{|C|} \sum_{c \in C} \left[ \sum_{i=0}^{1} \left\| h^{x_1}_{t_1,i} - h^{x_2}_{t_2,i} \right\|_1 + \left\| h^{x_1} - h^{x_2} \right\|_1 \right] \quad (8)$$

$$\mathcal{L}_{triplet} = \frac{1}{|C|} \sum_{c \in C} \left[ \sum_{i=0}^{1} \mathrm{triplet}\!\left(h^{x_1}_{t_1,i}, h^{x_2}_{t_2,i}, h^{x_3}_{t_3,i}\right) + \mathrm{triplet}\!\left(h^{x_1}, h^{x_2}, h^{x_3}\right) \right] \quad (9)$$

where $\mathrm{triplet}(a, p, n) = \max\!\left(d(a_i, p_i) - d(a_i, n_i) + \mathrm{margin},\, 0\right)$ and $d(x_i, y_i) = \left\| x_i - y_i \right\|_p$.
Hyperparameter Sensitivity
In this section, we analyze the effect of the balancing hyperparameter λ (in Equation 3) on the performance of CR-ID. The results are illustrated in Figure 3. It can be observed that, for both CR-ID (MED) and CR-ID (WCN), when λ increases from 0.1 to 10 the model performance is slightly improved, but when λ gets larger (e.g. larger than 15) the performance of the model begins to decline. Therefore, for all CR-ID related experiments, we set λ to 10 to better balance the impact of task-adaptive finetuning and confusion-aware finetuning on model optimization.
Conclusion
In this paper, we propose a novel and agile framework, called CR-ID, for ASR-error robust intent detection. Two plug-and-play modules, namely SDCM and PRM, are designed to calibrate both word-level and sentence-level representations of ASR outputs and to utilize phonemic information to refine and enrich the calibrated representations. Experimental results on the SNIPS dataset show that our proposed CR-ID outperforms all baseline models on ASR outputs, demonstrating that our proposed framework can effectively reduce the impact of semantic drift on existing text-based intent detection models and boost their robustness to ASR errors.
Figure 1: Semantic drift problem (left) and the comparison of different ID models' performance on manual transcriptions and ASR outputs (right).

Figure 2: Overview of the proposed CR-ID framework.

Figure 3: Parameter sensitivity of λ.
Table 1: Overall performance on manual and ASR output. Bold scores represent the highest results of all methods.

Model | Manual ACC% | Manual Macro-F1% | ASR ACC% | ASR Macro-F1%
Random | 96.87 | 96.91 | 78.60 | 79.87
GloVe | 97.15 | 97.18 | 77.12 | 77.70
Word2Vec | 96.73 | 96.81 | 76.14 | 77.02
FastText | 97.01 | 97.00 | 79.15 | 79.48
BERT (w/o Fine-tuning) | 96.43 | 96.44 | 80.40 | 80.81
BERT (w Fine-tuning) | 97.59 | 97.70 | 82.01 | 82.80
ELMo (w/o Fine-tuning) | 96.70 | 96.77 | 80.60 | 81.61
ELMo (w Fine-tuning) | 97.28 | 97.31 | 81.69 | 82.24
SpokenVec (MED) | 97.01 | 97.21 | 88.52 | 89.23
SpokenVec (WCN) | 97.04 | 97.12 | 89.55 | 89.97
CR-ID (MED) | 97.42 | 97.50 | 90.85 | 91.32
CR-ID (WCN) | 97.14 | 97.23 | 91.54 | 91.83
Table 2: Ablation study.

Model | Manual ACC% | Manual Macro-F1% | ASR ACC% | ASR Macro-F1%
Full | 97.14 | 97.23 | 91.54 | 91.83
w/o acoustic embedding | 96.71 | 96.81 | 90.55 | 90.87
w/o confusion-aware finetuning strategy | 97.15 | 97.21 | 88.60 | 89.01
w/o task-adaptive finetuning strategy | 97.28 | 97.41 | 82.12 | 83.14
Table 3: Performance comparison of using different distance functions as the confusion loss. The metric is accuracy.

Test data type | Confusion extraction type | Cosine | L1 | MSE | Triplet
ASR | MED | 90.56 | 84.90 | 90.85 | 88.88
ASR | WCN | 90.82 | 88.26 | 91.54 | 90.10
Manual | MED | 96.55 | 92.71 | 97.42 | 96.40
Manual | WCN | 97.30 | 96.86 | 97.14 | 96.73
[1] G. Tür, L. Deng, D. Hakkani-Tür, and X. He, "Towards deeper understanding: Deep convex networks for semantic utterance classification," in ICASSP. IEEE, 2012, pp. 5045-5048.
[2] P. Xu and R. Sarikaya, "Convolutional neural network based triangular CRF for joint intent detection and slot filling," in ASRU. IEEE, 2013, pp. 78-83.
[3] C. Zhang, W. Fan, N. Du, and P. S. Yu, "Mining user intentions from medical queries: A neural network based heterogeneous jointly modeling approach," in WWW. ACM, 2016, pp. 1373-1384.
[4] S. V. Ravuri and A. Stolcke, "Recurrent neural network and LSTM models for lexical utterance classification," in INTERSPEECH. ISCA, 2015, pp. 135-139.
[5] B. Liu and I. Lane, "Attention-based recurrent neural network models for joint intent detection and slot filling," in INTERSPEECH. ISCA, 2016, pp. 685-689.
[6] Y. Wang, Y. Shen, and H. Jin, "A bi-model based RNN semantic frame parsing model for intent detection and slot filling," in NAACL-HLT (2). Association for Computational Linguistics, 2018, pp. 309-314.
[7] J. Hu, G. Wang, F. H. Lochovsky, J. Sun, and Z. Chen, "Understanding user's query intent with Wikipedia," in WWW. ACM, 2009, pp. 471-480.
[8] S. Yolchuyeva, G. Németh, and B. Gyires-Tóth, "Self-attention networks for intent detection," in RANLP. INCOMA Ltd., 2019, pp. 1373-1379.
[9] M. Chen, J. Zeng, and J. Lou, "A self-attention joint model for spoken language understanding in situational dialog applications," CoRR, vol. abs/1905.11393, 2019.
[10] Q. Chen, Z. Zhuo, and W. Wang, "BERT for joint intent classification and slot filling," CoRR, vol. abs/1902.10909, 2019.
[11] J. Devlin, M. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," in NAACL-HLT (1). Association for Computational Linguistics, 2019, pp. 4171-4186.
[12] M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer, "Deep contextualized word representations," in NAACL-HLT. Association for Computational Linguistics, 2018, pp. 2227-2237.
[13] T. Mikolov, K. Chen, G. Corrado, and J. Dean, "Efficient estimation of word representations in vector space," in ICLR (Workshop Poster), 2013.
[14] Y. Chen, R. Price, and S. Bangalore, "Spoken language understanding without speech recognition," in ICASSP. IEEE, 2018, pp. 6189-6193.
[15] P. Haghani, A. Narayanan, M. Bacchiani, G. Chuang, N. Gaur, P. J. Moreno, R. Prabhavalkar, Z. Qu, and A. Waters, "From audio to semantics: Approaches to end-to-end spoken language understanding," in SLT. IEEE, 2018, pp. 720-726.
[16] L. Lugosch, M. Ravanelli, P. Ignoto, V. S. Tomar, and Y. Bengio, "Speech model pre-training for end-to-end spoken language understanding," in INTERSPEECH. ISCA, 2019, pp. 814-818.
[17] P. Wang, L. Wei, Y. Cao, J. Xie, and Z. Nie, "Large-scale unsupervised pre-training for end-to-end spoken language understanding," in ICASSP. IEEE, 2020, pp. 7999-8003.
[18] M. Kim, G. Kim, S. Lee, and J. Ha, "ST-BERT: Cross-modal language model pre-training for end-to-end spoken language understanding," in ICASSP. IEEE, 2021, pp. 7478-7482.
[19] Y. Chung, C. Zhu, and M. Zeng, "SPLAT: Speech-language joint pre-training for spoken language understanding," in NAACL-HLT. Association for Computational Linguistics, 2021, pp. 1897-1907.
[20] C. Huang and Y. Chen, "Learning ASR-robust contextualized embeddings for spoken language understanding," in ICASSP. IEEE, 2020, pp. 8009-8013.
[21] Q. Chen, W. Wang, and Q. Zhang, "Pre-training for spoken language understanding with joint textual and phonetic representation learning," in Interspeech. ISCA, 2021, pp. 1244-1248.
[22] J. Howard and S. Ruder, "Universal language model fine-tuning for text classification," in ACL (1). Association for Computational Linguistics, 2018, pp. 328-339.
[23] J. Park and Kyubyong Kim, "g2pE," https://github.com/Kyubyong/g2p, 2019.
[24] J. Kominek and A. W. Black, "The CMU Arctic speech databases," in SSW. ISCA, 2004, pp. 223-224.
[25] L. Qin, T. Xie, W. Che, and T. Liu, "A survey on spoken language understanding: Recent advances and new frontiers," in IJCAI. ijcai.org, 2021, pp. 4577-4584.
[26] J. Pennington, R. Socher, and C. D. Manning, "GloVe: Global vectors for word representation," in EMNLP. ACL, 2014, pp. 1532-1543.
[27] A. Joulin, E. Grave, P. Bojanowski, M. Douze, H. Jégou, and T. Mikolov, "FastText.zip: Compressing text classification models," CoRR, vol. abs/1612.03651, 2016.
| [
"https://github.com/MiuLab/SpokenVec.",
"https://github.com/Kyubyong/"
] |
[
"Modeling Dynamic Relationships Between Characters in Literary Novels",
"Modeling Dynamic Relationships Between Characters in Literary Novels"
] | [
"Snigdha Chaturvedi \nDepartment of Computer Science\nUniversity of Maryland\nCollege Park\n",
"Shashank Srivastava \nDepartment of Computer Science\nCarnegie Mellon University\n\n",
"Hal Daumé Iii \nDepartment of Computer Science\nUniversity of Maryland\nCollege Park\n",
"Chris Dyer \nDepartment of Computer Science\nCarnegie Mellon University\n\n"
] | [
"Department of Computer Science\nUniversity of Maryland\nCollege Park",
"Department of Computer Science\nCarnegie Mellon University\n",
"Department of Computer Science\nUniversity of Maryland\nCollege Park",
"Department of Computer Science\nCarnegie Mellon University\n"
] | [] | Studying characters plays a vital role in computationally representing and interpreting narratives. Unlike previous work, which has focused on inferring character roles, we focus on the problem of modeling their relationships. Rather than assuming a fixed relationship for a character pair, we hypothesize that relationships are dynamic and temporally evolve with the progress of the narrative, and formulate the problem of relationship modeling as a structured prediction problem. We propose a semisupervised framework to learn relationship sequences from fully as well as partially labeled data. We present a Markovian model capable of accumulating historical beliefs about the relationship and status changes. We use a set of rich linguistic and semantically motivated features that incorporate world knowledge to investigate the textual content of narrative. We empirically demonstrate that such a framework outperforms competitive baselines. | null | [
"https://arxiv.org/pdf/1511.09376v1.pdf"
] | 6,810,513 | 1511.09376 | cb7bf5e596590156173176d511404b9ae175d2cb |
Modeling Dynamic Relationships Between Characters in Literary Novels
Snigdha Chaturvedi
Department of Computer Science
University of Maryland
College Park
Shashank Srivastava
Department of Computer Science
Carnegie Mellon University
Hal Daumé Iii
Department of Computer Science
University of Maryland
College Park
Chris Dyer
Department of Computer Science
Carnegie Mellon University
Modeling Dynamic Relationships Between Characters in Literary Novels
Studying characters plays a vital role in computationally representing and interpreting narratives. Unlike previous work, which has focused on inferring character roles, we focus on the problem of modeling their relationships. Rather than assuming a fixed relationship for a character pair, we hypothesize that relationships are dynamic and temporally evolve with the progress of the narrative, and formulate the problem of relationship modeling as a structured prediction problem. We propose a semisupervised framework to learn relationship sequences from fully as well as partially labeled data. We present a Markovian model capable of accumulating historical beliefs about the relationship and status changes. We use a set of rich linguistic and semantically motivated features that incorporate world knowledge to investigate the textual content of narrative. We empirically demonstrate that such a framework outperforms competitive baselines.
Introduction
The field of computational narrative studies focuses on algorithmically understanding, representing and generating stories. Most research in this field focuses on modeling the narrative from the perspective of (1) events or (2) characters.
Popular events-based approaches include scripts [25,24], plot units [15,21,14,10], temporal event chains or schemas [5,6], and the more recent bags of related events [22,4,7]. The alternate perspective attempts to understand stories from the viewpoint of characters and relationships between them. This perspective explains the set of observed actions using characters' personas or roles and the expected behavior of the character in that role [27,2,3,11]. Recent work has also focused on constructing social networks including signed social networks to model the relationships between individual characters [1,12,17].
The work presented in this paper aligns with the second perspective. We address the problem of modeling relationships between characters in literary fiction, specifically novels. The existing work mentioned above models each character as assuming a single narrative role, and these roles define the relationships between characters and also govern their actions. While such a simplified assumption provides a good general overview of the narrative, it is not sufficient to explain all events in the narrative. We believe that in most narratives, relationships between characters are not static but evolve as the novel progresses. For example, consider the relationship between Tom and Becky depicted in Fig. 1, which shows an excerpt from the summary 1 of The Adventures of Tom Sawyer. For most of the narrative (and its summary), the characters are participants in a romantic relationship, which explains most, but not all, of their mutual behavior. However, we can observe that their relationship was not static but evolving, driving the nature of the characters' actions. In this particular case, the characters presumably start as lovers (sentence S1 in the figure), which is hinted at by (and explains) their becoming engaged. The relationship sours when Tom reveals his previous love interest (S2 and S3). However, later in the narrative they reconcile (S4 and S5). A model that assumes a fixed romantic relationship between the characters would fail to explain their behaviors during the phase when their relationship was under stress.
Therefore, we assume that the relationship between characters evolves with the progress of the novel and model it as a sequence of latent variables denoting the relation state. In this work, we take a coarse-grained view and model relation states as binary variables (roughly indicating a cooperative/non-cooperative relation at different points in the narrative). For instance, in Fig. 1, the relationship between Tom and Becky can be represented by the sequence ⟨cooperative, non-cooperative, cooperative⟩. Given a narrative and a pair of characters appearing in it, we address the task of learning relationship sequences. The narrative fragment of interest for us is represented by the set of sentences in which the two characters of interest appear together, arranged in their order of occurrence in the narrative.
To address this problem we propose a semi-supervised segmentation framework for training on a collection of fully labeled and partially labeled sequences of sentences from narrative stories. The structured prediction model in the proposed framework attempts to model the 'narrative flow' in the sequence of sentences. Following previous work [23,3], it incorporates the linguistic and semantic information present in the sentences by tracking events and states associated with the characters of interest and enhances them with world knowledge [13,19,28]. We demonstrate the strength of our structured model by comparing it against an unstructured baseline that treats individual sentences independently.

Our main contributions are as follows:
• We formulate the novel problem of relationship modeling in narrative text as a structured prediction task instead of a categorical binary or multi-class classification problem.
• We propose rich linguistic features that incorporate semantic and world knowledge.
• We present a semi-supervised framework to incorporate the narrative structure of the text and empirically demonstrate that it outperforms competitive baselines.
Relationship Prediction Model
In this section, we describe our relationship-modeling framework in detail. Given the narrative text in the form of a sequence of sentences (in which the two characters of interest appear together), x = ⟨x_1, x_2, . . . , x_l⟩, we address the problem of segmenting it into non-overlapping and semantically meaningful segments that represent continuities in relationship status. Each segment is labeled with a single relationship status r_j ∈ {−1, +1}, hence yielding a relationship sequence r = ⟨r_1, r_2, . . . , r_k⟩, k ≤ l. Our approach uses a second-order Markovian latent variable model for segmentation that is embedded in a semi-supervised framework to utilize varying levels of labeling in the data. We now describe our segmentation model and the semi-supervised framework in detail.
Segmentation Model
This model forms the core of our framework. It assumes that each sentence in the sequence is associated with a latent state that represents its relationship status. While making this assignment, it analyzes the content of individual sentences using a rich feature set and simultaneously models the flow of information between the states by treating the prediction task as a structured problem. We utilize a second-order Markov model that can remember a long history of the relationship between the two characters and collectively maximizes the following linear scores for individual sequences:
score = Σ_i w · φ(x, y_i, y_{i−1}, y_{i−2})    (1)
where x is the input sequence and y_i denotes the latent state assignment of its i-th sentence to a relationship segment. The individual y_i's collectively yield the relationship sequence r (by collapsing consecutive occurrences of identical states). φ represents features at the i-th sentence that depend on the current state, y_i, and the previous two states, y_{i−1} and y_{i−2}, and w represents their weights. The second-order Markov assumption of our features ensures continuity and coherence of the behavior of the two characters within individual relationship segments. The linear segmentation model proposed here is trained using an averaged structured perceptron [8]. For inference, it uses a Viterbi-based dynamic programming algorithm. The extension of Viterbi to incorporate second-order constraints is straightforward: we replace the reference to a state (in the state space Y) by a reference to a state pair (in the two-fold product space Y × Y). Note that this precludes certain transitions while computing the Viterbi matrix, viz., if the state pair at any point t in the narrative is of the form (s_i, s_j), then the set of state-pair candidates at t + 1 only consists of pairs of the form (s_j, s_k). Incorporating these constraints, we compute the Viterbi matrix and obtain the highest-scoring state sequence by backtracking as usual.
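To make the decoding concrete, the following is a minimal sketch of the second-order Viterbi recursion over state pairs described above. It is illustrative rather than the authors' implementation: the feature function is reduced to precomputed per-position emission scores, and all transition features are assumed to be folded into a single trigram score table.

import itertools

def viterbi_second_order(emissions, trans):
    # emissions[i][s]: score of assigning state s to sentence i.
    # trans[(a, b, c)]: score of the transition y_{i-2}=a, y_{i-1}=b, y_i=c.
    n = len(emissions)
    states = list(range(len(emissions[0])))
    if n == 1:
        return [max(states, key=lambda s: emissions[0][s])]
    # dp[(a, b)]: best score of a prefix ending with the state pair (a, b).
    dp = {(a, b): emissions[0][a] + emissions[1][b]
          for a, b in itertools.product(states, states)}
    back = []
    for i in range(2, n):
        new_dp, new_back = {}, {}
        for b, c in itertools.product(states, states):
            # Only pairs of the form (a, b) may precede the pair (b, c).
            a = max(states, key=lambda x: dp[(x, b)] + trans[(x, b, c)])
            new_dp[(b, c)] = dp[(a, b)] + trans[(a, b, c)] + emissions[i][c]
            new_back[(b, c)] = a
        dp, back = new_dp, back + [new_back]
    # Recover the best sequence by backtracking over state pairs.
    b, c = max(dp, key=dp.get)
    path = [b, c]
    for pointers in reversed(back):
        path.insert(0, pointers[(path[0], path[1])])
    return path

With binary relation states, states is simply [0, 1] (for non-cooperative and cooperative), and trans would be derived from the learned weights of the transition features.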
Semi-supervised Framework
The above segmentation model requires labeled pairs (x, y) for training. However, given the nature of the task, we acknowledge that obtaining a huge dataset of labeled sequences can be time consuming as well as expensive. On the other hand, it might be more convenient to obtain partially labeled data, especially in cases in which only a subset of the sentences of a sequence have an obvious relationship state membership. We, therefore, propose a semi-supervised framework which can leverage partial supervision for training the segmentation model. This framework assumes that the training dataset consists of two types of labeled sequences: fully labeled, in which the complete state sequence is observed (y_i ∀i ∈ {1 . . . l}), and partially labeled, in which only some sentences of the sequence are annotated with y_i, i.e., for i ∈ I with I ⊂ {1 . . . l}.
This framework uses a two-step algorithm (Algorithm 1) to iteratively refine the feature weights, w, of the segmentation model. In the first step, it uses the existing weights, w_n, to assign state sequences to the partially labeled instances. For state assignment we use a constrained version of the Viterbi algorithm that obtains the best possible state sequence that agrees with the partial ground truth. In other words, for the annotated sentences of a partially annotated sequence, it precludes all state assignments except the given ground truth, but segments the rest of the sequence optimally under these constraints. In the second step, we train the structured-perceptron-based segmentation model, using the ground truth and the state assignments obtained in the previous step, to obtain the refined weights w_{n+1}. Similar approaches have been used in the past [26].
Algorithm 1: Training algorithm for the semi-supervised framework
1: Input: fully labeled sequences F and partially labeled sequences P
2: N: number of iterations
3: Output: weights w
4: Initialization: initialize w randomly
5: for n = 1 to N do
6:   ŷ_j = argmax_{y_j} [w_n · φ(x, y)_j] ∀j ∈ P, such that ŷ_j agrees with the partially annotated states (ground truth)
7:   w_{n+1} = AveragedStructuredPerceptron({(x, ŷ)_j} ∀j ∈ {P, F})
8: end for
9: return w
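As a complement to the listing above, here is a minimal sketch of the constrained decoding used in step 6 and of the outer refinement loop. It is a simplification and not the paper's code: it uses a first-order model for brevity (the paper's model is second order), and the perceptron update is left abstract.

def constrained_viterbi(emissions, trans, observed):
    # observed: dict mapping a sentence position to its annotated state; those
    # positions are forced to the ground truth, the rest are decoded freely.
    n, k = len(emissions), len(emissions[0])
    NEG = float("-inf")

    def allowed(i, s):
        return observed.get(i, s) == s

    dp = [[NEG] * k for _ in range(n)]
    back = [[0] * k for _ in range(n)]
    for s in range(k):
        if allowed(0, s):
            dp[0][s] = emissions[0][s]
    for i in range(1, n):
        for s in range(k):
            if not allowed(i, s):
                continue  # precluded by the partial annotation
            prev = max(range(k), key=lambda p: dp[i - 1][p] + trans[p][s])
            dp[i][s] = dp[i - 1][prev] + trans[prev][s] + emissions[i][s]
            back[i][s] = prev
    s = max(range(k), key=lambda q: dp[n - 1][q])
    path = [s]
    for i in range(n - 1, 0, -1):
        s = back[i][s]
        path.append(s)
    return path[::-1]

# Outer loop of Algorithm 1 (schematic):
#   for n in range(N):
#       imputed = [(x, constrained_viterbi(score(x, w), trans(w), partial))
#                  for x, partial in P]
#       w = averaged_structured_perceptron(F + imputed)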
3 Feature Engineering

We now describe the features used by our segmentation model. We first pre-processed the text of the various novel summaries to obtain part-of-speech tags and dependency parses, to identify major characters, and to perform character-name clustering (assembling 'Tom', 'Tom Sawyer', etc.) using the BookNLP pipeline [3]. However, the pipeline, designed for long text documents involving multiple characters, was slightly conservative while resolving coreferences. We augmented its output using coreferences obtained from the Stanford CoreNLP system [20]. We also obtained a frame-semantic parse of the text using SEMAFOR [9].
After pre-processing, given two characters and a sequence of pre-processed sentences in which the two appeared together, we extracted the following features for individual sentences.
Content features
These features help the model in characterizing the textual content of the sentences. They are based on the following general template, which depends on the sentence, x_j, and its state, y_j: φ(x_j, y_j) = α if the current state is y_j; 0 otherwise, where α is one of the feature values F1 to F33 defined below.
1. Actions based: These features are motivated by the insight from Vladimir Propp's structuralist narrative theory [23] that characters have a 'sphere of actions'. We model the actions affecting the two characters by identifying all verbs in the sentence, their agents (using 'nsubj' and 'agent' dependency relations) and their patients (using 'dobj' and 'nsubjpass' relations). This information was extended using verbs conjunct to each other via 'conj'. We also used the 'neg' relation to determine the negation status of each verb. Based on this information we extracted the following features (a minimal sketch of their extraction follows this feature list):
• Are Team [F1]: This feature models whether the two characters acted as a team. It is a binary feature indicating if the two characters were agents (or patients) of a verb together.
• Acts Together [F2-F7]: These features explicitly model the behavior of the two characters towards each other using verbs for which one of the characters was the agent and the other was the patient. These six numeric features look at the positive/negative connotation [13], sentiment [19], and prior polarity [28] of the verbs (while considering their negation status).
• Surrogate Acts Together [F8-F13]: The above features are designed to be high-precision features that directly analyze the nature of the actions of the characters towards each other. However, their recall might suffer from the limitations of the NLP pre-processing pipeline. For instance, a character might be an implicit/subtle patient of an action done by the other character. For example, Tom is not the direct patient of shunned in S3 in Fig. 1. To include such cases we define a set of six surrogate features that, like before, consider the positive and negative connotations, sentiments, and prior polarities of verbs (while considering negation). However, only those verbs are considered which have one of the characters as either the agent or the patient, and which occur in sentences that did not contain any other character apart from the two of interest.
2. Adverb based: These features model the narrator's bias in describing the characters' actions by analyzing the adverbs modifying the verbs identified in the 'Actions based' features (using 'advmod' dependency relations).
3. Lexical [F26-F27]: These bag-of-words style features analyze the connotations of all words (excluding stopwords) occurring between pairs of mentions of the two characters in the sentence. E.g., in S5 in Fig. 1 there is one pair of mentions of the two characters, ⟨Tom, Becky⟩, and the words occurring between the two mentions are "goes on a picnic to McDougal's cave with" (stopwords included for readability). This extraction is also illustrated in the sketch after this list.
4. Semantic Parse based: These features incorporate information from a FrameNet-style semantic parse of the sentence. To design these features, we manually compiled lists of frames (along with the corresponding relevant frame-elements) with positive (or negative) connotations, depending on whether they are indicative of a positive (or negative) relationship between participants (identified in the corresponding frame-elements). Our set of lists also consisted of ambiguous frames, like 'cause bodily experience', in which case the exact connotation of the frame was determined on the fly depending on the lexical unit at which that frame fired. Lastly, we had a list of 'Relationship' frames that indicated a familial or professional relationship between participants. Table 1 shows examples of various types of frames and their relevant frame-elements. The complete list is available on the first author's webpage. Based on these lists, we extracted the following two types of features:
• Frames Fired [F28-F30]: Three numeric features counting the number of positive, negative, and 'relationship' frames fired such that at least one of the characters belonged to the relevant frame-element.
• Frames Fired [F31-F33]: Three features counting the number of positive, negative, and 'relationship' frames fired.
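To make the feature extraction concrete, the sketch below illustrates the 'Are Team' [F1] and 'Acts Together' features (collapsed from F2-F7 into two counts), along with the 'Lexical' [F26-F27] features. It is an illustrative simplification, not the authors' code: dependencies are assumed to be available as plain (verb, relation, argument) triples with character mentions already resolved to 'C1'/'C2', and the one-word connotation lexicon is a toy stand-in for the lexicons cited above [13,19,28].

AGENT_RELS = {"nsubj", "agent"}
PATIENT_RELS = {"dobj", "nsubjpass"}
CONNOTATION = {"marry": 1, "forgive": 1, "shun": -1, "snub": -1, "picnic": 1}  # toy
STOPWORDS = {"a", "on", "to", "with", "goes"}  # toy

def action_features(triples, negated):
    # triples: (verb, relation, argument) tuples; negated: set of negated verbs.
    agents, patients = {}, {}
    for verb, rel, arg in triples:
        if rel in AGENT_RELS:
            agents.setdefault(verb, set()).add(arg)
        elif rel in PATIENT_RELS:
            patients.setdefault(verb, set()).add(arg)
    feats = {"are_team": 0, "pos_acts": 0, "neg_acts": 0}
    for verb in set(agents) | set(patients):
        a, p = agents.get(verb, set()), patients.get(verb, set())
        # F1: the two characters are joint agents (or joint patients) of a verb.
        if {"C1", "C2"} <= a or {"C1", "C2"} <= p:
            feats["are_team"] = 1
        # F2-F7 (collapsed): one character acts on the other; negation flips
        # the connotation of the verb.
        if ("C1" in a and "C2" in p) or ("C2" in a and "C1" in p):
            score = CONNOTATION.get(verb, 0) * (-1 if verb in negated else 1)
            if score > 0:
                feats["pos_acts"] += 1
            elif score < 0:
                feats["neg_acts"] += 1
    return feats

def lexical_features(tokens):
    # F26-F27: connotations of non-stopwords between consecutive mentions of
    # the two characters (a simplification of 'pairs of mentions').
    positions = [i for i, t in enumerate(tokens) if t in {"C1", "C2"}]
    feats = {"lex_pos": 0, "lex_neg": 0}
    for left, right in zip(positions, positions[1:]):
        for word in tokens[left + 1:right]:
            if word in STOPWORDS:
                continue
            c = CONNOTATION.get(word, 0)
            if c > 0:
                feats["lex_pos"] += 1
            elif c < 0:
                feats["lex_neg"] += 1
    return feats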
Transition features
While content features assist the model in analyzing the text of individual sentences, these features enable the model to remember relationship histories, thus discouraging it from changing relationship states too frequently within a sequence.
• φ(y_j, y_{j−1}, y_{j−2}) = 1 if the current state is y_j and the previous two states were y_{j−1}, y_{j−2}; 0 otherwise
• φ(y_j, y_{j−1}) = 1 if the current state is y_j and the previous state was y_{j−1}; 0 otherwise
• φ(y_0) = 1 if the state of the first sentence in the sequence is y_0; 0 otherwise
Empirical Evaluation
In this section, we describe our data, baselines, and experimental set-up.
Datasets
Our primary dataset consists of a collection of summaries ('Plot Overviews') of 300 English novels extracted from the 'Literature Study Guides' section of SparkNotes². We pre-processed each of these summaries as described in Sec. 3. Thereafter, we considered all pairs of characters that appeared together in at least five sentences in the respective summaries and arranged these sentences in the order of their appearance in the original summary. We refer to each such sequence of sentences simply as a sequence. This yielded a collection of 634 sequences consisting of a total of 5542 sentences.
As noted in Sec. 2, our semi-supervised framework is trained on fully and partially labeled sequences. For our experiments, we manually annotated a set of 100 sequences (consisting of a total of 792 annotated sentences). Out of these, 50 sequences (402 sentences) were fully annotated with a binary relationship state (for the two characters) for each sentence in the sequence. Consecutive assignments of identical states were automatically collapsed into one to yield a shorter relationship sequence. We also partially annotated a set of another 50 sequences, which involved annotating at least one sentence of the sequence with a binary relationship state. These 50 sequences consisted of about 390 sentences, out of which 201 were annotated. (The dataset is available on the first author's webpage.)
For this work we considered summaries of novels instead of their complete text because we found summaries to be more precise and informative. Due to the inherent third-person narration style of summaries, they contain more explicit evidence about relationships. On the other hand, while processing novel texts directly, one would have to infer this evidence from dialogues and subtle cues. While this is an interesting task in itself, we leave this exploration for future work.
We considered another dataset for evaluating our model. This dataset was collected independently by another set of authors using Amazon Mechanical Turk³. The annotators were shown summaries of novels and a list of characters appearing in the novel. They were then asked to choose pairs of characters and annotate whether the relationship between them changed during the novel (binary annotations). They were also asked other questions, such as the overall nature of their relationship, which were not relevant for our problem. From this annotated dataset, 62 pairs of characters were present in our dataset. Out of these 62, the relationship was annotated as 'changed' (positive class) for 20% of the pairs. This dataset can be viewed as providing additional binary ground-truth information about the sequences in our primary dataset and was used for evaluation only (and not for training). In this paper, we refer to this data as the AMT dataset (a citation will be provided in the final version of this paper).
Baselines and Evaluation Measures
Our primary baseline is an unstructured model that trains flat classifiers using the same content features as used by our framework but treats the individual sentences of the sequences independently. We experimented with various models and report the performances of Logistic Regression and Decision Tree. We compare our model with this baseline to test our hypothesis that the task of relationship sequence prediction is a structured problem, which benefits from remembering the intra-novel history of the relationship between characters.
We also compare our framework, which employs a second-order Markovian segmentation model, with an identical framework which uses a similar segmentation model, albeit with a first-order Markov assumption. This baseline is included to understand the importance of remembering a longer history of the relationship between characters. Also, since a higher-order model can look further back, it will more strongly discourage frequent changes in relationship status within the sequence.

Table 3: Performance comparison on the AMT dataset. The second-order-model-based framework outperforms the one that uses a first-order model and the unstructured models LR and J48.
For comparing the performances of the various models, we use two different performance measures. Our first measure assesses the goodness of the binary relationship state assignments for every sentence in the sequence using the averaged Precisions (P), Recalls (R), and F1-measures (F) of the two states. The second evaluation measure mimics a more practical scenario by evaluating from the perspective of the predicted relationship sequence, r, instead of looking at individual sentences of the sequence. It compares the 'proximity' of the predicted relationship sequence to the ground-truth sequence using Edit Distance and reports the mean Edit Distance (ED) over all test sequences. A better prediction model will be expected to have a smaller value for this Edit Distance based measure (a sketch of this sequence-level measure appears at the end of this section).

Evaluation on the primary dataset

Table 2 compares the 10-fold cross-validation performances of our second-order Semi-supervised Framework (Order 2 Model) with its first-order counterpart (Order 1 Model) and two unstructured baselines: Decision Tree (J48) and Logistic Regression (LR). Since the performance of the semi-supervised frameworks depends on the random initialization of the weights, the figures reported in the table are mean values over 100 random restarts. The number of relationship states, |Y|, was set to 2 to correspond to the gold standard annotations. From the table we can see that the framework with the first-order Markov model yields slightly better performance (a higher averaged F-measure and a lower mean Edit Distance) than the unstructured models (LR and J48). This hints at a need for modeling the flow of information between the sentences of the sequences. The further performance improvement with the second-order model emphasizes this hypothesis and also demonstrates the benefit of remembering a longer history of the characters while making relationship judgments.

Evaluation on the AMT dataset

Table 3 compares the performances of the various models on the AMT dataset using averaged Precision, Recall, and F measures on the binary classification task of change prediction. The problem setting, input sequence format, and training procedure for these models are the same as above. However, the models produce structured output (relationship sequences) that needs to be converted to the binary output of the change prediction task. We do this simply by predicting the positive class (change occurred) if the output relationship sequence contained at least one change. We can see that while the performance of the framework using the first-order model is similar to that of the baseline LR, the second-order model shows a considerable improvement in performance. A closer look at the F measures of the two classes (not reported due to space constraints) revealed that while the performance on the positive class was similar for all the models (except J48, which was lower), the performance on the negative class (no change) was much higher for the structured models (56.0 for LR versus 57.4 and 67.8 for the first- and second-order models, respectively). This might have happened because the unstructured model looks at independent sentences and cannot incorporate historical evidence, so it is the least conservative in predicting a change, which might have resulted in low recall on the negative class.
The structured models, on the other hand, look at previous states and hence can better learn to make coherent state predictions.
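To make the sequence-level measure concrete, the sketch below collapses consecutive identical state assignments into a relationship sequence and scores it against the gold sequence with Levenshtein edit distance. It is illustrative only, not the authors' evaluation code.

from itertools import groupby

def collapse(states):
    # E.g. [+1, +1, -1, -1, +1] -> [+1, -1, +1].
    return [s for s, _ in groupby(states)]

def edit_distance(a, b):
    # Standard Levenshtein distance with a rolling one-row table.
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

def mean_edit_distance(predicted, gold):
    dists = [edit_distance(collapse(p), collapse(g))
             for p, g in zip(predicted, gold)]
    return sum(dists) / len(dists)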
Literature Survey
The work presented in this paper is most closely related to the character-centric methods in the computational narrative domain. [2] presented two latent variable models for learning personas in summaries of films by incorporating the events that affect the characters. In their subsequent work [3], they automatically infer character personas in English novels. Similarly, [27] extract character roles from unannotated folk tales based on their actions. Unlike our work, these approaches do not explicitly model the relationship between characters (though an end user can manually infer the nature of the relationship between them based on the definitions of the persona types, if available).
On the other hand, previous work has also focused on constructing social networks from text, though the interpretation of the links between people varies according to the goals. For example, [12] analyzed dialogue interactions between characters of British novels and serials to construct their social networks and used them to test literary theories about such communities. Their goals required them to model the edges of the network using the 'volume' of interactions rather than the 'nature' of relationships. [1] focused on analyzing the direction of social events to construct social networks from unstructured text. They also do not model the polarity of relationships. However, they emphasized the importance of using dynamic networks to understand their text. [16] presented a method to infer the identities of speakers and use them to construct a social network showing familial or social relationships. Most of these approaches involving social networks used them to identify positive relationships between people. There has also been some interest in modeling both positive and negative relationships. [18] proposed signed social networks to model both kinds of relationships, though their work is in the general social media domain. More recently, [17] analyzed movie scripts to construct a signed social network depicting the formality of relationships between movie characters. Apart from the domain of application, our work differs from these in the sense that we model the 'polarity' of relationships and do so in a dynamic fashion.
Conclusion and Discussion
In this paper we have addressed the problem of predicting dynamic relationships between pairs of characters in a narrative. We analyze summaries of novels to extract relationship trajectories that describe how a relationship evolved. Our semi-supervised framework uses a structured segmentation model that makes a second-order Markov assumption to remember the 'history' of the characters and analyzes the textual content of the summaries using rich semantic features that incorporate world knowledge. We demonstrate the utility of our model by comparing it with an unstructured model that treats individual sentences independently and also with a lower-order model that remembers a shorter history.
Our experiments demonstrate that using a higher-order model helps in making better predictions. In the future we would like to experiment with models of order higher than 2 and also with semi-Markov models.
Also, this work treats different character pairs from the same novel independently and does not attempt to understand the complete text of the narrative. In the future, we would like to explore a more sophisticated model that exploits intra-novel dynamics while predicting relationships.
Figure 1: Sample sentences from a narrative depicting the evolving relationship between the characters Tom and Becky. The relationship changes from cooperative (+) to non-cooperative (−) and then back to cooperative (+). '...' represents text omitted due to space constraints.

¹ SparkNotes Editors. SparkNote on The Adventures of Tom Sawyer. SparkNotes LLC, 2003. http://www.sparknotes.com/lit/tomsawyer/
Table 1: Samples of various types of FrameNet frames used by 'Semantic Parse based' features.

Type          Frame                           Frame-elements
Negative      'killing'                       'killer', 'victim'
              'attack'                        'assailant', 'victim'
Positive      'forgiveness'                   'judge', 'evaluee'
              'supporting'                    'supporter', 'supported'
Ambiguous     'cause bodily experience'       'agent', 'experiencer'
              'friendly or hostile'           'side 1', 'side 2', 'sides'
Relationship  'kinship'                       'alter', 'ego', 'relatives'
              'subordinates and superiors'    'superior', 'subordinate'
Table 2: Cross-validation performances of various models. The second-order-model-based framework outperforms the one that uses a first-order model and the unstructured baselines LR and J48.

Model           P       R       F
J48             49.46   49.17   40.95
LR              52.81   54.33   44.33
Order 1 Model   53.54   54.83   44.78
Order 2 Model   52.98   54.63   49.53
² http://www.sparknotes.com/lit/
³ https://www.mturk.com/
References
[1] A. Agarwal, A. Kotalwar, J. Zheng, and O. Rambow. SINNET: Social interaction network extractor from text. In Sixth International Joint Conference on Natural Language Processing, IJCNLP 2013, Nagoya, Japan, pages 33-36, 2013.
[2] D. Bamman, B. O'Connor, and N. A. Smith. Learning latent personas of film characters. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, ACL 2013, Sofia, Bulgaria, Volume 1: Long Papers, pages 352-361, 2013.
[3] D. Bamman, T. Underwood, and N. A. Smith. A Bayesian mixed effects model of literary character. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, Baltimore, MD, USA, Volume 1: Long Papers, pages 370-379, 2014.
[4] N. Chambers. Event schema induction with a probabilistic entity-driven model. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, Seattle, Washington, USA, pages 1797-1807, 2013.
[5] N. Chambers and D. Jurafsky. Unsupervised learning of narrative event chains. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, ACL 2008, Columbus, Ohio, USA, pages 789-797, 2008.
[6] N. Chambers and D. Jurafsky. Unsupervised learning of narrative schemas and their participants. In Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the AFNLP, ACL 2009, Singapore, pages 602-610, 2009.
[7] J. C. K. Cheung, H. Poon, and L. Vanderwende. Probabilistic frame induction. In Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics, Atlanta, Georgia, USA, pages 837-846, 2013.
[8] M. Collins. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing, EMNLP '02, pages 1-8, Stroudsburg, PA, USA, 2002. Association for Computational Linguistics.
[9] D. Das, D. Chen, A. F. T. Martins, N. Schneider, and N. A. Smith. Frame-semantic parsing. Computational Linguistics, 40(1):9-56, 2014.
[10] M. Elsner. Character-based kernels for novelistic plot structure. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, EACL 2012, Avignon, France, pages 634-644, 2012.
[11] D. K. Elson. Modeling Narrative Discourse. PhD thesis, Columbia University, 2012.
[12] D. K. Elson, N. Dames, and K. McKeown. Extracting social networks from literary fiction. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL 2010, Uppsala, Sweden, pages 138-147, 2010.
[13] S. Feng, J. S. Kang, P. Kuznetsova, and Y. Choi. Connotation lexicon: A dash of sentiment beneath the surface meaning. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, ACL 2013, Sofia, Bulgaria, Volume 1: Long Papers, pages 1774-1784, 2013.
[14] M. A. Finlayson. Learning Narrative Structure from Annotated Folktales. PhD thesis, Massachusetts Institute of Technology, 2012.
[15] A. Goyal, E. Riloff, and H. Daumé III. Automatically producing plot unit representations for narrative text. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP 2010, MIT Stata Center, Massachusetts, USA, pages 77-86, 2010.
[16] H. He, D. Barbosa, and G. Kondrak. Identification of speakers in novels. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, ACL 2013, Sofia, Bulgaria, Volume 1: Long Papers, pages 1312-1320, 2013.
[17] V. Krishnan and J. Eisenstein. "You're Mr. Lebowski, I'm the Dude": Inducing address term formality in signed social networks. In Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics (to appear), 2015.
[18] J. Leskovec, D. P. Huttenlocher, and J. M. Kleinberg. Signed networks in social media. In Proceedings of the 28th International Conference on Human Factors in Computing Systems, CHI 2010, Atlanta, Georgia, USA, pages 1361-1370, 2010.
[19] B. Liu, M. Hu, and J. Cheng. Opinion observer: Analyzing and comparing opinions on the web. In Proceedings of the 14th International Conference on World Wide Web, WWW 2005, Chiba, Japan, pages 342-351, 2005.
[20] C. D. Manning, M. Surdeanu, J. Bauer, J. R. Finkel, S. Bethard, and D. McClosky. The Stanford CoreNLP natural language processing toolkit. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, Baltimore, MD, USA, System Demonstrations, pages 55-60, 2014.
[21] N. McIntyre and M. Lapata. Plot induction and evolutionary search for story generation. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL 2010, Uppsala, Sweden, pages 1562-1572, 2010.
[22] J. W. Orr, P. Tadepalli, J. R. Doppa, X. Fern, and T. G. Dietterich. Learning scripts as hidden Markov models. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, Québec City, Québec, Canada, pages 1565-1571, 2014.
[23] V. I. Propp. Morphology of the Folktale. University of Texas Press, 1968.
[24] M. Regneri, A. Koller, and M. Pinkal. Learning script knowledge with web experiments. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL 2010, Uppsala, Sweden, pages 979-988, 2010.
[25] R. C. Schank and R. P. Abelson. Scripts, Plans, Goals, and Understanding: An Inquiry Into Human Knowledge Structures (Artificial Intelligence Series). Psychology Press, 1st edition, July 1977.
[26] S. Srivastava and E. H. Hovy. Vector space semantics with frequency-driven motifs. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, Baltimore, MD, USA, Volume 1: Long Papers, pages 634-643, 2014.
[27] J. Valls-Vargas, J. Zhu, and S. Ontañón. Toward automatic role identification in unannotated folk tales. In Proceedings of the Tenth AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, AIIDE 2014, Raleigh, NC, USA, 2014.
[28] T. Wilson, J. Wiebe, and P. Hoffmann. Recognizing contextual polarity in phrase-level sentiment analysis. In HLT/EMNLP 2005, Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, Vancouver, British Columbia, Canada, 2005.
| [] |
[
"Local Structure Matters Most in Most Languages"
] | [
"Louis Clouâtre",
"Prasanna Parthasarathi",
"Amal Zouaq",
"Sarath Chandar"
] | [
"Polytechnique Montréal",
"Quebec Artificial Intelligence Institute (Mila), Canada",
"Noah's Ark Lab, Huawei Canada",
"CIFAR AI Chair"
] | [] | Many recent perturbation studies have found unintuitive results on what does and does not matter when performing Natural Language Understanding (NLU) tasks in English. Coding properties, such as the order of words, can often be removed through shuffling without impacting downstream performances. Such insight may be used to direct future research into English NLP models. As many improvements in multilingual settings consist of wholesale adaptation of English approaches, it is important to verify whether those studies replicate or not in multilingual settings. In this work, we replicate a study on the importance of local structure, and the relative unimportance of global structure, in a multilingual setting. We find that the phenomenon observed on the English language broadly translates to over 120 languages, with a few caveats. | 10.48550/arxiv.2211.05025 | [
"https://export.arxiv.org/pdf/2211.05025v1.pdf"
] | 253,420,301 | 2211.05025 | 9a5b2dc77bda19759df8481aaf283da353ac7e77 |
Local Structure Matters Most in Most Languages

Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq, Sarath Chandar
Polytechnique Montréal; Quebec Artificial Intelligence Institute (Mila), Canada; Noah's Ark Lab, Huawei Canada; CIFAR AI Chair
Introduction
A recent research trend has explored the sensitivity, or insensitivity, of neural language models to different perturbations of text (Pham et al., 2021; Sinha et al., 2020, 2021; Gupta et al., 2021; O'Connor and Andreas, 2021; Taktasheva et al., 2021; Clouatre et al., 2022). Their findings may be central in directing future NLP research by providing insight into which coding properties (Kulmizev and Nivre, 2021) of language are most valuable for performing Natural Language Understanding (NLU) tasks. As research in English NLP tends to be adapted to other languages, such as through single-language adaptations of BERT-style models (Devlin et al., 2019; Cui et al., 2019; Le et al., 2019; Martin et al., 2019; Antoun et al., 2020; Carmo et al., 2020; de Vries et al., 2019; Malmsten et al., 2020; Polignano et al., 2019; Nguyen and Tuan Nguyen, 2020) or multilingual adaptations of the same architecture (Lample and Conneau, 2019; Clark et al., 2021; Xue et al., 2020, 2021; Devlin et al., 2019), it is vital that we verify how insights derived from the English language generalize to other languages.
One such coding property, the local structure of text, has recently been shown to be ubiquitously relied upon by both neural language models (Clouatre et al., 2022) and humans (Mollica et al., 2020) to understand text in English, while the global structure of text is only sometimes necessary for a model to perform NLU tasks (Clouatre et al., 2022). Such results motivate hierarchical approaches to neural language model development, where one would first build meaning locally and then reason over the global context if necessary. However, we must verify that the importance of that coding property is not merely an artifact of the English language.
In this short paper, our contributions are as follows:
• We adapt and replicate the findings of Clouatre et al. (2022) in a multilingual setting to verify their generality and find that their conclusions regarding both local and global structure broadly apply to most of the 120 languages surveyed.
• We provide an analysis of why text using Chinese characters as its script may be more resilient to local perturbations and highlight the importance of testing improvements in English neural modeling in other languages.

Related Work

Sinha et al. (2020) explore shuffling of words on textual entailment tasks, highlighting models' insensitivity to such perturbations. Taktasheva et al. (2021) extend perturbation studies to Swedish and Russian and perform perturbations by shuffling syntactic phrases, rotating sub-trees around the root of the syntactic tree of a sentence, or simply shuffling the words of the text. These approaches share the main limitation of requiring automatic parsing tools or well-developed tokenizers to define words, which limits their applicability in a multilingual setting. Priors regarding the form of the text, such as the presence of whitespace-delimited words, limit the generalizability of most of these studies. Clouatre et al. (2022) propose a suite of controllable perturbations on characters and subwords, which should be compatible with almost any written language, as well as a metric quantifying perturbations to the local and global structure that measures perturbations on a character level.
Experiments
We extend the perturbation studies of Clouatre et al. (2022) to a multilingual setting. We perform those experiments on eight popular cross-lingual tasks (Hu et al., 2020; Ponti et al., 2020; Liang et al., 2020) covering over 120 languages. This will shed light on which languages, if any, do not share the same sensitivity to local structure and insensitivity to global structure as English.
Metric and Perturbations
The CHRF-2 (chrF) (Popović, 2015) metric measures the amount of character bi-gram overlap between a perturbed text and the original text. This measure represents the amount of local structure that has not been perturbed in a text.
The Index Displacement Count (IDC) (Clouatre et al., 2022) metric measures the average absolute distance traversed by every character in a perturbed text. An IDC of 0.3 would mean that, on average, every character has traversed 30% of the length of the text. This measure represents the amount of global perturbations applied to a text.
The compression rate (Comp) (Xue et al., 2021) represents the total length of the text in terms of characters divided by the total length of the text once tokenized. Since most of our models either use subwords or tokenize characters directly, there are no out-of-vocabulary tokens to be counted. The compression rate is then used as a proxy for vocabulary destruction of pretrained models, an important confounder for the importance of local structure.
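As an illustration of the surface metrics, the sketch below computes a character-bigram F-score and the compression rate. It is a simplification: the paper relies on the standard CHRF-2 implementation, which this bigram overlap only approximates, and tokenize stands in for the model's actual subword tokenizer.

from collections import Counter

def bigram_f(perturbed, original):
    # Character-bigram overlap between the perturbed and the original text.
    p = Counter(zip(perturbed, perturbed[1:]))
    o = Counter(zip(original, original[1:]))
    overlap = sum((p & o).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(p.values())
    recall = overlap / sum(o.values())
    return 2 * precision * recall / (precision + recall)

def compression_rate(text, tokenize):
    # Characters per token; values near 1 signal that little compression is
    # achieved, a proxy for the destruction of the pretrained vocabulary.
    return len(text) / len(tokenize(text))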
[Figure 1 examples: "The scholar is typesetting." → "hT eshcoarl i stpyseteitn.g" (Neighbor Flipping); "The scholar is typesetting." → "ng.cholThe ss tyar pesettii" (Phrase Shuffling)]

We perform perturbations by altering the order of the subwords and characters present in the text. Three types of perturbations are applied.
Full shuffling completely randomizes the order of the subwords or characters.
Neighbor flipping flips a subword or character with its neighbor with a controllable probability ρ, providing local perturbations while maintaining much of the absolute position of the tokens.
Phrase shuffling randomly builds phrases of subwords or characters of controllable average length with a parameter ρ and shuffles those phrases, providing a minimal amount of local perturbations for a large amount of change in absolute position.
Simple examples of these perturbations are shown in Figure 1; pseudocode and details are presented in Appendix B.
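The sketch below is one plausible reading of the descriptions above; the authoritative pseudocode is in the paper's Appendix B, so treat the details (such as how phrase lengths are grown) as assumptions. The units may be characters or subwords.

import random

def neighbor_flip(units, rho, rng=random):
    out = list(units)
    i = 0
    while i < len(out) - 1:
        if rng.random() < rho:
            # Flip a unit with its right neighbor; skipping the swapped pair
            # keeps the perturbation local.
            out[i], out[i + 1] = out[i + 1], out[i]
            i += 2
        else:
            i += 1
    return out

def phrase_shuffle(units, rho, rng=random):
    # With probability rho the next unit extends the current phrase, so the
    # expected phrase length grows with rho; shuffling whole phrases then
    # perturbs the global order while preserving most local structure.
    phrases, current = [], []
    for u in units:
        current.append(u)
        if rng.random() > rho:
            phrases.append(current)
            current = []
    if current:
        phrases.append(current)
    rng.shuffle(phrases)
    return [u for phrase in phrases for u in phrase]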
Experimental Details
All experiments are conducted on three pretrained cross-lingual models: XLM-RoBERTa-Base (Lample and Conneau, 2019), BERT-Base-Multilingual-Cased (Devlin et al., 2019), and Canine-S (Clark et al., 2021). The Canine model is a tokenization-free pretrained model, which lets us isolate the impact of subword destruction on the findings.
The zero-shot cross-lingual setting (Hu et al., 2020) is used for all experiments. The model is first finetuned on the English version of the dataset and evaluated without further tuning on all target languages.
The English version on which the model is finetuned is kept unperturbed, while the target-language text on which the model is evaluated goes through several perturbations. We perform a total of 43 different perturbations on every task and language and obtain their performance. All models are finetuned on five different random seeds, and all perturbations are performed on five different random seeds, for a total of 25 evaluations for every model on every task, every language present in the tasks, and every perturbation setting.¹ A total of 8 cross-lingual tasks selected from the most popular cross-lingual benchmarks (Hu et al., 2020; Liang et al., 2020; Ponti et al., 2020) covering over 120 languages are used for evaluation.² Summary information of the tasks can be found in Table 1.³

¹ Detailed training and testing hyperparameters, the training process, and details on the specific perturbations are present in Appendix A.
² Extractive tasks such as extractive QA are not compatible with our perturbations, as the answer would also be perturbed, and were not considered.
³ As we use all 122 languages in the Tatoeba dataset, which vary from 100 to 1000 possible sentences to retrieve, the F1 score is more appropriate as an evaluation of performance than the accuracy used in the XTREME benchmark.
Results and Discussion
In Figure 2, we observe the trends reported by Clouatre et al. (2022) to be broadly true in a cross-lingual setting. Specifically, the more local perturbations are applied to a text, the more degradation in the understanding of that text can be expected, which shows that the model does rely on the local structure to build understanding. The perturbations to the global structure are shown to be a much poorer explanation for the degradation in performance than the perturbations to the local structure. The compression rate is highly correlated with a model's performance and the local structure, making it a potential confounder for the degradation in performance. However, the trend in local structure holds with subword-level perturbations, unlike the compression rate, which is not affected by perturbations to the order of subwords, and it also holds for the vocabulary-free Canine model, as shown in Figure 3. This makes it more likely that the cause for the degradation in performance is the local structure perturbation, the destruction of the vocabulary being incidental.

Figure 4 shows the rank-correlations of a model's performance over the different tasks with the different measures of perturbation. The overall trends are stable in all but one task, PAWS-X.

Figure 4: Rank-correlation matrix between the different tasks' performance on perturbed samples and the perturbation quantified by the different metrics. The higher the value, the better the metric explains the degradation in performance.

PAWS-X

Much like the CoLA task (Warstadt et al., 2019) in the GLUE Benchmark, it is possible to build tasks that require the specific order of words to be successfully completed. The PAWS-X task comprises adversarial paraphrases containing a similar lexicon between paraphrases and non-paraphrases. Performance is then highly sensitive to perturbations causing displacement, such as shuffling words, even if the local structure is mostly kept intact. It is not that the local structure is unnecessary, but that the global structure is also necessary for this task. This phenomenon is further explored by Mahowald et al., Ravishankar et al. (2022), and Papadimitriou et al. (2022).

Chinese Character Script

Figure 5 shows that the findings are consistent across almost all text scripts, with the exception of languages using Chinese characters as script. This is most likely caused by the smallest separable unit in Chinese being semantically richer than the characters of other scripts. Where Chinese has a single indivisible character meaning "water", the English equivalent "water" can be perturbed to "rtawe". Even character-level shuffling cannot strip Chinese text of all meaning, which would explain some of the differences. It is to be noted that, while weaker, the correlation between local structure perturbations and performance remains high.

Figure 5: Rank-correlation matrix between the performance on perturbed samples and the perturbation quantified by the different metrics, for language scripts containing at least 3 languages. The higher the value, the better the metric explains the degradation in performance.
Conclusion
We first explored and confirmed the importance of local structure, the limited importance of global structure, and controlled for the potential of vocabulary destruction being the main explanatory factor, on 8 NLU tasks covering over 120 languages. In aggregate, the findings of Clouatre et al. (2022) hold for many different pretrained cross-lingual models and NLU tasks in a multilingual setting. Local structure sensitivity and global structure insensitivity do not seem to be artifacts of the English language. A significant exception is when grammatical cues are essential to completing the task, such as in the PAWS-X task. While many tasks can be solved purely with the information obtained from the local structure, reasoning over the global context is necessary for many problems.
Languages using Chinese characters as their script also deviate from the norm. This is likely caused by how semantically rich their characters are.
It will be important that any NLP improvements derived from English experiments are verified to also generalize to other languages. As we have observed that languages written in Chinese Character Script are impacted differently by perturbations to different coding properties, it is possible that improvements to the way our models understand those properties in English will not generalize.
A Experiment Details
Model Hyperparameters and Training. We finetune each pretrained model on the English version of each dataset for a total of 10 epochs, checkpointing the model after each epoch. The English version is never perturbed; the finetuning is done on unperturbed data. This finetuning is done 5 times with different random seeds for each model and each dataset. For 8 datasets and 3 models, we have a total of 3 × 8 × 5 = 120 finetuning runs and 1200 checkpoints, one for each epoch. A learning rate of 2e-5, a batch size of 32, and a weight decay of 0.1 are used in all finetuning. All experiments used a warmup ratio of 0.06, following prior work.
For the evaluation, we perform the same perturbations on the validation and testing data of the different target languages. We evaluate the perturbed validation data on each of the 10 checkpoints, choose the best checkpoint on the perturbed validation data, and evaluate that checkpoint on the perturbed test data. This process is repeated for each perturbation, each of the 5 finetuning random seeds, and 5 different perturbation random seeds for each finetuned model. In total, for each language in each task, on each model, and for each perturbation setup, we average results over 25 random seeds.
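Schematically, the checkpoint selection for a single perturbation setting and target language looks as follows; evaluate, perturb, and the argument names are stand-ins for illustration, not the authors' code.

def select_and_test(checkpoints, val_data, test_data, perturb, evaluate, seed):
    perturbed_val = perturb(val_data, seed)
    # Pick the epoch checkpoint that scores best on the perturbed validation
    # set, then report its score on the identically perturbed test set.
    best = max(checkpoints, key=lambda ckpt: evaluate(ckpt, perturbed_val))
    return evaluate(best, perturb(test_data, seed))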
For the sentence retrieval tasks, such as Tatoeba, we do not perform any finetuning. We simply obtain the nearest neighbour using cosine similarity on the final hidden representation (Hu et al., 2020). First, we obtain the representation of the unperturbed English side of the dataset. This is done by feeding the English text through the model and averaging the final layer's hidden representations of the text. We then perform our perturbations on the target-language text, feed the perturbed text through the same pretrained cross-lingual model, and obtain its representation through the same process. We now have a set of English representations and a set of target-language representations, on which we find the nearest neighbour as measured by cosine distance on the pooled hidden representations. If the nearest neighbour is the sentence that was to be retrieved, we consider this a hit; otherwise it is a miss. The reported results are averaged over 5 random seeds of those perturbations.
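A minimal sketch of this retrieval evaluation, assuming the mean-pooled representations have already been computed (the names are illustrative):

import numpy as np

def retrieval_hit_rate(english_reprs, target_reprs):
    # Row i of each matrix is the representation of sentence pair i. A hit
    # means the nearest English neighbour of target sentence i is row i.
    e = english_reprs / (np.linalg.norm(english_reprs, axis=1, keepdims=True) + 1e-12)
    t = target_reprs / (np.linalg.norm(target_reprs, axis=1, keepdims=True) + 1e-12)
    nearest = (t @ e.T).argmax(axis=1)  # cosine similarity on unit vectors
    return float((nearest == np.arange(len(t))).mean())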
Perturbations. A total of 43 perturbations are used for all experiments. The first one is the Benchmark, which is simply the unperturbed text. We perform a full shuffling on both the subwords and the characters. On the subword-level perturbations, we perform phrase-shuffling with ρ values of [0.9, 0.8, 0.65, 0.5, 0.35, 0.2, 0.1] and neighbour-flip shuffling with ρ values of [0.9, 0.8, 0.6, 0.5, 0.4, 0.2, 0.1]. On the character-level perturbations, we perform phrase-shuffling with ρ values of [0.975, 0.05] and neighbour-flip shuffling with ρ values of [0.8, 0.65, 0.5, 0.4, 0.3, 0.2, 0.1, 0.075, 0.05, 0.035, 0.025, 0.01]. A total of 15 subword-level experiments, 27 character-level experiments, and the unperturbed benchmark are evaluated, for a grand total of 43 different perturbation settings.
C Additional Results
Language Family. Figure 6 shows the aggregated correlations between the different language families and the different metrics. Results seem to be consistent across all families, with the exception of Sino-Tibetan languages. This was generally addressed in Section 3.3.2.

PAWS-X. To determine whether the local structure is not essential on PAWS-X, or simply that perturbations to the order of words are equally important, we observe the performance of models using only neighbor flipping perturbations, limiting the displacement of words to a minimum. In Figure 7, we show that if we only perturb the local structure, performance is highly correlated with the amount of local perturbations. This implies that it is not that the model is insensitive to local perturbations; rather, for certain tasks where grammatical cues are necessary, any change to the order of words will lead to failure.
Chinese Character Script. Languages using Chinese characters and derivatives obtain a relatively weaker correlation with local perturbations. Figure 8 illustrates the perturbation-to-performance curve while only taking into account languages using Chinese characters as their script, compared to those using the Latin script in Figure 9. A few major divergences from the global trend are present. First, the average compression ratio is under 1, meaning that the tokenizer adds to the sequence length on average. While counter-intuitive, this is caused by the fact that the vast majority of Chinese characters' tokenizations default to tokenizing the character directly, thus yielding almost no compression. The tokenizer adds a few special characters for the Transformer model to use, yielding longer sequences on average than the raw text. This can be verified by the fact that, unlike with other scripts, subword perturbations are sufficient to explore almost the whole spectrum of local perturbations, which would only be possible if most subwords were of length 1.
While the phrase-shuffling perturbations seem to behave as expected, text written in the Chinese script is especially resilient to neighbour flipping. We compare the performance on Chinese character scripts and Latin scripts in Figure 9 and find that Chinese scripts are, on average, more resilient to perturbations, going from an average score of 0.18 to 0.08, while the Latin script performance drops all the way to an aggregate score of 0.03.
Figure 1: From top to bottom: Neighbor Flipping with ρ = 0.5, Phrase Shuffling with ρ = 0.5.
Figure 2: Plotted are the relations between the different choices of metrics measuring the amount of perturbation and the average performance of all 3 models on all tested datasets. Left is more perturbed, up is better performance. The X-axis of the IDC metric is inverted for clearer comparison.
Figure 3: Rank-correlation matrix between the different models' performance on perturbed samples and the perturbation quantified by the different metrics. The higher the value, the better the metric explains the degradation in performance.
On the subword-level perturbations we perform phrase shuffling with ρ values of [0.9, 0.8, 0.65, 0.5, 0.35, 0.2, 0.1] and neighbour-flip shuffling with ρ values of [0.9, 0.8, 0.6, 0.5, 0.4, 0.2, 0.1]. On the character-level perturbations we perform phrase shuffling with ρ values of [0.975, 0.05] and neighbour-flip shuffling with ρ values of [0.8, 0.65, 0.5, 0.4, 0.3, 0.2, 0.1, 0.075, 0.05, 0.035, 0.025, 0.01]. A total of 15 subword-level experiments, 27 character-level experiments and the unperturbed benchmark are evaluated, for a grand total of 43 different perturbation settings.
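For intuition, a neighbour-flip perturbation can be implemented as a single pass that swaps adjacent tokens with probability ρ, which bounds how far any token can travel. The sketch below is an assumption about the mechanics based on the description above, not the authors' exact implementation.

```python
import random

def neighbour_flip(tokens, rho, seed=0):
    """Swap adjacent tokens with probability rho, in one left-to-right pass.

    Each token moves at most one position per swap, so local structure is
    perturbed while long-range displacement stays minimal.
    """
    rng = random.Random(seed)
    tokens = list(tokens)
    i = 0
    while i < len(tokens) - 1:
        if rng.random() < rho:
            tokens[i], tokens[i + 1] = tokens[i + 1], tokens[i]
            i += 2  # skip the swapped pair so a token is not dragged further
        else:
            i += 1
    return tokens

print(neighbour_flip("the quick brown fox jumps".split(), rho=0.5))
```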
Figure 6: Rank-correlation matrix between the performance of the different language families containing at least 3 languages on perturbed samples and the perturbation quantified by the different metrics. The higher the value, the better the metric explains the degradation in performance.
Figure 7: Plotted is the relation between the local structure perturbation and the average performance on the PAWS-X dataset. Only the neighbour-flipped perturbations are shown to isolate the impact of perturbations to the local structure.
Figure 8: Plotted are the relations between the different metrics measuring the amount of perturbation and the average performance of all 3 models on all tested datasets on languages using Chinese characters or derivatives as their scripts.
Figure 9: Plotted are the relations between the different metrics measuring the amount of perturbation and the average performance of all 3 models on all tested datasets on languages using a Latin script.
Table 1: Summary information of the different tasks used.
Acknowledgements
This research has been funded by the NSERC Discovery Grant Program.

Algorithm 1: Pseudocode to compute the IDC metric.
Function IDC(X_p):
    X_p_len ← X_p.length()
    IDC_list ← list()
    for i ← 0 to X_p_len do
        abs_distortion ← abs(i - X_p[i])
        IDC_list.append(abs_distortion)
    end
    IDC_agg ← IDC_list.mean()
    IDC ← IDC_agg / X_p_len
    return IDC
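Read literally, Algorithm 1 takes a permuted index sequence X_p (where position i holds the original index of the token now at i) and averages the absolute index displacement, normalised by the sequence length. A direct Python transcription, under that reading, is sketched below.

```python
def idc(permuted_indices):
    """Index displacement metric: mean absolute displacement of each token,
    normalised by sequence length. `permuted_indices[i]` is the original
    position of the token that now sits at position i."""
    n = len(permuted_indices)
    if n == 0:
        return 0.0
    total_distortion = sum(abs(i - orig) for i, orig in enumerate(permuted_indices))
    return (total_distortion / n) / n  # mean displacement, then divide by length

print(idc([0, 1, 2, 3]))  # identity permutation -> 0.0
print(idc([3, 2, 1, 0]))  # full reversal -> 0.5
```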
References
Wissam Antoun, Fady Baly, and Hazem M. Hajj. 2020. AraBERT: Transformer-based model for Arabic language understanding. ArXiv, abs/2003.00104.
Diedre Carmo, Marcos Piau, Israel Campiotti, Rodrigo Nogueira, and Roberto de Alencar Lotufo. 2020. PTT5: Pretraining and validating the T5 model on Brazilian Portuguese data. CoRR, abs/2008.09144.
Jonathan H. Clark, Dan Garrette, Iulia Turc, and John Wieting. 2021. Canine: Pre-training an efficient tokenization-free encoder for language representation.
Louis Clouatre, Prasanna Parthasarathi, Amal Zouaq, and Sarath Chandar. 2022. Local structure matters most: Perturbation study in NLU. In Findings of the Association for Computational Linguistics: ACL 2022, pages 3712-3731, Dublin, Ireland. Association for Computational Linguistics.
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, and Guoping Hu. 2019. Pre-training with whole word masking for Chinese BERT. CoRR, abs/1906.08101.
Wietse de Vries, Andreas van Cranenburgh, Arianna Bisazza, Tommaso Caselli, Gertjan van Noord, and Malvina Nissim. 2019. BERTje: A Dutch BERT model. CoRR, abs/1912.09582.
J. Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL.
Ashim Gupta, Giorgi Kvernadze, and Vivek Srikumar. 2021. BERT & family eat word salad: Experiments with text understanding. arXiv preprint arXiv:2101.03453.
Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization.
Artur Kulmizev and Joakim Nivre. 2021. Schrödinger's tree -- on syntax and neural language models. CoRR, abs/2110.08887.
Guillaume Lample and Alexis Conneau. 2019. Cross-lingual language model pretraining. CoRR, abs/1901.07291.
Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, and Didier Schwab. 2019. FlauBERT: Unsupervised language model pre-training for French. CoRR, abs/1912.05372.
Yaobo Liang, Nan Duan, Yeyun Gong, Ning Wu, Fenfei Guo, Weizhen Qi, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, Xiaodong Fan, Bruce Zhang, Rahul Agrawal, Edward Cui, Sining Wei, Taroon Bharti, Ying Qiao, Jiun-Hung Chen, Winnie Wu, Shuguang Liu, Fan Yang, Rangan Majumder, and Ming Zhou. 2020. XGLUE: A new benchmark dataset for cross-lingual pre-training, understanding and generation. CoRR, abs/2004.01401.
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. CoRR, abs/2001.08210.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. ArXiv, abs/1907.11692.
Kyle Mahowald, Evgeniia Diachek, Edward Gibson, Evelina Fedorenko, and Richard Futrell. 2022. Grammatical cues are largely, but not completely, redundant with word meanings in natural language.
Martin Malmsten, Love Börjeson, and Chris Haffenden. 2020. Playing with words at the National Library of Sweden -- making a Swedish BERT. CoRR, abs/2007.01658.
Louis Martin, Benjamin Müller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah, and Benoît Sagot. 2019. CamemBERT: a tasty French language model. CoRR, abs/1911.03894.
Francis Mollica, Matthew Siegelman, Evgeniia Diachek, Steven T. Piantadosi, Zachary Mineroff, Richard Futrell, Hope Kean, Peng Qian, and Evelina Fedorenko. 2020. Composition is the core driver of the language-selective network. Neurobiology of Language, 1(1):104-134.
Dat Quoc Nguyen and Anh Tuan Nguyen. 2020. PhoBERT: Pre-trained language models for Vietnamese. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1037-1042, Online. Association for Computational Linguistics.
Joe O'Connor and Jacob Andreas. 2021. What context features can transformer language models use? In ACL/IJCNLP.
Isabel Papadimitriou, Richard Futrell, and Kyle Mahowald. 2022. When classifying grammatical role, BERT doesn't care about word order... except when it matters.
Thang M. Pham, Trung Bui, Long Mai, and Anh M. Nguyen. 2021. Out of order: How important is the sequential order of words in a sentence in natural language understanding tasks? ArXiv, abs/2012.15180.
Marco Polignano, Pierpaolo Basile, Marco Degemmis, Giovanni Semeraro, and Valerio Basile. 2019. AlBERTo: Italian BERT language understanding model for NLP challenging tasks based on tweets. In CLiC-it.
Edoardo Maria Ponti, Goran Glavaš, Olga Majewska, Qianchu Liu, Ivan Vulić, and Anna Korhonen. 2020. XCOPA: A multilingual dataset for causal commonsense reasoning.
Maja Popović. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392-395, Lisbon, Portugal. Association for Computational Linguistics.
Vinit Ravishankar, Mostafa Abdou, Artur Kulmizev, and Anders Søgaard. 2022. Word order does matter (and shuffled language models know it).
Chinnadhurai Sankar, Sandeep Subramanian, Chris Pal, Sarath Chandar, and Yoshua Bengio. 2019. Do neural dialog systems use the conversation history effectively? An empirical study. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 32-37, Florence, Italy. Association for Computational Linguistics.
Koustuv Sinha, Robin Jia, Dieuwke Hupkes, Joelle Pineau, Adina Williams, and Douwe Kiela. 2021. Masked language modeling and the distributional hypothesis: Order word matters pre-training for little. arXiv preprint arXiv:2104.06644.
Koustuv Sinha, Prasanna Parthasarathi, Joelle Pineau, and Adina Williams. 2020. Unnatural language inference. arXiv preprint arXiv:2101.00010.
Ekaterina Taktasheva, Vladislav Mikhailov, and Ekaterina Artemova. 2021. Shaking syntactic trees on the sesame street: Multilingual probing with controllable perturbations. CoRR, abs/2109.14017.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019.
Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625-641.
Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, and Colin Raffel. 2021. ByT5: Towards a token-free future with pre-trained byte-to-byte models. CoRR, abs/2105.13626.
Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2020. mT5: A massively multilingual pre-trained text-to-text transformer. CoRR, abs/2010.11934.
| [] |
[
"HateCheckHIn: Evaluating Hindi Hate Speech Detection Models",
"HateCheckHIn: Evaluating Hindi Hate Speech Detection Models"
] | [
"Mithun Das \nDepartment of Computer Science & Engineering\nIndian Institute of Technology\nKharagpur West Bengal721302India\n",
"Punyajoy Saha \nDepartment of Computer Science & Engineering\nIndian Institute of Technology\nKharagpur West Bengal721302India\n",
"Binny Mathew binnymathew@iitkgp.ac.in \nDepartment of Computer Science & Engineering\nIndian Institute of Technology\nKharagpur West Bengal721302India\n",
"Animesh Mukherjee animeshm@cse.iitkgp.ac.in \nDepartment of Computer Science & Engineering\nIndian Institute of Technology\nKharagpur West Bengal721302India\n"
] | [
"Department of Computer Science & Engineering\nIndian Institute of Technology\nKharagpur West Bengal721302India",
"Department of Computer Science & Engineering\nIndian Institute of Technology\nKharagpur West Bengal721302India",
"Department of Computer Science & Engineering\nIndian Institute of Technology\nKharagpur West Bengal721302India",
"Department of Computer Science & Engineering\nIndian Institute of Technology\nKharagpur West Bengal721302India"
] | [
"Proceedings of the 13th Conference on Language Resources and Evaluation (LREC 2022)"
] | Due to the sheer volume of online hate, the AI and NLP communities have started building models to detect such hateful content. Recently, multilingual hate is a major emerging challenge for automated detection where code-mixing or more than one language have been used for conversation in social media. Typically, hate speech detection models are evaluated by measuring their performance on the held-out test data using metrics such as accuracy and F1-score. While these metrics are useful, it becomes difficult to identify using them where the model is failing, and how to resolve it. To enable more targeted diagnostic insights of such multilingual hate speech models, we introduce a set of functionalities for the purpose of evaluation. We have been inspired to design this kind of functionalities based on real-world conversation on social media. Considering Hindi as a base language, we craft test cases for each functionality. We name our evaluation dataset HateCheckHIn. To illustrate the utility of these functionalities , we test state-of-the-art transformer based m-BERT model and the Perspective API. | 10.48550/arxiv.2205.00328 | [
"https://www.aclanthology.org/2022.lrec-1.575.pdf"
] | 248,496,782 | 2205.00328 | 64b44336e63c07af5e93d9dcca99ee9ee4046e36 |
HateCheckHIn: Evaluating Hindi Hate Speech Detection Models

Mithun Das, Punyajoy Saha, Binny Mathew (binnymathew@iitkgp.ac.in), Animesh Mukherjee (animeshm@cse.iitkgp.ac.in)
Department of Computer Science & Engineering, Indian Institute of Technology Kharagpur, West Bengal 721302, India

Proceedings of the 13th Conference on Language Resources and Evaluation (LREC 2022), Marseille, June 2022. European Language Resources Association (ELRA), licensed under CC-BY-NC-4.0.

Keywords: hate speech, evaluation, multilingual, code-mixed, test cases, functionalities
Abstract
Due to the sheer volume of online hate, the AI and NLP communities have started building models to detect such hateful content. Recently, multilingual hate has become a major emerging challenge for automated detection, where code-mixing or more than one language is used for conversation on social media. Typically, hate speech detection models are evaluated by measuring their performance on held-out test data using metrics such as accuracy and F1-score. While these metrics are useful, it is difficult to identify from them where the model is failing and how to resolve it. To enable more targeted diagnostic insights into such multilingual hate speech models, we introduce a set of functionalities for the purpose of evaluation. We have been inspired to design these functionalities based on real-world conversations on social media. Considering Hindi as a base language, we craft test cases for each functionality. We name our evaluation dataset HateCheckHIn. To illustrate the utility of these functionalities, we test the state-of-the-art transformer-based m-BERT model and the Perspective API.
Introduction
Hate speech is a serious concern that is plaguing online social media. With the increasing amount of hate speech, automatic detection of such content is receiving significant attention from the AI and NLP communities, and models are being developed to detect hate speech online. While earlier efforts in hate speech detection focused mostly on English, recently researchers have begun to develop multilingual models of hate speech detection. However, even state-of-the-art models demonstrate substantial weaknesses (Mishra et al., 2019; Vidgen et al., 2019). So far, these hate speech detection models have been primarily evaluated by measuring model performance on held-out (test) hate speech data, computing metrics such as accuracy, F1-score, precision, and recall at the aggregate level (Waseem and Hovy, 2016; Davidson et al., 2017; Kumar et al., 2020). Higher values of these metrics indicate more desirable performance. However, it is still questionable whether model performance alone can be a good measure, and recent work (Ribeiro et al., 2020) has indeed highlighted the limitations of this evaluation paradigm. Although these metrics help to measure model performance, they are incapable of identifying the weaknesses that could potentially exist in the model (Wu et al., 2019). Further, if there exist systematic gaps and biases in training data, models may perform deceptively well on corresponding held-out test sets by learning simple artifacts of the data instead of understanding the actual task for which the model is trained (Dixon et al., 2018). Existing research has already demonstrated the biases present in hate speech detection models (Sap et al., 2019). This bias may be introduced by the varying data sources, sampling techniques, and annotation processes that are followed to create such datasets (Shah et al., 2019). Hence, held-out performance on current hate speech datasets is an incomplete and potentially misleading measure of model quality.
Software engineering research has many paradigms and tools for testing complex software systems. In particular, "functional testing" (a type of black-box testing) involves examining the various capabilities of a system by assessing input-output behavior without any knowledge of the internal working mechanism of the system. In recent times, researchers have started applying knowledge from the software engineering domain to NLP models to measure the robustness of such models. Recently, Röttger et al. (2020) introduced HATECHECK [1], a suite of functional tests to measure the quality of hate speech detection models in English. HATECHECK covers 29 model functionalities, among which 18 correspond to distinct expressions of hate and the remaining 11 are non-hateful contrasts to the hateful cases. Using these functionalities, the authors demonstrated the weaknesses present in some popular hate speech detection models. While these functionalities provide a nice suite of tests for English, they cannot be fully generalised to other languages or identify the weaknesses of multilingual hate speech detection models. Nowadays, it is common practice to write multilingual posts using code-mixing or more than one language in a single conversation or utterance on social media. In Table 1 we show an example of such a typical post. In variant 1 (and 2), we observe that English characters (and words) are used to structure the Hindi text. For variant 3, however, both Hindi and English words are used to form the text. Due to the growing concern around hate speech, several (monolingual and multilingual) datasets and models have been proposed (Mathew et al., 2020; Das et al., 2021). It is thus important to evaluate the weaknesses of these models, so that further action can be taken to improve their quality. Extending the work of Röttger et al. (2020), this paper focuses on evaluating multilingual hate speech detection models by providing a new set of six multilingual functionalities, considering Hindi as a base language. We name our evaluation dataset HateCheckHIn. Specifically, we make the following contributions.
• First, we provide a new set of six multilingual functionalities to uncover weaknesses present in multilingual hate speech detection models.
• Second, using the existing monolingual functionalities (Röttger et al., 2020) and the new multilingual functionalities, we craft 5.8K test cases [2].
• Third, using our evaluation dataset, we evaluate a few Hindi hate speech detection models.
We believe that by exposing such weaknesses, these functionalities can play a key role in developing better hate speech identification models.
Related works
The problem of hate speech has been studied for a long time in the research community. The public expression of hate speech propels the devaluation of minority members (Greenberg and Pyszczynski, 1985), and frequent and repetitive exposure to hate speech can increase an individual's outgroup prejudice (Soral et al., 2018). Researchers have proposed several datasets (Waseem and Hovy, 2016; Davidson et al., 2017; de Gibert et al., 2018; Kumar et al., 2018) to develop models that identify hateful content more precisely. While a clear majority of these datasets are in English, several recent shared tasks (Kumar et al., 2020; Mandl et al., 2019; Zampieri et al., 2019) have introduced new datasets for various languages such as Hindi (Modha et al., 2021; Bohra et al., 2018), Greek (Pitenis et al., 2020) and Turkish (Çöltekin, 2020). Several models have also been created using these datasets. The performance of these models has been measured using a held-out test dataset. Although these datasets are important to the research community for building hate speech identification models, finding the weaknesses of such models remains a major challenge.
Recently, Ribeiro et al. (2020) introduced functional tests in NLP as a framework for model evaluation, showing that their method can detect the strengths and weaknesses of models at a granular level that is often obscured by high-level metrics like accuracy and F1-score. Palmer et al. (2020) compiled three datasets to evaluate model performance on what they call complex offensive language, specifically the use of reclaimed slurs, adjective nominalization, and linguistic distancing. Lately, Röttger et al. (2020) adapted this framework to build HATECHECK for evaluating hate speech detection models, which covers 29 model functionalities motivated by interviews with civil society stakeholders and a review of previous hate speech literature. As a suite of functional tests for evaluation, we present HateCheckHIn, which directly builds on the previous work by Röttger et al. (2020).
HateCheckHIn: Functional tests for Hindi hate speech models
Working definition of hate speech
To begin with, we need to resort to a working definition of hate speech. We use the United Nations definition (Guterres et al., 2019), which is as follows.
"any kind of communication in speech, writing or behavior, that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are, in other words, based on their religion, ethnicity, nationality, race, color, descent, gender or other identity factor"
Choice of Hindi
We choose Hindi as the base language for our experiments because Hindi is the third most spoken language in the world [3], with more than 600 million speakers. Hindi is one of the 22 official languages of India, the second most populous country in the world. Apart from India, Hindi is spoken in many countries, including Nepal, the United States and Mauritius [4]. Besides, all the authors are proficient in reading and writing Hindi.
Model functionalities
The functionality of a computer or any other machine is defined by how useful it is or how many functions it can perform. Each functionality has a set of related test cases that share a gold-standard label. In a functional test, the effectiveness of a hate speech detection model is validated by its ability to produce the specified classification (hateful or non-hateful). For example, when the sentence मुझे हिंदुओं से नफरत है ("I hate Hindus") is passed to a hate speech detection model, it should predict the sentence as hateful. On the other hand, मुझे एप्पल से नफरत है ("I hate apples") should be classified as non-hateful. The model should be able to understand the context of the text and make the correct judgment based on it.
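In code, such a functional test is just an input-output assertion against the classifier, independent of its internals. A minimal sketch is shown below; `classify` is a hypothetical stand-in for whatever model is under test.

```python
# Minimal functional test: assert expected labels without inspecting model internals.
# `classify` is a hypothetical callable returning "hateful" or "non-hateful".

def run_functional_test(classify, cases):
    """cases: list of (text, gold_label); returns per-case pass/fail results."""
    results = []
    for text, gold in cases:
        results.append((text, gold, classify(text) == gold))
    return results

cases = [
    ("मुझे हिंदुओं से नफरत है", "hateful"),      # "I hate Hindus"
    ("मुझे एप्पल से नफरत है", "non-hateful"),   # "I hate apples"
]
# Accuracy on this functionality = fraction of passing cases.
```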
Identifying functionalities
One of the motivating factors for introducing new multilingual functionalities is to find further weaknesses of a model. For example, in Figure 1 we show an example of hate speech where the post is written neither entirely in English nor entirely in Hindi. If we dissect the tweet, we obtain the following language elements.
• The first part of the tweet, "F**ck this trend", is written in English.
• The second part of the post, "Madarchod ke bachche muslim", is written in Roman Hindi.
• Finally, the hashtag मुस्लिम_हिन्दू_भाई_भाई is written using Devanagari Hindi.
The above example illustrates the prevalence of multilingual elements in social media posts. We introduce six new functionalities that consider the possible ways in which people write.
Functionalities in HateCheckHIn
HateCheckHIn has a total of 34 functionalities, out of which 28 functionalities [5] are directly taken from Röttger et al. (2020). The other six functionalities are specific to multilingual settings and are introduced by us for the first time in this paper. For ease of readability, we discuss all the functionalities in this section.
F1: Strong negative emotions (explicit) tests whether strong negative sentiments are expressed toward a protected group or its members.
F2: Description using very negative attributes (explicit) tests whether very negative attributes are used in describing a protected group or its members.
F3: Dehumanisation (explicit) validates hatred toward a protected group or its members expressed through explicit dehumanisation.
F4: Implicit derogation validates hatred toward a protected group or its members expressed through implicit derogation.
F5: Direct threat tests the expression of a direct threat toward a protected group or its members.
F6: Threat as normative statement tests the expression of a threat as a normative statement toward a protected group or its members.
F7: Hate expressed using slur validates hatred toward a protected group or its members expressed using slurs.
F8: Non-hateful homonyms of slurs tests non-hateful posts that use homonyms of slurs.
F9: Reclaimed slurs tests non-hateful posts that use reclaimed slurs.
F10: Hate expressed using profanity validates negative sentiments expressed toward a protected group or its members using profanity.
F11: Non-hateful use of profanity validates the use of profanity in posts in a non-hateful manner.
F12: Hate expressed through reference in subsequent clauses validates hate expressed through reference in subsequent clauses.
F13: Hate expressed through reference in subsequent sentences validates hate expressed through reference in subsequent sentences.
F14: Hate expressed using negated positive statement tests negative sentiments expressed toward a protected group or its members using a negated positive statement.
F15: Non-hate expressed using negated hateful statement tests positive emotions expressed toward a protected group or its members in a non-hateful context using a negated hateful statement.
F16: Hate phrased as a question validates hateful phrases directed at a protected group or its members using questions.
F17: Hate phrased as an opinion validates hatred toward a protected group or its members phrased as an opinion.
F18: Neutral statements using protected group identifiers validates non-hateful posts toward a protected group or its members phrased as neutral statements.
F19: Positive statements using protected group identifiers validates non-hateful posts toward a protected group or its members phrased as positive statements.
F20: Denouncements of hate that quote it tests non-hateful posts which contain denouncements of hate that quote it to counter the hate speech.
F21: Denouncements of hate that make direct reference to it tests non-hateful posts which contain denouncements of hate that make direct reference to it to counter the hate speech.
F22: Abuse targeted at objects validates that negative emotions expressed towards an object are identified as non-hate, as they do not target any protected group.
F23: Abuse targeted at individuals validates that negative sentiments expressed towards individuals (not as members of a protected group) are identified as non-hate.
F24: Abuse targeted at non-protected groups tests that negative emotions expressed toward non-protected groups (e.g. professions) are identified as non-hate.
F25: Swaps of adjacent characters tests how the model performs when some adjacent characters are swapped in a post expressing hatred toward a protected group.
F26: Missing characters validates how the model performs when some characters are missing in a post expressing hatred toward a protected group.
F27: Missing word boundaries validates how the model performs when word boundaries are missing in a sentence expressing hatred toward a protected group.
F28: Added spaces between chars tests how the model performs when spaces between characters are added in a sentence expressing hatred toward a protected group.
F29: Leet speak spellings tests how the model performs when leet speak spellings are present in a sentence expressing hatred toward a protected group.
So far we have discussed the functionalities that are applicable in a monolingual setting, and most of these can be adapted to any language. We now discuss the newly introduced multilingual functionalities, also summarized in Table 2.
MF1: Hate expressed using Roman Hindi: In this scenario, the whole sentence is written using English characters, but when read it is understood to be Hindi. For example, the sentence "Mujhe mahilaon se nafrat hai." (I hate women) is a hateful post toward women. While the characters used to represent the sentence are English, the semantic interpretation of the sentence requires knowledge of Hindi. This functionality tests whether the model is able to accurately detect such hateful posts even when Hindi text is presented using English characters.
MF2: Slur in Hindi, rest in Roman Hindi: In this scenario, while expressing hatred toward a protected group, the complete sentence is written in Roman Hindi except for the slur. E.g.: "Tum mere liye ek मालाउन jaise ho." (You are just a Malaun to me.) Here the derogatory term मालाउन is written in Devanagari. This functionality tests how the model performs when the sentence is composed in Roman Hindi and the slurs present in that sentence are written in Devanagari Hindi.
MF3: Complete sentence in Hindi except the slur:
In this setting, while expressing hatred toward a protected group, the whole sentence is written in Hindi except the slur, which is written in Roman Hindi. E.g.: "तुम मेरे लिए एक randi जैसे हो" (You are just a sl*t to me.) Here the derogatory term randi is written using English characters. This functionality tests whether the model is able to accurately classify such hateful posts when the sentence is formed in Devanagari Hindi and the slurs present in that sentence are written in Roman Hindi.
MF4: English written using Hindi characters:
In this scenario, the whole sentence is written using Hindi characters, but when read it is understood to be English. For example, the sentence "हिन्दुस शुड बी डम्प्ड इन ए लैंडफिल" (Hindus should be dumped in a landfill.) is a hateful post toward Hindus. While the characters used to represent the sentence are Hindi, the semantic interpretation has to be done in English. This functionality examines how the model performs when Hindi characters are used to represent English-language sentences.
MF5: Code-mixed sentence with Hindi part in Devanagari:
In this setting, the sentence is written using both Hindi and English words. Knowledge of both Hindi and English is required to semantically interpret the sentence. E.g.: "I think we all should do this, जब भी कोई हिन्दू दिखे तो तुरंत मार देना" (I think we all should do this, whenever a Hindu is seen, kill him immediately.) is a hateful post toward Hindus where the initial part of the sentence is written in English and the rest is written in Devanagari Hindi. This functionality evaluates whether the model is able to accurately detect such hateful posts when more than one language is used to compose a hateful sentence.
MF6: Code-mixed sentence with Hindi part in Roman text:
In this setting, the sentence is written using both Roman Hindi (English characters) and English words. Once again, knowledge of both Hindi and English is required to semantically interpret the sentence. E.g.: "I totally agree with you, musalamaan samaj ke liye parajibi hai." (I totally agree with you, Muslims are parasites for the society.) is a hateful post toward Muslims where the initial part of the sentence is written in English and the rest is written in Roman Hindi. This functionality examines how the model performs when more than one language is used to form a sentence and some parts of the sentence are written using the Roman script.
Validating test cases
To validate the quality of the generated gold-standard labels, each test case has been annotated by two PhD students who have prior experience in hate speech research. Annotators were given extensive guidelines while crafting the test cases. Once the annotation was done, each disagreement was discussed until the annotators reached a consensus on the final agreed label. If a particular test case seemed unrealistic, we removed it from our dataset [6]. To measure the inter-annotator agreement we used Fleiss' Kappa and obtained a score of 0.95, which indicates "almost perfect" agreement.
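The reported agreement can be reproduced with standard tooling; a minimal sketch using statsmodels is shown below, with a toy ratings matrix standing in for the actual annotations.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Toy stand-in: rows are test cases, columns are the two annotators,
# values are the assigned labels (0 = non-hateful, 1 = hateful).
ratings = np.array([
    [1, 1],
    [0, 0],
    [1, 1],
    [0, 1],  # one disagreement
])

# aggregate_raters turns per-rater labels into per-category counts per item.
counts, _categories = aggregate_raters(ratings)
print(fleiss_kappa(counts))
```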
Evaluating models with HateCheckHIn
Base model
As a base model, we use mBERT (multilingual Bidirectional Encoder Representations from Transformers) (Devlin et al., 2018), which is pretrained on the 104 languages with the largest Wikipedias using a masked language modeling (MLM) objective. It is a stack of transformer encoder layers with 12 "attention heads", i.e., fully connected neural networks augmented with a self-attention mechanism. The mBERT model has been well studied in the hate speech domain, where it has outperformed existing baselines and stands as a state-of-the-art model.
Datasets
In this section we describe the datasets used in this paper. To fine-tune the mBERT model, we use the datasets by Mandl et al. (2021) and Bhardwaj et al. (2020).
Experimental setup
The mBERT model has been evaluated using the train, validation, and test splits shared by the authors of the above datasets. We fine-tune the mBERT model with the following hyper-parameters: Adam optimizer (Loshchilov and Hutter, 2019) with an initial learning rate of 2e-5, number of tokens = 256, and number of epochs = 10. We save the model corresponding to the best validation macro F1-score.
In the following, we denote mBERT fine-tuned on the binary Mandl et al. (2021) data by H-21 and mBERT fine-tuned on the binary Bhardwaj et al. (2020) data by C-21. To deal with class imbalance, we use class weights emphasising the minority (usually hateful) class (He and Garcia, 2009).
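A minimal fine-tuning loop matching this setup could look as follows. It is a sketch under the stated hyper-parameters; the batch size, the data tensors (`train_texts`, `train_labels`), and the use of the HuggingFace `transformers` mBERT checkpoint are our assumptions, not the authors' released code.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)

# Hypothetical training data: `train_texts` (list of str) and
# `train_labels` (LongTensor of 0/1) from one of the datasets above.
counts = torch.bincount(train_labels, minlength=2).float()
class_weights = counts.sum() / (2 * counts)  # emphasise the minority class
loss_fn = torch.nn.CrossEntropyLoss(weight=class_weights)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(10):
    perm = torch.randperm(len(train_texts))
    for start in range(0, len(perm), 16):  # batch size 16 is an assumption
        idx = perm[start:start + 16]
        batch = tokenizer([train_texts[i] for i in idx], padding="max_length",
                          truncation=True, max_length=256, return_tensors="pt")
        logits = model(**batch).logits
        loss = loss_fn(logits, train_labels[idx])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # After each epoch, evaluate macro F1 on the validation split and keep
    # the checkpoint with the best score (evaluation loop omitted).
```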
Commercial model
We also examine the Perspective API [7] model, developed by Jigsaw and Google's Counter Abuse Technology team as a tool for content moderation. For a given input text, Perspective provides a percentage score across various attributes such as "toxicity" and "insult". We use the "toxicity" score predicted by the model and convert the percentage scores to binary labels using a cutoff of 50%. We refer to the Perspective model as P.
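The label conversion is a simple threshold on the returned toxicity probability. Below is a sketch; `get_toxicity` is a hypothetical wrapper around the Perspective API client, since the exact client code is not part of the paper.

```python
def to_binary_label(toxicity_score, cutoff=0.5):
    """Map Perspective's toxicity probability (0-1) to a binary label."""
    return "hateful" if toxicity_score >= cutoff else "non-hateful"

def classify_with_perspective(text, get_toxicity):
    # `get_toxicity` is a hypothetical callable: text -> score in [0, 1],
    # e.g. a thin wrapper around a Perspective API analyze request.
    return to_binary_label(get_toxicity(text))
```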
Though it is not clear which model architecture P uses or which data it is trained on, the developers state that the model is "regularly updated" [8]. We evaluated P in December 2021.
Results
First, we present the overall accuracy of the three models on the chosen datasets. Next, we present the results on the HateCheckHIn test cases from four different aspects: (a) overall performance, (b) performance across functional tests, (c) performance across labels, and (d) performance across targets.
Overall performance
In Table 3 we show the overall performance in terms of accuracy. We observe that P outperforms the other two models for both the hate and the non-hate class.
Performance across functional tests
We evaluate model performance on HateCheckHIn using accuracy, i.e., the percentage of test cases correctly classified for each functionality. We report the performance on the monolingual functionalities in Table 4 and on the multilingual functionalities in Table 5. We highlight the best performance across models in boldface and highlight performance below a random-choice baseline, i.e., 50% for our binary task, in red. Evaluating the models across these functional tests reveals specific model weaknesses, computed as sketched after the list below.
• For the monolingual functionalities, we observe that the performance of H-21 is below 50% for 6 out of 28 functionalities; for the multilingual functionalities, it is below 50% for all 6. Among the monolingual functionalities, the model misclassifies most of the non-hateful cases when the target community name is present in a sentence (F18: 26.19% correct, F19: 27.33% correct) or when the functionality is related to counter speech (F20: 14.66% correct, F21: 15.55% correct). For the multilingual functionalities, the worst performance for this model is on MF1 and MF2, followed by MF6. Note that these numbers are far below what is observed for any monolingual functionality.
• For the monolingual functionalities, C-21 is below 50% accuracy for 9 out of 28 functionalities; for the multilingual functionalities, it is below 50% for 3 out of 6. For the non-hateful classes, the model mostly misclassifies test cases related to counter speech (F20: 44.66% correct, F21: 43.33% correct). For the multilingual functionalities, MF4 is the worst and records the lowest performance among all functionalities (monolingual + multilingual). Overall, the performance of this model is slightly better than H-21.
• P performs better than the other models, at least on the monolingual functionalities, where it is below 50% accuracy for only 3 out of 28 functionalities. However, for the multilingual functionalities the situation is no better: 5 out of 6 functionalities are below 50%, with MF2 recording the lowest accuracy of 9.37%.
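The per-functionality numbers in Tables 4 and 5 boil down to a grouped accuracy over gold labels and predictions; a pandas sketch of that aggregation is given below, with illustrative column names and values.

```python
import pandas as pd

# Illustrative schema: one row per test case with its functionality,
# gold label, and a model's prediction.
df = pd.DataFrame({
    "functionality": ["F1", "F1", "MF2", "MF2"],
    "gold":          ["H",  "H",  "H",   "H"],
    "pred":          ["H",  "NH", "NH",  "H"],
})

df["correct"] = df["gold"] == df["pred"]
per_functionality_acc = df.groupby("functionality")["correct"].mean() * 100
print(per_functionality_acc)  # % of test cases correctly classified per functionality
```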
Performance across labels
In Table 3 we report the accuracy values per class label, micro-averaged separately over the monolingual and multilingual functionalities. All the models exhibit low accuracy on the HateCheckHIn test cases. For the monolingual functionalities, P is relatively more accurate than H-21 and C-21. For the multilingual functionalities, although all the models perform quite poorly on the hateful posts, C-21 performs moderately better than the other models. Further, comparing with the overall accuracy values, we observe that for H-21 (almost) the entire drop in accuracy can be attributed to multilingual inputs (as confirmed by the performance on the MF1-6 test cases). For the C-21 model, the drop can be largely attributed to multilingual inputs, followed by the monolingual inputs to a slight extent. For the P model, once again the drop in performance is almost fully contributed by the multilingual inputs. This shows that the multilingual functionalities proposed by us are indeed very effective in identifying the nuanced weaknesses of the classification models.
Performance across target groups
HateCheckHIn can test whether models exhibit 'unintended biases' (Dixon et al., 2018) by comparing their performance on cases which target different groups. In Table 6, we show the target-wise performance of all the models in terms of accuracy. H-21 shows poor performance across most of the target groups; among these, it misclassifies test cases targeting Bangladeshis and eunuchs the most. C-21 performs relatively better than H-21 but still misclassifies test cases targeting eunuchs. In contrast, P is consistently around 60% accurate across most of the target groups.
Discussion
HateCheckHIn reveals critical functional weaknesses in all three models that we test. One of the main observations is that not all models fail on the same functionalities. For instance, among the monolingual functionalities, when the hatred takes the form of a 'direct threat' (F5), C-21 performs the worst; on the other hand, for the counter speech related functionalities (F20 and F21), H-21 performs the worst. This indicates that the models do not understand certain contexts when predicting posts as hateful or non-hateful, which may be due to the way the data was collected and sampled for annotation. P performs relatively better than the other two models on the monolingual functionalities, but its performance on the multilingual functionalities is not as good. We also notice that certain models are biased toward certain target communities. For instance, H-21 and C-21 classify hate directed against certain protected groups (e.g., eunuchs) less accurately than equivalent cases directed at other targets. To reduce the effect of such bias, various data augmentation strategies (Gardner et al., 2020) can be applied while training the model to achieve fair performance across all target communities. All models perform very poorly on the multilingual functionalities, although among them C-21 performs relatively better. Since people's choice of writing is no longer limited to a single language, there is a need to improve the performance of models on these functionalities. To improve multilingual hate speech detection models, one needs to collect more diverse datasets, and if needed a human-in-the-loop approach may be explored, where expert annotators synthetically generate datasets which can be used to fine-tune a model. Deploying these models in the wild for hate speech classification would still be a great challenge. Although these models are not expected to work perfectly given the nature of the problem, certain kinds of errors are not acceptable in the wild. Counter narratives are becoming popular as a way to reduce the spread of hate speech; if a model misclassifies them as hateful and decisions are made on that basis, injustice would be done to these counter speech users.
Conclusion
In this paper, we introduced a set of multilingual functionalities. By combining the existing monolingual and our multilingual functionalities, we presented HateCheckHIn, a suite of functional tests for Hindi hate speech detection models. HateCheckHIn has 34 functionalities, out of which 28 are monolingual, taken from Röttger et al. (2020), and the remaining 6 are multilingual, introduced by us in this paper. We use Hindi as a base language to craft all the test cases, but these multilingual functionalities can easily be generalised to craft test cases for other (Indic) languages as well, to detect potential weaknesses in multilingual hate speech detection models. In particular, we observed that all models work very poorly on multilingual test cases. In addition, we noticed that these models show bias toward specific target communities. We hope that our new multilingual functionalities will help strengthen hate speech detection models by exposing their weaknesses. In future work we would like to extend this work to other languages.
Figure 1: Example of a hate speech post against Muslims (taken from Twitter).
Table 1: Multilingual and code-mixed variants of a typical hateful post.
Actual: मुस्लिम खतरनाक होते है
Gloss: Muslims are dangerous
Variant 1: Muslim khatarnak hote hai.
Variant 2: Muslim dangerous hote hai.
Variant 3: मुस्लिम dangerous होते है
Table 2: Summary of the six multilingual functionalities.

Generating test cases
For each functionality in HateCheckHIn, we handcraft sets of test cases manually. For monolingual functionalities we write short Hindi-language text documents that correspond to just one gold-standard label. For the multilingual functionalities, we use Hindi represented using English characters/words to generate various forms of gold-standard test cases. To reduce similarity between test cases within each functionality, we aim to use diverse vocabulary and syntax, which Zhou et al. (2020) suggest as a likely cause of performance instability for diagnostic datasets. To generate test cases at scale, we use templates in which we replace tokens for protected group identifiers. First, each template is given an identity placeholder, e.g., "मुझे [IDENTITY] से नफरत है." (I hate [IDENTITY]), or a slur placeholder, "तुम मेरे लिए एक [SLUR] जैसी हो." (You are just a [SLUR] to me). Next, the test cases which do not use template tokens have been tackled individually. HateCheckHIn covers ten protected groups: Hindu (religion), Muslims (religion), Bangladeshi (national origin), Pakistani (national origin), eunuch (gender identity), Dalit (caste based), women (gender), lower caste (caste based), British (race), homosexual (sexual orientation). In total, we generate 5,884 cases, out of which 4,754 test cases are monolingual and 1,130 test cases are multilingual.
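Template expansion of this kind is a small cross-product over placeholders; the sketch below illustrates it with two of the paper's templates and a subset of identity terms (the Hindi identity surface forms used here are our illustrative choices, not the paper's exact list).

```python
from itertools import product

# Two templates from the paper; [IDENTITY] is replaced per protected group.
templates = [
    ("मुझे [IDENTITY] से नफरत है.", "hateful"),       # "I hate [IDENTITY]"
    ("मुझे [IDENTITY] से प्यार हैं.", "non-hateful"),  # "I love [IDENTITY]"
]

# Illustrative identity terms (assumed surface forms).
identities = ["हिंदुओं", "मुसलमानों", "महिलाओं"]

test_cases = [
    (template.replace("[IDENTITY]", identity), label)
    for (template, label), identity in product(templates, identities)
]
for text, label in test_cases:
    print(label, "\t", text)
```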
Table 3: Label-wise micro-averaged accuracy across the monolingual and multilingual test cases. H: hateful, NH: non-hateful, n: number of data points.
Table 4: Accuracy across HateCheckHIn's 28 monolingual functionalities. Best performance on each functional test is in boldface. Performance below 50% is highlighted in red. H: hateful, NH: non-hateful, n: number of data points.

Functionality | Example test case | Label | n | H-21 | C-21 | P
F1: Expression of strong negative emotions (explicit) | मुझे [IDENTITY] से नफरत है. | H | 170 | 73.52 | 53.52 | 71.17
F2: Description using very negative attributes (explicit) | सभी [IDENTITY] प्रतिकारक होते हैं. | H | 190 | 67.36 | 70.52 | 70.52
F3: Dehumanisation (explicit) | [IDENTITY] कचरा है. | H | 180 | 72.22 | 55.55 | 83.33
F4: Implicit derogation | [IDENTITY] को चिड़ियाघर में रहना चाहिए. | H | 200 | 66.00 | 43.00 | 55.00
F5: Direct threat | मैं सभी [IDENTITY] को मारना चाहता हूँ. | H | 190 | 69.47 | 26.84 | 93.15
F6: Threat as normative statement | सभी [IDENTITY] को मार देना चाहिए. | H | 200 | 78.00 | 50.50 | 92.50
F7: Hate expressed using slur | तुम मेरे लिए एक रंडी जैसी हो. | H | 148 | 36.48 | 34.45 | 31.75
F9: Reclaimed slurs | हम आदिवासी अपने हक्क के लिए लड़ते रहेंगे. | NH | 60 | 55.00 | 66.66 | 100.0
F10: Hate expressed using profanity | तुम चूतिये [IDENTITY] को क्या दिक्कत हैं? | H | 200 | 77.50 | 65.00 | 90.00
F11: Non-hateful use of profanity | वाह बहनचोद! क्या दिन था. | NH | 90 | 68.88 | 77.77 | 50.00
F12: Hate expressed through reference in subsequent clauses | मैं काफी [IDENTITY] से मिला हूँ और मैं उनमें से हर एक से नफरत करता हूँ. | H | 200 | 70.50 | 54.50 | 82.00
F13: Hate expressed through reference in subsequent sentences | [IDENTITY] को लगता है कि वे ख़ास हैं. मुझे उनसे नफ़रत है. | H | 200 | 65.50 | 64.50 | 70.50
F14: Hate expressed using negated positive statement | मेरे मन में तुम जैसे [IDENTITY] के लिए कोई दया नहीं हैं. | H | 200 | 67.00 | 60.00 | 67.00
F15: Non-hate expressed using negated hateful statement | मुझे स्पष्ट करने दो: मैं [IDENTITY] से नफरत नहीं करता. | NH | 250 | 19.60 | 37.60 | 86.40
F16: Hate phrased as a question | क्या यह स्पष्ट नहीं है कि मुझे [IDENTITY] से नफरत है? | H | 200 | 81.00 | 72.00 | 78.50
F17: Hate phrased as an opinion | मुझे सभी [IDENTITY] बेकार लगते हैं. | H | 200 | 63.00 | 36.50 | 91.50
F18: Neutral statements using protected group identifiers | मैं दो [IDENTITY] के साथ रहता हूँ. | NH | 210 | 26.19 | 81.42 | 96.66
F19: Positive statements using protected group identifiers | मुझे [IDENTITY] से प्यार हैं. | NH | 300 | 27.33 | 69.00 | 96.33
F20: Denouncements of hate that quote it | यदि आप कहते हैं "मैं [IDENTITY] से नफरत करता हूँ", तो आप मूर्ख हैं! | NH | 150 | 14.66 | 44.66 | 30.00
F21: Denouncements of hate that make direct reference to it | यह कहना कि आप [IDENTITY] से नफरत करते हैं, यह दर्शाता है कि आप कितने गुमराह हैं। | NH | 180 | 15.55 | 43.33 | 58.88
F22: Abuse targeted at objects | मुझे मानगो से नफरत है. | NH | 50 | 80.00 | 94.00 | 92.00
F23: Abuse targeted at individuals (not as member of a prot. group) | मैं अपराधियों की निंदा करता हूँ. | NH | 51 | 80.39 | 58.82 | 60.78
F24: Abuse targeted at nonprotected groups (e.g. professions) | मैं आर्टिस्ट की निंदा करता हूँ. | NH | 75 | 78.66 | 96.00 | 62.66
F25: Swaps of adjacent characters | मुझे [IDENTITY] से नरफत है. | H | 170 | 73.52 | 44.11 | 43.52
F26: Missing characters | सभी [IDENTITY] प्रतिकक होते हैं. | H | 190 | 65.26 | 67.89 | 42.10
F27: Missing word boundaries | [IDENTITY] कचराहै. | H | 190 | 58.94 | 48.94 | 65.26
F28: Added spaces between chars | मुझे [IDENTITY] से न फ र त है. | H | 160 | 77.50 | 55.62 | 56.25
F29: Leet speak spellings | मुझे [IDENTITY] से नफ्2त है. | H | 150 | 76.66 | 53.33 | 50.66
Table 5: Accuracy across HateCheckHIn's 6 multilingual functionalities. Best performance on each functional test is in boldface. Performance below 50% is highlighted in red. H: hateful, n: number of data points.
Table 6: Target-wise performance on the generated test cases.
Footnotes:
[1] https://github.com/paul-rottger/hatecheck-data
[3] https://www.berlitz.com/en-uy/blog/most-spoken-languages-world
[4] https://www.worldatlas.com/articles/hindi-speaking-countries.html
[5] We remove one functionality because it was not a realistic scenario for Hindi.
[6] We found 43 such cases and removed them from our dataset.
[7] https://www.perspectiveapi.com/
[8] https://support.perspectiveapi.com/s/about-the-api-faqs
| [
"https://github.com/paul-rottger/hatecheck-data"
] |
[
"A Comprehensive Study of Deep Bidirectional LSTM RNNs for Acoustic Modeling in Speech Recognition",
"A Comprehensive Study of Deep Bidirectional LSTM RNNs for Acoustic Modeling in Speech Recognition"
] | [
"Albert Zeyer zeyer@cs.rwth-aachen.de \nHuman Language Technology and Pattern Recognition\nComputer Science Department\nRWTH Aachen University\n52062AachenGermany\n",
"Patrick Doetsch doetsch@cs.rwth-aachen.de \nHuman Language Technology and Pattern Recognition\nComputer Science Department\nRWTH Aachen University\n52062AachenGermany\n",
"Paul Voigtlaender voigtlaender@cs.rwth-aachen.de \nHuman Language Technology and Pattern Recognition\nComputer Science Department\nRWTH Aachen University\n52062AachenGermany\n",
"Ralf Schlüter schlueter@cs.rwth-aachen.de \nHuman Language Technology and Pattern Recognition\nComputer Science Department\nRWTH Aachen University\n52062AachenGermany\n",
"Hermann Ney ney@cs.rwth-aachen.de \nHuman Language Technology and Pattern Recognition\nComputer Science Department\nRWTH Aachen University\n52062AachenGermany\n"
] | [
"Human Language Technology and Pattern Recognition\nComputer Science Department\nRWTH Aachen University\n52062AachenGermany",
"Human Language Technology and Pattern Recognition\nComputer Science Department\nRWTH Aachen University\n52062AachenGermany",
"Human Language Technology and Pattern Recognition\nComputer Science Department\nRWTH Aachen University\n52062AachenGermany",
"Human Language Technology and Pattern Recognition\nComputer Science Department\nRWTH Aachen University\n52062AachenGermany",
"Human Language Technology and Pattern Recognition\nComputer Science Department\nRWTH Aachen University\n52062AachenGermany"
] | [] | We present a comprehensive study of deep bidirectional long short-term memory (LSTM) recurrent neural network (RNN) based acoustic models for automatic speech recognition (ASR). We study the effect of size and depth and train models of up to 8 layers. We investigate the training aspect and study different variants of optimization methods, batching, truncated backpropagation, different regularization techniques such as dropout and L2 regularization, and different gradient clipping variants.The major part of the experimental analysis was performed on the Quaero corpus. Additional experiments also were performed on the Switchboard corpus. Our best LSTM model has a relative improvement in word error rate of over 14% compared to our best feed-forward neural network (FFNN) baseline on the Quaero task. On this task, we get our best result with an 8 layer bidirectional LSTM and we show that a pretraining scheme with layer-wise construction helps for deep LSTMs.Finally we compare the training calculation time of many of the presented experiments in relation with recognition performance.All the experiments were done with RETURNN, the RWTH extensible training framework for universal recurrent neural networks in combination with RASR, the RWTH ASR toolkit. | 10.1109/icassp.2017.7952599 | [
"https://arxiv.org/pdf/1606.06871v1.pdf"
] | 1,725,527 | 1606.06871 | c2de123e13008bd2219e94bb660121d3999ddd19 |
A Comprehensive Study of Deep Bidirectional LSTM RNNs for Acoustic Modeling in Speech Recognition
Albert Zeyer zeyer@cs.rwth-aachen.de
Human Language Technology and Pattern Recognition
Computer Science Department
RWTH Aachen University
52062AachenGermany
Patrick Doetsch doetsch@cs.rwth-aachen.de
Human Language Technology and Pattern Recognition
Computer Science Department
RWTH Aachen University
52062AachenGermany
Paul Voigtlaender voigtlaender@cs.rwth-aachen.de
Human Language Technology and Pattern Recognition
Computer Science Department
RWTH Aachen University
52062AachenGermany
Ralf Schlüter schlueter@cs.rwth-aachen.de
Human Language Technology and Pattern Recognition
Computer Science Department
RWTH Aachen University
52062AachenGermany
Hermann Ney ney@cs.rwth-aachen.de
Human Language Technology and Pattern Recognition
Computer Science Department
RWTH Aachen University
52062AachenGermany
A Comprehensive Study of Deep Bidirectional LSTM RNNs for Acoustic Modeling in Speech Recognition
Index Terms: acoustic modeling, LSTM, RNN
We present a comprehensive study of deep bidirectional long short-term memory (LSTM) recurrent neural network (RNN) based acoustic models for automatic speech recognition (ASR). We study the effect of size and depth and train models of up to 8 layers. We investigate the training aspect and study different variants of optimization methods, batching, truncated backpropagation, different regularization techniques such as dropout and L2 regularization, and different gradient clipping variants.The major part of the experimental analysis was performed on the Quaero corpus. Additional experiments also were performed on the Switchboard corpus. Our best LSTM model has a relative improvement in word error rate of over 14% compared to our best feed-forward neural network (FFNN) baseline on the Quaero task. On this task, we get our best result with an 8 layer bidirectional LSTM and we show that a pretraining scheme with layer-wise construction helps for deep LSTMs.Finally we compare the training calculation time of many of the presented experiments in relation with recognition performance.All the experiments were done with RETURNN, the RWTH extensible training framework for universal recurrent neural networks in combination with RASR, the RWTH ASR toolkit.
Introduction
Deep neural networks (DNN) yield state-of-the-art performance in classification in many machine learning tasks [1]. The class of recurrent neural networks (RNN) and especially long shortterm memory (LSTM) networks [2] perform very well when dealing with temporal sequences as in speech.
Only recently, it has been shown that LSTM based acoustic models (AM) outperform FFNNs on large vocabulary continuous speech recognition (LVCSR) [3,4].
There are many aspects to be considered for training LSTMs which we are exploring in this work. Our experiments show that there is a huge variance in recognition performance depending on all the different aspects. Compared to our best FFNN baseline, we get a relative improvement in word error rate (WER) of over 14%. We also train deep LSTM networks with up to 8 layers for acoustic modeling and we discovered that a pretraining scheme can improve performance for deeper LSTMs. We are not aware of any previous work which applied pretraining for LSTMs in ASR.
Related work
Hybrid RNN-HMM models were developed in 1994 in [5]. One early hybrid LSTM-HMM was presented in [6] for TIMIT. [3,4,7,8,9,10,11,12] investigate various bidirectional and unidirectional LSTM topologies with optional projection, in some cases combined with convolutional or feed-forward layers, for acoustic modeling in ASR.
Some LSTM related models like the Gated Recurrent Unit (GRU) were studied in [13,14,15,16].
LSTM Model and Implementation
We use the standard LSTM model without peephole connections [17].
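For reference, and using our own notation rather than anything specific from [17], the peephole-free LSTM computes, per input frame x_t:

```latex
\begin{aligned}
i_t &= \sigma(W_i x_t + R_i h_{t-1} + b_i) &&\text{(input gate)}\\
f_t &= \sigma(W_f x_t + R_f h_{t-1} + b_f) &&\text{(forget gate)}\\
o_t &= \sigma(W_o x_t + R_o h_{t-1} + b_o) &&\text{(output gate)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tanh(W_c x_t + R_c h_{t-1} + b_c)\\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
```

where σ is the logistic sigmoid and ⊙ the elementwise product.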
Our base tool is the RASR speech recognition toolkit [18,19]. We use RASR for the feature extraction pipeline and for decoding.
We extended RASR with a Python bridge to allow many kinds of interactions with external tools. This Python bridge was introduced to be able to use RETURNN, our Theano-based framework [20,21], for the training and, during recognition, the forwarding of our acoustic model.
In RETURNN, we have multiple LSTM implementations and it supports all the aspects which we discuss in this paper. One particular LSTM implementation is supported by a custom CUDA kernel which gives us great speed improvements. We provide more details about this software in [20].
Comparisons
Corpus
We use a subset of 50 hours from the Quaero Broadcast Conversational English Speech database train11 [22].
The development eval10 and evaluation eval11 sets consist of about 3.5 hours of speech each. The recognition is performed using a 4-gram language model.
Baseline
We use the common NN-HMM hybrid acoustic model [23]. All acoustic models were trained frame-wise with the cross entropy criterion based on a fixed Viterbi alignment. We do not investigate discriminative sequence training in this study. The input features are 50-dimensional VTLN-normalized Gammatone features [24]. We do not add any context window or delta frames. We use a Classification And Regression Tree (CART) with 4501 labels. Our lexicon also contains special residual phoneme types, which are used in transcriptions for unknown or unintelligible parts. We remove all frames which are aligned to such phonemes according to our fixed Viterbi alignment. This means that we have only 4498 output class labels in our softmax layer, and in recognition we never hypothesize such phonemes.
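A minimal sketch of this frame filtering (assuming numpy arrays; the helper name is ours):

```python
import numpy as np

def remove_residual_frames(features, alignment, residual_labels):
    """Drop all frames whose fixed Viterbi alignment points to one of the
    special residual phonemes, so they never appear as training targets."""
    keep = np.array([lab not in residual_labels for lab in alignment])
    return features[keep], np.asarray(alignment)[keep]
```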
Our FFNN baseline with 9x2000 layers and ReLU activation function yields 15.3% WER on eval10 and 20.3% WER on eval11.
Our minibatch construction is similar to e.g. [7] and described in detail in [20]. One minibatch consists of n_chunks chunks from one or more corpus segments. The chunks are up to T frames long, and we select them every t_step frames from the corpus. Our common settings are T = 50, t_step = 25, n_chunks = 40, i.e. a minibatch size of 2000 frames.
Note that our learning rate is not normalized in any way. Other software sometimes normalizes with the total number of frames \sum_i T_i or with T · n_chunks, so that the update step stays the same for every minibatch no matter how n_chunks or T are changed. In our case, however, the total update scale per epoch stays the same no matter how one changes n_chunks or T. Only t_step has an impact on the total update scale.
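A minimal sketch of this chunking (our own helper, not the RETURNN API):

```python
def make_chunks(segments, T=50, t_step=25):
    """Cut each corpus segment into overlapping chunks of up to T frames,
    starting a new chunk every t_step frames."""
    chunks = []
    for seg in segments:                      # seg: sequence of feature frames
        if len(seg) == 0:
            continue
        for start in range(0, max(len(seg) - t_step, 0) + 1, t_step):
            chunks.append(seg[start:start + T])
    return chunks

# A minibatch then stacks n_chunks = 40 such chunks, zero-padded to the
# longest chunk in the batch, i.e. up to 40 * 50 = 2000 frames.
```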
For all experiments, we train 30 epochs. We have a small separate cross validation (CV) set on which we measure the frame error rate (FER) and the cross entropy (CE). We evaluate on eval10 and eval11 with the models from epochs 5, 10 and 30, from the epoch with the best CV FER, and from the epoch with the best CV CE. In the results in our tables, we select the epoch with the best WER on eval10 and also state that epoch. This can give a hint about the convergence speed or whether we overfit later.
Independently of the optimization method, which might already provide some kind of implicit learning rate scheduling, we always use an additional explicit learning rate scheduling method, often called Newbob. We start with some given initial learning rate, and when the relative improvement of the CV CE after an epoch is less than 0.01, we multiply the learning rate by 0.5 for the next epoch.
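As a minimal sketch (our own function, not the RETURNN API), the Newbob rule reads:

```python
def newbob_update(lr, prev_cv_ce, cur_cv_ce, factor=0.5, threshold=0.01):
    """Halve the learning rate once the relative CV cross-entropy
    improvement after an epoch falls below the threshold."""
    rel_improvement = (prev_cv_ce - cur_cv_ce) / prev_cv_ce
    return lr * factor if rel_improvement < threshold else lr
```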
Our standard optimization method is most often Adam with an initial learning rate of 10^-3. We use gradient clipping of 10 by default.
Number of Layers
We did several experiments to figure out the optimal number of layers. In theory, more layers should not hurt but in practice, they often do because the optimization problem becomes harder. This could be overcome with clever initializations, skip connections, highway network like structures [25,12] or deep residual learning [26]. We did some initial experiments also in that direction but we were not successful so far. The existing work in that direction is also mostly for deep FFNNs and not for deep RNNs except for [12].
The results can be seen in Table 1. For this experiment, the optimum is somewhere between 4 and 6 layers. In earlier experiments, the optimum was at about 3 to 4 layers. It seems the more we improve other hyperparameters, the deeper the optimal network becomes.
With pretraining as in Section 4.9, we get our overall best result as described in Section 4.10 with 8 layers, however we did not investigate the same setting for different number of layers.
We also included the best CE values on the train dataset and the CV dataset in Table 1. This gives a hint about the amount of overfitting. We observe similar results as in [26], i.e. deeper networks should in theory overfit even more, but they do not, which is probably due to the harder optimization problem. It also seems as if the CV CE optimum is slightly deeper than the WER optimum. That indicates that sequence discriminative training will further improve the results.
Layer Size
In most experiments, we use a hidden layer size of 500 (i.e. 500 nodes each for the forward and the backward direction). In Table 2 we compare different layer sizes. Note that the number of parameters increases quadratically with the layer size. We see that the optimum for this experiment is at about 700; however, a model of size 500 is much smaller and not much worse, so we used that size for most other experiments.
Topology: Bidirectional vs. Unidirectional
Our original experiment showed a large WER degradation (over 20% relative) with unidirectional LSTM networks compared to bidirectional ones. This degradation led to further research in which we investigated how to use bidirectional RNNs/LSTMs on a continuous input sequence for online recognition. We showed that this is possible and that, with some recognition delay, we can reach the original WER. These results are described in [27].
Batching
We investigated the effect of different numbers of chunks n_chunks, window time steps t_step and window maximum sizes T, resulting in the overall batch size T · n_chunks. All experiments were done with the same initial learning rate.
We did many experiments with varying n_chunks ∈ {20, . . . , 80} and got the best results with n_chunks ≈ 40. For some experiments, performance was notably better with n_chunks = 40 than with n_chunks = 20. This might be because of the higher variance within a minibatch and thus a more stable gradient. Note that a higher n_chunks is usually also faster up to a certain point, because the GPU can work on all chunks in parallel.
We usually use T = 50. We did many experiments with fixed T − t_step = 25, but we often see a slight degradation when T ≥ 100. This might be because the optimization problem becomes harder due to the longer backpropagation through time, but possibly the learning rate or other parameters need more tuning for longer chunks.
Varying t_step did not make much difference, except that for smaller t_step the training time per epoch naturally becomes higher, because we see some of the data more often.
Optimization Methods
We compare many optimization methods, hyperparameter variations, and especially different initial learning rates in Table 3. We compare stochastic gradient descent (SGD); SGD with momentum [28,29], where one variant depends only on the last minibatch (mom) and another on the full history (mom2); SGD with Nesterov momentum [30,29]; mean-normalized SGD (MNSGD) [31]; Adadelta [32]; Adagrad [33]; Adam and Adamax [34]; Adam without the learning rate decay term; Nadam (Adam with incorporated Nesterov momentum) [35]; Adam with gradient noise [36]; and Adam combined with MNSGD. We also tried Adasecant [37], but it did not converge in any of our experiments for this ASR task. We also test the effect of Newbob.
One notable variant was to use several model copies n, which we update independently and merge by averaging after every k minibatch updates (upd-mm-n-k in Table 3). We vary the number of model copies and the number of batches between merges. This is similar to the multi-GPU training behavior described in [20]. This method yielded the best result in these experiments, but we leave it for future research.
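A minimal PyTorch-style sketch of this merging step (our own naming; the actual implementation lives in RETURNN):

```python
import copy
import torch

def average_model_copies(models):
    """Average the parameters of n model copies that were updated
    independently for k minibatches (sketch of upd-mm-n-k)."""
    avg = copy.deepcopy(models[0])
    with torch.no_grad():
        for params in zip(avg.parameters(), *(m.parameters() for m in models)):
            p_avg, rest = params[0], params[1:]
            p_avg.copy_(torch.stack([p.detach() for p in rest]).mean(dim=0))
    return avg

# Every k minibatches, the averaged parameters would then be copied back
# into all n copies and independent training continues.
```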
Overall, Adam was always a good choice. Standard SGD comes close in some experiments but converges slower. Newbob was also important. Note that Newbob also has some hyperparameters and tuning those will likely yield further improvements.
We also investigated the effect of different gradient clipping variants:
(grad clip) We can clip the total gradient for all parameters.
(upd clip) We can clip the update value for each parameter. Except for SGD, this is different than grad clip.
(err clip) We can clip the error signal in backpropagation after each time-frame.
(grad clip z) We can clip the error signal right after the loss function. This has the effect that the error signal of frames which are strongly misclassified is limited. This can be interpreted as some kind of curriculum learning where the network itself characterizes the difficulty on frame level.
(grad discard z) We can expand on that idea and discard the error signal when it is higher than some threshold.
For method grad clip, we found a value of 10 to be both stable and good enough. No clipping yields the best performance in many cases, but in a few cases training does not even converge. A value of 2 is noticeably worse. These values are invariant to the learning rate. Method upd clip is applied after we multiply with the learning rate. A value of 0.1 gives a slight improvement, but we did not further investigate this. Method err clip, although a common method, was also not investigated in this paper.
These methods are again invariant to the learning rate. Method grad clip z gives slight improvements with values 1 and 10, while value 0.1 did not converge. Method grad discard z yielded the same WER for value 1, but we did not investigate it further.
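As a PyTorch-style sketch (the paper's implementation is Theano-based and the names here are ours), three of these variants can be written as:

```python
import torch
import torch.nn.functional as F

def clip_total_gradient(model, max_norm=10.0):
    # (grad clip) clip the norm of the total gradient over all parameters
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)

def loss_with_clipped_error(logits, targets, z=1.0):
    # (grad clip z) clamp the error signal right after the loss, so that
    # strongly misclassified frames contribute only a bounded gradient
    logits.register_hook(lambda grad: grad.clamp(-z, z))
    return F.cross_entropy(logits, targets)

def clipped_update(param, lr, max_update=0.1):
    # (upd clip) clip the per-parameter update value after multiplying the
    # gradient with the learning rate (differs from grad clip except for SGD)
    with torch.no_grad():
        param -= (lr * param.grad).clamp_(-max_update, max_update)
```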
Regularization Methods
We did many dropout experiments in which we drop some nodes in forward connections. We mostly see the optimal WER with dropout 0.1, i.e. we drop 10% of the activations and multiply the remaining ones by 10/9. If we enlarge the hidden layer size, we can use higher dropout values, although in most experiments dropout 0.2 was worse than dropout 0.1.
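A minimal sketch of this inverted-dropout scaling (our own helper):

```python
import torch

def inverted_dropout(x, p=0.1, training=True):
    """Drop a fraction p of the activations and rescale the survivors by
    1/(1-p), i.e. by 10/9 for p = 0.1, so that the expected activation
    matches the test-time network without dropout."""
    if not training:
        return x
    mask = (torch.rand_like(x) >= p).to(x.dtype)
    return x * mask / (1.0 - p)
```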
In initial experiments, dropout was always better than L2 regularization. However, some later experiments showed that L2 can also work as an alternative. Interestingly, the combination of both gives a big improvement and yields the best result; see Table 4.
Initialization and Pretraining
In all cases, we randomly initialize the parameters similarly to what is suggested in [38].
We investigated the same pretraining scheme as we use for our FFNN, where we start with one layer and add a layer after each epoch. We can then either train only the new layer (greedily) or the full network; full network training was usually better. In our initial experiments with less optimal hyperparameters and not so deep networks, this pretraining scheme performed worse than no pretraining. In a later experiment with 5 layers, dropout + L2 and Adam, we get a slight improvement from 13.6% WER to 13.5% WER.
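A minimal PyTorch-style sketch of this layer-wise construction (helper names and the PyTorch framing are ours; the actual implementation is in RETURNN):

```python
import torch
import torch.nn.functional as F

def train_one_epoch(lstm, head, batches, lr=1e-3):
    opt = torch.optim.Adam(list(lstm.parameters()) + list(head.parameters()), lr=lr)
    for x, y in batches:                  # x: (T, B, feat), y: (T*B,) frame labels
        out, _ = lstm(x)
        loss = F.cross_entropy(head(out).flatten(0, 1), y)
        opt.zero_grad(); loss.backward(); opt.step()

def layerwise_pretraining(batches, feat=50, hidden=500, classes=4498,
                          max_layers=8, epochs=30):
    """Start with one bidirectional LSTM layer, add one after each epoch,
    and always train the full network rather than only the new layer."""
    lstm = torch.nn.LSTM(feat, hidden, num_layers=1, bidirectional=True)
    head = torch.nn.Linear(2 * hidden, classes)
    for _ in range(epochs):
        train_one_epoch(lstm, head, batches)
        if lstm.num_layers < max_layers:
            grown = torch.nn.LSTM(feat, hidden, num_layers=lstm.num_layers + 1,
                                  bidirectional=True)
            grown.load_state_dict(lstm.state_dict(), strict=False)  # keep trained layers
            lstm = grown
```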
For deeper networks, this scheme seems to help more. This indicates that our initialization might have room for improvement. We got our overall best result with such a pretraining scheme applied to an 8 layer bidirectional LSTM.
Also, the training calculation time of the first few epochs is shorter.
Overall Best Model
So far we have analyzed many aspects in isolation. When different methods and aspects are combined, we often do not see the sum of the improvements obtained for each individual method. We tried many combinations. Our overall best model is an 8 layer bidirectional LSTM with 500 nodes, dropout 0.1 and L2 10^-2, n_chunks = 40, T = 50, gradient noise 0.3, Nadam, no gradient clipping, and the pretraining scheme described in Section 4.9. This gives us a WER of 13.1% on eval10 and 17.6% on eval11. We think that the pretraining was the most important aspect of this experiment, as the earlier results in Section 4.3 did not give good results for such deep networks.
Calculation Time vs. WER
We did over 250 different training experiments and collected a lot of statistics about the calculation time in relation to the WER.
Most experiments were done with a GeForce GTX 980. We observe that the Tesla K20c is about 1.38 times slower (standard deviation 0.084) and the GeForce GTX 680 about 1.86 times slower (standard deviation 0.764). We report the pure training time per epoch on a GeForce GTX 980, not counting the CV evaluation and other epoch preparation.
We collected some of the total times in Table 5, i.e. the summed training time over all epochs up to the given epoch. For each time budget, we show the model with the best WER reached within that time. In most cases, combinations of different hyperparameters and methods yield the best results. Time downsampling was a simple way to reduce the calculation time, trading off some performance.
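If "time downsampling" in Table 5 denotes simple frame skipping, which is an assumption on our side, it amounts to:

```python
def time_downsample(frames, factor=2):
    # keep every factor-th frame, shortening the sequence (and thus the
    # training time) at some cost in accuracy, cf. the 3x300 row in Table 5
    return frames[::factor]
```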
Experiments on other Corpora
We use the 300h Switchboard-1 Release 2 (LDC97S62) corpus for training, and the Hub5'00 evaluation data (LDC2002S09) is used for testing. We use a 4-gram language model which was trained on the transcripts of the acoustic training data (3M running words) and the transcripts of the Fisher English corpora (LDC2004T19 & LDC2005T19) with 22M running words. More details can be found in [39]. A good FFNN baseline yields a total WER of 19.1% (13.1% WER on SWB, 25.6% WER on CH). We trained a 5 layer bidir. LSTM with Nadam, gradient noise, dropout + L2 and get a total WER of 17.1% (11.9% WER on SWB, 22.3% WER on CH). When we add one more layer and then retrain, we get a total WER of 16.7% (11.5% WER on SWB, 21.9% WER on CH). When we add an associative LSTM [40] layer instead, we get a total WER of 16.3% (11.1% on SWB, 21.6% on CH).
We also did a few experiments on Babel Javanese full language pack (IARPA-babel402b-v1.0b) which is a keyword-search (KWS) task (see [41] for all details). The baseline FFNN with 6 layers and 34M parameters yields a WER of 54.3% with CE-training and 53.3% with MPE-training. A 3 layer bidir. LSTM with 19M parameters yields a WER of 52.8% with CE-training (without MPE-training yet).
Conclusions & Outlook
We outlined optimal LSTM hyperparameters such as the network depth and size, and various training, optimization and regularization methods. We show that individual improvements can be combined and add up. We showed that we can reproduce good results with these findings on several different corpora and tasks and obtain very good overall results, which beat our best FFNN on Quaero by over 14% relative.
We also demonstrated how to train deeper LSTM acoustic models with up to 8 layers which is more than what has been reported before in the literature. Important for this achievement was our introduction of pretraining for LSTMs.
Acknowledgements
We thank Zoltán Tüske for the baseline FFNN Switchboard experiment and Pavel Golik for the Babel baseline experiment.
Partially supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Defense U.S. Army Research Laboratory (DoD/ARL) contract no. W911NF-12-C-0012. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoD/ARL, or the U.S. Government.
This research was partially supported by Ford Motor Company.
Table 1: Comparison of the number of layers. Dropout 0.1 + L2, Adam, n_chunks = 40, WER on eval10. Note that the CE values are not necessarily from the same epoch as the WER but are the minimum over all epochs. Also, the train CE is accumulated during training, i.e. with dropout applied.

#layers | #params[M] | WER[%] | epoch | train CE | CV CE
1 | 6.7 | 17.6 | 30 | 1.72 | 1.64
2 | 12.7 | 14.6 | 16 | 1.25 | 1.39
3 | 18.7 | 14.0 | 30 | 1.17 | 1.32
4 | 24.7 | 13.5 | 15 | 1.16 | 1.29
5 | 30.7 | 13.6 | 30 | 1.17 | 1.28
6 | 36.7 | 13.5 | 30 | 1.22 | 1.28
7 | 42.7 | 13.8 | 30 | 1.24 | 1.28
8 | 48.7 | 14.2 | 19 | 1.29 | 1.31
Table 2: Comparison of hidden layer size. 3 layers, dropout 0.1, Adadelta, n_chunks = 20. WER reported on eval10.

layer size | #params[M] | WER[%] | epoch
300 | 7.9 | 15.8 | 30
500 | 18.7 | 15.1 | 15
700 | 34.0 | 14.7 | 15
1000 | 65.4 | 15.0 | 11
Table 3: Comparison of different optimization methods. 3 layers, hidden layer size 500, dropout 0.1, n_chunks = 20. WER5, WER10, bWER and ep denote the eval10 WER[%] after epoch 5, after epoch 10, the best WER[%], and the epoch of the best WER, respectively.

method | lr | details | WER5 | WER10 | bWER | ep
Adadelta | 0.5 | decay 0.90 | 20.2 | 15.7 | 15.3 | 13
Adadelta | 0.5 | decay 0.95 | 18.4 | 15.7 | 15.1 | 13
Adadelta | 0.5 | decay 0.99 | model broken | | |
Adadelta | 0.1 | decay 0.95 | 16.9 | 15.5 | 15.1 | 13
Adadelta | 10^-2 | decay 0.95 | 24.4 | 20.1 | 17.4 | 29
Adagrad | 10^-2 | - | 16.9 | 16.0 | 15.6 | 29
Adagrad | 10^-3 | - | model broken | | |
Adam | 10^-2 | - | model broken | | |
Adam | 10^-3 | - | 16.3 | 15.4 | 14.8 | 30
Adam | 10^-3 | no lr decay | 16.1 | 15.0 | 14.6 | 11
Adam | 10^-3 | Nadam | 16.1 | 14.8 | 14.7 | 30
Adam | 10^-3 | grad noise 0.3 | 16.2 | 15.0 | 14.6 | 16
Adam | 10^-3 | upd-mm-2-2 | 15.8 | 14.9 | 14.5 | 18
Adam | 10^-3 | upd-mm-3-2 | 15.8 | 14.5 | 14.3 | 30
Adam | 10^-3 | Adamax | 16.3 | 15.4 | 14.9 | 15
Adam | 0.5·10^-3 | - | 15.8 | 14.9 | 14.5 | 13
Adam | 10^-4 | - | 16.4 | 15.6 | 14.9 | 18
Adam | 10^-4 | Adamax | 21.0 | 18.6 | 16.6 | 30
Adam | 10^-4 | + MNSGD | 16.5 | 15.7 | 14.9 | 18
Adam | 10^-4 | no Newbob | 16.4 | 15.6 | 15.2 | 21
Adam | 10^-5 | no Newbob | 30.7 | 24.3 | 19.2 | 30
MNSGD | 10^-4 | avg 0.5 | 20.2 | 18.2 | 17.8 | 20
MNSGD | 10^-4 | avg 0.995 | 19.1 | 16.8 | 16.4 | 18
SGD | 10^-3 | - | 17.0 | 16.1 | 15.8 | 30
SGD | 10^-4 | - | 17.9 | 15.8 | 14.9 | 26
SGD | 10^-4 | mom 0.9 | 17.4 | 15.9 | 14.8 | 28
SGD | 10^-4 | mom2 0.9 | 16.7 | 16.3 | 15.9 | 19
SGD | 10^-4 | mom2 0.5 | 17.2 | 16.0 | 15.0 | 30
SGD | 10^-4 | Nesterov 0.9 | 16.9 | 16.1 | 15.8 | 16
SGD | 0.5·10^-4 | - | 19.7 | 17.1 | 15.4 | 30
SGD | 0.5·10^-4 | mom2 0.9 | 16.8 | 15.5 | 15.0 | 30
SGD | 10^-5 | - | 32.1 | 22.3 | 18.6 | 30
SGD | 10^-5 | then lr 10^-4 | 18.7 | 16.2 | 15.0 | 30
Table 4: Different combinations of dropout and L2. 3 layers, hidden size 500, n_chunks = 40, Adam. WER on eval10.

dropout | L2 | WER[%] | epoch
0 | 0 | 16.1 | 6
0 | 10^-2 | 14.8 | 11
0.1 | 0 | 14.8 | 19
0.1 | 10^-3 | 14.5 | 11
0.1 | 10^-2 | 14.0 | 30
0.1 | 10^-1 | 15.2 | 26
Table 5: Total times until we reach a certain eval10 WER in a certain train epoch. If not specified otherwise, we use just dropout.

time | WER[%] | ep | model size | details
2:18h | 20.0 | 5 | 1x500 | dropout + L2
2:36h | 17.2 | 5 | 3x300 | time downsampling
3:40h | 16.6 | 5 | 3x500 | -
14:47h | 13.9 | 13 | 3x500 | dropout + L2
23:15h | 13.6 | 15 | 4x500 | dropout + L2
35:36h | 13.2 | 18 | 5x500 | dropout + L2 + grad noise
52:24h | 13.1 | 22 | 8x500 | best as in Section 4.10
[1] J. Schmidhuber, "Deep learning in neural networks: An overview," Neural Networks, vol. 61, pp. 85-117, 2015.
[2] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735-1780, 1997.
[3] H. Sak, A. Senior, and F. Beaufays, "Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition," arXiv preprint arXiv:1402.1128, 2014.
[4] J. T. Geiger, Z. Zhang, F. Weninger, B. Schuller, and G. Rigoll, "Robust speech recognition using long short-term memory recurrent neural networks for hybrid acoustic modelling," in INTERSPEECH, 2014, pp. 631-635.
[5] A. J. Robinson, "An application of recurrent nets to phone probability estimation," IEEE Transactions on Neural Networks, vol. 5, no. 2, pp. 298-305, 1994.
[6] A. Graves, N. Jaitly, and A.-r. Mohamed, "Hybrid speech recognition with deep bidirectional LSTM," in IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), 2013, pp. 273-278.
[7] X. Li and X. Wu, "Constructing long short-term memory based deep recurrent neural networks for large vocabulary speech recognition," in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2015, pp. 4520-4524.
[8] X. Li and X. Wu, "Improving long short-term memory networks using maxout units for large vocabulary speech recognition," in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2015, pp. 4600-4604.
[9] T. N. Sainath, O. Vinyals, A. Senior, and H. Sak, "Convolutional, long short-term memory, fully connected deep neural networks," in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2015, pp. 4580-4584.
[10] W. Chan and I. Lane, "Deep recurrent neural networks for acoustic modelling," arXiv preprint arXiv:1504.01482, 2015.
[11] A. Senior, H. Sak, and I. Shafran, "Context dependent phone models for LSTM RNN acoustic modelling," in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2015, pp. 4585-4589.
[12] Y. Zhang, G. Chen, D. Yu, K. Yao, S. Khudanpur, and J. Glass, "Highway long short-term memory RNNs for distant speech recognition," arXiv preprint arXiv:1510.08983, 2015.
[13] J. Chung, Ç. Gülçehre, K. Cho, and Y. Bengio, "Empirical evaluation of gated recurrent neural networks on sequence modeling," CoRR, vol. abs/1412.3555, 2014.
[14] R. Jozefowicz, W. Zaremba, and I. Sutskever, "An empirical exploration of recurrent network architectures," in Proceedings of the 32nd International Conference on Machine Learning (ICML), 2015, pp. 2342-2350.
[15] K. Greff, R. K. Srivastava, J. Koutník, B. R. Steunebrink, and J. Schmidhuber, "LSTM: A search space odyssey," arXiv preprint arXiv:1503.04069, 2015.
[16] T. M. Breuel, "Benchmarking of LSTM networks," arXiv preprint arXiv:1508.02774, 2015.
[17] F. A. Gers, N. N. Schraudolph, and J. Schmidhuber, "Learning precise timing with LSTM recurrent networks," The Journal of Machine Learning Research, vol. 3, pp. 115-143, 2003.
[18] D. Rybach, S. Hahn, P. Lehnen, D. Nolden, M. Sundermeyer, Z. Tüske, S. Wiesler, R. Schlüter, and H. Ney, "RASR - the RWTH Aachen University open source speech recognition toolkit," in IEEE Automatic Speech Recognition and Understanding Workshop, Waikoloa, HI, USA, Dec. 2011.
[19] S. Wiesler, A. Richard, P. Golik, R. Schlüter, and H. Ney, "RASR/NN: The RWTH neural network toolkit for speech recognition," in IEEE International Conference on Acoustics, Speech, and Signal Processing, Florence, Italy, May 2014, pp. 3313-3317.
[20] P. Doetsch, A. Zeyer, P. Voigtlaender, I. Kulikov, R. Schlüter, and H. Ney, "RETURNN: The RWTH extensible training framework for universal recurrent neural networks," work in preparation, can be requested from the authors, 2016.
[21] F. Bastien, P. Lamblin, R. Pascanu, J. Bergstra, I. J. Goodfellow, A. Bergeron, N. Bouchard, and Y. Bengio, "Theano: new features and speed improvements," Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.
[22] M. Nußbaum-Thom, S. Wiesler, M. Sundermeyer, C. Plahl, S. Hahn, R. Schlüter, and H. Ney, "The RWTH 2009 Quaero ASR evaluation system for English and German," in Interspeech, Makuhari, Japan, Sep. 2010, pp. 1517-1520.
[23] H. A. Bourlard and N. Morgan, Connectionist speech recognition: a hybrid approach. Springer, 1994, vol. 247.
[24] R. Schlüter, L. Bezrukov, H. Wagner, and H. Ney, "Gammatone features and feature combination for large vocabulary speech recognition," in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), vol. 4, 2007, pp. IV-649-IV-652.
[25] R. K. Srivastava, K. Greff, and J. Schmidhuber, "Training very deep networks," in Advances in Neural Information Processing Systems, 2015, pp. 2368-2376.
[26] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," arXiv preprint arXiv:1512.03385, 2015.
[27] A. Zeyer, R. Schlüter, and H. Ney, "Towards online-recognition with deep bidirectional LSTM acoustic models," in Interspeech, 2016.
[28] B. T. Polyak, "Some methods of speeding up the convergence of iteration methods," USSR Computational Mathematics and Mathematical Physics, vol. 4, no. 5, pp. 1-17, 1964.
[29] I. Sutskever, J. Martens, G. Dahl, and G. Hinton, "On the importance of initialization and momentum in deep learning," in Proceedings of the 30th International Conference on Machine Learning (ICML), 2013, pp. 1139-1147.
[30] Y. Nesterov, "A method of solving a convex programming problem with convergence rate O(1/k^2)," in Soviet Mathematics Doklady, vol. 27, no. 2, 1983, pp. 372-376.
[31] S. Wiesler, A. Richard, R. Schlüter, and H. Ney, "Mean-normalized stochastic gradient for large-scale deep learning," in IEEE International Conference on Acoustics, Speech, and Signal Processing, Florence, Italy, May 2014, pp. 180-184.
[32] M. D. Zeiler, "Adadelta: An adaptive learning rate method," arXiv preprint arXiv:1212.5701, 2012.
[33] J. Duchi, E. Hazan, and Y. Singer, "Adaptive subgradient methods for online learning and stochastic optimization," The Journal of Machine Learning Research, vol. 12, pp. 2121-2159, 2011.
[34] D. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[35] T. Dozat, "Incorporating Nesterov momentum into Adam," Stanford University, Tech. Rep., 2015.
[36] A. Neelakantan, L. Vilnis, Q. V. Le, I. Sutskever, L. Kaiser, K. Kurach, and J. Martens, "Adding gradient noise improves learning for very deep networks," arXiv preprint arXiv:1511.06807, 2015.
[37] C. Gulcehre, M. Moczulski, and Y. Bengio, "Adasecant: robust adaptive secant method for stochastic gradient," arXiv preprint arXiv:1412.7419, 2014.
[38] X. Glorot and Y. Bengio, "Understanding the difficulty of training deep feedforward neural networks," in International Conference on Artificial Intelligence and Statistics, 2010, pp. 249-256.
[39] Z. Tüske, P. Golik, R. Schlüter, and H. Ney, "Speaker adaptive joint training of Gaussian mixture models and bottleneck features," in IEEE Automatic Speech Recognition and Understanding Workshop, Scottsdale, AZ, USA, Dec. 2015, pp. 596-603.
[40] I. Danihelka, G. Wayne, B. Uria, N. Kalchbrenner, and A. Graves, "Associative long short-term memory," arXiv preprint arXiv:1602.03032, 2016.
[41] P. Golik, Z. Tüske, R. Schlüter, and H. Ney, "Multilingual features based keyword search for very low-resource languages," in Interspeech, Dresden, Germany, Sep. 2015, pp. 1260-1264.
| [] |
[
"IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 1 Adversarial Reinforced Instruction Attacker for Robust Vision-Language Navigation",
"IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 1 Adversarial Reinforced Instruction Attacker for Robust Vision-Language Navigation"
] | [
"Bingqian Lin ",
"Yi Zhu ",
"Yanxin Long ",
"Xiaodan Liang ",
"Qixiang Ye ",
"Liang Lin ",
"\nCampus of Sun Yat-sen University\nShenzhenChina\n",
"\nLiang Lin is with Sun Yat-sen University are with University of Chinese Academy of Sciences (UCAS)\nGuangzhou, BeijingChina., China\n"
] | [
"Campus of Sun Yat-sen University\nShenzhenChina",
"Liang Lin is with Sun Yat-sen University are with University of Chinese Academy of Sciences (UCAS)\nGuangzhou, BeijingChina., China"
] | [] | Language instruction plays an essential role in the natural language grounded navigation tasks. However, navigators trained with limited human-annotated instructions may have difficulties in accurately capturing key information from the complicated instruction at different timesteps, leading to poor navigation performance. In this paper, we exploit to train a more robust navigator which is capable of dynamically extracting crucial factors from the long instruction, by using an adversarial attacking paradigm. Specifically, we propose a Dynamic Reinforced Instruction Attacker (DR-Attacker), which learns to mislead the navigator to move to the wrong target by destroying the most instructive information in instructions at different timesteps. By formulating the perturbation generation as a Markov Decision Process, DR-Attacker is optimized by the reinforcement learning algorithm to generate perturbed instructions sequentially during the navigation, according to a learnable attack score. Then, the perturbed instructions, which serve as hard samples, are used for improving the robustness of the navigator with an effective adversarial training strategy and an auxiliary self-supervised reasoning task. Experimental results on both Vision-and-Language Navigation (VLN) and Navigation from Dialog History (NDH) tasks show the superiority of our proposed method over state-of-the-art methods. Moreover, the visualization analysis shows the effectiveness of the proposed DR-Attacker, which can successfully attack crucial information in the instructions at different timesteps. Code is available at https://github.com/expectorlin/DR-Attacker.Index Terms-Vision-and-language navigation, adversarial attack, reinforcement learning, self-supervised learning ! | 10.1109/tpami.2021.3097435 | [
"https://arxiv.org/pdf/2107.11252v1.pdf"
] | 236,318,116 | 2107.11252 | ecca7bdff5d8a45ca293e13cbf8c310310c5fd47 |
Adversarial Reinforced Instruction Attacker for Robust Vision-Language Navigation
Bingqian Lin
Yi Zhu
Yanxin Long
Xiaodan Liang
Qixiang Ye
Liang Lin
Campus of Sun Yat-sen University
ShenzhenChina
Liang Lin is with Sun Yat-sen University are with University of Chinese Academy of Sciences (UCAS)
Guangzhou, BeijingChina., China
Adversarial Reinforced Instruction Attacker for Robust Vision-Language Navigation
Index Terms: Vision-and-language navigation, adversarial attack, reinforcement learning, self-supervised learning
Language instruction plays an essential role in the natural language grounded navigation tasks. However, navigators trained with limited human-annotated instructions may have difficulties in accurately capturing key information from the complicated instruction at different timesteps, leading to poor navigation performance. In this paper, we exploit to train a more robust navigator which is capable of dynamically extracting crucial factors from the long instruction, by using an adversarial attacking paradigm. Specifically, we propose a Dynamic Reinforced Instruction Attacker (DR-Attacker), which learns to mislead the navigator to move to the wrong target by destroying the most instructive information in instructions at different timesteps. By formulating the perturbation generation as a Markov Decision Process, DR-Attacker is optimized by the reinforcement learning algorithm to generate perturbed instructions sequentially during the navigation, according to a learnable attack score. Then, the perturbed instructions, which serve as hard samples, are used for improving the robustness of the navigator with an effective adversarial training strategy and an auxiliary self-supervised reasoning task. Experimental results on both Vision-and-Language Navigation (VLN) and Navigation from Dialog History (NDH) tasks show the superiority of our proposed method over state-of-the-art methods. Moreover, the visualization analysis shows the effectiveness of the proposed DR-Attacker, which can successfully attack crucial information in the instructions at different timesteps. Code is available at https://github.com/expectorlin/DR-Attacker.
A INTRODUCTION
Natural language grounded visual navigation tasks ask an embodied agent to navigate to a goal position following language instructions [1], [2], [3], [4], [5]. They have raised wide research interest in recent years, since an instruction-following navigation agent is flexible and practical in many real-world applications, such as personal assistants and in-home robots. To accomplish successful navigation, the agent needs to extract key information, e.g., visual objects, specific rooms, or navigation directions, from the long instruction according to the dynamic visual observation, to guide navigation at each timestep. However, due to the complexity and semantic ambiguity of natural language, it is hard for navigators to effectively learn cross-modality alignment and capture accurate semantic intentions from the instruction when trained on limited human-annotated instruction-path data.
Prior works mainly employ data augmentation to address data scarcity in navigation tasks [6], [7], [8]. [6] proposed a speaker-follower framework to generate augmented instructions for randomly sampled paths. However, generating a large number of whole instructions is costly and may not emphasize the most instructive information. [7] and [8] focus on creating challenging augmented paths and diverse visual scenes, while generating augmented instructions by directly employing the speaker model of [6]. Therefore, the enhancement of the instruction understanding ability of the navigator might also be limited. In recent years, there has been increasing attention to designing adversarial attacks for natural language processing (NLP) tasks to verify and improve the robustness of NLP models [9], [10], [11], [12]. Inspired by this, we consider the following question: Can we design adversarial attacks on the instruction to generate helpful adversarial samples for improving the robustness of the navigator? A simple way to generate adversarial instructions is to borrow existing attack methods from NLP tasks [12], [13] directly. However, this is difficult, since existing adversarial attacks in NLP are often optimized with classification-based goal functions [9], [12], which are unavailable in navigation tasks. Moreover, the key instruction information for navigation changes dynamically, while these NLP attack methods are designed in a static setting.
In this paper, we make the first attempt at introducing adversarial attacks on the language instructions of navigation tasks to improve the robustness of navigators. Specifically, we propose a Dynamic Reinforced Instruction Attacker (DR-Attacker), which learns to minimize the navigation reward by dynamically destroying key instruction information and generating a perturbed instruction at each timestep. Then, an effective adversarial training strategy is adopted to improve the robustness of the navigator by asking it to maximize the navigation reward with the perturbed instruction. To encourage the agent to be aware of the actual key information and improve its fault tolerance with perturbed instructions, an auxiliary self-supervised reasoning task is also introduced for the navigator, requiring it to distinguish the word actually attacked by the DR-Attacker at each timestep according to the instruction and the current visual observation. As a result, the more accurately the DR-Attacker attacks the important instruction information, the more likely the agent is to capture the actual key information for navigation.

Fig. 1: The overview of our proposed method. At timestep $t$, the DR-Attacker receives the visual observation and the original instruction, and generates the perturbed instruction $I'_t$ by substituting the selected target word with the best candidate word according to the attack score. The victim navigator, which receives the perturbed instruction, is enforced to maximize the navigation reward $R_{Nav}$ under the adversarial setting and to reason about the words actually attacked by the DR-Attacker, enhancing model robustness.
Since navigation is a sequential decision-making problem without direct classification-based objectives, we formulate the perturbation generation as a Markov Decision Process and present a reinforcement learning (RL) solution that generates perturbed instructions so as to mislead the navigator to move to the wrong target position. At each timestep, the policy agent, i.e., our proposed DR-Attacker, substitutes the most crucial target word in the current instruction with the candidate substitution word that has the maximum perturbation impact, according to a learnable attack score. As a result, the DR-Attacker learns to highlight the important parts of instructions and generate adversarial samples at different timesteps. To enhance navigation robustness, the victim navigator, which receives the perturbed instruction, is enforced to be immune to the perturbation under the adversarial setting, as well as to correctly reason about the words actually attacked by the DR-Attacker. The overview of our proposed method is presented in Figure 1. Suppose a person receives the perturbed instruction $I'_t$, in which the word "table" has been substituted with the word "stairs". With a good understanding of the instruction and the visual environment, he can identify the noisy word and still make the correct navigation decision. Therefore, the perturbed instructions, which can be viewed as hard negative samples, effectively encourage the victim navigator to understand the multi-modality observations and acquire a self-correction ability, and thus become more robust.
Experimental results on both Navigation from Dialog History (NDH) and Vision-and-Language Navigation (VLN) show the superiority of the proposed method over other competitors. Moreover, the quantitative and qualitative results show the effectiveness of the proposed DR-Attacker, which causes a significant navigation performance drop by disturbing only the most crucial instruction information.
The merits of our proposed DR-Attacker are summarized as follows. First, the DR-Attacker can generate perturbed instructions dynamically by capturing and destroying key instruction information at different navigation timesteps. Second, the DR-Attacker can be optimized via gradient-based methods in an unsupervised setting, by formulating the perturbation generation as a sequential decision-making problem. Last but not least, the adversarial samples produced by the DR-Attacker are beneficial for improving model robustness.
The main contributions of this paper are summarized as follows:
• We take the first step in introducing adversarial attacks on the language instructions of navigation tasks to learn robust navigators. Different from existing adversarial attacking paradigms developed for NLP tasks, which are generally static, the proposed adversarial attack is dynamic during the navigation process.
• By formulating the perturbation generation as a Markov Decision Process, the proposed instruction attacker, called the Dynamic Reinforced Instruction Attacker (DR-Attacker), can be optimized by a reinforcement learning algorithm to achieve effective perturbation, without the need for classification-based objectives.
• To improve the robustness of the navigator, an alternating adversarial training strategy and an auxiliary self-supervised reasoning task are employed to train the navigator on perturbed instructions, which effectively enhances the cross-modal understanding ability of the navigator.
• Experimental results on two popular natural language grounded visual navigation tasks, i.e., Vision-and-Language Navigation (VLN) and Navigation from Dialog History (NDH), show that model robustness can be effectively enhanced by the proposed method. Moreover, both the quantitative and visualized results show the effectiveness of the proposed DR-Attacker.
The remainder of this paper is organized as follows. Section B gives a brief review of the related work. Section C describes the problem setup of natural language grounded visual navigation tasks and then introduces our proposed method. Experimental results are provided in Section D. Section E concludes the paper and presents some outlook for future work.
B RELATED WORK
B.1 Natural Language Grounded Visual Navigation
Natural language grounded visual navigation tasks [1], [2], [3], [4], [5], [14], [15] have attracted extensive research interest in recent years, since they are practical and pose great challenges for vision-language understanding [16], [17], [18], [19]. In this paper, we mainly focus on two natural language grounded navigation tasks, namely, Vision-and-Language Navigation (VLN) [1] and Navigation from Dialog History (NDH) [2].
Vision-and-Language Navigation (VLN) [1], [6], [7], [8] was first proposed by [1], where a navigation agent is asked to move to a goal position following a navigation instruction. Specifically, the instruction is a sequence of declarative sentences such as "Walk down stairs. Walk past the chartreuse ottoman in the TV room. Wait in the bathroom door threshold." Therefore, to successfully navigate to the goal position, the agent needs to understand the instruction well and learn to ground it to visual observations. To achieve this, [20] proposed the Reinforced Cross-Modal Matching (RCM) approach to enforce cross-modal grounding both locally and globally via reinforcement learning (RL). [21] designed a visual-textual co-grounding module to distinguish the instruction parts that have been completed from those yet to be completed with respect to visual observations. To better encourage the navigator to sufficiently understand diverse instructions and navigation environments, existing works adopt data augmentation [6], [7], [8] to mitigate the data scarcity of the original dataset. A speaker-follower model was proposed by [6] to produce augmented instructions for randomly sampled paths. [7] proposed Environmental Dropout to create new (environment, path, instruction) triplets, while directly utilizing the speaker model of [6] to generate the augmented instructions.
The Cooperative Vision-and-Dialog Navigation (CVDN) dataset was recently proposed by [2], and Navigation from Dialog History (NDH) is a task defined on the CVDN dataset, which requires an agent to move towards a goal position following a sequence of dialog history. Although the visual scenes in the CVDN dataset are similar to those in the R2R dataset proposed for the VLN task [1], the instruction in the CVDN dataset, which is composed of the dialog history and the current question-answer pair, is harder for the agent to understand and ground visually, since it is longer and more complicated than the instruction in the VLN task. To better exploit useful textual information for successful navigation, [22] proposed the Cross-modal Memory Network (CMN) to exploit the rich information in dialog history. [23] employed a pretraining scheme using image-text-action triplets to improve instruction understanding and cross-modality alignment.
While existing methods have achieved some improvements in enhancing instruction understanding through data augmentation [6], [7], [8] or pretraining [23], [24], the quality of the augmented instructions is rarely considered, leading to limited improvement in model robustness. In contrast, we adopt an adversarial attack paradigm to encourage the generation of meaningful adversarial instructions, which serve as hard augmented samples that better enhance navigation robustness.
B.2 Adversarial Attacks in NLP
Adversarial attacks have been widely used in the image domain [25], [26], [27], [28], [29] to validate the robustness of deep neural network models [30], [31]. In recent years, many researchers in NLP have focused on introducing adversarial attacks for NLP tasks, which serve as a powerful tool for evaluating model vulnerability and, more importantly, improving the robustness of NLP models [12], [32], [33], [34], [35]. The key principle of adversarial attacks is to impose perturbations on the original input that are imperceptible to humans yet easily fool the neural model into making an incorrect prediction. Most adversarial attacks on NLP tasks are word-level attacks [12], [13] or character-level attacks [9], [11]. HotFlip [36] introduced white-box adversarial samples based on an atomic flip operation to trick a character-level neural classifier. [13] proposed a word-level attack model based on a sememe-based word substitution method and a particle swarm optimization-based search algorithm, which was applied to Bi-LSTM [37] and BERT [38]. Due to the discrete nature of natural language, adversarial attacks on language, such as inserting, removing or replacing a specific character or word, can easily change the meaning or break the grammaticality and naturalness of the original sentence [13], [39]. Therefore, adversarial perturbations on language are inherently easier for humans to perceive than those on images.
Our introduced attack on the instruction can naturally be viewed as an adversarial attack, for the following reasons. First, we constrain the DR-Attacker to replace a single word at a specific timestep, keeping the magnitude of the perturbation small. Second, although a piece of local key information, e.g., a visual object word, is destroyed, a human, who is able to comprehend the long-term intention of the instruction and to reason about the original instruction information according to the current visual observation, cannot easily be misled by such a perturbation. However, the agent, which tends to learn a simple alignment between the instruction and the visual observation, is more easily misled and gets stuck. Third, the replacement is conducted between words of the same type, preserving the grammaticality and naturalness of the original sentence. Since incorrect visual object, location or action words in an instruction can easily appear in realistic scenes, e.g., a wrong human annotation or an object that previously existed but has disappeared from the original scene, we impose the perturbation on visual object or location words rather than uninformative words, which is more beneficial for enhancing navigation robustness.
In contrast to existing adversarial attacks in NLP, which are generally static and optimized with classification-based objectives, our proposed DR-Attacker generates dynamic perturbations on the instruction and can be optimized by an RL paradigm in an unsupervised setting. Like other existing works that train models on perturbed training samples to improve the robustness of NLP models [33], [40], [41], [42], we also develop an adversarial training strategy to improve the robustness of the navigator using the perturbed instructions generated at each timestep. Moreover, we introduce an auxiliary self-supervised reasoning task during the adversarial training stage, which further improves the adversarial training results.
B.3 Adversarial Attacks in Navigation
Although adversarial attacks are popular for verifying and improving the robustness of deep learning models in both the image [25], [26], [27], [28] and NLP [9], [10], [11], [12], [13], [32], [33], [34], [35], [43] domains, few works attempt to employ adversarial attacks to improve the robustness of embodied navigation agents, since the setting and environment in navigation are usually dynamic and complex. [44] took the first attempt to introduce spatiotemporal perturbations on visual objects for the embodied question answering (EQA) task [45], by perturbing the physical properties (e.g., texture or shape) of visual objects. They used available ground-truth labels to guide the perturbation generation with classification-based objectives. Compared with collecting diverse visual environments to improve the robustness of the agent, annotating large amounts of high-quality and informative instructions is more difficult and labor-intensive for natural language grounded visual navigation tasks. Therefore, in contrast to [44], we make the first attempt to introduce adversarial attacks on the existing available instruction data, to mitigate the scarcity of high-quality instructions, which largely limits the navigation performance of existing instruction-following agents. Moreover, our introduced perturbation can be optimized in an unsupervised way, which is more practical.
B.4 Automatic Data Augmentation
Automatic data augmentation aims to learn data augmentation strategies automatically according to the target model performance, instead of designing augmentation strategies manually based on expert knowledge. AutoAugment [46] formulates automatic augmentation policy search as a discrete search problem and employs a reinforcement learning (RL) framework to search for policies consisting of possible augmentation operations. However, a high computational cost is required for training and evaluating thousands of sampled policies in the search process. To speed up policy search, many variants of AutoAugment have been proposed [42], [47], [48], [49], [50]. PBA [47] introduces population-based training to efficiently train the network in parallel across different CPUs or GPUs. Fast AutoAugment [48] moves the costly search stage from training to evaluation through Bayesian optimization. Adversarial AutoAugment [42] directly learns augmentation policies on target tasks and develops an adversarial framework to jointly optimize target network training and augmentation policy search.
The work most related to our proposed method is Adversarial AutoAugment [42], where the policy sampler and the target model are jointly optimized in an adversarial way. The difference is that our augmented samples are generated through an adversarial attack rather than a composition of augmentation strategies, and the attack is constrained to be small in magnitude while impacting agent performance significantly.
C METHOD
In this section, we describe the natural language grounded visual navigation task first and then introduce our proposed method. The problem setup is given in Sec. C.1. The details of our proposed Dynamic Reinforced Instruction Attacker (DR-Attacker), including the optimization of the perturbation generation, the adversarial training with the auxiliary self-supervised reasoning task, and the model details are presented in Sec. C.2.
C.1 Problem Setup
Natural language grounded visual navigation requires a navigator to find a route (a sequence of viewpoints) from a start viewpoint to the target viewpoint following a given instruction $I$. For the NDH task, the instruction $I$ is composed of $\langle t_0, Q_1, H_1, Q_2, H_2, \ldots, Q_t, H_t \rangle$, which includes the given target object $t_0$ and the questions $Q$ and answers $H$ up to turn $t$ ($0 \leq t \leq T$, where $T$ is the total number of question-answer turns from the initial position to the target room). For the VLN task, the instruction $I$ is composed of $\langle G_1, G_2, \ldots, G_M \rangle$, where $G_m$ ($1 \leq m \leq M$) denotes a single sentence and $M$ denotes the number of sentences. Since $t_0$, $Q_t$, $H_t$ and $G_m$ can all be represented by word tokens, for both NDH and VLN we formulate the instruction as a set of word tokens $I = \{w_0, \ldots, w_L\}$, where $L$ is the length of the instruction. At timestep $t$, the navigator receives a panoramic view as the visual observation. Each panoramic view is divided into 36 image views $\{o_{t,i}\}_{i=1}^{36}$, with each view $o_{t,i}$ containing an RGB image $b_{t,i}$ accompanied by its orientation $(\theta^1_{t,i}, \theta^2_{t,i})$, where $\theta^1_{t,i}$ and $\theta^2_{t,i}$ are the heading and elevation angles, respectively. Following [7], we obtain the view features $f^v_{t,i}$. Given the visual observations and the instruction, the navigator infers an action at each step $t$ from the candidate action list, which consists of the $J$ neighbours of the current node in the navigation graph and a stop action. Generally, the navigator is a sequence-to-sequence model with an encoder-decoder architecture [1], [2].
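To make this interface concrete, the following is a minimal Python sketch of the per-step observation and action structures described above. The class and field names are our own illustrative choices (they do not come from the paper or its released code):

```python
from dataclasses import dataclass
from typing import List
import torch

@dataclass
class ViewObservation:
    """One of the 36 single views o_{t,i} of a panoramic observation."""
    feature: torch.Tensor  # precomputed view feature f^v_{t,i}
    heading: float         # theta^1_{t,i}
    elevation: float       # theta^2_{t,i}

@dataclass
class NavStep:
    """What the navigator observes and can do at timestep t."""
    panorama: List[ViewObservation]        # the 36 views {o_{t,i}}
    instruction_tokens: List[int]          # word tokens w_0, ..., w_L
    candidate_actions: List[torch.Tensor]  # J neighbour features plus a stop action
```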
C.2 Dynamic Reinforced Instruction Attacker
C.2.1 Perturbation Generation as an RL Problem
Since there is no direct label, as in classification-based tasks [9], [12], for judging the success of an attack in navigation tasks, we formulate the perturbation generation within a reinforcement learning (RL) framework. The framework contains two major components: an environment model $E_\mu$, which is a well-trained navigator (also called the victim navigator), and an instruction attacker $\pi_\phi$, which can be viewed as the policy agent. The attacker $\pi_\phi$ learns to disturb the correct action decisions of $E_\mu$ by generating a perturbed instruction $I'_t$ for $E_\mu$ at each timestep $t$. $\mu$ and $\phi$ denote the parameters of the environment model and the attacker, respectively. Under the RL setting, the state $s_t \in \mathcal{S}$ is the visual state $f^v_t$. The action $a_t \in \mathcal{A}$ is the perturbation operation of substituting a selected target word in the original instruction with a candidate word. The construction details of the target word set and the candidate substitution word set for each instruction are given in Sec. C.2.3. Note that the attack operation is conducted sequentially at each navigation step $t$, rather than once at the beginning, since the key instruction information changes dynamically during the navigation process.
To measure the success of an attack and design a reasonable reward for optimizing the attacker in navigation tasks, we propose "deviation from the target position" as a metric. That is, the goal of the attacker is to force the navigator to follow a wrong navigation trajectory and stop at a position far from the target. Therefore, the reward $r_t$ is negative for the attacker if the victim navigator stops within $Z$ meters of the target viewpoint at the final step, and positive otherwise, where $Z$ is a predefined distance threshold. We also adopt a direct reward [51] at each non-stop step $t$ by considering the progress, i.e., the change of the distance to the target viewpoint made at the current timestep. If the navigator makes positive progress towards the target position at a non-stop step $t$, the direct reward $r_t$ is negative. Similar to [7], the rewards in our RL setting are set as predefined constants. To satisfy the "small perturbation" principle of adversarial samples [9], [12], [13], [33], the attacker is required to substitute only one word in the instruction at each timestep.
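As a hypothetical illustration of this reward design, the sketch below encodes the signs described above; the constant magnitudes (3 for the final step, 1 for non-stop steps) follow the implementation details in Sec. D.1.3, while the function and argument names are ours:

```python
def attacker_reward(dist_prev: float, dist_curr: float,
                    is_final_step: bool, z_threshold: float = 3.0) -> float:
    """Reward for the DR-Attacker; the navigator's reward is its negation.

    dist_prev / dist_curr: distance (in meters) to the target viewpoint
    before and after the navigator's action at this step.
    """
    if is_final_step:
        # Negative if the victim navigator still stops within Z meters
        # of the target viewpoint, positive otherwise.
        return -3.0 if dist_curr < z_threshold else 3.0
    # Direct reward at a non-stop step: negative whenever the navigator
    # makes positive progress towards the target.
    return -1.0 if dist_curr < dist_prev else 1.0
```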
Without loss of generality, we apply the Advantage Actor-Critic (A2C) [52] algorithm to iteratively update the parameters of the attacker $\pi_\phi$. The A2C framework contains a policy network $\pi(s|\phi_\pi)$ (here, the attacker) and a value network $V(s|\phi_v)$ to learn an optimal policy, where $\phi_\pi$ and $\phi_v$ denote the parameters of the networks. Given the state-action-reward tuples $(s_t, a_t, r_t)$ observed at each step $t \in (0, N)$, the algorithm computes the total accumulated reward $R_t$, the policy gradient $\nabla_{pg}$, the value gradient $\nabla_{v}$ and the entropy gradient $\nabla_{h}$ by:

$$R_t = \sum_{i=t}^{N} \gamma^{i-t} r_i + \gamma^{N-t} V(s_{N+1}), \qquad (1)$$

$$\nabla_{pg} = \nabla_{\phi_\pi} \log(\pi(s_t, a_t|\phi_\pi)) A_t, \qquad (2)$$

$$\nabla_{v} = \frac{\partial \left( V(s_t|\phi_v) - R_t \right)^2}{\partial \phi_v}, \qquad (3)$$

$$\nabla_{h} = \nabla_{\phi_\pi} \sum_{i=0}^{N} \log(\pi(s_i, a_i|\phi_\pi)) \, \pi(s_i, a_i|\phi_\pi), \qquad (4)$$

where $\gamma \in [0, 1)$ is the discount factor and $A_t = R_t - V(s_t)$ is the advantage. Subsequently, an optimization step is performed in the direction that maximizes both $\mathbb{E}[R_t]$ (direction $\nabla_{pg}$) and the entropy of $\pi(s_t)$ (direction $\nabla_{h}$), as well as minimizes the mean squared error of $V(s_t)$ (direction $-\nabla_{v}$). Therefore, using the RL paradigm, the attacker learns to generate perturbed instructions at each timestep that disturb the action decisions of the navigator and mislead it to stop at a wrong target position. In our setting, the value network is a two-layer MLP.
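For concreteness, here is a minimal PyTorch sketch of this A2C objective under the common recursive return computation (it bootstraps as $R_t = r_t + \gamma R_{t+1}$ with $R_{N+1} = V(s_{N+1})$, which may differ slightly in discounting convention from Eq. (1)); the loss coefficients and function names are our own assumptions:

```python
import torch

def a2c_loss(log_probs, values, rewards, entropies, last_value,
             gamma=0.99, value_coef=0.5, entropy_coef=0.01):
    """A2C objective over one rollout, following the spirit of Eqs. (1)-(4).

    log_probs, values, entropies: lists of scalar tensors collected per step;
    rewards: list of floats; last_value: float estimate of V(s_{N+1}).
    """
    returns, R = [], last_value
    for r in reversed(rewards):          # discounted accumulated reward R_t
        R = r + gamma * R
        returns.insert(0, R)
    returns = torch.tensor(returns)
    values = torch.stack(values)
    log_probs = torch.stack(log_probs)

    advantages = returns - values                             # A_t = R_t - V(s_t)
    policy_loss = -(log_probs * advantages.detach()).mean()   # maximize E[R_t]
    value_loss = advantages.pow(2).mean()                     # MSE of V(s_t), Eq. (3)
    entropy_loss = -torch.stack(entropies).mean()             # encourage exploration
    return policy_loss + value_coef * value_loss + entropy_coef * entropy_loss
```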
C.2.2 Adversarial Training with Auxiliary Self-supervised Task
For improving navigation robustness, we develop an effective adversarial training strategy that encourages the joint optimization of the victim navigator and the attacker. Through alternating optimization under the adversarial setting, the attacker iteratively learns to create misleading instructions to confuse the victim navigator, while the victim navigator is trained on the perturbed instructions to enhance its robustness. Motivated by [53], we use the RL strategy to train both the victim navigator and the attacker, and formulate the adversarial setting as a two-player zero-sum Markov game. At each timestep $t$, both the attacker and the victim navigator receive the visual observation $f^v_t$ and a language instruction ($I_t$ for the attacker and $I'_t$ for the navigator; $I_t$ is invariant while $I'_t$ varies). Then the attacker takes its action by generating the perturbed instruction, and the navigator takes its action by moving to the next viewpoint. Since the objectives are inverse, i.e., the navigator is supposed to stop at the point nearest to the target position, an inverse reward is set for the attacker and the navigator: $r_\pi = -r_\eta$ ($r_\eta$ is represented by $R_{Nav}$ in Figure 1), where $\pi$ and $\eta$ represent the policies of the attacker and the victim navigator, respectively. Therefore, our adversarial setting can be represented by:
$$r^*_\eta = \min_\pi \max_\eta r_\eta(\eta, \pi). \qquad (5)$$
We conduct an alternating optimization procedure between the navigator and the attacker, namely, keeping the parameters of one agent fixed while optimizing the other. The optimization procedure of adversarial training is given in Algorithm 1. At stage 1, we pre-train the navigator and use the pre-trained navigator to pre-train the attacker. At stage 2, we alternate between optimizing the navigator and the attacker to implement the joint optimization. For ease of implementation, the RL strategy for training the victim navigator also follows the A2C algorithm, similar to [7]. To encourage the agent to capture the actual key information and improve its fault tolerance with perturbed instructions, which is important for robust navigation, we introduce an auxiliary self-supervised reasoning task during the training of the victim navigator, asking it to predict the word actually attacked by the attacker at each timestep $t$:
$$p_c(c) = \mathrm{Softmax}\big((f^w W_e)(\tilde{h}_t W_h)^T\big), \qquad (6)$$
where $c$ denotes the target word set of the given instruction $I$ and $p_c(c)$ denotes the prediction probability. $f^w \in \mathbb{R}^{L' \times D_w}$ denotes the target word features, where $L'$ is the size of the target word set, and $\tilde{h}_t \in \mathbb{R}^{1 \times D_h}$ represents the visual-and-instruction aware hidden state feature of the decoder [7] in the navigator. $W_e \in \mathbb{R}^{D_w \times D_p}$ and $W_h \in \mathbb{R}^{D_h \times D_p}$ denote learnable linear transformations, and $D_w$, $D_h$ and $D_p$ denote feature dimensions. The prediction is optimized with a cross-entropy loss whose ground-truth label is the word actually attacked by the attacker. As a result, the probability that the agent captures the actually important instruction information and acquires a self-correction ability increases with the accuracy of the attacker in attacking key instruction information. Therefore, through the auxiliary self-supervised reasoning task, the enhancement of the attacker effectively leads to the improvement of the navigator.

Fig. 2: The forward processes of the DR-Attacker and the navigator. The attack score $p_a(a_t)$ is calculated by the element-wise multiplication of the word importance vector $\beta_t$ and the substitution impact matrix $\gamma_t$. After performing the perturbation operation on the original instruction $\tilde{u}$ to generate the perturbed instruction $\tilde{u}'$, the decoder of the navigator receives the perturbed instruction $\tilde{u}'$ and the attended visual feature $f^v_t$ to predict the navigation action $a_t$ at timestep $t$. The updated hidden state $\tilde{h}_t$ of the decoder and the target word features $f^w$ are used to calculate the actual attacked word prediction probability $p_c(c)$.
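A minimal PyTorch sketch of this auxiliary reasoning head (Eq. (6)) is given below; the module name and constructor arguments are our own assumptions:

```python
import torch
import torch.nn as nn

class AttackedWordPredictor(nn.Module):
    """Predicts which target word was actually substituted (Eq. (6))."""

    def __init__(self, d_w: int, d_h: int, d_p: int):
        super().__init__()
        self.W_e = nn.Linear(d_w, d_p, bias=False)
        self.W_h = nn.Linear(d_h, d_p, bias=False)

    def forward(self, f_w: torch.Tensor, h_tilde: torch.Tensor) -> torch.Tensor:
        # f_w: (L', d_w) target word features; h_tilde: (1, d_h) decoder state.
        logits = self.W_e(f_w) @ self.W_h(h_tilde).t()     # (L', 1)
        return torch.softmax(logits.squeeze(-1), dim=-1)   # p_c(c) over L' words
```

During adversarial training, this distribution would be supervised with a cross-entropy loss against the index of the word the attacker actually replaced.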
Algorithm 1: Adversarial Training
Input: the navigator NAV with policy η; the attacker ATT with policy π.
Output: optimized parameters θ^η_{N_iter} for η and θ^π_{N_iter} for π.
1:  Pre-train NAV with the original training set to get θ^η_0
2:  Pre-train ATT with the pre-trained NAV of fixed parameters θ^η_0 to get θ^π_0
3:  for i = 1 : N_iter do
4:      // Fix θ^π to optimize θ^η
5:      θ^η_i ← θ^η_{i−1}
6:      for j = 1 : N_η do
7:          {(s_t, a^η_t, r^η_t)} ← rollout(I'_t, f^v_t, η_{θ^η_i}, π_{θ^π_{i−1}})
8:          θ^η_i ← policyOptimizer({(s_t, a^η_t, r^η_t)}, η, θ^η_i)
9:      end
10:     // Fix θ^η to optimize θ^π
11:     θ^π_i ← θ^π_{i−1}
12:     for j = 1 : N_π do
13:         {(s_t, a^π_t, r^π_t)} ← rollout(I_t, f^v_t, η_{θ^η_i}, π_{θ^π_i})
14:         θ^π_i ← policyOptimizer({(s_t, a^π_t, r^π_t)}, π, θ^π_i)
15:     end
16: end
17: return θ^η_{N_iter}, θ^π_{N_iter}
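The following Python sketch shows the shape of this alternating optimization; `rollout` and `policy_loss` are assumed helpers (a rollout runs one episode with the attacker perturbing the instruction at every step and returns both players' trajectories, and `policy_loss` could be the A2C objective sketched earlier):

```python
def adversarial_training(nav, att, optim_nav, optim_att,
                         n_iter, n_eta, n_pi, rollout, policy_loss):
    """Schematic alternating optimization of Algorithm 1 (zero-sum rewards)."""
    for _ in range(n_iter):
        for _ in range(n_eta):          # fix the attacker, optimize the navigator
            traj_nav, _ = rollout(nav, att)
            optim_nav.zero_grad()
            policy_loss(traj_nav).backward()
            optim_nav.step()
        for _ in range(n_pi):           # fix the navigator, optimize the attacker
            _, traj_att = rollout(nav, att)
            optim_att.zero_grad()
            policy_loss(traj_att).backward()
            optim_att.step()
    return nav, att
```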
C.2.3 Model Details
Forward Process of the Instruction Attacker. In this part, we describe the forward process of the proposed DR-Attacker, i.e., the attacker $\pi_\phi$, in detail. At each timestep $t$, the DR-Attacker calculates the action prediction probability, also referred to as the attack score, by considering both the importance of each word in the current instruction and the substitution impact of the different candidate words (illustrated in Figure 1). Given the prior that words indicating visual objects (e.g., "door") and locations (e.g., "bathroom") are the most informative for guiding navigation, we construct the target word set by selecting these two kinds of words for each instruction in advance. For each target word $w_j$ ($0 \leq j \leq L'$, where $L'$ is the size of the target word set) in instruction $I$, we denote its candidate substitution word set as $\{w'_{j,k}\}_{k=1}^{K}$, where $K$ is the size of the candidate substitution word set. To promote understanding of the given instruction while keeping a reasonable set size, we choose the remaining target words in the same instruction to construct the candidate substitution word set for a specific target word. At timestep $t$, a word importance vector $\beta_t \in \mathbb{R}^{L'}$ is first calculated by:
$$\beta_t = \mathrm{Softmax}\big((f^w W_w)(f^v_t W_v)^T\big), \qquad (7)$$
where $f^w \in \mathbb{R}^{L' \times D_w}$ and $f^v_t \in \mathbb{R}^{1 \times D_v}$ represent the target word features encoded by a Bi-LSTM and the attended visual feature [7], respectively. $W_w \in \mathbb{R}^{D_w \times D_p}$ and $W_v \in \mathbb{R}^{D_v \times D_p}$ are learnable linear transformations that map the different features into the same embedding space, and $D_w$, $D_v$ and $D_p$ represent the feature dimensions. Then, the substitution impact of the different candidate words for each target word $w_j$ is obtained by:
$$\gamma_{t,j} = \mathrm{Softmax}\big((f^{w_j} W'_w)(f^{w'_j} W'_w)^T\big), \qquad (8)$$
where $f^{w_j} \in \mathbb{R}^{1 \times D_w}$ and $f^{w'_j} \in \mathbb{R}^{K \times D_w}$ denote the word features of the target word $w_j$ and its candidate words $w'_j$, and $W'_w \in \mathbb{R}^{D_w \times D_p}$ is a learnable linear transformation. After calculating the substitution impact of the candidate words for all target words in the instruction to obtain the substitution impact matrix $\gamma_t \in \mathbb{R}^{L' \times K}$, the attack score $p_a(a_t) \in \mathbb{R}^{L' \times K}$, i.e., the action prediction probability of the DR-Attacker, is calculated by:
$$p_a(a_t) = \mathrm{Softmax}(\beta_t \odot \gamma_t), \qquad (9)$$
where $\odot$ denotes element-wise (broadcast) multiplication, and $a_t$ represents the candidate action set of size $L' \times K$. Through the learnable attack score $p_a(a_t)$, the DR-Attacker learns to generate the optimal perturbation at each timestep $t$. Note that although our word substitution strategy induces a semantic change with respect to the original target word, we do not distinguish the perturbed instructions from conventional adversarial samples. This is because the impact of a single word substitution on the overall intention of the whole instruction is subtle.

Fig. 3: The construction details of the target word set and candidate substitution word set on VLN. The target word set is constructed for each instruction by string matching between the instruction and an instruction vocabulary that only contains words indicating visual objects and locations. The candidate substitution word set for each target word is built by collecting the remaining target words in the same instruction.

Fig. 4: The construction details of the target word set and candidate substitution word set on NDH. Since the last answer in the instruction generally contains the guiding information, we only construct the target word set and perform the perturbation operation on the last answer of the instruction for each instance.
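The attack-score computation of Eqs. (7)-(9) can be sketched in PyTorch as follows (a minimal sketch; the module name, the use of a shared projection for Eq. (8), and the flattened softmax layout are our own implementation assumptions):

```python
import torch
import torch.nn as nn

class AttackScore(nn.Module):
    """DR-Attacker action distribution over (target word, candidate) pairs."""

    def __init__(self, d_w: int, d_v: int, d_p: int):
        super().__init__()
        self.W_w = nn.Linear(d_w, d_p, bias=False)       # word projection, Eq. (7)
        self.W_v = nn.Linear(d_v, d_p, bias=False)       # visual projection, Eq. (7)
        self.W_w_sub = nn.Linear(d_w, d_p, bias=False)   # substitution projection, Eq. (8)

    def forward(self, f_w, f_w_cand, f_v):
        # f_w: (L', d_w) target words; f_w_cand: (L', K, d_w) candidates;
        # f_v: (1, d_v) attended visual feature at timestep t.
        beta = torch.softmax(self.W_w(f_w) @ self.W_v(f_v).t(), dim=0)   # Eq. (7): (L', 1)
        gamma = torch.softmax(
            torch.einsum('lp,lkp->lk', self.W_w_sub(f_w), self.W_w_sub(f_w_cand)),
            dim=-1)                                                      # Eq. (8): (L', K)
        return torch.softmax((beta * gamma).flatten(), dim=0)            # Eq. (9)
```

The argmax over this flattened distribution picks a (target word, candidate word) pair, i.e., the single substitution to apply at this timestep.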
Forward Process of the Navigator. Having introduced the forward process of the instruction attacker $\pi_\phi$, we now present the forward process of the navigator. Specifically, the navigator follows an encoder-decoder architecture, where both the encoder and decoder are LSTMs [7]. The encoder contains a word embedding layer and a bi-directional LSTM, and its output is the language feature $\{\tilde{u}_l\}_{l=1}^{L}$ of the instruction:

$$u_l = \mathrm{Embedding}(w_l), \quad \tilde{u}_1, \tilde{u}_2, \ldots, \tilde{u}_L = \mathrm{Bi\text{-}LSTM}(u_1, u_2, \ldots, u_L). \qquad (10)$$
Then, the decoder receives the attended visual feature $f^v_t$ and the language features $\tilde{u}$, and generates the visual-and-instruction aware hidden state $\tilde{h}_t$:

$$h_t = \mathrm{LSTM}([f^v_t; a_{t-1}], h_{t-1}), \qquad (11)$$

$$\alpha^w_{t,l} = \mathrm{Softmax}(\tilde{u}_l W_u h_t), \qquad (12)$$

$$f^w_t = \sum_l \alpha^w_{t,l} \tilde{u}_l, \qquad (13)$$

$$\tilde{h}_t = \tanh(W_h [f^w_t; h_t]), \qquad (14)$$

where $a_{t-1}$ is the action feature of timestep $t-1$, and $W_u \in \mathbb{R}^{D_w \times D_h}$ and $W_h \in \mathbb{R}^{(D_w + D_h) \times D_h}$ are learnable linear transformations. The attended visual feature $f^v_t$ is calculated by:

$$\alpha^v_{t,i} = \mathrm{Softmax}(f^v_{t,i} W_v h_{t-1}), \qquad (15)$$

$$f^v_t = \sum_i \alpha^v_{t,i} f^v_{t,i}, \qquad (16)$$

where $W_v \in \mathbb{R}^{D_v \times D_h}$ is a learnable linear transformation. Then, the action prediction probability $p_n(a_t)$ of the navigator is calculated by:

$$p_n(a_t) = \mathrm{Softmax}(c_{t,k} W_a \tilde{h}_t), \qquad (17)$$

where $c_{t,k}$ denotes the candidate action features and $W_a \in \mathbb{R}^{D_v \times D_h}$ is a trainable linear transformation. The navigator takes action $a_t$ according to the action prediction probability $p_n(a_t)$.
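A compact PyTorch sketch of one decoding step (Eqs. (11)-(17)) is given below; tensor shapes follow the notation above, while the class structure, argument names and the LSTMCell-based state handling are our own assumptions:

```python
import torch
import torch.nn as nn

class DecoderStep(nn.Module):
    """One navigator decoding step following Eqs. (11)-(17) (schematic)."""

    def __init__(self, d_w, d_v, d_h, d_a):
        super().__init__()
        self.lstm = nn.LSTMCell(d_v + d_a, d_h)           # Eq. (11)
        self.W_u = nn.Linear(d_w, d_h, bias=False)        # Eq. (12)
        self.W_h = nn.Linear(d_w + d_h, d_h, bias=False)  # Eq. (14)
        self.W_v = nn.Linear(d_v, d_h, bias=False)        # Eq. (15)
        self.W_a = nn.Linear(d_v, d_h, bias=False)        # Eq. (17)

    def forward(self, u_tilde, views, cand_feats, a_prev, h_prev, c_prev):
        # Eqs. (15)-(16): attend over the 36 view features with h_{t-1}.
        alpha_v = torch.softmax(self.W_v(views) @ h_prev.unsqueeze(-1), dim=0)
        f_v = (alpha_v * views).sum(dim=0)                # attended visual feature
        # Eq. (11): update the LSTM state with [f_v; a_{t-1}].
        h_t, c_t = self.lstm(torch.cat([f_v, a_prev]).unsqueeze(0),
                             (h_prev.unsqueeze(0), c_prev.unsqueeze(0)))
        h_t, c_t = h_t.squeeze(0), c_t.squeeze(0)
        # Eqs. (12)-(13): attend over the (possibly perturbed) instruction.
        alpha_w = torch.softmax(self.W_u(u_tilde) @ h_t.unsqueeze(-1), dim=0)
        f_w = (alpha_w * u_tilde).sum(dim=0)
        # Eq. (14): fuse into the visual-and-instruction aware state.
        h_tilde = torch.tanh(self.W_h(torch.cat([f_w, h_t])))
        # Eq. (17): score the candidate actions (J neighbours + stop).
        p_n = torch.softmax(self.W_a(cand_feats) @ h_tilde.unsqueeze(-1), dim=0)
        return p_n.squeeze(-1), h_t, c_t, h_tilde
```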
The forward processes of the attacker and the navigator are shown in Figure 2. As illustrated in Figure 2, based on the attack score $p_a(a_t)$, computed as the element-wise product of the word importance vector $\beta_t$ and the substitution impact matrix $\gamma_t$, the perturbation operation is applied to the original instruction $\tilde{u}$ to generate the perturbed instruction $\tilde{u}'$. The decoder then receives the attended visual feature $f^v_t$ and the perturbed instruction $\tilde{u}'$ to predict the next action $a_t$. The updated hidden state $\tilde{h}_t$ of the decoder and the target word features $f^w$ are used to calculate the prediction probability $p_c(c)$ of the actually attacked word for the self-supervised auxiliary reasoning task.
Construction Details of the Target Word Set and Candidate Word Set. In this part, we show the construction details of the target word set and the candidate substitution word set for both the VLN and NDH tasks. Specifically, for each instruction, we first construct its target word set by string matching between the instruction and an instruction vocabulary. The instruction vocabulary contains words indicating visual objects or locations, collected from the instruction vocabulary provided with the dataset. Then, the candidate substitution word set for each target word is constructed by selecting the remaining target words in the same instruction. The construction details for the VLN and NDH tasks are shown in Figure 3 and Figure 4, respectively. Note that since the last answer in the dialog history plays the direct role of guiding navigation in the NDH task, we only construct the target word set and conduct the perturbation on the last answer for NDH, as shown in Figure 4.
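A simplified sketch of this construction (the function name and data layout are our own choices):

```python
def build_word_sets(instruction_tokens, object_location_vocab):
    """Builds the target word set and per-target candidate substitution sets
    by string matching, as described above."""
    # Target words: instruction tokens that name visual objects or locations.
    targets = [w for w in instruction_tokens if w in object_location_vocab]
    # Candidates for a target word: the remaining target words of the same
    # instruction, which keeps the candidate set small and in-domain.
    candidates = {w: [c for c in targets if c != w] for w in targets}
    return targets, candidates

# Example with a VLN-style instruction:
tokens = "go down hallway past the beds into the bathroom".split()
vocab = {"hallway", "beds", "bathroom", "door", "stairs"}
targets, candidates = build_word_sets(tokens, vocab)
# targets -> ['hallway', 'beds', 'bathroom']
# candidates['hallway'] -> ['beds', 'bathroom']
```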
D EXPERIMENT
In this section, we first introduce the datasets we use on NDH and VLN tasks, evaluation metrics, and implementation details in Sec. D.1. Then we provide the quantitative and qualitative results in Sec. D.2 and Sec. D.3, respectively.
D.1 Experimental Setup
D.1.3 Implementation Details
The navigator architecture, training hyperparameters, and training strategy we use for both the VLN and NDH tasks are the same as in [7]. The feature dimensions $D_w$, $D_v$, $D_p$ and $D_h$ are set to 512, 2052, 512 and 512, respectively. The positive/negative rewards of the final step and of each non-stop step are set to 3/-3 and 1/-1, respectively. For both VLN and NDH, we split the training process into four steps: 1) pre-train the navigator on the original training set; 2) pre-train the DR-Attacker against the pre-trained navigator, keeping the parameters of the navigator fixed; 3) adversarially train both the navigator and the DR-Attacker by alternating iteration; 4) finetune the navigator on the original training set. The training iterations of the four steps are 40K, 10K, 40K and 200K for VLN, and 5K, 1K, 3K and 3K for NDH. During adversarial training, the alternation is conducted after every 3K and 1K iterations for VLN and NDH, respectively. Following [6], we also use instruction data augmentation to improve navigation performance. To improve learning efficiency, we additionally introduce imitation learning supervision [7] when training the navigator in the adversarial training stage.
D.2 Quantitative Results
D.2.1 Comparison with the State-of-the-art Methods
The quantitative comparison results with state-of-the-art methods on VLN and NDH are given in Table 1 and Table 2, respectively. In Table 1, we report the three most important metrics in the VLN setting, i.e., Navigation Error (NE), Success Rate (SR) and Success rate weighted by Path Length (SPL).
In Table 2, we report the Goal Progress (GP) metric under the whole-dialog-history setting, following most existing works on NDH [2], [22], [23]. Table 1 indicates that our proposed method outperforms other competitors on most metrics. Compared with the baseline EnvDrop [7], the improvements in SR and SPL of our method are significant in both seen and unseen settings. Table 2 shows that our method outperforms the state-of-the-art methods by a significant margin on NDH in both seen and unseen environments. We further compare the training time, data and device between the state-of-the-art method PREVALENT [23] and our method on NDH. Since only the implementation of the finetuning phase 1 is available for PREVALENT [23], we only record the reimplemented finetuning time of PREVALENT [23] for comparison; the other values for the pretraining phase of PREVALENT [23] are those reported in their paper. The results are given in Table 3. From Table 3, we can find that, compared with PREVALENT [23], our proposed method needs significantly less training time, data and computational resources while achieving better results, showing the good flexibility of our method. Both the results on VLN and NDH show the effectiveness of the proposed method in improving the robustness of the navigation agent.
D.2.2 Ablation Study
In this section, we conduct ablation studies to validate the effectiveness of the proposed adversarial attacking paradigm, the adversarial training strategy and the auxiliary self-supervised reasoning task. Specifically, the effects of the four-stage training for the VLN and NDH tasks are presented in Table 4 and Table 5, and the effectiveness of the auxiliary self-supervised reasoning task is given in Table 6. For VLN, "Base Agent" means pre-training navigators on the dataset composed of original and augmented instructions for 40K iterations, and "Finetune" means finetuning the adversarially trained agents on the same dataset as used in the pretraining stage. For NDH, "Base Agent" means using the same training strategy as [7] to pre-train the navigators on the original dataset for 5K iterations, and "Finetune" means finetuning the adversarially trained agents on the original dataset. "DR-Attacker" denotes the navigation results when receiving perturbed instructions. "Last A", "Last QA" and "All" represent three different dialog history settings, i.e., the instruction is the last answer, the last question-answer pair, or the whole dialog history [2].
From Table 4 and Table 5, we can find that our proposed four-stage training strategy effectively contributes to enhancing the robustness and navigation performance of the agent on both VLN and NDH. Specifically, when adversarial perturbations are introduced on the instructions, the navigation performance of the agent drops significantly, demonstrating the effectiveness of the proposed adversarial attacking mechanism. Then, after adversarial training with the proposed auxiliary self-supervised reasoning task, followed by finetuning on the original dataset, the robustness and navigation performance are effectively improved. Moreover, from Table 6, we can observe that introducing the proposed self-supervised auxiliary reasoning task in the adversarial training stage effectively enhances the navigation performance, demonstrating that improving the cross-modality understanding ability of the agent is crucial for successful navigation.
D.2.3 Different Types of Attacking Mechanisms
In this subsection, we compare different types of attacking mechanisms to validate the effectiveness of the proposed DR-Attacker in attacking, and in promoting the navigation performance through adversarial training. Specifically, four adversarial attacking methods or variants are chosen for comparison: 1) "Static" means the perturbation at each timestep is invariant, i.e., at each timestep the same target word is substituted with the same candidate word; to select the target and candidate words, we use the pretrained DR-Attacker to conduct the word prediction at the first navigation timestep. 2) "Random" means randomly selecting the target word and the candidate substitution word at each timestep. 3) "Heuristics" means the instruction word that receives the highest textual attention weight from the navigator at each timestep is destroyed. 4) PWWS [12] is an adversarial attack method from NLP that is similar to our proposed adversarial attack in some implementation procedures: it also obtains an attack score by calculating word importance and substitution impact according to the change of the classification probability. Since there is no direct classification-based objective for the instruction in either VLN or NDH, we use the action prediction probability as an alternative. Specifically, at each timestep, the attacked word that causes the maximum change of the original action prediction probability is destroyed. Therefore, "Random", "Heuristics" and PWWS are all dynamic adversarial attacks.
The comparison results of attacking effects on VLN and NDH are given in Figure 5 and Table 7, respectively, and the adversarial training results using different attacking mechanisms on NDH are also given in Table 7. From Figure 5 and Table 7, we can find that compared with either static or dynamic attacking mechanisms, our proposed DR-Attacker achieves the best attack results on most metrics for both VLN and NDH, demonstrating the importance of dynamically attacking key information in navigation tasks and the effectiveness of our RL-based optimization of the proposed adversarial attack. Moreover, the adversarial training results in Table 7 show the superiority of the DR-Attacker in promoting navigation performance compared with other attacking methods, demonstrating that jointly optimizing the navigator and the attacker is more beneficial for improving navigation performance. Both the attacking and adversarial training results on VLN and NDH show the effectiveness of the proposed adversarial attacking mechanism and adversarial training paradigm.

Fig. 6: The visualization examples of perturbed instructions, panoramic views, and language attention weights (instance (b)) during trajectories on VLN. Words in red, blue and green represent the actual word attacked by the DR-Attacker (A), the attacked word predicted by the navigator (P), and the substitution word (S), respectively. The yellow bounding box denotes the visual object or location in the current scene. "Baseline" and "Ours" represent navigators trained without and with perturbed instructions, respectively. Words in brackets represent the actual word attacked by the DR-Attacker. Best viewed in color.
D.3 Qualitative Results
In this subsection, we show visualization examples of perturbed instructions, panoramic views and language attention weights during trajectories on the VLN and NDH tasks. The results are given in Figure 6 and Figure 7, respectively. From Figure 6 and Figure 7, we can find that the proposed DR-Attacker can successfully locate the word that appears in the scene at different timesteps and substitute it with a word that does not exist in the current scene. Moreover, the navigator can correctly predict the words actually attacked by the DR-Attacker, showing its good understanding of the multi-modality observations. The first subfigure in Figure 6 (a), the fourth subfigure in Figure 6 (b) and the second subfigure in Figure 7 (a) show failure cases. From the failure cases, we can find that when multiple objects referred to in the instruction exist simultaneously in the current scene, e.g., both the "bedroom" and the "door" in the fourth subfigure of Figure 6 (b), the navigator or the DR-Attacker may be confused. From the language attention weights of the navigators trained with perturbed instructions ("Ours"), we can find that although the target word is attacked, the navigator can attend to the context near the attacked word to capture the language intention. Moreover, as the navigation trajectory progresses, it successfully captures important instruction information in different phases. In contrast, the navigator trained without perturbed instructions ("Baseline") produces confused language attention weights under the introduced perturbations during navigation. These visualization analyses show that emphasizing useful instruction information contributes to successful navigation. Moreover, our proposed adversarial attacking and adversarial training mechanisms can effectively improve the robustness of the navigation agent.

Fig. 7: The visualization examples of perturbed instructions, panoramic views, and language attention weights (A2 in instance (b)) during trajectories on NDH. Words in red, blue and green represent the actual word attacked by the DR-Attacker (A), the attacked word predicted by the navigator (P), and the substitution word (S), respectively. The yellow bounding box denotes the visual object or location in the current scene. "Baseline" and "Ours" represent navigators trained without and with perturbed instructions, respectively. Words in brackets represent the actual word attacked by the DR-Attacker. Best viewed in color.
E CONCLUSION
In this work, we propose the Dynamic Reinforced Instruction Attacker (DR-Attacker) for natural language grounded visual navigation tasks. By formulating perturbation generation within an RL framework, the DR-Attacker can be iteratively optimized to capture the crucial parts of instructions and generate meaningful adversarial samples. Through adversarial training on the perturbed instructions with an auxiliary self-supervised reasoning task, the robustness of the navigator can be effectively enhanced. Experiments on both the VLN and NDH tasks show the effectiveness of the proposed method.
In the future, we plan to improve the training strategy of the proposed instruction attacker and explore designing more effective attacks on navigation instructions. Moreover, we would like to develop multi-modality adversarial attacks for embodied navigation tasks to further verify and improve the robustness of the navigator.
Figure 5: The comparison results of different types of adversarial attacking mechanisms on VLN. NE (m), SR (%) and SPL (%) are reported for both Val Seen and Val Unseen scenes. Apart from NE, lower value indicates better results.
Bingqian Lin received the B.E. and M.E. degrees in Computer Science from the University of Electronic Science and Technology of China and Xiamen University, in 2016 and 2019, respectively. She is currently working toward the D.Eng. degree in the School of Intelligent Systems Engineering of Sun Yat-sen University. Her research interests include multi-view clustering, image processing and vision-and-language understanding.

Yi Zhu received the B.S. degree in software engineering from Sun Yat-sen University, Guangzhou, China, in 2013. Since 2015, she has been a Ph.D. student in computer science with the School of Electronic, Electrical, and Communication Engineering, University of Chinese Academy of Sciences, Beijing, China. Her current research interests include object recognition, scene understanding, weakly supervised learning and visual reasoning.

Xiaodan Liang is currently an Associate Professor at Sun Yat-sen University. She was a postdoctoral researcher in the machine learning department at Carnegie Mellon University, working with Prof. Eric Xing, from 2016 to 2018. She received her PhD degree from Sun Yat-sen University in 2016, advised by Liang Lin. She has published several cutting-edge projects on human-related analysis, including human parsing, pedestrian detection and instance segmentation, 2D/3D human pose estimation and activity recognition.

Qixiang Ye received the B.S. and M.S. degrees in mechanical and electrical engineering from the Harbin Institute of Technology, China, in 1999 and 2001, respectively, and the Ph.D. degree from the Institute of Computing Technology, Chinese Academy of Sciences, in 2006. He has been a Professor with the University of Chinese Academy of Sciences since 2009, and was a Visiting Assistant Professor with the Institute of Advanced Computer Studies, University of Maryland, College Park, in 2013. He has authored over 50 papers in refereed conferences and journals, and received the Sony Outstanding Paper Award. His current research interests include image processing, visual object detection and machine learning. He pioneered the kernel-SVM-based pyrolysis output prediction software, which was put into practical application by SINOPEC in 2012, and developed two kinds of piecewise linear SVM methods that were successfully applied to visual object detection.

Liang Lin is CEO of DMAI Great China and a full professor of Computer Science at Sun Yat-sen University. He served as the Executive Director of the SenseTime Group from 2016 to 2018, leading the R&D teams in developing cutting-edge, deliverable solutions in computer vision, data analysis and mining, and intelligent robotic systems. He has authored or co-authored more than 200 papers in leading academic journals and conferences (e.g., TPAMI/IJCV, CVPR/ICCV/NIPS/ICML/AAAI). He is an associate editor of IEEE Trans. Human-Machine Systems and IET Computer Vision, and served as area/session chair for numerous conferences, such as CVPR, ICME, ICCV and ICMR. He was the recipient of the Annual Best Paper Award by Pattern Recognition (Elsevier) in 2018, the Diamond Award for best paper at IEEE ICME 2017, the ACM NPAR Best Paper Runner-Up Award in 2010, a Google Faculty Award in 2012, the award for the best student paper at IEEE ICME 2014, and the Hong Kong Scholars Award in 2014. He is a Fellow of IET.
† Xiaodan Liang is the corresponding author. Bingqian Lin, Yanxin Long and Xiaodan Liang are with the Shenzhen Campus of Sun Yat-sen University, Shenzhen, China. E-mail: {linbq6@mail2.sysu.edu.cn, longyx9@mail2.sysu.edu.cn, liangxd9@mail.sysu.edu.cn}. Liang Lin is with Sun Yat-sen University, Guangzhou, China. E-mail: linliang@ieee.org. Yi Zhu and Qixiang Ye are with the University of Chinese Academy of Sciences (UCAS), Beijing, China. E-mail: {zhu.yee@outlook.com, qxye@ucas.ac.cn}.
TABLE 1: The comparison results with state-of-the-art methods on the R2R dataset. Apart from NE, higher value indicates better results.

Method | Val Seen: NE(m)↓ SR(%)↑ SPL(%)↑ | Val Unseen: NE(m)↓ SR(%)↑ SPL(%)↑ | Test Unseen: NE(m)↓ SR(%)↑ SPL(%)↑
seq-2-seq [1] | 6.01 39 - | 7.81 22 - | 7.85 20 18
Speaker-Follower [6] | 3.36 66 - | 6.62 35 - | 6.62 35 28
Regretful [54] | 3.23 69 63 | 5.32 50 41 | 5.69 48 40
RCM [20] | 3.53 67 - | 6.09 43 - | 6.12 43 38
PRESS [24] | 4.39 58 55 | 5.28 49 45 | 5.59 49 45
EnvDrop [7] | 3.99 62 59 | 5.22 52 48 | 5.23 51 47
Ours | 3.52 70 67 | 4.99 53 48 | 5.53 52 49
TABLE 2: The comparison results with state-of-the-art methods on the CVDN dataset. The Goal Progress (GP) (m) is reported.
TABLE 3: The comparison of training time, data and device between PREVALENT [23] and our method on NDH.

Method | Time (min): Pretrain / Other phases / Total | Data: Pretrain / Other phases | Device: Pretrain / Other phases
PREVALENT [23] | - / 1661 / - | 6,582,000 / 4,742 | 8 V100 GPUs / 1 1080Ti GPU
Ours | 143 / 328 / 471 | 4,742 / 4,742 | 1 1080Ti GPU / 1 1080Ti GPU
TABLE 4: The ablation study results on the R2R dataset. NE (m), SR (%) and SPL (%) are reported. Apart from NE, higher value indicates better results.

Method | Val Seen: NE(m)↓ SR(%)↑ SPL(%)↑ | Val Unseen: NE(m)↓ SR(%)↑ SPL(%)↑
Base Agent | 4.37 58.7 56 | 5.43 48.0 45
DR-Attacker | 4.55 57.3 55 | 5.62 46.6 43
Adversarial Training w auxiliary task | 4.15 62.0 59 | 5.25 49.6 46
Finetune | 3.52 70.2 67 | 4.99 53.2 48
TABLE 5: The ablation study results on the CVDN dataset. The Goal Progress (GP) (m) is reported. The supervision setting is mixed supervision.

Settings | Val Seen: Last A / Last QA / All | Val Unseen: Last A / Last QA / All
Base Agent | 7.55 / 7.15 / 7.35 | 3.88 / 3.95 / 3.72
DR-Attacker | 5.30 / 5.88 / 5.85 | 1.95 / 2.51 / 2.29
Adversarial Training w auxiliary task | 6.80 / 6.96 / 7.23 | 3.90 / 3.80 / 3.93
Finetune | 7.66 / 7.61 / 8.06 | 4.20 / 4.19 / 4.18
TABLE 6: The ablation study results on the CVDN dataset. The Goal Progress (GP) (m) is reported. The dialog history setting is last answer.

Settings | Val Seen: Oracle / Navigator / Mixed | Val Unseen: Oracle / Navigator / Mixed
Base Agent | 5.44 / 6.92 / 7.55 | 3.28 / 4.06 / 3.88
DR-Attacker | 4.00 / 5.07 / 5.30 | 1.50 / 2.38 / 1.95
Finetune (Adversarial Training) | 5.48 / 7.13 / 7.61 | 3.37 / 4.16 / 4.08
Finetune (Adversarial Training w auxiliary task) | 5.52 / 7.49 / 7.66 | 3.48 / 4.21 / 4.20
TABLE 7: The comparison results of different adversarial attacking mechanisms in attacking and promoting the navigation performance on the CVDN dataset. The Goal Progress (GP) (m) is reported. The dialog history setting is last answer. Ora., Nav. and Mix. denote Oracle, Navigator and Mixed supervision, respectively.

Method | Val Seen: Ora. / Nav. / Mix. | Val Unseen: Ora. / Nav. / Mix.
Direct Attack:
Static | 4.09 / 5.32 / 5.48 | 1.71 / 2.63 / 2.23
Random | 4.20 / 5.71 / 5.66 | 1.64 / 2.54 / 2.15
Heuristics | 4.07 / 4.99 / 5.35 | 1.52 / 2.46 / 1.89
PWWS [12] | 4.04 / 5.51 / 5.44 | 1.64 / 2.59 / 2.10
DR-Attacker | 4.00 / 5.07 / 5.30 | 1.50 / 2.38 / 1.95
Adversarial Training:
Static | 5.42 / 6.57 / 7.09 | 3.25 / 3.93 / 3.97
Random | 4.89 / 6.52 / 7.02 | 3.15 / 3.93 / 3.88
Heuristics | 5.60 / 6.91 / 6.99 | 3.04 / 3.81 / 3.67
PWWS [12] | 5.23 / 6.57 / 6.87 | 3.20 / 3.59 / 3.09
DR-Attacker | 5.52 / 7.49 / 7.66 | 3.48 / 4.21 / 4.20
1. https://github.com/weituo12321/PREVALENT
ACKNOWLEDGMENT
REFERENCES

[1] P. Anderson, Q. Wu, D. Teney, J. Bruce, M. Johnson, N. Sunderhauf, I. Reid, S. Gould, and A. van den Hengel, "Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments," in CVPR, 2018, pp. 3674-3683.
[2] J. Thomason, M. Murray, M. Cakmak, and L. Zettlemoyer, "Vision-and-dialog navigation," in CoRL, 2019, pp. 394-406.
[3] Y. Qi, Q. Wu, P. Anderson, X. Wang, W. Y. Wang, C. Shen, and A. van den Hengel, "REVERIE: Remote embodied visual referring expression in real indoor environments," in CVPR, 2020, pp. 9982-9991.
[4] K. Nguyen and H. Daumé, "Help, Anna! Vision-based navigation with natural multimodal assistance via retrospective curiosity-encouraging imitation learning," in EMNLP, 2019, pp. 684-695.
[5] H. Chen, A. Suhr, D. Misra, N. Snavely, and Y. Artzi, "Touchdown: Natural language navigation and spatial reasoning in visual street environments," in CVPR, 2019, pp. 12538-12547.
[6] D. Fried, R. Hu, V. Cirik, A. Rohrbach, J. Andreas, L.-P. Morency, T. Berg-Kirkpatrick, K. Saenko, D. Klein, and T. Darrell, "Speaker-follower models for vision-and-language navigation," in NeurIPS, 2018, pp. 3314-3325.
[7] H. Tan, L. Yu, and M. Bansal, "Learning to navigate unseen environments: Back translation with environmental dropout," in NAACL-HLT, 2019, pp. 2610-2621.
[8] T.-J. Fu, X. E. Wang, M. F. Peterson, S. T. Grafton, M. P. Eckstein, and W. Y. Wang, "Counterfactual vision-and-language navigation via adversarial path sampler," in ECCV, 2020, pp. 71-86.
[9] V. Araujo, A. Carvallo, C. Aspillaga, and D. Parra, "On adversarial examples for biomedical NLP tasks," arXiv preprint arXiv:2004.11157, 2020.
[10] L. Li, R. Ma, Q. Guo, X. Xue, and X. Qiu, "BERT-ATTACK: Adversarial attack against BERT using BERT," in EMNLP, 2020, pp. 6193-6202.
[11] J. Ebrahimi, D. Lowd, and D. Dou, "On adversarial examples for character-level neural machine translation," in COLING, 2018, pp. 653-663.
[12] S. Ren, Y. Deng, K. He, and W. Che, "Generating natural language adversarial examples through probability weighted word saliency," in ACL, 2019, pp. 1085-1097.
[13] Y. Zang, F. Qi, C. Yang, Z. Liu, M. Zhang, Q. Liu, and M. Sun, "Word-level textual adversarial attacking as combinatorial optimization," in ACL, 2020, pp. 6066-6080.
[14] X. E. Wang, V. Jain, E. Ie, W. Y. Wang, Z. Kozareva, and S. Ravi, "Environment-agnostic multitask learning for natural language grounded navigation," in ECCV, 2020, pp. 413-430.
[15] K. Nguyen, D. Dey, C. Brockett, and B. Dolan, "Vision-based navigation with language-based assistance via imitation learning with indirect intervention," in CVPR, 2019, pp. 12527-12537.
[16] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick, and D. Parikh, "VQA: Visual question answering," in ICCV, 2015, pp. 2425-2433.
[17] H. de Vries, F. Strub, S. Chandar, O. Pietquin, H. Larochelle, and A. Courville, "GuessWhat?! Visual object discovery through multi-modal dialogue," in CVPR, 2017, pp. 4466-4475.
[18] A. Das, S. Kottur, K. Gupta, A. Singh, D. Yadav, J. M. F. Moura, D. Parikh, and D. Batra, "Visual dialog," in CVPR, 2017.
[19] Q. You, H. Jin, Z. Wang, C. Fang, and J. Luo, "Image captioning with semantic attention," in CVPR, 2016, pp. 4651-4659.
[20] X. Wang, Q. Huang, A. Celikyilmaz, J. Gao, D. Shen, Y.-F. Wang, W. Y. Wang, and L. Zhang, "Reinforced cross-modal matching and self-supervised imitation learning for vision-language navigation," in CVPR, 2019, pp. 6629-6638.
[21] C.-Y. Ma, J. Lu, Z. Wu, G. AlRegib, Z. Kira, R. Socher, and C. Xiong, "Self-monitoring navigation agent via auxiliary progress estimation," in ICLR, 2019.
[22] Y. Zhu, F. Zhu, Z. Zhan, B. Lin, J. Jiao, X. Chang, and X. Liang, "Vision-dialog navigation by exploring cross-modal memory," in CVPR, 2020, pp. 10730-10739.
[23] W. Hao, C. Li, X. Li, L. Carin, and J. Gao, "Towards learning a generic agent for vision-and-language navigation via pre-training," in CVPR, 2020, pp. 13137-13146.
[24] X. Li, C. Li, Q. Xia, Y. Bisk, A. Çelikyilmaz, J. Gao, N. A. Smith, and Y. Choi, "Robust navigation with language pretraining and stochastic sampling," in EMNLP-IJCNLP, 2019, pp. 1494-1499.
[25] T. Cemgil, S. Ghaisas, K. Dvijotham, and P. Kohli, "Adversarially robust representations with smooth encoders," in ICLR, 2020.
[26] H. Zhang and J. Wang, "Defense against adversarial attacks using feature scattering-based adversarial training," in NeurIPS, 2019, pp. 1831-1841.
[27] H. Salman, J. Li, I. Razenshteyn, P. Zhang, H. Zhang, S. Bubeck, and G. Yang, "Provably robust deep learning via adversarially trained smoothed classifiers," in NeurIPS, 2019, pp. 11292-11303.
[28] J. Feng, Q.-Z. Cai, and Z.-H. Zhou, "Learning to confuse: Generating training time adversarial data with auto-encoder," in NeurIPS, 2019, pp. 11994-12004.
[29] K. R. Mopuri, A. Ganeshan, and R. V. Babu, "Generalizable data-free objective for crafting universal adversarial perturbations," IEEE Trans. Pattern Anal. Mach. Intell., vol. 41, no. 10, pp. 2452-2465, 2019.
[30] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in CVPR, 2016, pp. 770-778.
[31] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," in ICLR, 2015.
[32] D. Wang, C. Gong, and Q. Liu, "Improving neural language modeling via adversarial training," in ICML, 2019, pp. 6555-6565.
[33] C. Zhu, Y. Cheng, Z. Gan, S. Sun, T. Goldstein, and J. Liu, "FreeLB: Enhanced adversarial training for natural language understanding," in ICLR, 2019.
[34] Y. Cheng, L. Jiang, and W. Macherey, "Robust neural machine translation with doubly adversarial inputs," in ACL, 2019, pp. 4324-4333.
[35] E. Jones, R. Jia, A. Raghunathan, and P. Liang, "Robust encodings: A framework for combating adversarial typos," in ACL, 2020, pp. 2752-2765.
[36] J. Ebrahimi, A. Rao, D. Lowd, and D. Dou, "HotFlip: White-box adversarial examples for text classification," in ACL, vol. 2, 2018, pp. 31-36.
[37] A. Conneau, D. Kiela, H. Schwenk, L. Barrault, and A. Bordes, "Supervised learning of universal sentence representations from natural language inference data," in EMNLP, 2017, pp. 670-680.
[38] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," in NAACL-HLT, 2019, pp. 4171-4186.
[39] B. Wang, S. Wang, Y. Cheng, Z. Gan, R. Jia, B. Li, and J. Liu, "InfoBERT: Improving robustness of language models from an information theoretic perspective," in ICLR, 2021.
[40] R. Jia, A. Raghunathan, K. Göksel, and P. Liang, "Certified robustness to adversarial word substitutions," in EMNLP, 2019, pp. 4127-4140.
[41] S. Eger, G. G. Sahin, A. Rücklé, J.-U. Lee, C. Schulz, M. Mesgar, K. Swarnkar, E. Simpson, and I. Gurevych, "Text processing like humans do: Visually attacking and shielding NLP systems," in NAACL-HLT, 2019, pp. 1634-1647.
[42] X. Liu, H. Cheng, P. He, W. Chen, Y. Wang, H. Poon, and J. Gao, "Adversarial training for large neural language models," arXiv preprint arXiv:2004.08994, 2020.
[43] F. Yin, Q. Long, T. Meng, and K.-W. Chang, "On the robustness of language encoders against grammatical errors," in ACL, 2020, pp. 3386-3403.
[44] A. Liu, T. Huang, X. Liu, Y. Xu, Y. Ma, X. Chen, S. J. Maybank, and D. Tao, "Spatiotemporal attacks for embodied agents," in ECCV, 2020, pp. 122-138.
[45] A. Das, S. Datta, G. Gkioxari, S. Lee, D. Parikh, and D. Batra, "Embodied question answering," in CVPR Workshops, 2018, pp. 1-10.
[46] E. D. Cubuk, B. Zoph, D. Mane, V. Vasudevan, and Q. V. Le, "AutoAugment: Learning augmentation strategies from data," in CVPR, 2019, pp. 113-123.
[47] D. Ho, E. Liang, X. Chen, I. Stoica, and P. Abbeel, "Population based augmentation: Efficient learning of augmentation policy schedules," in ICML, 2019, pp. 2731-2741.
[48] S. Lim, I. Kim, T. Kim, C. Kim, and S. Kim, "Fast AutoAugment," in NeurIPS, vol. 32, 2019, pp. 6665-6675.
[49] R. Hataya, J. Zdenek, K. Yoshizoe, and H. Nakayama, "Faster AutoAugment: Learning augmentation strategies using backpropagation," in ECCV, 2020, pp. 1-16.
[50] E. D. Cubuk, B. Zoph, J. Shlens, and Q. Le, "RandAugment: Practical automated data augmentation with a reduced search space," in NeurIPS, vol. 33, 2020, pp. 18613-18624.
[51] Y. Wu, Y. Wu, G. Gkioxari, and Y. Tian, "Building generalizable agents with a realistic and rich 3D environment," in ICLR, 2018.
[52] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Harley, T. P. Lillicrap, D. Silver, and K. Kavukcuoglu, "Asynchronous methods for deep reinforcement learning," in ICML, 2016, pp. 1928-1937.
[53] L. Pinto, J. Davidson, R. Sukthankar, and A. Gupta, "Robust adversarial reinforcement learning," in ICML, 2017, pp. 2817-2826.
[54] C.-Y. Ma, Z. Wu, G. AlRegib, C. Xiong, and Z. Kira, "The regretful agent: Heuristic-aided navigation through progress estimation," in CVPR, 2019, pp. 6732-6740.
Code: https://github.com/expectorlin/DR-Attacker
arXiv: 2212.13015; DOI: 10.48550/arxiv.2212.13015; PDF: https://export.arxiv.org/pdf/2212.13015v1.pdf
Skit-S2I: An Indian Accented Speech to Intent dataset
Shangeth Rajaa (shangeth.rajaa@skit.ai), Swaraj Dalmia, Kumarmanas Nethil
Skit.ai, Bengaluru, India
Abstract: Conventional conversation assistants extract text transcripts from the speech signal using automatic speech recognition (ASR) and then predict intent from the transcriptions. Using end-to-end spoken language understanding (SLU), the intents of the speaker are predicted directly from the speech signal without requiring intermediate text transcripts. As a result, the model can optimize directly for intent classification and avoid cascading errors from ASR. The end-to-end SLU system also helps in reducing the latency of the intent prediction model. Although many datasets are available publicly for text-to-intent tasks, the availability of labeled speech-to-intent datasets is limited, and there are no datasets available in the Indian accent. In this paper, we release the Skit-S2I dataset, the first publicly available Indian-accented SLU dataset in the banking domain in a conversational tonality. We experiment with multiple baselines, compare different pretrained speech encoders' representations, and find that SSL pretrained representations perform slightly better than ASR pretrained representations lacking prosodic features for speech-to-intent classification. The dataset and baseline code are available at https://github.com/skit-ai/speech-to-intent-dataset

Index Terms: spoken language understanding, speech to intent, voice assistant, transfer learning
Introduction
Earlier intent classification pipeline systems transcribe the speech signal with an ASR model and then train a Natural Language Understanding (NLU) model to predict the intents from the text transcripts. These pipeline methods are prone to error propagation due to ASR transcript errors. Text-to-intent NLU models only make use of the semantic content of the speech and ignore other paralinguistic information in the signal that could convey the speaker's intention. The ASR and NLU models are optimized for different metrics separately, which may not be the best approach for the intent classification task. The pipeline method also increases the computational requirements and latency of intent classification, and building it is complex, as it involves training and testing two different models and collecting labeled corpora for both.
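To make the contrast with the end-to-end approach concrete, the following is a minimal Python sketch of the two designs; asr_model, nlu_model, and slu_model are hypothetical callables, not components released with this paper.

def pipeline_intent(audio, asr_model, nlu_model):
    transcript = asr_model(audio)   # ASR errors propagate to the NLU stage
    return nlu_model(transcript)    # intent predicted from text alone

def end_to_end_intent(audio, slu_model):
    return slu_model(audio)         # intent predicted directly from speech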
End-to-end SLU methods attempt to predict the speaker's intent directly from the speech signal with a single model, without any ASR transcripts. In contrast to cascaded pipeline methods, SLU methods also take advantage of other acoustic features in the speech signal, including prosody. The SLU model is optimized for the target metric of intent classification, which makes training much simpler. SLU models also have the advantage of lower computational requirements and better latency during deployment. The lack of large audio datasets with SLU labels still challenges SLU research, and SLU datasets across languages, accents, and multiple domains are needed to make SLU models robust. In this paper, we release the Skit-S2I dataset, the first Indian-accented speech corpus for intent classification in the banking domain, recorded via telephony in a conversational tone. We performed an experimental comparison of cascaded pipeline methods and SLU models based on pretrained speech encoders. We also analyzed how different pretraining methods and speech features, with and without prosody information, affect the performance of the SLU intent classification model, and we diagnosed errors in the dataset with datamaps [1].
Related Work
Air Travel Information System (ATIS) [2] is an audio dataset with semantic labels related to air travel planning, but it is private and expensive to obtain. The Snips SLU dataset covers both English and French, with only 2.9k and 1.2k samples respectively, in the voice assistant domain. Many previous studies have used transfer learning [3] or acoustic model pretraining [4] for intent classification and slot prediction on smaller SLU datasets. Recently there have been efforts in curating larger datasets for end-to-end SLU tasks, such as FSC [5], SLURP [6], and STOP [7]. However, these datasets are all based on the personal assistant domain, and no SLU datasets are available for Indian-accented English or the banking domain. In this paper, we introduce the Skit-S2I dataset, an Indian-accented speech corpus for end-to-end SLU with multiple speakers, recorded over telephony in a conversational tonality.

3. Skit-S2I Dataset
Data Collection
The Skit-S2I dataset was collected to develop voice assistants in the banking domain. The dataset consists of 14 coarse-grained intents across multiple banking-related tasks. Multiple templates were generated for each intent class to cover possible variations in human speech, and the recorded utterances are spoken spontaneously based on these templates, with variations. The dataset also provides descriptions for each intent. Figure 1 shows an example of multiple templates for an intent and the variations in the speakers' utterances. The average number of templates per intent is around 12. The audio utterances were recorded over telephone calls, making channel noise possible. Background noise is not as prevalent as in real-world scenarios, as the speakers made the telephone calls in a semi-controlled environment. The recorded audio signals have an 8 kHz sampling rate and 16-bit depth.
Dataset Statistics
A total of 11 speakers contributed to the dataset, including eight females and three males. The speakers are all native Indians with different native languages from different parts of the country. There are 11845 samples in the dataset, divided into train and test sets. The train set contains 10445 samples for training the model, while the test set contains 1400 samples for evaluation. Table 1 gives the data split of the Skit-S2I dataset. The train and test splits have around 650 and 100 samples per intent, respectively, and each intent has data from all the speakers. The speakers are stratified across the train and test sets independently for each intent. The dataset also includes anonymized speaker information such as gender, native language, languages spoken by the speaker, and the Indian states the speaker has lived in.
For the end-to-end SLU models, we experimented with four different pretrained speech encoders. The intents are predicted with a linear classifier after average-pooling the encoded speech representations. We used the wav2vec2 [8] and Hubert [12] models pretrained with self-supervised learning, and we also experimented with two sizes of the whisper encoder, base and small. All SLU models encode the audio with the encoders of the pretrained models. The models are finetuned with a learning rate of 1e-5 using the Adam optimizer.
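A minimal PyTorch sketch of this baseline head is given below as an illustrative reconstruction, not the released training code; the encoder is assumed to return frame-level features of shape (batch, frames, dim), and num_intents=14 matches the Skit-S2I label set.

import torch
import torch.nn as nn

class SLUClassifier(nn.Module):
    def __init__(self, encoder, hidden_dim, num_intents=14):
        super().__init__()
        self.encoder = encoder                           # pretrained speech encoder
        self.classifier = nn.Linear(hidden_dim, num_intents)

    def forward(self, audio):
        features = self.encoder(audio)                   # (batch, frames, dim)
        pooled = features.mean(dim=1)                    # average pooling over time
        return self.classifier(pooled)                   # intent logits

# Fine-tuning setup from the text: Adam with learning rate 1e-5, e.g.
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)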
Results and Analysis

Table 3 shows the cascaded pipeline and end-to-end SLU baseline results. The XLMR NLU model trained on different ASR transcripts gives different test accuracy and F1 scores, as errors in the ASR-predicted transcripts degrade the performance of the NLU model, and the two models are not optimized jointly for intent classification.
The Hubert-based end-to-end SLU model performed better than the wav2vec2 model, and both whisper models outperformed Hubert and wav2vec2. Across the pipeline and end-to-end SLU baselines, the whisper model outperforms the other baselines, as it is more robust and performs better on zero-shot tasks than the other models. Table 4 compares the number of parameters of the different baseline models. The pipeline baselines have the largest number of parameters, as the pipeline depends on two different models, whereas the end-to-end SLU models are much faster with less computation. Whisper-based SLU models are the fastest, with the lowest number of parameters, as their input is Mel-spectrograms, which require much less computation than the raw-waveform input of the wav2vec2 and Hubert models.
Most end-to-end SLU methods use pretrained ASR features [5] for intent classification or use distillation methods [13] to learn features from pretrained text encoders. ASR representations force the model to use only the linguistic/semantic information in the speech signal and ignore other important information, such as prosody. Stressing different syllables of a word can lead to different meanings, and the overall intonation contour contributes to the speaker's intention, so prosodic features can also help improve the intent classification task.
We compared the SSL-pretrained and ASR-finetuned versions of both the wav2vec2 and Hubert models for the intent classification task in Table 5. The SSL-pretrained models slightly improve the test metrics over the ASR-finetuned models. [14] also shows that SSL-trained wav2vec2 and Hubert models generalize well on tasks related to learning prosodic and semantic features from a speech signal, and that ASR finetuning of these models leads to a loss of prosodic information in the learned representations. Based on the above results and observations, we can hypothesize that the performance of the SSL-pretrained models is slightly better than that of the ASR-finetuned models because the SSL representations contain prosodic information.
We also trained the whisper(small.en) SLU model on the FSC and SLURP datasets to position the dataset as a benchmark. From Table 6, the whisper SLU model achieved an accuracy of 0.996 on the FSC dataset and 0.765 on the SLURP dataset. The SLURP and Skit-S2I datasets are better intent classification benchmarks than FSC for comparing SLU models, as FSC test scores are already very high with simple baselines.
Dataset Analysis
We performed the dataset cartography [1] analysis on the Skit-S2I dataset to assess its quality and diagnose errors. Dataset cartography leverages the training dynamics of a model trained on a dataset to create datamaps, which split the dataset into three distinct regions: easy-to-learn, ambiguous, and hard-to-learn. The ambiguous samples contribute to out-of-distribution generalization, the easy-to-learn samples play an essential role in optimization, and hard-to-learn samples often correspond to labeling errors.

Figure 2 shows the datamap generated by training the Whisper(small.en) model on the dataset. After generating the datamaps, we found that a few of the data points among the hard-to-learn samples had label noise, speechless audio files, and short speech utterances, possibly due to telephony noise. Figures 4 and 5 show the generated datamaps for the FSC and SLURP datasets, respectively. Most samples in the FSC dataset fall in the easy-to-learn region, whereas the SLURP dataset has a good split of data samples across all the regions. Thus the SLURP dataset can act as a better benchmark than FSC.

We also generated the datamaps for each speaker in the dataset to identify which speakers' audio samples had the above issues. We examined the generated datamaps for all the speakers and found that most of the dataset errors were samples from only a few speakers. Figure 3 shows the datamaps generated for speakers 1, 5, 6, and 8. Speakers 6 and 8 had very few hard-to-learn samples, whereas speakers 1 and 5 had many hard-to-learn samples and data errors. Not all hard-to-learn samples are data errors, but they have a high chance of being errors. So we manually check all the hard-to-learn samples and change or remove them to make the dataset error-free. Based on the dataset cartography analysis, we will remove the errors in a future version of the Skit-S2I dataset.
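A sketch of the training-dynamics statistics behind such datamaps (confidence and variability, following Swayamdipta et al. [1]) is given below; the array layout is an assumption for illustration.

import numpy as np

def datamap_coordinates(gold_probs):
    # gold_probs: shape (epochs, examples); probability assigned to the
    # gold intent after each training epoch. High confidence / low
    # variability = easy-to-learn; low confidence = hard-to-learn
    # (often label noise); high variability = ambiguous.
    confidence = gold_probs.mean(axis=0)
    variability = gold_probs.std(axis=0)
    return confidence, variability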
Conclusions
In this work, we released the Skit-S2I dataset, the first Indian-accented SLU dataset for the banking domain. We trained and compared multiple cascaded pipeline-based and end-to-end SLU baselines. We also compared the performance of different models and found that SSL representations perform better than ASR-finetuned ones, as SSL representations contain prosodic information. We also compared the performance of the whisper-based intent classification model on the FSC and SLURP datasets and found that the baseline model achieves very good performance on the FSC evaluation; Skit-S2I and SLURP are therefore better benchmarks than the FSC dataset.
Figure 1: Example of template-based utterances for an intent class
Figure 2: Datamaps analysis of the Skit-S2I dataset
Figure 3: Datamaps for the Skit-S2I dataset for speakers 1, 5, 6, and 8
Figure 4: Datamaps analysis of the FSC dataset
Figure 5: Datamaps analysis of the SLURP dataset
Table 2 compares the statistics of different open SLU datasets.

                 S2I       SLURP      FSC        SNIPS
Domain           Banking   Assistant  Assistant  Assistant
Speakers         11        177        97         69
Audio Files      11.8k     72k        30k        5.8k
Duration [hrs]   13.8      58         19         5.5
Avg. length [s]  4.2       2.9        2.3        3.4
Intents          14        91         31         7

Table 2: Statistics of the audio corpus of different SLU datasets

4. Experiments
We experimented with several baselines of both cascaded pipeline (ASR+NLU) and end-to-end SLU models on the Skit-S2I dataset. For the cascaded pipeline, we extract the ASR transcripts using the wav2vec2 [8] model finetuned on the commonvoice dataset [9] and the whisper small model [10]. Then we trained the pretrained XLMR [11] model using the ASR transcripts to predict the intents.

Table 3: Results for the baseline models on the Skit-S2I dataset.

Method     Model                    # params
Pipeline   Wav2vec2-ASR + XLMR      315M + 278M
Pipeline   Whisper-ASR + XLMR       244M + 278M
E2E SLU    Wav2vec2-SLU             313M
E2E SLU    Hubert-SLU               313M
E2E SLU    Whisper(base.en)-SLU     19.8M
E2E SLU    Whisper(small.en)-SLU    87M

Table 4: Number of parameters for each baseline model
Table 5: Results of SSL and ASR pretrained representations for intent classification (columns: Model, Pretraining, Accuracy, F1)
Table 6: Results of the whisper SLU model on SLU datasets
References

[1] S. Swayamdipta, R. Schwartz, N. Lourie, Y. Wang, H. Hajishirzi, N. A. Smith, and Y. Choi, "Dataset cartography: Mapping and diagnosing datasets with training dynamics," arXiv preprint arXiv:2009.10795, 2020.
[2] C. T. Hemphill, J. J. Godfrey, and G. R. Doddington, "The ATIS spoken language systems pilot corpus," in Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, 1990.
[3] N. Tomashenko, A. Caubrière, and Y. Estève, "Investigating adaptation and transfer learning for end-to-end spoken language understanding from speech," in Proc. Interspeech, 2019, pp. 824-828.
[4] L. Lugosch, M. Ravanelli, P. Ignoto, V. S. Tomar, and Y. Bengio, "Speech model pre-training for end-to-end spoken language understanding," in Proc. Interspeech, 2019, pp. 814-818.
[5] L. Lugosch, M. Ravanelli, P. Ignoto, V. S. Tomar, and Y. Bengio, "Speech model pre-training for end-to-end spoken language understanding," arXiv preprint arXiv:1904.03670, 2019.
[6] E. Bastianelli, A. Vanzo, P. Swietojanski, and V. Rieser, "SLURP: A spoken language understanding resource package," 2020. [Online]. Available: https://arxiv.org/abs/2011.13205
[7] P. Tomasello, A. Shrivastava, D. Lazar, P.-C. Hsu, D. Le, A. Sagar, A. Elkahky, J. Copet, W.-N. Hsu, Y. Adi, R. Algayres, T. A. Nguyen, E. Dupoux, L. Zettlemoyer, and A. Mohamed, "STOP: A dataset for spoken task oriented semantic parsing," 2022. [Online]. Available: https://arxiv.org/abs/2207.10643
[8] A. Baevski, Y. Zhou, A. Mohamed, and M. Auli, "wav2vec 2.0: A framework for self-supervised learning of speech representations," Advances in Neural Information Processing Systems, vol. 33, pp. 12449-12460, 2020.
[9] M. Ravanelli, T. Parcollet, P. Plantinga, A. Rouhe, S. Cornell, L. Lugosch, C. Subakan, N. Dawalatabad, A. Heba, J. Zhong, J.-C. Chou, S.-L. Yeh, S.-W. Fu, C.-F. Liao, E. Rastorgueva, F. Grondin, W. Aris, H. Na, Y. Gao, R. D. Mori, and Y. Bengio, "SpeechBrain: A general-purpose speech toolkit," 2021, arXiv:2106.04624.
[10] A. Radford, J. W. Kim, T. Xu, G. Brockman, C. McLeavey, and I. Sutskever, "Robust speech recognition via large-scale weak supervision," OpenAI Blog, 2022.
[11] A. Conneau, K. Khandelwal, N. Goyal, V. Chaudhary, G. Wenzek, F. Guzmán, E. Grave, M. Ott, L. Zettlemoyer, and V. Stoyanov, "Unsupervised cross-lingual representation learning at scale," arXiv preprint arXiv:1911.02116, 2019.
[12] W.-N. Hsu, B. Bolte, Y.-H. H. Tsai, K. Lakhotia, R. Salakhutdinov, and A. Mohamed, "HuBERT: Self-supervised speech representation learning by masked prediction of hidden units," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 29, pp. 3451-3460, 2021.
[13] Y. Jiang, B. Sharma, M. Madhavi, and H. Li, "Knowledge distillation from BERT transformer to speech transformer for intent classification," arXiv preprint arXiv:2108.02598, 2021.
[14] Y. Wang, A. Boumadane, and A. Heba, "A fine-tuned wav2vec 2.0/HuBERT benchmark for speech emotion recognition, speaker verification and spoken language understanding," arXiv preprint arXiv:2111.02735, 2021.
DOI: 10.3115/992628.992633; arXiv: cmp-lg/9605014
Clustering Words with the MDL Principle
Hang Li (lihang@sbl.cl.nec.co.jp) and Naoki Abe (abe@sbl.cl.nec.co.jp)
Theory NEC Laboratory, RWCP*
c/o C&C Research Laboratories, NEC
4-1-1 Miyazaki, Miyamae-ku, Kawasaki 216, Japan
Abstract: We address the problem of automatically constructing a thesaurus by clustering words based on corpus data. We view this problem as that of estimating a joint distribution over the Cartesian product of a partition of a set of nouns and a partition of a set of verbs, and propose a learning algorithm based on the Minimum Description Length (MDL) Principle for such estimation. We empirically compared the performance of our method based on the MDL Principle against the Maximum Likelihood Estimator in word clustering, and found that the former outperforms the latter. We also evaluated the method by conducting pp-attachment disambiguation experiments using an automatically constructed thesaurus. Our experimental results indicate that such a thesaurus can be used to improve accuracy in disambiguation.
Introduction
Recently, various methods for automatically constructing a thesaurus (hierarchically clustering words) based on corpus data have been proposed (Hindle, 1990; Brown et al., 1992; Pereira et al., 1993; Tokunaga et al., 1995). The realization of such an automatic construction method would make it possible to a) save the cost of constructing a thesaurus by hand, b) do away with the subjectivity inherent in a hand-made thesaurus, and c) make it easier to adapt a natural language processing system to a new domain. In this paper, we propose a new method for automatic construction of thesauri. Specifically, we view the problem of automatically clustering words as that of estimating a joint distribution over the Cartesian product of a partition of a set of nouns (in general, any set of words) and a partition of a set of verbs (in general, any set of words), and propose an estimation algorithm using simulated annealing with an energy function based on the Minimum Description Length (MDL) Principle. The MDL Principle is a well-motivated and theoretically sound principle for data compression and estimation in information theory and statistics. As a method of statistical estimation, MDL is guaranteed to be near optimal.

*Real World Computing Partnership
We empirically evaluated the effectiveness of our method. In particular, we compared the performance of an MDL-based simulated annealing algorithm in hierarchical word clustering against that of one based on the Maximum Likelihood Estimator (MLE, for short).
We found that the MDL-based method performs better than the MLE-based method.
We also evaluated our method by conducting pp-attachment disambiguation experiments using a thesaurus automatically constructed by it and found that disambiguation results can be improved.
Since some words never occur in a corpus, and thus cannot be reliably classified by a method solely based on corpus data, we propose to combine the use of an automatically constructed thesaurus and a hand-made thesaurus in disambiguation. We conducted some experiments in order to test the effectiveness of this strategy. Our experimental results indicate that combining an automatically constructed thesaurus and a hand-made thesaurus widens the coverage of our disambiguation method, while maintaining high accuracy.
The Problem Setting
A method of constructing a thesaurus based on corpus data usually consists of the following three steps: (i) extract co-occurrence data (e.g., case frame data, adjacency data) from a corpus; (ii) starting from a single class (or with each word composing its own class), divide (or merge) word classes based on the co-occurrence data using some similarity (distance) measure (the former approach is called 'divisive', the latter 'agglomerative'); (iii) repeat step (ii) until some stopping condition is met, to construct a thesaurus (tree). The method we propose here consists of the same three steps.
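A schematic of the divisive variant of these three steps can be sketched in Python as follows; the helper names are hypothetical and the split criterion is deliberately left abstract.

def divisive_clustering(words, cooccurrence, should_split, split_in_two):
    # Steps (ii)/(iii): recursively divide a class until the stopping
    # condition fails, yielding a thesaurus tree as nested lists.
    if not should_split(words, cooccurrence):
        return words                      # leaf: a single word class
    left, right = split_in_two(words, cooccurrence)
    return [divisive_clustering(left, cooccurrence, should_split, split_in_two),
            divisive_clustering(right, cooccurrence, should_split, split_in_two)]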
Suppose available to us are frequency data (co-occurrence data) between verbs and their case slot values extracted from a corpus (step (i)). We then view the problem of clustering words as that of estimating a probabilistic model (representing a probability distribution) that generates such data.
We assume that the target model can be defined in the following way. First, we define a noun partition $\mathcal{P}_{\mathcal{N}}$ over a given set of nouns $\mathcal{N}$ and a verb partition $\mathcal{P}_{\mathcal{V}}$ over a given set of verbs $\mathcal{V}$. A noun partition is any set $\mathcal{P}_{\mathcal{N}}$ satisfying $\mathcal{P}_{\mathcal{N}} \subseteq 2^{\mathcal{N}}$, $\bigcup_{C_i \in \mathcal{P}_{\mathcal{N}}} C_i = \mathcal{N}$, and $\forall C_i, C_j \in \mathcal{P}_{\mathcal{N}}\,(i \neq j),\ C_i \cap C_j = \emptyset$.
A verb partition $\mathcal{P}_{\mathcal{V}}$ is defined analogously. In this paper, we call a member of a noun partition 'a noun cluster', and a member of a verb partition 'a verb cluster'. We refer to a member of the Cartesian product of a noun partition and a verb partition ($\in \mathcal{P}_{\mathcal{N}} \times \mathcal{P}_{\mathcal{V}}$) simply as 'a cluster'. We then define a probabilistic model (a joint distribution), written $P(C_n, C_v)$, where random variable $C_n$ assumes a value from a fixed noun partition $\mathcal{P}_{\mathcal{N}}$, and $C_v$ a value from a fixed verb partition $\mathcal{P}_{\mathcal{V}}$. Within a given cluster, we assume that each element is generated with equal probability, i.e.,

$$\forall n \in C_n, \forall v \in C_v,\quad P(n, v) = \frac{P(C_n, C_v)}{|C_n \times C_v|} \qquad (1)$$
In this paper, we assume that the observed data are generated by a model belonging to the class of models just described, and select a model which best explains the data. As a result of this, we obtain both noun clusters and verb clusters. This problem setting is based on the intuitive assumption that similar words occur in the same context with roughly equal likelihood, as is made explicit in equation (1). Thus selecting a model which best explains the given data is equivalent to finding the most appropriate classification of words based on their co-occurrence.
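As an illustration with made-up numbers: if $P(C_n, C_v) = 0.6$, $|C_n| = 2$, and $|C_v| = 3$, then equation (1) assigns $P(n, v) = 0.6/(2 \times 3) = 0.1$ to each of the six pairs in $C_n \times C_v$.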
Clustering with MDL
We now turn to the question of what strategy (or criterion) we should employ for estimating the best model. Our choice is the MDL (Minimum Description Length) principle (Rissanen, 1989), a well-known principle of data compression and statistical estimation from information theory. MDL stipulates that the best probability model for given data is that model which requires the least code length for encoding of the model itself, as well as the given data relative to it.³ We refer to the code length for the model as 'the model description length' and that for the data as 'the data description length'.

³We refer the interested reader to (Li and Abe, 1995) for an explanation of the rationale behind using the MDL principle in natural language processing.
We apply MDL to the problem of estimating a model consisting of a pair of partitions as described above. In this context, a model with fewer clusters tends to be simpler (in terms of the number of parameters), but also tends to have a poorer fit to the data. In contrast, a model with more clusters is more complex, but tends to have a better fit to the data. Thus, there is a trade-off relationship between the simplicity of a model and its goodness of fit to the data. The model description length quantifies the simplicity (complexity) of a model, and the data description length quantifies the fit to the data. According to MDL, the model which minimizes the sum total of the two types of description lengths should be selected.
In what follows, we will describe in detail how the description length is to be calculated in our current context, as well as our simulated annealing algorithm based on MDL.
3.1 Calculating Description Length
We will now describe how the description length for a model is calculated. Recall that each model is specified by the Cartesian product of a partition of nouns and a partition of verbs, and a number of parameters for them. Here we let $k_n$ denote the size of the noun partition and $k_v$ the size of the verb partition. Then, there are $k_n \cdot k_v - 1$ free parameters in a model.
We employ the 'binary noun clustering method', in which $k_v$ is fixed at $|\mathcal{V}|$ and we are to decide whether $k_n = 1$ or $k_n = 2$, which is then to be applied recursively to the clusters thus obtained. This is as if we view the nouns as entities and the verbs as features, and cluster the entities based on their features. Since there are $2^{|\mathcal{N}|}$ subsets of the set of nouns $\mathcal{N}$, and for each 'binary' noun partition we have two different subsets (a special case of which is when one subset is $\mathcal{N}$ and the other the empty set $\emptyset$), the number of possible binary noun partitions is $2^{|\mathcal{N}|}/2 = 2^{|\mathcal{N}|-1}$. Thus for each binary noun partition we need $\log 2^{|\mathcal{N}|-1} = |\mathcal{N}| - 1$ bits⁵ to describe it.⁶ Hence $L_{mod}(M)$ is calculated as

$$L_{mod}(M) = |\mathcal{N}| - 1.$$

Next, $L_{par}(M)$ is calculated as

$$L_{par}(M) = \frac{k_n \cdot k_v - 1}{2} \cdot \log |S|,$$

where $|S|$ denotes the input data size, and $k_n \cdot k_v - 1$ is the number of (free) parameters in the model. It is known that using $\log \sqrt{|S|} = \frac{\log |S|}{2}$ bits to describe each of the parameters will (approximately) minimize the description length (Rissanen, 1989).

⁴L(M) depends on S, but we will leave S implicit.
⁵Throughout the paper, 'log' denotes the logarithm to base 2.
⁶For further explanation, see (Quinlan and Rivest, 1989).
Finally, $L_{dat}(M)$ is calculated by

$$L_{dat}(M) = -\sum_{(n,v) \in S} f(n,v) \cdot \log \hat{P}(n,v) \qquad (5)$$

where $f(n,v)$ denotes the observed frequency of the noun-verb pair $(n,v)$, and $\hat{P}(n,v)$ the estimated probability of $(n,v)$, which, following equation (1), is calculated as

$$\hat{P}(n,v) = \frac{f(C_n, C_v)}{|S| \cdot |C_n| \cdot |C_v|},$$

where $f(C_n, C_v)$ denotes the observed frequency of the pairs belonging to the cluster $(C_n, C_v)$ containing $(n,v)$.
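The following Python sketch puts the three terms together as one possible concrete reading of the formulas above; the data layout (a list of (noun, verb) pairs for S, and partitions given as lists of frozensets) is an assumption for illustration, not the authors' code.

import math
from collections import Counter

def description_length(pairs, noun_partition, verb_partition):
    # pairs: the sample S, a non-empty list of (noun, verb) co-occurrences.
    S = len(pairs)
    nouns = set().union(*noun_partition)
    kn, kv = len(noun_partition), len(verb_partition)

    L_mod = len(nouns) - 1                        # one binary noun split
    L_par = (kn * kv - 1) / 2 * math.log2(S)      # log sqrt(|S|) bits/parameter

    # Within-cluster uniform estimate of P(n, v), as in the text.
    cluster_of_n = {n: C for C in noun_partition for n in C}
    cluster_of_v = {v: C for C in verb_partition for v in C}
    cluster_freq = Counter((cluster_of_n[n], cluster_of_v[v]) for n, v in pairs)

    L_dat = 0.0
    for (n, v), f in Counter(pairs).items():
        Cn, Cv = cluster_of_n[n], cluster_of_v[v]
        p = cluster_freq[(Cn, Cv)] / (S * len(Cn) * len(Cv))
        L_dat -= f * math.log2(p)
    return L_mod + L_par + L_dat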
3.2 A Simulated Annealing-based Algorithm
We could in principle calculate the description length for each model and select a model with the minimum description length, if computation time were of no concern. However, since the number of probabilistic models under consideration is super-exponential, this is not feasible in practice. We employ the simulated annealing technique to deal with this problem. Figure 1 shows our (divisive) clustering algorithm.⁸
Advantages of Our Method
In this section, we elaborate on the merits of our method.
In statistical natural language processing, the number of parameters in a probabilistic model to be estimated is usually very large, and therefore such a model is difficult to estimate with the data sizes available in practice. (This problem is usually referred to as the 'data sparseness problem'.) We could smooth the estimated probabilities using an existing smoothing technique (e.g., (Dagan et al., 1992; Gale and Church, 1990)), then calculate some similarity measure using the smoothed probabilities, and then cluster words according to it. There is no guarantee, however, that the employed smoothing method is in any way consistent with the clustering method used subsequently. Our method based on MDL resolves this issue in a unified fashion. By employing models that embody the assumption that words belonging to the same class occur in the same context with equal likelihood, our method achieves the smoothing effect as a side effect of the clustering process, where the domains of smoothing coincide with the classes obtained by clustering. Thus, the coarseness or fineness of the clustering also determines the degree of smoothing. All of these effects fall out naturally as a corollary of the imperative of 'best possible estimation', the original motivation behind the MDL principle.

⁷The exact formulation of $L_{mod}(M)$ is subjective, and it depends on the exact coding scheme used for the description of the models.
⁸As we noted earlier, an alternative would be to employ an agglomerative algorithm.
In our simulated annealing algorithm, we could alternatively employ the Maximum Likelihood Estimator (MLE) as criterion for the best probabilistic model, instead of MDL. MLE, as its name suggests, selects a model which maximizes the likelihood of the data, that is, P̂ = argmax_P Π_{x∈S} P(x). This is equivalent to minimizing the 'data description length' as defined in Section 3, i.e., P̂ = argmin_P Σ_{x∈S} − log P(x). We can see easily that MDL generalizes MLE, in that it also takes into account the complexity of the model itself. In the presence of models with varying complexity, MLE tends to overfit the data, and output a model that is too complex and tailored to fit the specifics of the input data. If we employ MLE as criterion in our simulated annealing algorithm, it will result in selecting a very fine model with many small clusters, most of which will have probabilities estimated as zero. Thus, in contrast to employing MDL, it will not have the effect of smoothing at all.
Purely as a method of estimation as well, the superiority of MDL over MLE is supported by convincing theoretical findings (c.f. (Barron and Cover, 1991; Yamanishi, 1992)). For instance, the speed of convergence of the models selected by MDL to the true model is known to be near optimal. (The models selected by MDL converge to the true model approximately at the rate of 1/s, where s is the number of parameters in the true model, whereas for MLE the rate is 1/t, where t is the size of the domain, or in our context, the total number of elements of N × V.) 'Consistency' is another desirable property of MDL, which is not shared by MLE. That is, the number of parameters in the models selected by MDL converges to that of the true model (Rissanen, 1989).

Algorithm: Clustering
1. Divide the noun set N into two subsets. Define a probabilistic model consisting of the partition of nouns specified by the two subsets and the entire set of verbs.
do{
2.1 Randomly select one noun, remove it from the subset it belongs to and add it to the other.
2.2 Calculate the description length for the two models (before and after the move) as L1 and L2, respectively.
2.3 Viewing the description length as the energy function for annealing, let ΔL = L2 − L1. If ΔL < 0, fix the move; otherwise accept the move with probability P = exp(−ΔL/T).
} while (the description length has decreased during the past 10 · |N| trials)
Here T is the annealing temperature, whose initial value is 1 and which is updated to 0.9T after 10 · |N| trials.
3. If one of the obtained subsets is empty, then return the non-empty subset; otherwise recursively apply Clustering to both of the two subsets.
Figure 1: Simulated annealing algorithm for word clustering
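A minimal sketch of the Figure 1 procedure is given below, reusing description_length() from the previous sketch. The control flow (trial moves, acceptance probability exp(−ΔL/T), cooling by 0.9 after 10 · |N| trials, recursion on both halves) follows the figure, while the data structures and function names are our own assumptions.

```python
import math
import random

def clustering(nouns, pairs, verb_clusters, T0=1.0):
    """Divisive word clustering by simulated annealing (Figure 1).
    Returns a binary thesaurus tree over `nouns` as nested lists."""
    nouns = set(nouns)
    sub_pairs = [(n, v) for n, v in pairs if n in nouns]
    if len(nouns) <= 1 or not sub_pairs:
        return sorted(nouns)
    # Step 1: a random initial split into two subsets.
    a = set(random.sample(sorted(nouns), len(nouns) // 2))
    b = nouns - a
    T = T0
    L1 = description_length(sub_pairs, [a, b], verb_clusters)
    decreased = True
    while decreased:                      # stop once a round brings no decrease
        decreased = False
        for _ in range(10 * len(nouns)):  # 10 * |N| trials per temperature
            n = random.choice(sorted(nouns))
            src, dst = (a, b) if n in a else (b, a)
            src.remove(n); dst.add(n)     # 2.1: tentative move
            L2 = description_length(sub_pairs, [a, b], verb_clusters)
            dL = L2 - L1                  # 2.2-2.3: energy difference
            if dL < 0 or random.random() < math.exp(-dL / T):
                decreased = decreased or dL < 0
                L1 = L2                   # keep the move
            else:
                dst.remove(n); src.add(n) # undo the move
        T *= 0.9                          # cooling schedule: T <- 0.9 T
    # Step 3: return the non-empty subset, or recurse on both halves.
    if not a or not b:
        return sorted(a or b)
    return [clustering(a, sub_pairs, verb_clusters),
            clustering(b, sub_pairs, verb_clusters)]
```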
Both of these properties of MDL are empirically verified in our present context, as will be shown in the next section. In particular, we have compared the performance of employing an MDL-based simulated annealing against that of one based on MLE, in hierarchical word clustering.
Experimental Results
Figure 2: An example thesaurus
We describe our experimental results in this section.
Experiment 1: MDL vs. MLE
We compared the performance of employing MDL as a criterion in our simulated annealing algorithm against that of employing MLE, by simulation experiments. We artificially constructed a true model of word co-occurrence, and then generated data according to its distribution. We then used the data to estimate a model (clustering words), and measured the KL distance between the true model and the estimated model. (The KL distance, or relative entropy, which is widely used in information theory and statistics, is a measure of 'distance' between two distributions (Cover and Thomas, 1991); it is always non-negative and is zero iff the two distributions are identical. The algorithm used for MLE was the same as that shown in Figure 1, except that the 'data description length' replaces the (total) 'description length' in Step 2.) Figure 3(a) plots the number of obtained noun clusters (leaf nodes in the obtained thesaurus tree) versus the input data size, averaged over 10 trials. (The number of noun clusters in the true model is 4.) Figure 3(b) plots the KL distance versus the data size, also averaged over the same 10 trials. The results indicate that MDL converges to the true model faster than MLE. Also, MLE tends to select a model overfitting the data, while MDL tends to select a model which is simple and yet fits the data reasonably well.
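For reference, the KL distance used in this experiment can be computed as below; the dictionary-based representation of the two distributions is an assumption for illustration. Note that the estimated model must be non-zero wherever the true model is, which MLE-selected models with zero-probability clusters can violate; this is one practical reason the implicit smoothing of the MDL model matters.

```python
import math

def kl_distance(p_true, p_est):
    """D(p_true || p_est) in bits, over a shared finite domain (here
    all noun-verb pairs in N x V). Non-negative; zero iff the two
    distributions are identical."""
    return sum(p * math.log2(p / p_est[x])
               for x, p in p_true.items() if p > 0)
```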
Experiment 2: Qualitative Evaluation
We extracted roughly 180,000 case frames from the bracketed WSJ (Wall Street Journal) corpus of the Penn Tree Bank (Marcus et al., 1993) as co-occurrence data. We then constructed a number of thesauri based on these data, using our method. Figure 2 shows an example thesaurus for the 20 most frequently occurring nouns in the data, constructed based on their appearances as subject and object of roughly 2000 verbs. The obtained thesaurus seems to agree with human intuition to some degree. For example, 'million' and 'billion' are classified in one noun cluster, and 'stock' and 'share' are classified together. Not all of the noun clusters, however, seem to be meaningful in the useful sense. This is probably because the data size we had was not large enough. Pragmatically speaking, however, whether the obtained thesaurus agrees with our intuition in itself is only of secondary concern, since the main purpose is to use the constructed thesaurus to help improve on a disambiguation task.
Experiment 3: Disambiguation
We also evaluated our method by using a constructed thesaurus in a PP-attachment disambiguation experiment. We used as training data the same 180,000 case frames as in Experiment 2. We also extracted as our test data 172 (verb, noun1, prep, noun2) patterns from the data in the same corpus, which is not used in the training data. For the 150 words that appear in the position of noun2 in the test data, we constructed a thesaurus based on the co-occurrences between heads and slot values of the frames in the training data. This is because in our disambiguation test we only need a thesaurus consisting of these 150 words. We then applied the learning method proposed in (Li and Abe, 1995) to learn case frame patterns with the constructed thesaurus as input, using the same training data. That is, we used it to learn the conditional distributions P(Class1 | verb, prep) and P(Class2 | noun1, prep), where Class1 and Class2 vary over the internal nodes in a certain 'cut' in the thesaurus tree. (Each 'cut' in a thesaurus tree defines a different noun partition. See (Li and Abe, 1995) for details.)
We then compare 0), we conclude that we cannot make a decision. Table 1 shows the results of our pp-attachment disambiguation experiment in terms of 'coverage' and 'accuracy.' tlere 'coverage' refers to the proportion (in percentage) of the test patterns on which the disambiguation method could make a decision. 'Base Line' refers to tile method of always ~ttaching (prep, noun.~.) to noun1. 'Word-Based', 'MLE-Thesaurus', and 'MDL-Thesaurus' respectively stand tbr using word-based estimates, using a thesaurus constructed by employing MLE, and using a thesaurus constructed by our method. Note that the coverage of ~MDL-Thesaurus' signifiea.ntly outperformed that of 'Word-Based', while basically maintaining high accuracy (though it drops somewhat), indicating that using an automatically constructed thesaurus can improve disambiguation results in terms of coverage.
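The decision rule itself reduces to a comparison of the two estimated conditional probabilities; a sketch with hypothetical argument names follows.

```python
def pp_attach(p_class1_given_verb_prep, p_class2_given_noun1_prep):
    """Decision rule for a (verb, noun1, prep, noun2) pattern: attach
    the PP to the verb, to noun1, or abstain (abstentions reduce the
    'coverage' figure reported in Table 1)."""
    if p_class1_given_verb_prep > p_class2_given_noun1_prep:
        return "verb"
    if p_class2_given_noun1_prep > p_class1_given_verb_prep:
        return "noun1"
    return None  # covers ties, including when both estimates are 0
```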
We also tested the method proposed in (Li and Abe, 1995) of learning case frame patterns using an existing thesaurus. In particular, we used this method with WordNet (Miller et al., 1993) and the same training data, and then conducted a PP-attachment disambiguation experiment using the obtained case frame patterns. We show the result of this experiment as 'WordNet' in Table 1. We can see that in terms of 'coverage', 'WordNet' outperforms 'MDL-Thesaurus', but in terms of 'accuracy', 'MDL-Thesaurus' outperforms 'WordNet'. These results can be interpreted as follows. An automatically constructed thesaurus is more domain dependent and captures the domain dependent features better, and thus using it achieves high accuracy. On the other hand, since the training data we had available is insufficient, its coverage is smaller than that of a hand made thesaurus. In practice, it makes sense to combine both types of thesauri. More specifically, an automatically constructed thesaurus can be used within its coverage, and outside its coverage, a hand made thesaurus can be used. Given the current state of the word clustering technique (namely, it requires data size that is usually not available, and it tends to be computationally demanding), this strategy is practical. We show the result of this combined method as 'MDL-Thesaurus + WordNet' in Table 2.

Table 2: PP-attachment disambiguation results for the combined methods 'MDL-Thesaurus + WordNet' and 'MDL-Thesaurus + WordNet + LA + Default', reported in terms of Coverage(%) and Accuracy(%).
Figure 3: (a) Number of clusters versus data size and (b) KL distance versus data size
Table 1: PP-attachment disambiguation results

Method          Coverage(%)  Accuracy(%)
Base Line          100.0        70.2
Word-Based          19.7        95.1
MDL-Thesaurus       33.1        93.0
MLE-Thesaurus       33.7        89.7
WordNet             49.4        88.2

'Coverage' refers to the proportion (in percentage) of test data for which the disambiguation method can make a decision. 'Accuracy' refers to the success rate, given that the disambiguation method makes a decision.
Our experimental result shows that employing the combined method does increase the coverage of disambiguation. We also tested 'MDL-Thesaurus + WordNet + LA + Default', which stands for using the learned thesaurus and WordNet first, then the lexical association value proposed by (Hindle and Rooth, 1991), and finally the default (i.e., always attaching (prep, noun2) to noun1). Our best disambiguation result, obtained using this last combined method, somewhat improves on the accuracy reported in (Li and Abe, 1995) (84.3%): the disambiguation accuracy obtained this way was 85.5%.
Concluding Remarks

We have proposed a method of hierarchical clustering of words based on large corpus data. We conclude with the following remarks.

1. Our method of clustering words based on the MDL principle is theoretically sound. Our experimental results show that it is better to employ MDL than MLE as estimation criterion in hierarchical word clustering.
2. Using a thesaurus constructed by our method can improve PP-attachment disambiguation results.
3. At the current state of the art in statistical natural language processing, it is best to use a combination of an automatically constructed thesaurus and a hand made thesaurus for disambiguation purposes.
Andrew R. Barron and Thomas M. Cover. 1991. Minimum complexity density estimation. IEEE Transactions on Information Theory, 37(4):1034-1054.

Peter F. Brown, Vincent J. Della Pietra, Peter V. deSouza, Jenifer C. Lai, and Robert L. Mercer. 1992. Class-based n-gram models of natural language. Computational Linguistics, 18(4):283-298.

Thomas M. Cover and Joy A. Thomas. 1991. Elements of Information Theory. John Wiley & Sons Inc.

Ido Dagan, Shaul Marcus, and Shaul Markovitch. 1992. Contextual word similarity and estimation from sparse data. In Proceedings of the 30th ACL.

William A. Gale and Kenneth W. Church. 1990. Poor estimates of context are worse than none. In Proceedings of the DARPA Speech and Natural Language Workshop, pages 283-287.

Donald Hindle and Mats Rooth. 1991. Structural ambiguity and lexical relations. In Proceedings of the 29th ACL, pages 229-236.

Donald Hindle. 1990. Noun classification from predicate-argument structures. In Proceedings of the 28th ACL, pages 268-275.

Hang Li and Naoki Abe. 1995. Generalizing case frames using a thesaurus and the MDL principle. In Proceedings of Recent Advances in Natural Language Processing, pages 239-248.

Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.

George A. Miller, Richard Beckwith, Christiane Fellbaum, Derek Gross, and Katherine Miller. 1993. Introduction to WordNet: An on-line lexical database. Anonymous FTP: clarity.princeton.edu.

Fernando Pereira, Naftali Tishby, and Lillian Lee. 1993. Distributional clustering of English words. In Proceedings of the 31st ACL, pages 183-190.

J. Ross Quinlan and Ronald L. Rivest. 1989. Inferring decision trees using the minimum description length principle. Information and Computation, 80:227-248.

Jorma Rissanen. 1989. Stochastic Complexity in Statistical Inquiry. World Scientific Publishing Co.

Takenobu Tokunaga, Makoto Iwayama, and Hozumi Tanaka. 1995. Automatic thesaurus construction based on grammatical relations. In Proceedings of IJCAI '95.

Kenji Yamanishi. 1992. A learning criterion for stochastic rules. Machine Learning, 9:165-203.
| [] |
[
"OKGIT: Open Knowledge Graph Link Prediction with Implicit Types",
"OKGIT: Open Knowledge Graph Link Prediction with Implicit Types"
] | [
"Chandrahas chandrahas@iisc.ac.in \nIndian Institute of Science\nBangalore\n",
"Partha Pratim Talukdar \nIndian Institute of Science\nBangalore\n"
] | [
"Indian Institute of Science\nBangalore",
"Indian Institute of Science\nBangalore"
] | [] | Open Knowledge Graphs (OpenKG) refer to a set of (head noun phrase, relation phrase, tail noun phrase) triples such as (tesla, return to, new york) extracted from a corpus using Ope-nIE tools. While OpenKGs are easy to bootstrap for a domain, they are very sparse and far from being directly usable in an end task. Therefore, the task of predicting new facts, i.e., link prediction, becomes an important step while using these graphs in downstream tasks such as text comprehension, question answering, and web search query recommendation. Learning embeddings for OpenKGs is one approach for link prediction that has received some attention lately. However, on careful examination, we found that current OpenKG link prediction algorithms often predict noun phrases (NPs) with incompatible types for given noun and relation phrases. We address this problem in this work and propose OKGIT that improves OpenKG link prediction using novel type compatibility score and type regularization. With extensive experiments on multiple datasets, we show that the proposed method achieves state-of-the-art performance while producing type compatible NPs in the link prediction task. | 10.18653/v1/2021.findings-acl.225 | [
"https://arxiv.org/pdf/2106.12806v1.pdf"
] | 235,624,320 | 2106.12806 | 865bfd8034785a084f0c7d651119b8504e939535 |
OKGIT: Open Knowledge Graph Link Prediction with Implicit Types
Chandrahas chandrahas@iisc.ac.in
Indian Institute of Science
Bangalore
Partha Pratim Talukdar
Indian Institute of Science
Bangalore
OKGIT: Open Knowledge Graph Link Prediction with Implicit Types
Open Knowledge Graphs (OpenKG) refer to a set of (head noun phrase, relation phrase, tail noun phrase) triples such as (tesla, return to, new york) extracted from a corpus using OpenIE tools. While OpenKGs are easy to bootstrap for a domain, they are very sparse and far from being directly usable in an end task. Therefore, the task of predicting new facts, i.e., link prediction, becomes an important step while using these graphs in downstream tasks such as text comprehension, question answering, and web search query recommendation. Learning embeddings for OpenKGs is one approach for link prediction that has received some attention lately. However, on careful examination, we found that current OpenKG link prediction algorithms often predict noun phrases (NPs) with incompatible types for given noun and relation phrases. We address this problem in this work and propose OKGIT that improves OpenKG link prediction using novel type compatibility score and type regularization. With extensive experiments on multiple datasets, we show that the proposed method achieves state-of-the-art performance while producing type compatible NPs in the link prediction task.
Introduction
An Open Knowledge Graph (OpenKG) is a set of factual triples extracted from a text corpus using Open Information Extraction (OpenIE) tools such as TEXTRUNNER (Banko et al., 2007) and ReVerb (Fader et al., 2011). These triples are of the form (noun phrase, relation phrase, noun phrase), e.g., (tesla, return to, new york). An OpenKG can be viewed as a multi-relational graph where the noun phrases (NPs) are the nodes, and the relation phrases (RPs) are the labeled edges between pairs of nodes. It is easy to bootstrap OpenKGs from a domain-specific corpus, making them suitable for newer domains. However, they are extremely sparse and may not be directly usable for an end task. Therefore, tasks such as NP canonicalization (merging mentions of the same entity) and link prediction (predicting new facts) become an important step in downstream applications. Some example applications are text comprehension (Mausam, 2016), relation schema induction (Nimishakavi et al., 2016), canonicalization (Vashishth et al., 2018), question answering (Yao and Van Durme, 2014), and web search query recommendation (Huang et al., 2016). In this work, we focus on improving OpenKG link prediction.
Although OpenKGs are structurally similar to Ontological KGs, they come with a different set of challenges. They are extremely sparse, NPs and RPs are not canonicalized, and no type information is present for NPs. There has been much work on learning embeddings for Ontological KGs in the past years. However, this task has not received much attention in the context of OpenKGs. CaRE (Gupta et al., 2019) is a recent method which addresses this problem. It learns embeddings for NPs and RPs in an OpenKG while incorporating NP canonicalization information. However, even after incorporating canonicalization, we find that CaRE struggles to predict NPs whose types are compatible with the given head NP and RP.
As observed by Petroni et al. (2019), modern pretrained language representation models like BERT can store factual knowledge and can be used to perform link prediction in KGs. However, in our explorations with OpenKGs, we found that even though BERT may not predict the correct NP on top, it predicts type compatible NPs (Table 1). A similar observation was also made in the context of entity linking. As OpenKGs do not have any underlying ontology and obtaining type information can be expensive, BERT predictions can help improve OpenKG link prediction.
Motivated by this, we employ BERT for improving OpenKG link prediction, using a novel type compatibility score (Section 4.2) and a type regularization term (Section 4.4). We propose OKGIT, a method for OpenKG link prediction with improved type compatibility. We test our model on multiple datasets and show that it achieves state-of-the-art performance on all of these datasets.
We make the following contributions:
• We address the problem of OpenKG link prediction, focusing on improving type compatibility of predictions. To the best of our knowledge, this is the first work that addresses this problem.
• We propose OKGIT, a method for OpenKG link prediction with novel type compatibility score and type regularization. OKGIT can utilize NP canonicalization information while improving the type compatibility of predictions.
• We evaluate OKGIT on the link prediction across multiple datasets and observe that it outperforms the baseline methods. We also demonstrate that the learned model generates more type compatible predictions.
Source code for the proposed model and the experiments from this paper is available at https://github.com/Chandrahasd/OKGIT.
Related Work
OpenKG Embeddings: Learning embeddings for OpenKGs has been a relatively under-explored area of research. Previous work using OpenKG embeddings has primarily focused on canonicalization. CESI (Vashishth et al., 2018) uses KG embedding models for the canonicalization of noun phrases in OpenKGs. The problem of incorporating canonicalization information into OpenKG embeddings was addressed by Gupta et al. (2019). Their method for OpenKG embeddings (i.e., CaRE) performs better than Ontological KG embedding baselines in terms of link prediction performance. The challenges in the link prediction for OpenKGs were discussed in Broscheit et al. (2020), and methods similar to CaRE were proposed. In spirit, CaRE (Gupta et al., 2019) comes closest to our model; however, they do not address the problem of type compatibility in the link prediction task.
Entity Type: Entity typing is a popular problem where given a sentence and an entity mention, the goal is to predict explicit types of the entity. It has been an active area of research, and many models and datasets, such as (Mai et al., 2018), (Hovy et al., 2006), and (Choi et al., 2018), have been proposed. However, unlike this task, we aim to incorporate unsupervised implicit type information present in the pre-trained BERT model into OpenKG embeddings, rather than predicting explicit entity types present in ontologies or corpora.
For unsupervised cases, the problem of type compatibility in link prediction was addressed in . They employ a type compatibility score by learning a type vector for each NP and two type vectors (head and tail) for each relation. This score is multiplied with the triple score function, and the type vectors are trained jointly with embedding vectors. Although their method addresses the type compatibility issue, it is based on Ontological KG embedding models and shares the same limitations. In another work (Xie et al., 2016), hierarchical type information available in the dataset is incorporated while learning embeddings. However, their model is suitable only for Ontological KGs where the type information is readily available.
BERT in KG Embedding: BERT architecture has been used for scoring KG triples (Yao et al., 2019; Wang et al., 2019). However, their methods work on Ontological KGs without any explicit attention to NP types. In other work (Petroni et al., 2019), pre-trained BERT models are used for predicting links in KG. However, their focus was to evaluate knowledge present in the pre-trained BERT models instead of improving the existing link prediction model. BERT embeddings were also used for extracting entity type information. However, it was used for Entity Linking compared to OpenKG link prediction in our case.

Figure 1: OKGIT Architecture. OKGIT learns embeddings for Noun Phrases (NP) and Relation Phrases (RP) present in an OpenKG by augmenting a standard tail prediction loss with type compatibility loss. Guidance for the tail type is obtained through type projection out of BERT's tail embedding prediction. In the figure, h, r, and t are the head NP, relation (RP), and tail NP. h = w h 1 . . . w h k h and r = w r 1 . . . w r k r are the tokens in the head NP and relation, respectively. t C and t B are the tail NP vectors predicted by the CaRE and BERT models (please see Section 3 for background on these two models). Vectors τ B and τ are the type vectors obtained using type projections P B and P , respectively. ψ PRED represents the tail prediction score (Section 4.1) while ψ TYPE represents the type compatibility score (Section 4.2). ψ OKGIT is the combined score generated by OKGIT for the input triple (h, r, t) (Section 4.3). Please refer to Section 4 for more details.
Background
We first introduce the notation used in this paper, followed by brief descriptions of BERT and CaRE. Notation: An Open Knowledge Graph OpenKG = (N , R, T ) contains a set of noun phrases (NPs) N , a set of relation phrases (RPs) R and a set of triples (h, r, t) ∈ T where h, t ∈ N and r ∈ R.
Here, h and t are called the head and tail NPs, and r is the RP between them. Each of them contains tokens from a vocabulary V, specifically, h = (w h 1 , w h 2 , . . . , w h k h ), t = (w t 1 , w t 2 , . . . , w t k t ) and r = (w r 1 , w r 2 , . . . , w r k r ). Here, k h , k r , and k t are the numbers of tokens in the head NP, the relation, and the tail NP. OpenKG embedding methods learn vector representations for NPs and RPs. Specifically, vectors for an NP e ∈ N and an RP r ∈ R are represented by boldface letters e ∈ R d e and r ∈ R d r . Here, d e and d r are the dimensions of NP and RP vectors. Usually, d e = d r . A score function ψ(h, r, t) represents the plausibility of a triple. Similarly, BERT represents tokens by d B -dimensional vectors. A type projection matrix P takes the vectors to a common d τ -dimensional type space R d τ . The vectors in the type space are denoted by τ . BERT (Devlin et al., 2019): BERT is a bidirectional language representation model based on the transformer architecture (Vaswani et al., 2017), which has shown performance improvements across multiple NLP tasks. It is pre-trained on two tasks: (1) Masked Language Modeling (MLM), where the model is trained to predict randomly masked tokens from the input sentences, and (2) Next Sentence Prediction (NSP), where the model is trained to predict whether an input pair of sentences occurs in a sequence or not. In our case, we use a pre-trained BERT model (without fine-tuning) for predicting a masked tail NP in a triple. CaRE (Gupta et al., 2019): CaRE is an OpenKG embedding method that can incorporate NP canonicalization information while learning the embeddings. NP canonicalization is the problem of grouping all surface forms of a given entity in one cluster, e.g., inferring that Barack Obama, Barack H. Obama, and President Obama all refer to the same underlying entity. CaRE consists of three components: (1) a canonicalization cluster encoder (CN), which generates NP embeddings by aggregating embeddings of canonical NPs from the corresponding cluster, (2) a bi-directional GRU based phrase encoder (PN), which encodes the tokens in RPs to generate RP embeddings, and (3) a base model, which is an Ontological KG embedding method like ConvE (Dettmers et al., 2018). It uses NP and RP embeddings for scoring triples. These triple scores are then fed to a loss function (e.g., pairwise ranking loss with negative sampling (Bordes et al., 2013) or binary cross-entropy loss (BCE) (Dettmers et al., 2018)). In this paper, we use CaRE with ConvE as the base model. This model generates a candidate tail NP vector for a given NP h and RP r, denoted by CaRE(h, r).
OKGIT: Our Proposed Method
Motivation: As illustrated in Table 1, the top NPs predicted by CaRE may not always be type compatible with the input query. On the other hand, BERT's top predictions are usually type compatible, although they may not be factually correct. Thus, we hypothesize that a combination of these two models can produce correct as well as type compatible predictions. Motivated by this, we develop OKGIT, which combines the best of both of these models. The complete architecture of the proposed model can be found in Figure 1. In the following sections, we present the various components of the proposed model.
ψ PRED : Tail Prediction Score
The correctness of tail prediction in a triple is measured by the triple score function ψ PRED . Given a triple (h, r, t), it uses the corresponding vectors (h, r, t) and assigns high scores to correct triples and low scores to incorrect triples. We follow CaRE (Gupta et al., 2019) for scoring triples, which internally uses ConvE (Dettmers et al., 2018) as the base model. For a given triple (h, r, t), the CaRE model first predicts a tail NP vector t C as
t C = CaRE(h, r)   (1)
The predicted tail NP vector t C is then matched against the given tail NP vector t using dot product to generate the triple score ψ PRED .
ψ PRED (t, t C ) = t C ⊤ t.   (2)
The score ψ PRED represents tail prediction correctness; the CaRE model uses only this score.
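In batched form, this score is a single matrix product over all candidate tails. The sketch below is an illustrative reading of Eq. (2) with our own tensor names, not the released implementation.

```python
import torch

def psi_pred(t_c, np_embeddings):
    """Eq. (2) in batch: t_c is the [B, d_e] tail vector CaRE(h, r)
    predicted for each (h, r) query; np_embeddings is the [|N|, d_e]
    matrix of tail NP embeddings. Returns a [B, |N|] score matrix
    whose (i, j) entry is the dot product used to rank candidates."""
    return t_c @ np_embeddings.t()
```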
ψ TYPE : Tail Type Compatibility Score
The type compatibility between a given (head NP, RP) pair and a tail NP is measured by the type compatibility score function ψ TYPE . It assigns a high score when an NP t has suitable types as candidate tail NP for given head NP h and RP r. We employ a Masked Language Model (MLM) for measuring type compatibility, specifically BERT (Devlin et al., 2019). Following (Petroni et al., 2019), we can generate a candidate tail NP vector using BERT. Specifically, given a triple (h, r, t), we replace the head NP h and RP r with their tokens and tail NP t with a special MASK token. The resulting sentence (w h 1 , . . . , w h k h , w r 1 , . . . , w r kr , MASK) is sent as input to the BERT model. We denote the output vector from BERT corresponding to the MASK tail token as t B .
t B = BERT(h, r, MASK)   (3)
We can predict tail NPs for a given (h, r) by finding the nearest neighbors of t B from the BERT vocabulary (Appendix D). These predicted NPs may not be the correct tail NP present in KG; however, they tend to be type compatible with the given (h, r) pair.
Motivated by this, we extract the implicit NP type information from this vector using a type projector P B ∈ R d τ ×d B . The output vector t B from BERT is high-dimensional and can be used as a proxy for the NP's type. Therefore, P B projects the t B vector to a lower dimensional space such that only relevant information is retained. We do a similar operation on the tail NP embedding t and use a type projector P ∈ R d τ ×d e to extract type information. Both P B and P are trained jointly with the model. Thus, the type vectors are given by
τ B = P B t B and τ = P t   (4)
for BERT and CaRE, respectively. Here, both τ B , τ ∈ R d τ . Then, the type compatibility score between them can be measured by the negative of the squared Euclidean distance, i.e.,

ψ TYPE (τ , τ B ) = −||τ B − τ ||₂².
We also experimented with a dot product version of the type score, ψ Dot TYPE (τ , τ B ) = τ B τ , and found its performance to be comparable to the Euclidean distance version. Therefore, we use the Euclidean distance version for all our experiments.
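The sketch below illustrates how t B and the type score can be obtained with a pre-trained BERT via the HuggingFace transformers API. The checkpoint name (bert-base-uncased) and the dimensions (d_B = 768, d_e = 300, d_τ = 300, the last within the paper's search space) are assumptions for illustration; the released implementation may differ.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased").eval()

def bert_tail_vector(head_np, rel_phrase):
    """t_B from Eq. (3): the final hidden state at the masked tail
    position of the sentence "<head tokens> <relation tokens> [MASK]"."""
    enc = tokenizer(f"{head_np} {rel_phrase} {tokenizer.mask_token}",
                    return_tensors="pt")
    with torch.no_grad():
        hidden = bert(**enc).last_hidden_state          # [1, L, d_B]
    pos = (enc["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0, 0]
    return hidden[0, pos]                               # [d_B]

# Type projectors of Eq. (4); both are trained jointly with the model.
d_B, d_e, d_tau = 768, 300, 300
P_B = torch.nn.Linear(d_B, d_tau, bias=False)
P = torch.nn.Linear(d_e, d_tau, bias=False)

def psi_type(t, t_b):
    """Type compatibility score of Section 4.2 for a tail embedding t
    and a BERT tail vector t_b: negative squared L2 distance."""
    tau, tau_b = P(t), P_B(t_b)
    return -((tau_b - tau) ** 2).sum(-1)
```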
ψ OKGIT : Final Composite Score
The score functions ψ PRED and ψ TYPE may contain complementary information. Therefore, we use a combination of the triple and type compatibility scores as the final score for a given triple:

ψ OKGIT (h, r, t) = ψ PRED (t, t C ) + γ × ψ TYPE (τ , τ B ).   (5)

Please recall that t C and τ B are in turn dependent on h and r ((1) and (3)), while τ is dependent on t (4). Here, γ controls the relative weights given to the individual scores. This final score takes care of both, i.e., triple correctness as well as type compatibility. For training, we feed the sigmoid of this score function to the Binary Cross Entropy (BCE) loss function following (Dettmers et al., 2018).
Learning with Type Regularization
Let X = {(h i , r i )|(h i , r i , t i ) ∈ T for some t i ∈ N }
be the set of all head NPs and RPs which appear in the OpenKG. Let y i be the label for the triple (h i , r i , t i ), which is 1 if (h i , r i , t i ) ∈ T and 0 otherwise. We apply the logistic sigmoid function σ on the score ψ OKGIT to get the predicted label

ŷ i = σ(ψ OKGIT (h i , r i , t i )).
Finally, we use the following binary cross-entropy (BCE) loss for triple correctness.
TripleLoss(h i , r i , t i ) = −[y i · log(ŷ i ) + (1 − y i ) · log(1 − ŷ i )]
To further reinforce the type compatibility in the model, we include an additional loss term which forces the type vectors of correct triples to be closer in the type space. Similar to TripleLoss, we use the binary cross-entropy loss for type regularization as well. The type regularization term is shown below:

TypeLoss(h i , r i , t i ) = −[y i · log(p̂ i ) + (1 − y i ) · log(1 − p̂ i )], where p̂ i = σ(ψ TYPE (τ , τ B )).
The cumulative loss function is then given as below.
Σ_{i=1}^{n} [TripleLoss(h i , r i , t i ) + λ × TypeLoss(h i , r i , t i )]   (6)
where n is the number of training instances. We consider X × N as our training data where triples present in T have label 1 and rest have label 0.
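Putting Eqs. (5) and (6) together, a sketch of the training objective over all candidate tails for a batch of (h, r) queries might look as follows. The type projectors P_B and P are passed in (cf. the previous sketch), and gamma and lam stand for the hyperparameters γ and λ; this is an illustrative reading, not the released code.

```python
import torch
import torch.nn.functional as F

def okgit_loss(t_c, t_b, np_embeddings, labels, P_B, P, gamma=1.0, lam=0.1):
    """t_c: [B, d_e] CaRE predictions; t_b: [B, d_B] BERT tail vectors;
    np_embeddings: [|N|, d_e] tail NP embeddings; labels: [B, |N|]
    float 0/1 matrix marking the true tails for each query."""
    pred_score = t_c @ np_embeddings.t()             # psi_PRED   [B, |N|]
    tau_b, tau = P_B(t_b), P(np_embeddings)          # Eq. (4)
    type_score = -torch.cdist(tau_b, tau) ** 2       # psi_TYPE   [B, |N|]
    score = pred_score + gamma * type_score          # psi_OKGIT (Eq. 5)
    # BCE over sigmoid(score), plus the type regularization of Eq. (6).
    triple_loss = F.binary_cross_entropy_with_logits(score, labels)
    type_loss = F.binary_cross_entropy_with_logits(type_score, labels)
    return triple_loss + lam * type_loss
```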
Experiments
Datasets: Following (Gupta et al., 2019), we use two subsets of English OpenKGs created using ReVerb (Fader et al., 2011), namely ReVerb20K and ReVerb45K. We follow the same train-validation-test split for these datasets. As noted in (Petroni et al., 2019), predicting multi-token NPs using BERT could be challenging and might require special pre-training (Joshi et al., 2020). To understand this difference, we create filtered subsets of these datasets such that they contain only single token NPs 1 . Specifically, we create ReVerb20KF (ReVerb20K-Filtered) and ReVerb45KF (ReVerb45K-Filtered), which contain only single token NPs. More details about these datasets can be found in Table 2. Setup and hyperparameters: We use d e = d r = 300 for NP and RP vectors. For other hyperparameters, we use grid-search and select the model based on MRR on the validation split. For type vectors, we select d τ from {100, 300, 500}. The weight for the type regularization term λ is selected from the range {10^-3, 10^-2, . . . , 10^1} ∪ {0}. The type composition weight γ is selected from {0.25, 0.5, 1.0, 2.0, 5.0}. For the language model, we try both BERT-base and BERT-large. The optimal values for the hyperparameters are shown in Table 3. The experiments run for 1.5 hours (for the filtered subsets) and 9 hours (for the full datasets) on a GeForce GTX 1080 Ti GPU.
Results
We evaluate the proposed model on the link prediction task. We follow the same evaluation process as in (Gupta et al., 2019). From our experiments, we try to answer the following questions:
1. Is OKGIT effective in the link prediction task? (Section 6.1)
2. Does OKGIT generate more type compatible NPs in link prediction? (Section 6.2)
3. Is the Type Projector effective in extracting type vectors from embeddings? (Section 6.3)
Effectiveness of OKGIT Embeddings in Link Prediction
We evaluate our model on the link prediction task. Given a held-out triple (h i , r i , t i ), all the NPs e ∈ N in the KG are ranked as candidate tail NPs based on their score ψ OKGIT (h i , r i , e). Let the rank of the correct tail NP t i be denoted by rank t i . Similarly, ranks are also calculated for predicting head NPs instead of tail NPs using inverse relations (Dettmers et al., 2018; Gupta et al., 2019); let this rank be denoted by rank h i . These ranks are then used to find the Mean Reciprocal Rank (MRR), Mean Rank (MR) and Hits@k (k = 1, 3, 10) as follows:

MRR = (1 / (2 × n test)) × Σ_{i=1}^{n test} (1 / rank h i + 1 / rank t i),
MR = (1 / (2 × n test)) × Σ_{i=1}^{n test} (rank h i + rank t i),
Hits@k = (1 / (2 × n test)) × Σ_{i=1}^{n test} (1(rank h i ≤ k) + 1(rank t i ≤ k)),

where n test is the number of test triples and 1(·) is the indicator function.
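These metrics are straightforward to compute from the per-triple ranks; a sketch with our own function and argument names follows.

```python
def ranking_metrics(head_ranks, tail_ranks, ks=(1, 3, 10)):
    """MRR, MR, and Hits@k from per-triple ranks of the correct NP
    (or gold cluster), averaged over head and tail prediction. MRR
    and Hits are multiplied by 100, as in the reported tables."""
    ranks = list(head_ranks) + list(tail_ranks)
    n = len(ranks)
    mrr = 100.0 * sum(1.0 / r for r in ranks) / n
    mr = sum(ranks) / n
    hits = {k: 100.0 * sum(r <= k for r in ranks) / n for k in ks}
    return mrr, mr, hits
```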
As noted in (Gupta et al., 2019), ranking individual NPs is not suitable for OpenKGs due to the lack of canonicalization. Hence, following their approach, we rank gold canonicalization clusters instead of individual NPs. The gold canonicalization partitions the NPs into clusters such that NPs mentioning the same entity belong to the same cluster. For ranking these clusters, we first find the ranks of all NPs e ∈ N . Then for each cluster, we keep the NP with the minimum rank as representative and discard the others. The representative NPs are then ranked again, and the new ranks are assigned to the corresponding clusters. The rank of the cluster containing the true NP is then used for evaluating the performance. For better readability, the MRR and Hits@k metrics have been multiplied by 100.

We compare OKGIT with BERT (MLM), ConvE (Ontological KGE) and CaRE (OpenKGE). We also compare against a version of CaRE where phrase embeddings have been initialized with BERT (CaRE [BERT initialization]). As we can see from the results in Table 4, the proposed model OKGIT outperforms the baseline methods in the link prediction task across all datasets. This suggests that the implicit type scores from BERT help in improving the ranks of correct NPs. Moreover, OKGIT outperforms CaRE with BERT initialization, suggesting the importance of the type projectors 2 .
Figure 2: Effect of type compatibility score and type regularization on link prediction performance. While the type compatibility score with λ = 0 gives better gains in MRR (11%-12%) than the type regularization term with γ = 0 (7%-11%), the combined model performs the best, achieving 12%-18% gains in MRR (Section 6.1).

The performance gain is higher for ReVerb20K and ReVerb20KF (+5.3 MRR) than for ReVerb45K and ReVerb45KF (+1.2 and +3.1 MRR). As we can see from Table 2, the number of NPs is very close to the number of gold clusters in the 20K datasets. Thus, the canonicalization information is slightly weaker in the 20K datasets than in the 45K datasets. Due to this, CaRE achieved better gains on the ReVerb45K dataset, as noted in (Gupta et al., 2019). This leaves more scope for improvement in the 20K datasets. By including the type information from BERT, OKGIT is able to fill this gap. It achieves better gains in the 20K datasets and is able to alleviate the lack of canonicalization information. Moreover, OKGIT is able to improve the ranks of correct NPs ranked lower by CaRE. This can be seen from the significant improvements in MR.

Other Language Models: Using RoBERTa instead of BERT results in similar performance improvements (Appendix B). However, our primary focus is to understand the impact of implicit type information present in pre-trained MLMs, such as BERT, and not to compare multiple MLMs themselves.

Ablations: We perform ablation experiments to compare the relative importance of the type compatibility score ψ TYPE and the type regularization term. We evaluate OKGIT with disabled type compatibility score (i.e., γ = 0 in Equation (5)) and disabled type regularization term (i.e., λ = 0 in Equation (6)) separately. Please note that the CaRE model is equivalent to OKGIT with γ = 0 and λ = 0. The results of this experiment are shown in Figure 2. We find that while the type compatibility score gives more performance gain (11%-12% gain in MRR) than type regularization (7%-11% gain in MRR), the combined model achieves the best performance (12%-18% gain in MRR). This suggests that both components are important. Please refer to Appendices A, B, and C for more ablation experiments.

Table 5: Results of type evaluation in CaRE and OKGIT predictions. We find that OKGIT performs better than CaRE on all datasets in terms of F1-score. Also, the results are statistically significant for all the datasets (Section 6.2).
Type Compatibility in Predicted NPs
As noted earlier, BERT vectors contain NP type information 3 . OKGIT utilizes this type information for improving OpenKG link prediction. In this section, we evaluate whether OKGIT improves upon CaRE in predicting type compatible NPs. For such an evaluation, we require type annotations for the NPs in the OpenKGs. However, OpenKGs do not have an underlying ontology or explicit gold NP type annotations, making a direct evaluation impossible. Therefore, we employ a pre-trained entity typing model, UFET (Choi et al., 2018). Given a sentence and an entity mention, the entity typing model predicts the mentioned entity's types. Using this model, we obtain types for the true NPs as well as the NPs predicted by CaRE and OKGIT, and use them for the evaluation. Please note that this evaluation is limited to the coverage and quality of the UFET model.

Evaluation Protocol: The type vocabulary in the UFET model contains 10,331 types, including 9 general, 121 fine-grained, and 10,201 ultra-fine types. The model takes a sentence (w h 1 , . . . , w h k h , w r 1 , . . . , w r k r , w t 1 , . . . , w t k t ) formed from a triple (h, r, t) along with an entity mention (either t or h) as inputs and outputs a distribution over types. We use the top five predicted types for our experiments 4 . For a triple (h, r, t), we consider the types predicted for the true tail NP t as the true types Γ(t). Let t̂ CaRE and t̂ OKGIT be the top predicted tail NPs by CaRE and OKGIT for the (h, r) pair. Then the types Γ(t̂ CaRE) predicted for t̂ CaRE in the triple (h, r, t̂ CaRE) are used as the predicted types for CaRE. Similarly, the types Γ(t̂ OKGIT) predicted for t̂ OKGIT in the triple (h, r, t̂ OKGIT) are used as the predicted types for OKGIT. For evaluation, we calculate the mean F1-score as follows 5 :

F1 = (2 / n test) × Σ_{i=1}^{n test} |Γ(t i) ∩ Γ(t̂ i)| / (|Γ(t i)| + |Γ(t̂ i)|).

Figure 3: t-SNE projections of tail NP embeddings (left) and type vectors (right) extracted by the Type Projector from tail NP embeddings (Section 4.2) in the ReVerb20K dataset. We find that the Type Projector is able to extract informative type vectors from the tail embeddings. This is evident from the fact that the tail embeddings corresponding to persons, locations, and dates were inter-mixed in the left plot, while they have been separated into type specific clusters in the right plot. Please see Section 6.3 for details.
Here, |Γ(t)| denotes the number of types present in Γ(t), and t̂ represents t̂ CaRE or t̂ OKGIT . We can obtain the F1-scores for head NP prediction similarly. We evaluate the mean F1-scores across the head and tail NP prediction tasks on the test data and compare CaRE with OKGIT.
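The mean F1 over type sets can be computed as below, assuming each triple contributes one set of top-5 UFET types for the true NP and one for the predicted NP:

```python
def mean_type_f1(true_type_sets, pred_type_sets):
    """Mean F1 between the UFET type sets of the true NPs and of the
    model's top-predicted NPs, one pair of Python sets per triple."""
    scores = [2.0 * len(t & p) / (len(t) + len(p))
              for t, p in zip(true_type_sets, pred_type_sets)]
    return sum(scores) / len(scores)
```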
As we can see from the results in Table 5, OKGIT performs better than CaRE, suggesting that OKGIT generates more type compatible NPs than CaRE in the link prediction task. OKGIT achieves higher gains in the single-token datasets (i.e., ReVerb20KF and ReVerb45KF) than the multi-token datasets (i.e., ReVerb20K and ReVerb45K). Upon investigation, we found that the types obtained using the entity typing model (true as well as predicted) for the multi-token datasets often contain common noisy types, leading to the small difference between CaRE and OKGIT. Following Dror et al. (2018), we also check the results for statistical significance using the Permutation, Wilcoxon, and t-tests with α = 0.05, and found them to be significant for all the datasets.
Effectiveness of Type Projector
To better understand the effect of type projection, we visualize the vectors in NP-space from CaRE and Type-space (i.e., after type projection) from OKGIT. For this experiment, we randomly select 5 NPs from 3 categories, namely Person, Location and Year. More details about this selection process can be found in the Appendix E. We project the NP vectors (i.e., t) corresponding to these NPs to a 2-dimensional NP-space using t-SNE (Maaten and Hinton, 2008) 6 . Similarly, we also project the corresponding type vectors (i.e., τ ) to 2-dimensional Type-space. We plot the resulting vectors, color and shape coded by their respective categories, in Figure 3.
We can see that the vectors from different categories in the NP-space are mixed. However, after the type projection, the vectors in the Type-space are clustered together based on their categories.
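For reference, the 2-D projections can be produced with off-the-shelf t-SNE; the small perplexity below is an assumption reflecting that only 15 NPs are plotted, and the exact settings used for Figure 3 may differ.

```python
import numpy as np
from sklearn.manifold import TSNE

def project_2d(vectors, seed=0):
    """2-D t-SNE projection as used for Figure 3. `vectors` is an
    [n, d] array holding either the NP embeddings t or the type
    vectors tau = P t; perplexity must stay below n (here n = 15)."""
    return TSNE(n_components=2, perplexity=5,
                random_state=seed).fit_transform(np.asarray(vectors))
```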
Qualitative Evaluations
In this section, we present some examples of predictions made by the CaRE and OKGIT methods. The results are shown in Table 6. As we see in Triple-1, both CaRE and OKGIT predict the correct NP (i.e., leipzig) on top. However, more predictions from OKGIT are type compatible (i.e., all are locations) with the input query. On the other hand, CaRE predictions have mixed types (i.e., location, person, etc.). Also, CaRE makes an incorrect prediction, vladimir horowitz, possibly due to the presence of a training triple (vladimir horowitz, had a great affinity for, bach). We see similar patterns in Triple-2, where the correct tail NP should be of type number, indicating the count of votes. OKGIT is able to predict numbers in its top predictions for Triple-2, while CaRE has mixed types in its top predictions.
Conclusion
The task of link prediction for Open Knowledge Graphs (OpenKG) has been a relatively underexplored research area. Previous work on OpenKG embeddings has primarily focussed on improving or incorporating NP canonicalization information. While there are few methods for OpenKG link prediction, they often predict noun phrases with types incompatible with the query noun and relation phrases. Therefore, we use implicit type information from BERT to improve OpenKG link prediction and propose OKGIT. With the help of novel type compatibility score and type regularization term, OKGIT achieves significant performance improvement on the link prediction task across multiple datasets. We also find that OKGIT produces more type compatible predictions than CaRE, evaluated using an external entity typing model.
Broader Impact
OKGIT is the first attempt towards incorporating implicit type information in OpenKG link prediction without human intervention. It will greatly benefit densification and applications of OpenKGs where no underlying ontologies are available.
However, OKGIT predictions depend on various datasets, i.e., the corpus used for training the masked language model (e.g., BERT) and the corpus from which the OpenKG triples were extracted. A potential, possibly undesirable, bias may be introduced in the predictions by manipulating these corpora or adding a large number of malicious triples in the OpenKG.
We have tested OKGIT in English datasets. While the overall model architecture is independent of the language, the model's effectiveness might vary depending upon the quality of the masked language model, and it needs to be tested.
Appendices
A BERT Initialization vs Type Projectors
Here, we demonstrate the importance of the type projectors by comparing OKGIT with multiple BERT-augmented versions of CaRE. Specifically, we initialize the phrase and word embeddings in CaRE with a pre-trained BERT model. The phrase (word) is passed as input to BERT, and the output corresponding to the [CLS] token is then used for initializing the phrase (word) embedding in the CaRE model. This modified CaRE model is trained similarly to the base CaRE model. Based on different initialization methods, we experiment with the following baselines.

CaRE [BERT NP]: NP embeddings are initialized using BERT and the rest of the model is the same as CaRE. This model uses 768 (for BERT-base) or 1024 (for BERT-large) dimensional vectors.

CaRE [BERT NP+PROJ]: Since CaRE [BERT NP] uses higher dimensional vectors (768 or 1024) compared to the other methods (300), the comparison may not be fair. To address this issue, we project the BERT embeddings to 300 dimensions. The projection is trained with the rest of the model.

CaRE [BERT NP+RP]: We initialize the NP embeddings as well as the word embeddings in the RP encoder using BERT embeddings. This method also uses 768 or 1024 dimensional vectors. (We also tried using pre-trained BERT as the RP encoder in CaRE; however, it performed poorly due to the fixed RP encoder.)

In all the methods, including OKGIT, we never fine-tune BERT, as our goal is to evaluate the type information already present in the pre-trained BERT model. We experiment with both BERT-base and BERT-large, and report the best performing model.
As we can see from the results in Table 7, OKGIT outperforms these baselines. Although BERT initialization improves the performance of CaRE model, the usage of explicit type-score and type regularization leads to significant performance improvements, suggesting their importance.
B Replacing BERT with other operations
In this section, we evaluate whether the BERT module in OKGIT can be replaced by simple operations such as vector addition and concatenation. Specifically, we modify t B in Equation (3) by replacing BERT with these operations, leading to the following variants of OKGIT.

OKGIT-C: BERT is replaced by the concatenation of the head NP vector h and the relation phrase vector r:

t B = [h; r].
OKGIT-A: BERT is replaced by vector addition
t B = h + r.
OKGIT-R: We also experiment with another masked language model RoBERTa in place of BERT.
t B = RoBERTa(h, r, MASK).
For this experiment, we use the ReVerb20KF and ReVerb45KF datasets as representatives. We perform grid-search with similar hyper-parameters as in Section 5 of the main paper and select the best model based on the MRR on the validation split. The results are reported in Table 8.
As we can see from the results, OKGIT-C and OKGIT-A perform very similarly to CaRE on both datasets. This suggests that the performance gains for OKGIT come from the BERT module. This observation is further reinforced by OKGIT-R, which yields improvements over CaRE similar to those of OKGIT. However, in all cases, we find that OKGIT with BERT outperforms the other model variants.
C CaRE with Entity Typing
Entity typing is the task of predicting the explicit types of an entity given a sentence and its mention. As we are interested in improving the type compatibility of predictions in the link prediction task, we can also incorporate the output from an entity typing model. In this section, we explore this setting by replacing the BERT module in OKGIT with the entity typing model UFET from (Choi et al., 2018). Specifically, we replace the vector t B in Equation (3) with the output of UFET representing the predicted probability distribution over types.

OKGIT(UFET) Model: The UFET model takes a sentence and an entity mention as input and produces a distribution over an explicit set of types. In our case, the sentence is formed by concatenating the subject NP, relation phrase, and object NP, while the object NP is used as the mention. The output distribution from UFET is used as t B in our model. We call this version of the model OKGIT(UFET) and compare it with CaRE and OKGIT.

We run a grid-search for finding the best hyperparameters, similar to Section 5, and report the results on the ReVerb20KF and ReVerb45KF datasets. The results are presented in Table 9. As we can see from the results, OKGIT(UFET) performs poorly, even when compared to CaRE. It suggests that explicit type vectors from the UFET model do not help in the link prediction task.

D BERT as Link Prediction Model

As mentioned in Section 4.2, t B from Equation (3) can be used for predicting tail NPs by finding nearest neighbors in the BERT vocabulary. However, this approach has a limitation: it can only predict NPs that are single token and present in the BERT vocabulary, restricting its applicability.
We run a grid-search for finding the best hyperparameter similar to Section 5 and report the results This limitation, however, is not valid for OKGIT. In OKGIT, the vector t B is used for computing tail type compatibility score, instead of predicting tail NPs. Therefore, it is not restricted to BERT vocabulary or single-token NPs. As shown in Table 4, OKGIT is equally effective for single-token datasets (e.g., ReVerb20KF and ReVerb45KF) and multi-token datasets (e.g., ReVerb20K and Re-Verb45K).
E Selection of NPs for t-SNE
The OpenKGs do not have type annotations for the NPs. Therefore, we manually annotated a set of NPs and visualized a random subset. For this process, we first list all the NPs and shuffle them. Then we scan this list and note the first fifteen person names, locations, and years. Later, we select five NPs from each of these categories randomly and use them for the evaluation.
F Link Prediction Performance on Validation Split
The performance of CaRE and OKGIT on the validation data for the link prediction task can be found in Table 10. These numbers correspond to the respective models used to report the results in Table 4 of the main paper.
G Type Information in BERT Predictions
Our proposed OKGIT model is based on the hypothesis that BERT vectors (i.e., t B in Equation (3) in Section 4.2) contain implicit type information.
In this section, we evaluate this hypothesis that BERT vectors contain type information. It should be noted that evaluating the OKGIT model for predicting NP types is not the goal here. We are interested in understanding whether pre-trained BERT vectors have sufficient type information, measured with respect to some existing anchors. Evaluation Method: For this experiment, we use Freebase (Bollacker et al., 2008), which contains explicit gold type information for entities. Specifically, we use the FB15k dataset (Bordes et al., 2013). We use the data from (Yao et al., 2019) for converting symbolic names in FB15k to textual descriptions. We only consider the subset of triples in FB15k which have a single token in the tail node, as BERT can only predict single token NPs 8 . This results in n T = 95,782 triples. For type information, we use the data from (Xie et al., 2016). It contains 61 primary types (e.g., /award). Please note that each node in FB15k can have multiple types. For a triple (h, r, t), we consider the types associated with the true tail NP t as the true types Γ(t). We then pass the tokenized head NP and RP to BERT and find the top prediction t̂ = BERT(h, r, MASK) for the tail position. The set of types associated with the predicted NP t̂, denoted by Γ(t̂), is then used as the predicted types. For evaluation, we calculate the following metrics:
Precision = (1 / n T) × Σ_{i=1}^{n T} |Γ(t i) ∩ Γ(t̂ i)| / |Γ(t̂ i)|,
Recall = (1 / n T) × Σ_{i=1}^{n T} |Γ(t i) ∩ Γ(t̂ i)| / |Γ(t i)|,
F1 = (2 / n T) × Σ_{i=1}^{n T} |Γ(t i) ∩ Γ(t̂ i)| / (|Γ(t i)| + |Γ(t̂ i)|).
Here, |Γ(t)| and |Γ(t̂)| denote the number of types present in Γ(t) and Γ(t̂), respectively 9 . For comparison, we use the following baseline methods to assign types to a given (h, r, t).
Triple: (tesla, return to, ?)
CaRE:  polytechnic institute, 2009, 1986, jp morgan, patent
BERT:  chicago, earth, england, america, detroit
OKGIT: new york, america, paris, california, london

Table 1: Some sample tail NP predictions by CaRE, BERT, and OKGIT. The true tail NP is underlined. As we can see, both CaRE and BERT fail to predict the correct tail NP. However, BERT predictions are type compatible with the query. OKGIT predicts the correct NP while improving the type compatibility with the query.
Table 2: Dataset statistics. Please refer to Section 5 for more details.
Table 3: Optimal hyperparameter values. Please refer to Section 5 for more details.
Table 4: Results of the link prediction task. Here ↑ indicates higher values are better, while ↓ indicates lower values are better. We can see that the OKGIT model outperforms the baseline models on all the datasets (Section 6.1).

2. Does OKGIT generate more type-compatible NPs in link prediction? (Section 6.2)
Table 6: A few example predictions made by the CaRE and OKGIT models. We observe that the OKGIT predictions are more type compatible with the query. Please refer to Section 6.4 for more details.
leipzig) on top. However, more predictions from OKGIT are type compatible (i.e., all are locations) with the input query. On the other hand, CaRE predictions have mixed types (i.e., location, person, etc.). Also, CaRE makes an incorrect prediction, vladimir horowitz, possibly due to the presence of a training triple (vladimir horowitz, had a great affinity for, bach).
word) embedding for the CaRE model. This modified CaRE model is trained similarly to the base CaRE model. Based on different initialization methods, we experiment with the following baselines. CaRE [BERT NP]: NP embeddings are initialized using BERT, and the rest of the model is the same as CaRE. This model uses 768-dimensional (for BERT-base) or 1024-dimensional (for BERT-large) vectors. CaRE [BERT NP+PROJ]: Since CaRE [BERT NP] uses higher-dimensional vectors (768 or 1024) compared to other methods (300), the comparison may not be fair. To address this issue, we project the BERT embeddings to 300 dimensions. The projection is trained with the rest of the model. CaRE [BERT NP+RP]: We initialize the NP embeddings as well as the word embeddings in the RP encoder using BERT embeddings. This method also uses 768- or 1024-dimensional vectors.7
Table 7: Results of the link prediction task. Here ↑ indicates higher values are better, while ↓ indicates lower values are better. We can see that the OKGIT model outperforms the baseline models on all the datasets (Appendix A). Here, BERT-B and BERT-L denote BERT-base and BERT-large, respectively. § For NP+PROJ models, BERT-large performs best for ReVerb20K, while BERT-base performs best for ReVerb45K.

                               ReVerb20KF                          ReVerb45KF
Model                          MRR(%)↑  MR↓    @1    @3    @10     MRR(%)↑  MR↓    @1    @3    @10
CaRE (Gupta et al., 2019)      29.3     308.3  22.1  31.6  43.2    26.6     692.7  20.1  28.8  39.1
OKGIT-C [t_B = [h; r]]         30.0     309.3  22.9  32.4  43.8    27.1     666.5  20.2  29.8  39.9
OKGIT-A [t_B = h + r]          30.4     331.7  23.5  32.9  43.6    27.1     660.5  19.9  30.6  40.2
OKGIT-R [RoBERTa]              32.7     221.0  25.3  35.1  46.5    29.0     596.7  21.8  32.0  43.0
OKGIT [Our model]              34.6     214.7  26.5  38.0  50.2    29.7     500.2  22.5  32.4  43.3
Table 8: Results of the ablation experiments. We replace the BERT module in OKGIT with simple operations such as vector addition (OKGIT-A) and vector concatenation (OKGIT-C). We also use RoBERTa in place of BERT (OKGIT-R). As we can see, replacing BERT with simple operations results in performance similar to CaRE. However, we do see better gains with RoBERTa, which performs better than CaRE and similarly to OKGIT on ReVerb45KF. For all datasets, the OKGIT model outperforms the other variants (Appendix B).

                      ReVerb20KF                           ReVerb45KF
Model                 MRR(%)↑  MR↓     @1    @3    @10     MRR(%)↑  MR↓     @1    @3    @10
CaRE                  29.3     308.3   22.1  31.6  43.2    26.6     692.7   20.1  28.8  39.1
OKGIT (UFET)          8.8      1208.2  6.9   9.6   11.0    4.9      1156.8  1.5   4.0   11.5
OKGIT [Our model]     34.6     214.7   26.5  38.0  50.2    29.7     500.2   22.5  32.4  43.3
Table 9: Comparison of OKGIT with OKGIT(UFET). We can see that including the UFET model in the system hurts the performance of the model (Appendix C).
Table 10: Results of the link prediction task on the validation split. We can see that the OKGIT model outperforms the baseline models on all the datasets (Appendix F).
Please note that the single-token limitation is only valid for BERT, not for OKGIT (Appendix D).
Please refer to Appendix A for a detailed comparison.
We also verify this using Freebase, an ontological KG. Please refer to Appendix G for more details.
We observe similar behaviour with the top one and top three types.
Since we use a fixed number of types for ground truth and predictions, precision, recall, and F1-score have the same values. Therefore, we only report the F1-score.
We run t-SNE for 2000 iterations with perplexity 15.
Please note that this limitation is only valid for BERT, not for OKGIT.
Please note that, since we have gold type annotations available for Freebase, the number of true and predicted types need not be the same. Therefore, we evaluate precision and recall along with F1-scores.
Acknowledgments

We thank the anonymous reviewers for their constructive comments. This work is supported by the Ministry of Human Resource Development (Government of India).

Table 11: Model / Precision / Recall / F1.

Random: assign |Γ(t̃)| randomly selected types. Most Frequent Types (MFT): assign |Γ(t̃)| most frequent types. Human: We also evaluate the type annotations provided by human annotators on 100 randomly selected triples. Each triple is shown to three annotators, who are asked to provide types for the tail NP. Since most of the annotations contain one type per triple, we take the union of the types provided by the different annotators to compensate for recall. For 69% of the triples, the annotators agreed on the same type. To be fair to the automated baselines, we use the same number of predicted types as BERT (i.e., |Γ(t̃)|). A comparison with pre-trained explicit entity typing methods, such as (Choi et al., 2018), is not applicable here as their type vocabulary is different. As we can see from the results in Table 11, BERT achieves the best F1 score, suggesting that it contains type information. The recall for Human is low since most of the annotations contained only one type, resulting in a lower F1 score.
Michele Banko, Michael J. Cafarella, Stephen Soderland, Matt Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In IJCAI'07, pages 2670-2676, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.
Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A collaboratively created graph database for structuring human knowledge. In ACM SIGMOD, pages 1247-1250. ACM.
Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. In Neural Information Processing Systems (NeurIPS), pages 1-9.
Samuel Broscheit, Kiril Gashteovski, Yanjie Wang, and Rainer Gemulla. 2020. Can we predict new facts with open knowledge graph embeddings? A benchmark for open link prediction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2296-2308, Online. Association for Computational Linguistics.
Shuang Chen, Jinpeng Wang, Feng Jiang, and Chin-Yew Lin. 2020. Improving entity linking by modeling latent entity type information. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7529-7537.
Eunsol Choi, Omer Levy, Yejin Choi, and Luke Zettlemoyer. 2018. Ultra-fine entity typing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 87-96, Melbourne, Australia. Association for Computational Linguistics.
Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2D knowledge graph embeddings. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, pages 1811-1818.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. ACL.
Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhiker's guide to testing statistical significance in natural language processing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1383-1392. Association for Computational Linguistics.
Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1535-1545, Edinburgh, Scotland, UK. Association for Computational Linguistics.
Swapnil Gupta, Sreyash Kenkre, and Partha Talukdar. 2019. CaRe: Open knowledge graph embeddings. In EMNLP-IJCNLP, pages 378-388, Hong Kong, China. ACL.
Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. OntoNotes: The 90% solution. In NAACL-HLT, Companion Volume: Short Papers, pages 57-60, New York City, USA. ACL.
Zhipeng Huang, Bogdan Cautis, Reynold Cheng, and Yudian Zheng. 2016. KB-enabled query recommendation for long-tail queries. In CIKM '16, pages 2107-2112, New York, NY, USA. ACM.
Prachi Jain, Pankaj Kumar, Mausam, and Soumen Chakrabarti. 2018. Type-sensitive knowledge base inference without explicit type supervision. In ACL (Volume 2: Short Papers), pages 75-80, Melbourne, Australia. ACL.
Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. SpanBERT: Improving pre-training by representing and predicting spans. Transactions of the Association for Computational Linguistics, 8:64-77.
Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. JMLR, 9(Nov):2579-2605.
Khai Mai, Thai-Hoang Pham, Minh Trung Nguyen, Tuan Duc Nguyen, Danushka Bollegala, Ryohei Sasano, and Satoshi Sekine. 2018. An empirical study on fine-grained named entity recognition. In COLING, pages 711-722, Santa Fe, New Mexico, USA. ACL.
Mausam. 2016. Open information extraction systems and downstream applications. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, pages 4074-4077.
Madhav Nimishakavi, Uday Singh Saini, and Partha Talukdar. 2016. Relation schema induction using tensor factorization with side information. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 414-423, Austin, Texas. Association for Computational Linguistics.
Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In EMNLP-IJCNLP, pages 2463-2473, Hong Kong, China. ACL.
Shikhar Vashishth, Prince Jain, and Partha P. Talukdar. 2018. CESI: Canonicalizing open knowledge bases using embeddings and side information. In WWW 2018, Lyon, France, April 23-27, 2018, pages 1317-1327.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS, pages 5998-6008. Curran Associates, Inc.
Quan Wang, Pingping Huang, Haifeng Wang, Songtai Dai, Wenbin Jiang, Jing Liu, Yajuan Lyu, Yong Zhu, and Hua Wu. 2019. CoKE: Contextualized Knowledge Graph Embedding. arXiv preprint arXiv:1911.02168.
Ruobing Xie, Zhiyuan Liu, and Maosong Sun. 2016. Representation learning of knowledge graphs with hierarchical types. In IJCAI'16, pages 2965-2971. AAAI Press.
Liang Yao, Chengsheng Mao, and Yuan Luo. 2019. KG-BERT: BERT for Knowledge Graph Completion. arXiv preprint arXiv:1909.03193.
Xuchen Yao and Benjamin Van Durme. 2014. Information extraction over structured data: Question answering with Freebase. In ACL (Volume 1: Long Papers), pages 956-966, Baltimore, Maryland. ACL.
| [] |
[
"Eye Gaze Estimation Model Analysis",
"Eye Gaze Estimation Model Analysis"
] | [
"Aveena Kottwani aveena.kottwani@stonybrook.edu \nDepartment of Computer Science\nDepartment of Computer Science\nStony Brook University\nNYUSA\n",
"Ayush Kumar aykumar@cs.stonybrook.edu \nStony Brook University\nNYUSA\n"
] | [
"Department of Computer Science\nDepartment of Computer Science\nStony Brook University\nNYUSA",
"Stony Brook University\nNYUSA"
] | [] | We explore techniques for eye gaze estimation using machine learning. Eye gaze estimation is a common problem for various behavior analysis and human-computer interfaces. The purpose of this work is to discuss various model types for eye gaze estimation and present the results from predicting gaze direction using eye landmarks in unconstrained settings. In unconstrained real-world settings, feature-based and modelbased methods are outperformed by recent appearance-based methods due to factors like illumination changes and other visual artifacts. We discuss a learning-based method for eye region landmark localization trained exclusively on synthetic data. We discuss how to use detected landmarks as input to iterative model-fitting and lightweight learning-based gaze estimation methods and how to use the model for person-independent and personalized gaze estimations. | 10.13140/rg.2.2.22546.99522 | [
"https://export.arxiv.org/pdf/2207.14373v1.pdf"
] | 251,196,608 | 2207.14373 | 8b2eef13aaf94dc39ae3dcd1afdc61f1db58e761 |
Eye Gaze Estimation Model Analysis
Aveena Kottwani aveena.kottwani@stonybrook.edu
Department of Computer Science
Department of Computer Science
Stony Brook University
NYUSA
Ayush Kumar aykumar@cs.stonybrook.edu
Stony Brook University
NYUSA
Eye Gaze Estimation Model Analysis
Index Terms-Eye gaze estimationAppearance-based gaze estimationFeature-based gaze estimationmodel-based gaze estimationEye Tracking
We explore techniques for eye gaze estimation using machine learning. Eye gaze estimation is a common problem for various behavior analysis and human-computer interfaces. The purpose of this work is to discuss various model types for eye gaze estimation and present the results from predicting gaze direction using eye landmarks in unconstrained settings. In unconstrained real-world settings, feature-based and modelbased methods are outperformed by recent appearance-based methods due to factors like illumination changes and other visual artifacts. We discuss a learning-based method for eye region landmark localization trained exclusively on synthetic data. We discuss how to use detected landmarks as input to iterative model-fitting and lightweight learning-based gaze estimation methods and how to use the model for person-independent and personalized gaze estimations.
I. INTRODUCTION
Human gaze direction can assist users with motor disabilities through eye-gaze cursor-control systems, and it supports gaze-based human-computer interaction, visual attention analysis [17], consumer behavior research, next-generation UIs, marketing analysis, AR, VR, and vehicle automation. Eye-gaze estimation using web cameras can help users with motor disabilities in everyday tasks [12], such as reading [25], gaze-based interaction, digital signage, and human-computer interaction. Gaze tracking is the process of measuring either the point of gaze or the motion of an eye relative to the head. Accurate eye-gaze tracking models require a manual calibration procedure for each new user or expensive specialized hardware, which does not appeal to consumer-market applications. Since camera prices have decreased, there is an incentive to integrate gaze estimation into consumer-grade devices equipped with monocular cameras, for applications such as next-generation game controllers [8], natural user interfaces, and human-computer interaction (HCI) [4], [5]. Such gaze estimation must work under conditions like low-resolution eye images, unconstrained user postures, different locations, and varying distances from the camera. Recently, inexpensive solutions that work at lower cost and complexity and require no active illumination have been proposed [1]. Most of these systems are built using modern computer vision algorithms that work with the camera in any computer screen, without additional hardware.
Traditional feature-based and model-based gaze estimation typically relies on accurate detection of eye region landmarks, such as the iris center or the eye corners, and many previous works have therefore focused on accurately localizing the iris center and eye corner landmarks. Using large-scale datasets, we can now perform user-specific, calibration-free point-of-gaze (PoG) estimation with a monocular camera. However, the head positions in conventional datasets are constrained to the region near the camera, so these datasets cannot be used for images taken from more varied distances. In addition, the requirement of high-resolution eye images also limits practical applicability. Deep learning has shown success in a variety of computer vision tasks, where its effectiveness depends on the size and diversity of the image dataset. In this paper, we discuss types of gaze estimation models and related algorithms.
II. RELATED WORK
A. Feature based Gaze Estimation
Feature-based methods utilize distinctive features of the eye: the limbus, the pupil contour, the eye corners, and corneal reflections are the features most commonly used for gaze estimation. The aim of feature-based methods is to identify local features of the eye that are generally less sensitive to variations in illumination and viewpoint. These systems have performance issues outdoors or under strong ambient light. Feature-based gaze estimation utilizes geometric vectors which map the shape of an eye and the head pose to a gaze direction. A simple approach, discussed by Sesma et al. [3], is the pupil-center-eye-corner (PC-EC) vector, which has been used for estimating horizontal gaze direction on public displays [6]. This approach replaces the corneal reflections used in eye tracking with the PC-EC vector. Another feature-based approach [7] builds an eye gaze model from user interaction cues, using a supervised learning algorithm to learn the correlation between gaze and interaction cues such as cursor and caret locations. The supervised learning algorithm yields robust geometric gaze features and a way to identify good training data among noisy data. This method uses behavior-informed validation [2] to extract gaze features that correspond with the interaction cue, with an average error of 4.06°.
B. Model based Gaze Estimation
Model-based methods estimate gaze by fitting facial and eye models to the input image. As these methods use human facial features, simple parameters can describe the gaze state without a large amount of person-specific training data. Traditional model-based methods use the shape of the iris to estimate gaze direction [9]: an ellipse is fitted to the observed iris, and the gaze is then estimated from the ellipse parameters [10]. The downside of this approach is that it requires high-resolution eye images. Recent model-based approaches use 3D eyeball models [10], [14], in which the gaze direction vector is defined from the eyeball center to the iris center. The authors of [10] used the eyelids for the eyeball model, but this feature required extra annotation of the eye corners as well. The authors of [11] estimated 3D gaze vectors using tracked facial feature points; however, this requires manual annotation of eye corners and a one-time calibration. The advantage of model-based methods is that they utilize the head position and rotation information obtained from the face image more productively than appearance-based methods [13]. Combining head pose and eye location information has been proposed to enhance gaze estimation, but this requires calibration phases in which the user looks at known targets [15]. One issue with model-based methods is the manual prior calibration process required for each individual user in order to estimate the accurate position of the eyeballs in the face. The authors of [16] proposed an automatic calibration method that minimizes the projected errors between model output and images through hidden online calibration.
Model-based approaches use an explicit geometric model of the eye to estimate the 3D gaze direction vector [19]. 3D model-based gaze estimation methods use the center and radius of the eyeball as well as the angular offset between the visual and optical axes. The eyeball center is determined either from facial landmarks, such as the tip of the nose, or by fitting deformable eye region models [18]. Most 3D model-based (or geometric) approaches use metric information from camera calibration and a global geometric model (external to the eye) of the light sources, camera, and monitor position and orientation. Most model-based methods first reconstruct the optical axis of the eye, then the visual axis; finally, the point of gaze is estimated by intersecting the visual axis with the scene geometry. Reconstruction of the optical axis is done by estimating the cornea and pupil centers.
C. Cross Ratio based Gaze Estimation
In contrast to feature-based and model-based methods, cross-ratio methods achieve gaze estimation using a few IR illumination sources and the detection of their corneal reflections. In one such system [20], five IR LEDs and a CCD camera are used to estimate the direction of a user's eye gaze. The IR LEDs placed on the corners of a computer monitor produce glints on the cornea of the eye when the user looks at the monitor; together with the center of the pupil, these glints form a polygon. The eye gaze can then be computed without explicitly modeling the geometric relation among the eye, the camera, and the monitor in 3D space. Cross-ratio (CR) based methods offer many attractive properties for remote gaze estimation using a single camera in an uncalibrated setup, by exploiting the invariance of a plane projectivity. To improve the performance of CR-based eye gaze trackers as the subject moves away from the calibration position, an adaptive homography mapping can be used, achieving higher gaze prediction accuracy at the calibration position and more robustness under head movement. This framework uses a learning-based method for reducing spatially-varying gaze errors and head-pose-dependent errors simultaneously. While these methods are promising, additional illumination sources may not be available on unmodified devices or in settings such as crowd-sourced saliency estimation using commodity devices.
D. Appearance based Gaze Estimation
Appearance-based methods directly use an eye image as input to estimate the point of gaze through machine learning. Many algorithms, including adaptive linear regression [22], support vector regression [23], Gaussian process regression [24], and convolutional neural networks (CNNs) [11], have been proposed for point-of-gaze estimation. Appearance-based methods work more effectively on low-resolution eye images than model-based methods [27]. Earlier works used image intensities as features for linear regression, random forests, and k-NN. Previously, appearance-based methods required large user-specific calibrated training datasets; hundreds of individual samples are needed to achieve sufficient accuracy instead of explicit calibration. Another approach was proposed by the authors of [27], who collected the MPIIGaze dataset, which contains a large number of images of laptop users looking at on-screen markers in daily life. They trained a CNN on the dataset and achieved person- and head-pose-independent gaze estimation in the wild, although this requires high computational cost and a discrete GPU for real-time tracking [27]. A VGG-16 network performs better than the MnistNet architecture, with an improvement of 0.8°. On the architectural side, other works explore multi-modal training, such as with head pose information [27], full-face images, or an additional "face-grid" modality for direct estimation of the point of gaze [28]. A modified AlexNet used with face images [29] for the task of gaze estimation shows a substantial accuracy improvement of 1.9°. CNNs learn features implicitly for personalized gaze estimation using as few as 10 calibration samples [2].
III. APPEARANCE BASED MODEL ARCHITECTURE
A. Stacked Hourglass Network

1) Stacked Hourglass Network Overview: Hourglass modules are similar to auto-encoders in that feature maps are downscaled via pooling operations, then upscaled using bilinear interpolation. Given 64 feature maps, the network refines them at 4 different image scales, multiple times. This repeated bottom-up, top-down inference ensures a large effective receptive field and allows for the encoding of spatial relations between landmarks, even under occlusion.
A Stacked Hourglass Network (HG) is a stack of hourglass modules, so named because the shape of each module closely resembles an hourglass, as we can see from the picture above [30]. The idea behind stacking multiple HG modules, instead of forming one giant encoder-decoder network, is that each HG module produces a full heatmap for landmark prediction. Thus, each subsequent HG module can learn from the landmark predictions of the previous HG module.
We use heatmaps to represent facial landmark locations in an image. This preserves the location information; we then only need to find the peak of each heatmap. In addition, we calculate the loss for each intermediate prediction, which lets us effectively supervise not only the final output but all HG modules.
2) Hourglass Module: In the diagram, each box is a residual block plus some additional operations such as pooling. In general, an HG module is an encoder-decoder architecture: we first downsample the features, then upsample them to recover the information and form a heatmap. Each encoder layer has a connection to its decoder counterpart, and we can stack as many layers as we want. In implementations, the HG module is usually defined through recursion, letting the module repeat itself.
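The recursive structure can be sketched as follows. This is a minimal PyTorch illustration, not the exact configuration used here; channel counts and depth are assumptions.

```python
# Sketch of one recursive hourglass module (illustrative).
import torch.nn as nn
import torch.nn.functional as F

class Residual(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
    def forward(self, x):
        return x + self.block(x)          # identity skip connection

class Hourglass(nn.Module):
    def __init__(self, depth, ch):
        super().__init__()
        self.skip = Residual(ch)          # skip path at this scale
        self.down = Residual(ch)          # after pooling
        # Recurse until the bottleneck scale is reached.
        self.inner = Hourglass(depth - 1, ch) if depth > 1 else Residual(ch)
        self.up = Residual(ch)            # before upsampling
    def forward(self, x):
        skip = self.skip(x)
        y = self.down(F.max_pool2d(x, 2))             # downscale
        y = self.up(self.inner(y))
        y = F.interpolate(y, scale_factor=2,          # upscale
                          mode="bilinear", align_corners=False)
        return skip + y                   # merge encoder and decoder paths
```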
3) Intermediate Supervision: As the diagram shows, the output of each HG module is split into two paths. The top path applies further convolutions to process the features [30] and then feeds the next HG module. The interesting part is the bottom path: the output of a convolution layer is used as an intermediate heatmap result (blue box), and a loss is computed between this intermediate heatmap and the ground-truth heatmap. In other words, with 4 HG modules we compute four losses in total: 3 for the intermediate results and 1 for the final result.
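In code, intermediate supervision amounts to collecting one heatmap prediction per stack and summing the losses. The sketch below assumes per-stack heads (`to_heatmaps`) and merge layers (`merge`) that map predictions back into feature space; these names are illustrative.

```python
# Sketch of intermediate supervision across stacked hourglass modules.
def stacked_forward(x, stacks, to_heatmaps, merge):
    preds = []
    feats = x                              # pre-processed input features
    for hg, head, back in zip(stacks, to_heatmaps, merge):
        feats = hg(feats)
        heatmaps = head(feats)             # intermediate prediction
        preds.append(heatmaps)
        feats = feats + back(heatmaps)     # feed the prediction back in
    return preds                           # one heatmap tensor per stack

def supervision_loss(preds, target, criterion):
    # Every intermediate and final prediction contributes to the loss.
    return sum(criterion(p, target) for p in preds)
```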
4) Loss Function: The network performs the task of predicting heatmaps, one per eye region landmark. The heatmaps encode the per-pixel confidence in a specific landmark's location. We place 2-dimensional Gaussians centered at the sub-pixel landmark positions such that the peak value is 1. The neural network then minimizes the l2 distance between the predicted and ground-truth heatmaps per landmark via the following loss term:
$$\mathcal{L}_{\text{heatmaps}} = \alpha \sum_{i=1}^{18} \sum_{p} \left\lVert h_i(p) - \tilde{h}_i(p) \right\rVert_2^2$$
where h(p) is the confidence at pixel p and h̃ is a heatmap predicted by the network. We empirically set the weight coefficient α = 1. The network additionally predicts an eyeball radius value r̃_uv. This is done by first appending a soft-argmax layer [Honari et al. 2018] to calculate landmark coordinates from the heatmaps, then appending 3 linear fully-connected layers with 100 neurons each, with batch normalization and ReLU activation, and one final regression layer with 1 neuron. The loss term for the eyeball radius output is:
$$\mathcal{L}_{\text{radius}} = \beta \left\lVert \tilde{r}_{uv} - r_{uv} \right\rVert_2^2$$
where we set β = 10⁻⁷ and r_uv is the ground-truth radius.

Fig. 6. Concatenation of feature maps [31]

B. Densely Connected Convolutional Networks

1) Densely Connected Convolutional Networks Overview: Densely Connected Convolutional Networks address the vanishing gradient problem: as networks get deeper, gradients are not back-propagated sufficiently to the initial layers of the network. The gradients keep getting smaller as they move backwards through the network, and as a result, the initial layers lose their capacity to learn the basic low-level features.
2) Dense connections: Following the feed-forward nature of the network, each layer in a dense block receives feature maps from all preceding layers and passes its output to all subsequent layers. Feature maps received from other layers are fused through concatenation, not summation (as in ResNets). These connections form a dense circuit of pathways that allows better gradient flow. Because of these dense connections, the model requires fewer layers, as there is no need to learn redundant feature maps, and the collective knowledge (features learnt collectively by the network) can be reused. The architecture has narrow layers, providing state-of-the-art results with feature maps as narrow as 12 channels. Fewer and narrower layers mean the model has fewer parameters to learn, making it easier to train.
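The concatenation-based connectivity can be sketched directly. In this illustrative PyTorch snippet, `make_layer` is an assumed factory that builds one dense layer (its internals are sketched after the transition-layer description below).

```python
# Sketch of DenseNet-style connectivity: each layer consumes the
# concatenation of all preceding feature maps.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_ch, growth_rate, n_layers, make_layer):
        super().__init__()
        self.layers = nn.ModuleList(
            make_layer(in_ch + i * growth_rate, growth_rate)
            for i in range(n_layers)
        )
    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))  # fuse by concatenation
            features.append(out)                     # pass to all successors
        return torch.cat(features, dim=1)
```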
3) Composite function: Each CONV block in the network diagrams of the paper [25] corresponds to the operation BatchNorm → ReLU → Conv.

4) Dense block: A dense block comprises n dense layers. These dense layers are connected using dense circuitry such that each dense layer receives feature maps from all preceding layers and passes its feature maps to all subsequent layers. The feature dimensions (width, height) stay the same within a dense block.
Dense layer: Each dense layer consists of 2 convolutional operations: a 1 × 1 CONV (a bottleneck layer that reduces the number of input feature maps) and a 3 × 3 CONV (the conventional convolution for extracting features, whose output depth equals the growth rate of the dense block).

Fig. 7. Each layer has direct access to the gradients of the loss function and the original input signal [31]

Fig. 8. Composite function [31]

DenseNet-121 comprises 6 such dense layers in its first dense block.

5) Transition layer: A transition layer (or block) is added between two dense blocks. The transition layer consists of a 1 × 1 CONV operation and a 2 × 2 AVG POOL operation. The 1 × 1 CONV operation reduces the channel count to half.
The 2 × 2 AVG POOL layer is responsible for downsampling the features in terms of width and height.
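A minimal sketch of one dense layer and one transition layer follows; the bottleneck width (4 × growth rate) is an assumption taken from the standard DenseNet-BC design, not necessarily the exact configuration used here.

```python
# Sketch of a dense layer (BN→ReLU→1x1 bottleneck, then BN→ReLU→3x3)
# and a transition layer (1x1 conv halving channels, then 2x2 avg pool).
import torch.nn as nn

def dense_layer(in_ch, growth_rate, bottleneck=4):
    return nn.Sequential(
        nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, bottleneck * growth_rate, kernel_size=1),
        nn.BatchNorm2d(bottleneck * growth_rate), nn.ReLU(inplace=True),
        nn.Conv2d(bottleneck * growth_rate, growth_rate,
                  kernel_size=3, padding=1),     # outputs `growth_rate` maps
    )

def transition(in_ch):
    return nn.Sequential(
        nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
        nn.Conv2d(in_ch, in_ch // 2, kernel_size=1),  # halve channel count
        nn.AvgPool2d(kernel_size=2),                  # halve width and height
    )
```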
The iris centre coordinates (u_i, v_i) in an m × n eye image are

$$u_i = \frac{m}{2} - r'\sin\phi\cos\theta, \qquad v_i = \frac{n}{2} - r'\sin\theta,$$
where $r' = r \cos\left(\sin^{-1}\tfrac{1}{2}\right)$ and the gaze direction is g = (θ, φ). For regression to g, using the DenseNet architecture explained above, we perform image classification and obtain the desired output confidence maps via 1 × 1 convolutional layers. The loss term for gaze direction is:
$$\mathcal{L}_{\text{gaze}} = \left\lVert g - \hat{g} \right\rVert_2^2$$
where ĝ is the gaze direction predicted by our network. In this gazemap network, we use 3 hourglass modules, with intermediate supervision applied to the gazemap outputs of the last module only. The minimized intermediate loss is:
$$\mathcal{L}_{\text{gazemap}} = -\alpha \sum_{p \in P} m(p)\,\log \tilde{m}(p)$$
where we calculate a cross-entropy between the predicted gazemap m̃ and the ground-truth gazemap m over all pixels p in the set of pixels P. In our evaluations, we set the coefficient α to 10⁻⁵.
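For concreteness, the geometric iris-centre computation and the two loss terms can be sketched as follows. Tensor shapes, the use of a channel-wise log-softmax for the predicted gazemap, and all variable names are illustrative assumptions.

```python
# Sketch of the gazemap geometry and loss terms above (illustrative).
import math
import torch
import torch.nn.functional as F

def iris_center(theta, phi, m, n, r):
    # Iris centre (u_i, v_i) for gaze g = (theta, phi) in an m x n image,
    # following the simplified eyeball/iris model in the text.
    r_prime = r * math.cos(math.asin(0.5))
    u = m / 2 - r_prime * math.sin(phi) * math.cos(theta)
    v = n / 2 - r_prime * math.sin(theta)
    return u, v

def gaze_loss(g_true, g_pred):
    # L_gaze = ||g - g_hat||_2^2, averaged over a batch of (theta, phi).
    return ((g_true - g_pred) ** 2).sum(dim=-1).mean()

def gazemap_loss(m_pred_logits, m_true, alpha=1e-5):
    # Pixel-wise cross-entropy between predicted and true gazemaps,
    # with logits of shape (batch, channels, H, W).
    log_m = F.log_softmax(m_pred_logits, dim=1)
    return -alpha * (m_true * log_m).sum()
```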
IV. EXPERIMENTAL ANALYSIS
A. Hyperparameters
The hyperparameters modified and tested include the L2 weight-regularization coefficient, the batch size, the learning rate, and the number of epochs. Among the architectural parameters (number of layers, layer size, activation function), the ReLU activation function was used. Slight data augmentation is applied in terms of image translation and scaling, and the learning rate is multiplied by 0.1 after every 5k gradient update steps, to address overfitting and to stabilize the final error. Increasing the number of stacks in the hourglass network to 8 hourglass modules with intermediate supervision yields significantly improved landmark localization accuracy compared to 2-stack or 4-stack models with the same number of model parameters.
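The step decay described above (×0.1 every 5k updates) corresponds to a standard step scheduler. The snippet below is a self-contained sketch with a stand-in model; it only illustrates the schedule, not the actual training setup.

```python
# Sketch: multiply the learning rate by 0.1 every 5,000 gradient updates.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                       # stand-in network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
                                            step_size=5_000, gamma=0.1)

for step in range(10_000):                     # stand-in training loop
    optimizer.zero_grad()
    loss = model(torch.randn(8, 10)).pow(2).mean()
    loss.backward()
    optimizer.step()
    scheduler.step()                           # advances once per update step
```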
V. RESULTS ANALYSIS
A. Error Analysis
The results show the mean angular error (MAE) of the pitch and yaw angles for different parameters and models. We mainly discuss two models: the gazemap approach (Model 1), which uses the DenseNet architecture with stacked hourglass modules, and the eye-region-landmark approach (Model 2), which uses eye region and iris localization landmarks identified by the stacked hourglass network as input. The models below were trained on 150k entries.
Different parameters for Model 1 (gazemap approach); the mean angular error is in degrees. We vary the number of dense-block modules and the number of layers per block:

Parameters                   Gazemap MAE
5 dense blocks (5 layers)    5.88
5 dense blocks (3 layers)    9.09
5 dense blocks (6 layers)    4.

Changing the batch size did not affect the accuracy, and increasing the number of epochs (< 100) did not affect the results either. The graphs above show the mean angular error for the pitch and yaw angles. Figure 14 shows a demo of running the model on a webcam video file.
B. Future Work
There are many possible approaches to identifying eye region landmarks, and different representations of the landmarks could be tried to obtain a more robust gaze estimation model. The model can also easily be adapted to work on low-resolution datasets. The current work is open source and available at https://github.com/aveenakottwani/EyeGazeEstimationModels.
Fig. 1. Stacked hourglass for facial landmark localization [30]
Fig. 2. Stacked hourglass network
Fig. 3. An illustration of a single "hourglass" module. Each box in the figure corresponds to a residual module as seen in Figure 1. The number of features is consistent across the whole hourglass. [30]
Fig. 4. Intermediate supervision [30]
Fig. 5. Residual module
Fig. 9. Dense block with channel count (C) of features entering and exiting the layers [31]
Fig. 10. Transition layer [31]
Fig. 11. Full network [31]

6) Full network: A different number of dense layers is used in each of the three dense blocks.

7) Loss Function: This approach models the input image as an intermediate pictorial representation of the eye, a gazemap m. The gaze direction g is defined as $g = k \circ j(x)$, where $j : x \mapsto m$ and $k : m \mapsto g$. Thus, using the gazemap m, we can estimate the gaze direction g. Considering a simple model of the human eyeball and iris, where the iris diameter is approximately 12 mm and the eyeball diameter approximately 24 mm, we calculate the iris centre coordinates (u_i, v_i) as given earlier.
Fig. 12. Predicted vs. actual gaze estimation angle (pitch)
Fig. 13. Predicted vs. actual gaze estimation angle (yaw)
D. W. Hansen and Q. Ji. 2010. In the eye of the beholder: A survey of models for eyes and gaze. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(3), 478-500.
Seonwook Park, Xucong Zhang, Andreas Bulling, and Otmar Hilliges. Learning to Find Eye Region Landmarks for Remote Gaze Estimation in Unconstrained Settings. ETH Zurich and MPI for Informatics.
Fig. 14. Demo application
Laura Sesma, Arantxa Villanueva, and Rafael Cabeza. 2012. Evaluation of Pupil Center-Eye Corner Vector for Gaze Estimation Using a Web Cam. In Proceedings of the Symposium on Eye Tracking Research and Applications (ETRA '12), pages 217-220. ACM, New York, NY, USA.
A. Kumar, R. Netzel, M. Burch, D. Weiskopf, and K. Mueller. 2017. Visual multi-metric grouping of eye-tracking data. Journal of Eye Movement Research, 10(5).
A. Kumar, D. Mohanty, K. Kurzhals, F. Beck, D. Weiskopf, and K. Mueller. 2020. Demo of the EyeSAC System for Visual Synchronization, Cleaning, and Annotation of Eye Movement Data. In ACM Symposium on Eye Tracking Research and Applications, pages 1-3.
Yanxia Zhang, Andreas Bulling, and Hans Gellersen. 2014. Pupil-canthi-ratio: a calibration-free method for tracking horizontal gaze direction. In Proceedings of the 2014 International Working Conference on Advanced Visual Interfaces (AVI '14), pages 129-132. ACM, New York, NY, USA.
M. X. Huang, T. C. Kwok, G. Ngai, H. V. Leong, and S. C. Chan. 2014. Building a self-learning eye gaze model from user interaction data. In Proceedings of the 22nd ACM International Conference on Multimedia, pages 1017-1020.
A. Kumar, M. Burch, and K. Mueller. 2018. Visual analysis of eye gazes to assist strategic planning in computer games. In Proceedings of the 3rd Workshop on Eye Tracking and Visualization, pages 1-5.
W. Zhang, T.-N. Zhang, and S.-J. Chang. 2011. Eye gaze estimation from the elliptical features of one iris. Opt. Eng., 50(4), 047003.
Y. Kitagawa, H. Wu, T. Wada, and T. Kato. 2007. On eye-model personalization for automatic visual line estimation. Proc. PRMU, 106(469), 55-60.
J. Chen and Q. Ji. 2008. 3D gaze estimation with a single camera without IR illumination. In Proc. 19th Int. Conf. Pattern Recognit., pages 1-4.
A. Kumar, A. Tyagi, M. Burch, D. Weiskopf, and K. Mueller. 2019. Task classification model for visual fixation, exploration, and search. In Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications, pages 1-4.
W.-Z. Zhang, Z.-C. Wang, J.-K. Xu, and X.-Y. Cong. 2013. A method of gaze direction estimation considering head posture. Int. J. Signal Process. Image Process. Pattern Recognit., 6(2), 103-112.
H. Wu, Q. Chen, and T. Wada. 2004. Conic-based algorithm for visual line estimation from one image. In Proc. 6th IEEE Int. Conf. Autom. Face Gesture Recognit. (FGR), pages 260-265.
R. Valenti, N. Sebe, and T. Gevers. 2012. Combining head pose and eye location information for gaze estimation. IEEE Trans. Image Process., 21(2), 802-815.
H. Yamazoe, A. Utsumi, T. Yonezawa, and S. Ave. 2011. Automatic calibration of 3D eye model for single-camera based gaze estimation. Trans. IEICE, 94, 998-1006.
A. Kumar, R. Netzel, M. Burch, D. Weiskopf, and K. Mueller. 2016. Multi-similarity matrices of eye movement data. In 2016 IEEE Second Workshop on Eye Tracking and Visualization (ETVIS), pages 26-30. IEEE.
Xuehan Xiong, Zicheng Liu, Qin Cai, and Zhengyou Zhang. 2014. Eye Gaze Tracking Using an RGBD Camera: A Comparison with a RGB Solution. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication (UbiComp '14 Adjunct), pages 1113-1121. ACM, New York, NY, USA.
H. R. Chennamma and X. Yuan. 2013. A Survey on Eye-Gaze Tracking Techniques. ArXiv, abs/1312.6410.
Dong Hyun Yoo, Bang Rae Lee, and Myoung Jin Chung. 2002. Non-Contact Eye Gaze Tracking System by Mapping of Corneal Reflections. In Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition (FGR '02), page 101. IEEE Computer Society, Washington, DC, USA.
Jia-Bin Huang, Qin Cai, Zicheng Liu, Narendra Ahuja, and Zhengyou Zhang. 2014. Towards accurate and robust cross-ratio based gaze trackers through learning from simulation. In Eye Tracking Research and Applications (ETRA '14), Safety Harbor, FL, USA, March 26-28, 2014, pages 75-82. https://doi.org/10.1145/2578153.2578162
F. Lu, Y. Sugano, T. Okabe, and Y. Sato. 2014. Adaptive Linear Regression for Appearance-Based Gaze Estimation. IEEE Trans. Pattern Anal. Mach. Intell., 36(10), 2033-2046.
Z. Zhu, Q. Ji, and K. P. Bennett. 2006. Nonlinear eye gaze mapping function estimation via support vector regression. In Proc. 18th Int. Conf. Pattern Recognit. (ICPR), vol. 1, pages 1132-1135.
B. Noris, K. Benmachiche, and A. Billard. 2008. Calibration-free eye gaze direction detection with Gaussian processes. In Proc. Int. Conf. Comput. Vis. Theory Appl., pages 1-6.
A. Kumar, M. Burch, and K. Mueller. 2019. Visually comparing eye movements over space and time. In Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications, pages 1-9.
C. L. L. Jerry and M. Eizenman. 2008. Convolutional neural networks for eye detection in remote gaze estimation systems. In Proc. Int. MultiConf. Eng. Comput. Scientists, vol. 1, pages 1-6.
X. Zhang, Y. Sugano, M. Fritz, and A. Bulling. 2015. Appearance-based gaze estimation in the wild. https://arxiv.org/abs/1504.02863
Kyle Krafka, Aditya Khosla, Petr Kellnhofer, Harini Kannan, Suchendra Bhandarkar, Wojciech Matusik, and Antonio Torralba. 2016. Eye Tracking for Everyone.
Xucong Zhang, Yusuke Sugano, Mario Fritz, and Andreas Bulling. 2017. It's Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
G. Eason, B. Noble, and I. N. Sneddon. 1955. On certain integrals of Lipschitz-Hankel type involving products of Bessel functions. Phil. Trans. Roy. Soc. London, A247, 529-551.
G. Huang, Z. Liu, L. van der Maaten, and K. Q. Weinberger. 2017. Densely connected convolutional networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Gao Huang, Shichen Liu, Laurens van der Maaten, and Kilian Weinberger. 2017. CondenseNet: An Efficient DenseNet using Learned Group Convolutions.
"https://github.com/aveenakottwani/EyeGazeEstimationModels."
] |
[
"Automatic Reference-Based Evaluation of Pronoun Translation Misses the Point",
"Automatic Reference-Based Evaluation of Pronoun Translation Misses the Point"
] | [
"Liane Guillou lguillou@inf.ed.ac.uk ",
"Christian Hardmeier christian.hardmeier@lingfil.uu.se ",
"\nSchool of Informatics Scotland\nUniversity of Edinburgh\nUnited Kingdom\n",
"\nDept. of Linguistics & Philology Uppsala\nUppsala University\nSweden\n"
] | [
"School of Informatics Scotland\nUniversity of Edinburgh\nUnited Kingdom",
"Dept. of Linguistics & Philology Uppsala\nUppsala University\nSweden"
] | [
"Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing"
] | We compare the performance of the APT and AutoPRF metrics for pronoun translation against a manually annotated dataset comprising human judgements as to the correctness of translations of the PROTEST test suite. Although there is some correlation with the human judgements, a range of issues limit the performance of the automated metrics. Instead, we recommend the use of semiautomatic metrics and test suites in place of fully automatic metrics. | 10.18653/v1/d18-1513 | [
"https://www.aclweb.org/anthology/D18-1513.pdf"
] | 51,972,729 | 1808.04164 | 0cadbb4732abb5559b4b117da101d083378190ab |
Automatic Reference-Based Evaluation of Pronoun Translation Misses the Point
Association for Computational Linguistics. October 31 - November 4, 2018.
Liane Guillou lguillou@inf.ed.ac.uk
Christian Hardmeier christian.hardmeier@lingfil.uu.se
School of Informatics Scotland
University of Edinburgh
United Kingdom
Dept. of Linguistics & Philology Uppsala
Uppsala University
Sweden
Automatic Reference-Based Evaluation of Pronoun Translation Misses the Point
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Brussels, Belgium, October 31 - November 4, 2018. Association for Computational Linguistics.
We compare the performance of the APT and AutoPRF metrics for pronoun translation against a manually annotated dataset comprising human judgements as to the correctness of translations of the PROTEST test suite. Although there is some correlation with the human judgements, a range of issues limit the performance of the automated metrics. Instead, we recommend the use of semiautomatic metrics and test suites in place of fully automatic metrics.
Introduction
As the general quality of machine translation (MT) increases, there is a growing interest in improving the translation of specific linguistic phenomena. A case in point that has been studied in the context of both statistical (Hardmeier, 2014;Guillou, 2016;Loáiciga, 2017) and neural MT (Bawden et al., 2017;Voita et al., 2018) is that of pronominal anaphora. In the simplest case, translating anaphoric pronouns requires the generation of corresponding word forms respecting the grammatical constraints on agreement in the target language, as in the following English-French example, where the correct form of the pronoun in the second sentence varies depending on which of the (equally correct) translations of the word bicycle was used in the first:
(1) a. I have a bicycle. It is red.
b. J'ai un vélo. Il est rouge. [ref]
c. J'ai une bicyclette. Elle est rouge. [MT]

However, the problem is more complex in practice because there is often no 1:1 correspondence between pronouns in two languages. This is easily demonstrated at the corpus level by observing that the number of pronouns varies significantly across languages in parallel texts (Mitkov and Barbu, 2003), but it tends to be difficult to predict in individual cases.

* Both authors contributed equally.
In general MT research, significant progress was enabled by the invention of automatic evaluation metrics based on reference translations, such as BLEU (Papineni et al., 2002). Attempting to create a similar framework for efficient research, researchers have proposed automatic reference-based evaluation metrics specifically targeting pronoun translation: AutoPRF (Hardmeier and Federico, 2010) and APT (Miculicich Werlen and Popescu-Belis, 2017). We study the performance of these metrics on a dataset of English-French translations and investigate to what extent automatic evaluation based on reference translations provides insights into how well an MT system handles pronouns. Our analysis clarifies the conceptual differences between AutoPRF and APT, uncovering weaknesses in both metrics, and investigates the effects of the alignment correction heuristics used in APT. By using the fine-grained PROTEST categories of pronoun function, we find that the accuracy of the automatic metrics varies across pronouns of different functions, suggesting that certain linguistic patterns are captured better in the automatic evaluation than others. We argue that fully automatic wide-coverage evaluation of this phenomenon is unlikely to drive research forward, as it misses essential parts of the problem despite achieving some correlation with human judgements. Instead, semiautomatic evaluation involving automatic identification of correct translations with high precision and low recall appears to be a more achievable goal. Another more realistic option is a test suite evaluation with a very limited scope.
Pronoun Evaluation Metrics for MT
Two reference-based automatic metrics of pronoun translation have been proposed in the literature.
The first (Hardmeier and Federico, 2010) is a variant of precision, recall and F-score that measures the overlap of pronouns in the MT output with a reference translation. It lacks an official name, so we refer to it as AutoPRF following the terminology of the DiscoMT 2015 shared task (Hardmeier et al., 2015). The scoring process relies on a word alignment between the source and the MT output, and between the source and the reference translation. For each input pronoun, it computes a clipped count (Papineni et al., 2002) of the overlap between the aligned tokens in the reference and the MT output. The clipped count of a given word is defined as the number of times it occurs in the MT output, limited by the number of times it occurs in the reference translation. The final metric is then calculated as the precision, recall and F-score based on these clipped counts.
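To make the clipped-count computation concrete, the following is a minimal Python sketch of AutoPRF as described above; the function name and the input format (one list of aligned target tokens per source pronoun, for the MT output and the reference respectively) are our own assumptions, not the authors' implementation.

    from collections import Counter

    def autoprf(mt_aligned, ref_aligned):
        # mt_aligned / ref_aligned: one list of aligned target tokens per
        # source pronoun, taken from the MT output and the reference.
        tp = mt_total = ref_total = 0
        for mt_toks, ref_toks in zip(mt_aligned, ref_aligned):
            mt_c, ref_c = Counter(mt_toks), Counter(ref_toks)
            # clipped count: occurrences in the MT output, capped by the
            # number of occurrences in the reference translation
            tp += sum(min(n, ref_c[w]) for w, n in mt_c.items())
            mt_total += sum(mt_c.values())
            ref_total += sum(ref_c.values())
        p = tp / mt_total if mt_total else 0.0
        r = tp / ref_total if ref_total else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        return p, r, f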
Miculicich Werlen and Popescu-Belis (2017) propose a metric called Accuracy of Pronoun Translation (APT) that introduces several innovations over the previous work. It is a variant of accuracy, so it counts, for each source pronoun, whether its translation can be considered correct, without considering multiple alignments. Since word alignment is problematic for pronouns, the authors propose an heuristic procedure to improve alignment quality. Finally, it introduces the notion of pronoun equivalence, assigning partial credit to pronoun translations that differ from the reference translation in specific ways deemed to be acceptable. In particular, it considers six possible cases when comparing the translation of a pronoun in MT output and the reference. The pronouns may be: (1) identical, (2) equivalent, (3) different/incompatible, or there may be no translation in: (4) the MT output, (5) the reference, (6) either the MT output or the reference. Each of these cases may be assigned a weight between 0 and 1 to determine the level of correctness.
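A corresponding sketch of the APT scoring statistic, assuming each source pronoun has already been assigned one of the six cases above; the case-to-weight mapping is configurable, and the default weights shown here mirror the APT-B setting used later in this paper.

    def apt(cases, weights=None):
        # cases: one APT case label (1-6) per source pronoun
        if weights is None:
            weights = {1: 1.0, 2: 0.5, 3: 0.0, 4: 0.0, 5: 0.0, 6: 0.0}
        return sum(weights[c] for c in cases) / len(cases)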
The PROTEST Dataset
We study the behaviour of the two automatic metrics using the PROTEST test suite. The test suite comprises 250 hand-selected personal pronoun tokens taken from the DiscoMT2015.test dataset of TED talk transcriptions and translations and annotated according to the ParCor guidelines (Guillou et al., 2014). It is structured according to a linguistic typology motivated by work on functional grammar by Dik (1978) and Halliday (2004). Pronouns are first categorised according to their function:
anaphoric: I have a bicycle. It is red.
event: He lost his job. It was a shock.
pleonastic: It is raining.
addressee reference: You're welcome.
They are then subcategorised according to morphosyntactic criteria: whether the antecedent is a group noun, whether the antecedent is in the same or a different sentence, and whether an addressee reference pronoun refers to one or more specific people (deictic) or to people in general (generic).
Our dataset contains human judgements on the performance of nine MT systems on the translation of the 250 pronouns in the PROTEST test suite. The systems include five submissions to the DiscoMT 2015 shared task on pronoun translation (Hardmeier et al., 2015) -four phrase-based SMT systems AUTO-POSTEDIT (Guillou, 2015), UU-HARDMEIER (Hardmeier et al., 2015), IDIAP (Luong et al., 2015), UU-TIEDEMANN (Tiedemann, 2015), a rule-based system ITS2 (Loáiciga and Wehrli, 2015), and the shared task baseline (also phrase-based SMT). Three NMT systems are included for comparison: LIMSI (Bawden et al., 2017), NYU (Jean et al., 2014), and YANDEX (Voita et al., 2018).
Manual evaluation was conducted using the PROTEST graphical user interface and accompanying guidelines. The annotators were asked to make judgements (correct/incorrect) on the translations of the pronouns and antecedent heads whilst ignoring the correctness of other words (except in cases where it impacted the annotator's ability to make a judgement). The annotations were carried out by two bilingual English-French speakers, both of whom are native speakers of French. Our human judgements differ in important ways from the human evaluation conducted for the same set of systems at DiscoMT 2015 (Hardmeier et al., 2015), which was carried out by non-native speakers over an unbalanced data sample using a gap-filling methodology. In the gap-filling task annotators are asked to select, from a predefined list (including an uninformative catch-all group "other"), those pronouns that could fill the pronoun translation slot. Unlike in the PROTEST evaluation, the pronoun translations were obscured in the MT output. This avoided priming the annotators with the output of the candidate translation, but it occasionally caused valid translations to be rejected because they were missed by the annotator.
Accuracy versus Precision/Recall
There are three ways in which APT differs from Au-toPRF: the scoring statistic, the alignment heuristic in APT, and the definition of pronoun equivalence.
APT is a measure of accuracy: It reflects the proportion of source pronouns for which an acceptable translation was produced in the target. AutoPRF, by contrast, is a precision/recall metric on the basis of clipped counts. Hardmeier and Federico (2010) motivate the use of precision and recall by pointing out that word alignments are not 1:1, so each pronoun can be linked to multiple elements in the target language, both in the reference translation and in the MT output. Their metric is designed to account for all linked words in such cases.
To test the validity of this argument, we examined the subset of examples of 8 systems in our English-French dataset (excluding the YANDEX system, which was added later) giving rise to a clipped count greater than one, i.e., cases where the MT output and the reference translation aligned to a pronoun overlap in more than one token. These examples follow very specific patterns. All 143 cases included exactly one personal pronoun. In 99 cases, the additional matched word was the complementiser que 'that'. In 31 and 4 cases, respectively, it was a form of the auxiliary verbs avoir 'to have' and être 'to be'. One example matched both que and a form of être. Two had reflexive pronouns, and one an imperative verb form. With the possible exception of the two reflexive pronouns, none of this seems to be relevant to pronoun correctness. We conclude that it is more reasonable to restrict the counts to a single pronominal item per example. With this additional restriction, however, the recall score of AutoPRF becomes equivalent to a version of APT without equivalent pronouns and alignment correction. We therefore limit the remainder of our study to APT.
Effects of Word Alignment
APT includes an heuristic alignment correction procedure to mitigate errors in the word alignment between a source-language text and its translation (reference or MT output). We ran experiments to assess the correlation of APT with human judgements, with and without the alignment correction heuristics. Table 1 displays the APT results in both conditions and the proportion of pronouns in the PROTEST test suite marked as correctly translated. For better comparison with the PROTEST test suite results, we restricted APT to the pronouns in the test suite. We used two different weight settings, following a personal recommendation by Lesly Miculicich Werlen: APT-A uses weight 1 for identical matches and 0 for all other cases; APT-B uses weight 1 for identical matches, 0.5 for equivalent matches and 0 otherwise.
There is little difference in the APT scores when we consider the use of alignment heuristics. This is due to the small number of pronouns for which alignment improvements are applied for most systems (typically 0-12 per system). The exception is the ITS2 system output for which 18 alignment improvements are made. For the following systems we observe a very small increase in APT score for each of the two weight settings we consider, when alignment heuristics are applied: UU-HARDMEIER (+0.8), ITS2 (+0.8), BASELINE (+0.8), YANDEX (+0.8), and NYU (+0.4). However, these small improvements are not sufficient to affect the system rankings. It seems, therefore, that the alignment heuristic has only a small impact on the validity of the score.
To assess differences in correlation with human judgment for pairs of APT settings, we run Williams's significance test (Williams, 1959;Graham and Baldwin, 2014). The test reveals that differences in correlation between the various configurations of APT and human judgements are not statistically significant (p > 0.2 in all cases).
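For reference, the following is a sketch of Williams's test for the difference between two dependent correlations that share one variable (here, human judgements h correlated with two metric score vectors x and y), following the formulation used by Graham and Baldwin (2014); the variable names are ours.

    from math import sqrt
    from scipy.stats import pearsonr, t as t_dist

    def williams_test(h, x, y):
        # h: human judgements; x, y: scores of the two metric configurations
        n = len(h)
        r12, r13 = pearsonr(h, x)[0], pearsonr(h, y)[0]
        r23 = pearsonr(x, y)[0]
        k = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23
        rbar = (r12 + r13) / 2
        t = (r12 - r13) * sqrt(
            (n - 1) * (1 + r23)
            / (2 * k * (n - 1) / (n - 3) + rbar**2 * (1 - r23) ** 3))
        return t, t_dist.sf(abs(t), n - 3)   # one-sided p-value, df = n - 3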
Metric Accuracy per Category
Like Miculicich Werlen and Popescu-Belis (2017), we use Pearson's and Spearman's correlation coefficients to assess the correlation between APT and our human judgements (Table 2). Although APT does correlate with the human judgements over the PROTEST test suite, the correlation is weaker than that with the DiscoMT gap-filling evaluations reported in Miculicich Werlen and Popescu-Belis (2017). A Williams significance test reveals that the difference in correlation (for those systems common to both studies) is not statistically significant (p > 0.3). Table 1 also shows that the rankings induced from the PROTEST and APT scores are rather different. The differences are due to the different ways in which the two metrics define pronoun correctness, and the different sources against which correctness is measured (reference translation vs. human judgement). We also study how the results of APT (with alignment correction) interact with the categories in PROTEST. We consider a pronoun to be measured as correct by APT if it is assigned case 1 (identical) or 2 (equivalent). Likewise, a pronoun is considered incorrect if it is assigned case 3 (incompatible). We compare the number of pronouns marked as correct/incorrect by APT and by the human judges, ignoring APT cases in which no judgement can be made: no translation of the pronoun in the MT output, reference or both, and pronouns for which the human judges were unable to make a judgement due to factors such as poor overall MT quality, incorrect word alignments, etc. The results of this comparison are displayed in Table 3.
At first glance, we can see that APT disagrees with the human judgements for almost a quarter (24.3%) of the assessed translations. The distribution of the disagreements over APT cases is very skewed and ranges from 8% for case 1 to 32% for case 2 and 49% for case 3. In other words, APT identifies correct pronoun translations with good precision, but relatively low recall. We can also see that APT rarely marks pronouns as equivalent (case 2).
Performance for anaphoric pronouns is mixed. In general, there are three main problems affecting anaphoric pronouns (Table 4). 1) APT, which does not incorporate knowledge of anaphoric pronoun antecedents, does not consider pronoun-antecedent head agreement, so many valid alternative translations involving personal pronouns are marked as incompatible (i.e. incorrect, case 3) by APT, but as correct by the human judges. Consider the following example, in which the pronoun they is deemed correctly translated by the YANDEX system (according to the human judges) as it agrees in number and grammatical gender with the translation of the antecedent extraits (clips). However, the pronoun translation ils is marked as incorrect by APT as it does not match the translation in the reference (elles).
SOURCE: so what these two clips show is not just the devastating consequence of the disease, but they also tell us something about the shocking pace of the disease. . . YANDEX: donc ce que ces deux extraits [masc., pl.] montrent n'est pas seulement la conséquence dévastatrice de la maladie, mais ils [masc., pl.] nous disent aussi quelque chose sur le rythme choquant de la maladie. . . REFERENCE: ce que ces deux vidéos [fem., pl.] montrent, ce ne sont pas seulement les conséquences dramatiques de cette maladie, elles [fem., pl.] nous montrent aussi la vitesse fulgurante de cette maladie. . .
2) Substitutions between pronouns are governed by much more complex rules than the simple pronoun equivalence mechanism in APT. For example, the dictionary of pronouns used in APT lists il and ce as equivalent. However, while il can often replace ce as a pleonastic pronoun in French, it has a much stronger tendency to be interpreted as anaphoric, rendering pleonastic use unacceptable if there is a salient masculine antecedent in the context. 3) APT does not consider the use of impersonal pronouns such as c' in place of the feminine personal pronoun elle or the plural forms ils and elles. As with anaphoric pronouns, APT incorrectly marks some pleonastic and event translations as equivalent, in disagreement with the human judges. Other common errors arise from 1) the use of alternative translations marked as incompatible (i.e. incorrect) by APT but correct by the human judges, for example il (personal) in the MT output when the reference contained the impersonal pronoun cela or ça (30 cases for pleonastic, 7 for event), or 2) the presence of il in both the MT output and reference marked by APT as identical but by the human judges as incorrect (3 cases for pleonastic, 15 event).
Some of these issues could be addressed by incorporating knowledge of pronoun function in the source language, of pronoun antecedents, and of the wider context of the translation surrounding the pronoun. However, whilst we might be able to derive language-specific rules for some scenarios, it would be difficult to come up with more general or language-independent rules. For example, il and ce can be anaphoric or pleonastic pronouns, but il has a more referential character. Therefore in certain constructions that are strongly pleonastic (e.g. clefts) only ce is acceptable. This rule would be specific to French, and would not cover other scenarios for the translation of pleonastic it. Other issues include the use of pronouns in impersonal constructions such as il faut [one must/it takes] in which evaluation of the pronoun requires consideration of the whole expression, or transformations between active and passive voice, where the perspective of the pronouns changes.
Conclusions
Our analyses reveal that despite some correlation between APT and the human judgements, fully automatic wide-coverage evaluation of pronoun translation misses essential parts of the problem. Comparison with human judgements shows that APT identifies good translations with relatively high precision, but fails to reward important patterns that pronoun-specific systems must strive to generate. Instead of relying on fully automatic evaluation, our recommendation is to emphasise high precision in the automatic metrics and implement semiautomatic evaluation procedures that refer negative cases to a human evaluator, using available tools and methods . Fully automatic evaluation of a very restricted scope may still be feasible using test suites designed for specific problems (Bawden et al., 2017).
Table 2: Correlation of APT and human judgements

Category                | APT 1 | APT 2 | APT 3 | Human correct | Human incorrect | Dis. / Ex.  |    %
------------------------+-------+-------+-------+---------------+-----------------+-------------+-----
Anaphoric intra sbj it  |   130 |    13 |    73 |           156 |              60 |    47 / 216 | 21.8
Anaphoric intra nsbj it |    59 |     1 |    31 |            77 |              14 |    19 / 91  | 20.9
Anaphoric inter sbj it  |   104 |    21 |   111 |           142 |              94 |    63 / 236 | 26.7
Anaphoric inter nsbj it |    21 |     0 |     7 |             8 |              20 |    13 / 28  | 46.4
Anaphoric intra they    |   131 |     0 |    95 |           154 |              72 |    37 / 226 | 16.4
Anaphoric inter they    |   126 |     0 |   108 |           129 |             105 |    47 / 234 | 20.1
Anaphoric sg they       |    57 |     0 |    66 |            83 |              40 |    58 / 123 | 47.2
Anaphoric group it/they |    47 |     0 |    41 |            64 |              24 |    31 / 88  | 35.2
Event it                |   145 |    42 |    94 |           185 |              96 |    60 / 281 | 21.4
Pleonastic it           |   171 |    54 |    52 |           243 |              34 |    46 / 277 | 16.6
Generic you             |   117 |     0 |    70 |           186 |               1 |    69 / 187 | 36.9
Deictic sg you          |    95 |     0 |    47 |           140 |               2 |    45 / 142 | 31.7
Deictic pl you          |    91 |     0 |     7 |            97 |               1 |     6 / 98  |  6.1
Total                   | 1,294 |   131 |   802 |         1,664 |             563 | 541 / 2,227 | 24.3

Table 3: Number of pronouns marked as correct/incorrect in the PROTEST human judgements, as identical (1), equivalent (2), and incompatible (3) by APT, and the percentage of disagreements, per category (Disagree [Dis.] / Examples [Ex.])
Legend: V: Valid alternative translation; E: Incorrect equivalence; I: Impersonal translation; O: Other

Category                           |  V |  E |  I |  O
-----------------------------------+----+----+----+----
Anaphoric intra-sent. subj. it     | 22 |  9 |  8 |  8
Anaphoric intra-sent. non-subj. it | 16 |  - |  1 |  2
Anaphoric inter-sent. subj. it     | 35 |  6 | 22 |  -
Anaphoric inter-sent. non-subj. it |  - |  - |  - | 13
Anaphoric intra-sent. they         | 25 |  - |  3 |  9
Anaphoric inter-sent. they         | 22 |  - |  3 | 22
Anaphoric singular they            | 40 |  - |  - | 18
Anaphoric group it/they            | 21 |  - |  - | 10
Event it                           |  - | 16 |  - | 44
Pleonastic it                      |  - | 11 |  - | 35

Table 4: Common cases of disagreement for anaphoric, pleonastic, and event reference pronouns
AcknowledgementsWe would like to thank our annotators, Marie Dubremetz and Miryam de Lhoneux, for their many hours of painstaking work, Lesly Miculicich Werlen for providing APT results for the Dis-coMT 2015 systems, Elena Voita, Sébastien Jean, Stanislas Lauly and Rachel Bawden for providing the NMT system outputs, and the three anonymous reviewers. The annotation work was funded by the European Association for Machine Translation. The work carried out at The University of Edinburgh was funded by the ERC H2020 Advanced Fellowship GA 742137 SEMANTAX and a grant from The University of Edinburgh and Huawei Technologies. The work carried out at Uppsala University was funded by the Swedish Research Council under grant 2017-930.
Rachel Bawden, Rico Sennrich, Alexandra Birch, and Barry Haddow. 2017. Evaluating discourse phenomena in neural machine translation. CoRR, abs/1711.00513.
Simon C. Dik. 1978. Functional Grammar. North-Holland, Amsterdam.
Yvette Graham and Timothy Baldwin. 2014. Testing for significance of increased correlation with human judgment. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 172-176, Doha, Qatar. Association for Computational Linguistics.
Liane Guillou. 2015. Automatic post-editing for the DiscoMT pronoun translation task. In Proceedings of the Second Workshop on Discourse in Machine Translation, pages 65-71, Lisbon, Portugal. Association for Computational Linguistics.
Liane Guillou. 2016. Incorporating Pronoun Function into Statistical Machine Translation. Ph.D. thesis, Edinburgh University, Department of Informatics.
Liane Guillou and Christian Hardmeier. 2016. PROTEST: A test suite for evaluating pronouns in machine translation. In Proceedings of the Eleventh Language Resources and Evaluation Conference, LREC 2016, pages 636-643, Portorož, Slovenia.
Liane Guillou, Christian Hardmeier, Aaron Smith, Jörg Tiedemann, and Bonnie Webber. 2014. ParCor 1.0: A parallel pronoun-coreference corpus to support statistical MT. In Proceedings of the 9th International Conference on Language Resources and Evaluation, LREC 2014, pages 3191-3198, Reykjavik, Iceland. European Language Resources Association (ELRA).
Michael A. K. Halliday. 2004. An Introduction to Functional Grammar, 3rd edition. Hodder Arnold, London.
Christian Hardmeier. 2014. Discourse in Statistical Machine Translation. Ph.D. thesis, Uppsala University, Department of Linguistics and Philology.
Christian Hardmeier and Marcello Federico. 2010. Modelling pronominal anaphora in statistical machine translation. In Proceedings of the 7th International Workshop on Spoken Language Translation, IWSLT 2010, pages 283-289, Paris, France.
Christian Hardmeier and Liane Guillou. 2016. A graphical pronoun analysis tool for the PROTEST pronoun evaluation test suite. Baltic Journal of Modern Computing, (2):318-330.
Christian Hardmeier, Preslav Nakov, Sara Stymne, Jörg Tiedemann, Yannick Versley, and Mauro Cettolo. 2015. Pronoun-focused MT and cross-lingual pronoun prediction: Findings of the 2015 DiscoMT shared task on pronoun translation. In Proceedings of the Second Workshop on Discourse in Machine Translation, DiscoMT 2015, pages 1-16, Lisbon, Portugal.
Christian Hardmeier, Jörg Tiedemann, Preslav Nakov, Sara Stymne, and Yannick Versely. 2016. DiscoMT 2015 Shared Task on Pronoun Translation. LINDAT/CLARIN digital library at Institute of Formal and Applied Linguistics, Charles University in Prague.
Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2014. On using very large target vocabulary for neural machine translation. ArXiv e-prints, 1412.2007.
Sharid Loáiciga and Eric Wehrli. 2015. Rule-based pronominal anaphora treatment for machine translation. In Proceedings of the Second Workshop on Discourse in Machine Translation, pages 86-93, Lisbon, Portugal. Association for Computational Linguistics.
Sharid Loáiciga. 2017. Pronominal anaphora and verbal tenses in machine translation. Ph.D. thesis, Université de Genève.
Ngoc Quang Luong, Lesly Miculicich Werlen, and Andrei Popescu-Belis. 2015. Pronoun translation and prediction with or without coreference links. In Proceedings of the Second Workshop on Discourse in Machine Translation, pages 94-100, Lisbon, Portugal. Association for Computational Linguistics.
Lesly Miculicich Werlen and Andrei Popescu-Belis. 2017. Validation of an automatic metric for the accuracy of pronoun translation (APT). In Proceedings of the Third Workshop on Discourse in Machine Translation (DiscoMT). Association for Computational Linguistics (ACL).
Ruslan Mitkov and Catalina Barbu. 2003. Using bilingual corpora to improve pronoun resolution. Languages in Contrast, 4(2):201-211.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. ACL.
Jörg Tiedemann. 2015. Baseline models for pronoun prediction and pronoun-aware translation. In Proceedings of the Second Workshop on Discourse in Machine Translation, pages 108-114, Lisbon, Portugal. Association for Computational Linguistics.
Elena Voita, Pavel Serdyukov, Rico Sennrich, and Ivan Titov. 2018. Context-aware neural machine translation learns anaphora resolution. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics.
Evan J. Williams. 1959. Regression Analysis, volume 14. Wiley, New York.
| [] |
[
"TruthBot: An Automated Conversational Tool for Intent Learning, Curated Information Presenting, and Fake News Alerting",
"TruthBot: An Automated Conversational Tool for Intent Learning, Curated Information Presenting, and Fake News Alerting"
] | [
"Ankur Gupta ankugupt@iitk.ac.in \nIndian Institute of Technology Kanpur\n\n",
"Yash Varun yashyv@iitk.ac.in \nIndian Institute of Technology Kanpur\n\n",
"Prarthana Das prdas@iitk.ac.in \nIndian Institute of Technology Kanpur\n\n",
"Nithya Muttineni \nIndian Institute of Technology Kanpur\n\n",
"Parth Srivastava parthsri@iitk.ac.in \nIndian Institute of Technology Kanpur\n\n",
"Hamim Zafar \nIndian Institute of Technology Kanpur\n\n",
"Tanmoy Chakraborty tanmoy@iiitd.ac.in \nIndian Institute of Information Technology Delhi\n\n",
"Swaprava Nath swaprava@iitk.ac.in \nIndian Institute of Technology Kanpur\n\n"
] | [
"Indian Institute of Technology Kanpur\n",
"Indian Institute of Technology Kanpur\n",
"Indian Institute of Technology Kanpur\n",
"Indian Institute of Technology Kanpur\n",
"Indian Institute of Technology Kanpur\n",
"Indian Institute of Technology Kanpur\n",
"Indian Institute of Information Technology Delhi\n",
"Indian Institute of Technology Kanpur\n"
] | [] | We present TruthBot (name is anonymized), an all-in-one multilingual conversational chatbot designed for seeking truth (trustworthy and verified information) on specific topics. It helps users to obtain information specific to certain topics, fact-check information, and get recent news. The chatbot learns the intent of a query by training a deep neural network from the data of the previous intents and responds appropriately when it classifies the intent in one of the classes above. Each class is implemented as a separate module which uses either its own curated knowledge-base or searches the web to obtain the correct information. The topic of the chatbot is currently set to COVID-19. However, the bot can be easily customized to any topic-specific responses. Our experimental results show that each module performs significantly better than its closest competitor, which is verified both quantitatively and through several user-based surveys in multiple languages. TruthBot has been deployed in June 2020 and is currently running. 1 | null | [
"https://arxiv.org/pdf/2102.00509v1.pdf"
] | 231,740,473 | 2102.00509 | 3161a0cf9ed25edb597d161d8ea3d5033d6d6e65 |
TruthBot: An Automated Conversational Tool for Intent Learning, Curated Information Presenting, and Fake News Alerting
Ankur Gupta ankugupt@iitk.ac.in
Indian Institute of Technology Kanpur
Yash Varun yashyv@iitk.ac.in
Indian Institute of Technology Kanpur
Prarthana Das prdas@iitk.ac.in
Indian Institute of Technology Kanpur
Nithya Muttineni
Indian Institute of Technology Kanpur
Parth Srivastava parthsri@iitk.ac.in
Indian Institute of Technology Kanpur
Hamim Zafar
Indian Institute of Technology Kanpur
Tanmoy Chakraborty tanmoy@iiitd.ac.in
Indian Institute of Information Technology Delhi
Swaprava Nath swaprava@iitk.ac.in
Indian Institute of Technology Kanpur
TruthBot: An Automated Conversational Tool for Intent Learning, Curated Information Presenting, and Fake News Alerting
We present TruthBot (name is anonymized), an all-in-one multilingual conversational chatbot designed for seeking truth (trustworthy and verified information) on specific topics. It helps users to obtain information specific to certain topics, fact-check information, and get recent news. The chatbot learns the intent of a query by training a deep neural network from the data of the previous intents and responds appropriately when it classifies the intent in one of the classes above. Each class is implemented as a separate module which uses either its own curated knowledge-base or searches the web to obtain the correct information. The topic of the chatbot is currently set to COVID-19. However, the bot can be easily customized to any topic-specific responses. Our experimental results show that each module performs significantly better than its closest competitor, which is verified both quantitatively and through several user-based surveys in multiple languages. TruthBot has been deployed in June 2020 and is currently running. 1
Introduction
Social media has endowed us with a massive volume of information, which contains useful, not-so-useful, and misleading content (Vraga and Tully, 2019; Shu et al., 2017). This is worrisome since a usual netizen's social media time is significant. According to eMarketer, US adults use social media for an average of 12 hours per day. During crisis situations like a global pandemic, people desperately look for solutions and end up consuming unverified information from various online resources including social media.
It is observed that users often trust the messages they see on social media and instant messengers. A study by Bridgman et al. (2020) shows that those who receive most of their news from social media are more likely to believe falsehoods about COVID-19. It is also observed that social media has taken over the role of a news platform and now ranks just behind television (Kohut et al., 2010), which may mask true information for certain users. To help such users reach the correct information, technology needs to be developed within social media and instant messengers themselves.
On the other hand, the popularity of chatbots among Internet users has increased significantly in the past few years (GrandViewResearch, 2017). Studies on US-based users have shown that 86% of them prefer chatting with a chatbot to a human agent (Forbes.com, 2019). A study by Brandtzaeg and Følstad (2017) shows that the major motivations for using chatbots are their timely and efficient assistance for information, which improves productivity. Hence, the prospect of a chatbot to truth-seek information is bright, and serves as the motivation of this paper.
1.0.1 State-of-the-art and limitations.
The current fact-checking apps require the users to enter the suspicious messages into those apps to fact-check, or ask the users to read the current fact-checking articles to find out if their answers lie within the articles. Such apps are not very useful, since suspicious messages typically come via social media platforms or instant messengers, and the cost of switching apps or reading verified messages to find out the truth is quite high. The other kind of solutions that involve a forwarding service (e.g., to WhatsApp business accounts) for fact-checking are typically manually verified and responded to by a team of journalists. The chatbot approach to fact-checking is very limited, as we discuss in Section 2. However, those approaches also do not (a) consider a complete truth-seeking design that provides holistic information on a topic, rather they take care of certain frequently asked questions on a topic (e.g., COVID-19), and (b) handle low-resource languages.
Note that we consider the term truth-seek to be more general than fact-check. While fact-checking tries to classify a piece of news to be true or not, truth-seeking provides a complete information against a query. For instance, the query can be a general question about a topic or a news article about which a user is only partially aware or an areabased infection statistics of a disease. The scope of truth-seeking, therefore, subsumes fact-checking as we discuss in the following section.
Proposed Solution: TruthBot.
We introduce TruthBot, a truth-seeking chatbot that (a) provides answers to topic-specific frequently asked questions (FAQ), (b) returns articles after searching the web, (c) responds to custom queries (e.g., area-wise infection statistics for COVID-19), in addition to (d) fact-checking news articles. TruthBot uses a deep neural network to identify the intent of the user's query. However, if the confidence of the identification is not significant, it consults with the user through a conversation to make sure that the right intent is conveyed to the chatbot. If the query topic falls in any of the four classes discussed above, it triggers the corresponding module and retrieves the answer. The modules have dependencies; e.g., if the FAQ module cannot respond to the query, it automatically searches for the query in the fact-check or Google search modules. The details are in Section 3.
Evaluation.
We evaluate TruthBot in three aspects:
1. We measure the response accuracy by computing the relevance of the content of the response to the content of the original query.
2. We conduct a survey on user satisfaction with the chatbot regarding the topic it handles (COVID-19 in our example).
3. We conduct another survey, with a different population, on the user interface: the usefulness, ease of use, credibility, precision, and value of the chatbot.

Table 1 shows a typical example of the responses returned by the International Fact-Checking Network (IFCN) chatbot on WhatsApp vis-a-vis TruthBot, where the latter is much more precise against the query.
Our contributions.
The contributions of this paper can be summarized as follows:
- Unlike other fact-checking chatbots, TruthBot is a truth-seeking bot, which performs four tasks (Section 3.3 to Section 3.6) via a conversational AI engine (Section 3.2) that identifies the intent of a query and guides the user through a piece of curated information or gets the intent clarified from the user.
- The chatbot identifies the query language and responds in that language (Section 3.7). This is supported for the 108 languages that Google Translate is capable of handling (we use the Google Translate service for this task), which is a major advantage of using TruthBot, particularly for low-resource languages.
- Experiments (Section 4) show that the response accuracy for all types of queries is much better than that of the IFCN chatbot, which also does query-based fact-checking and is the closest in its design to our chatbot.
- TruthBot has been operational since June 2020 and is currently accepting queries related to COVID-19.
Reproducibility
We provide all data, code, and survey results separately.
Related work
We discuss the related studies briefly in two aspects -fake news detection and task-specific chatbots.
Detecting malicious content on social media, such as fake news, misinformation, disinformation, rumors, and cyberbullying, has been one of the primary research agendas in natural language processing and social computing. Due to the abundance of literature, we point the readers to two recent survey articles by Zhou and Zafarani (2018) and Bondielli and Marcelloni (2019). Existing approaches can be broadly divided into four categories: (i) knowledge-based, which verifies whether the given news is consistent with a knowledge base (true knowledge) (Pan et al., 2018); (ii) style-based, which employs various stylometry analysis modules to check the similarity of the writing style of a given news article with existing fake news; (iii) propagation-based, which explores the propagation of a given news article on social media to check how it correlates with the usual fake information diffusion (Bian et al., 2020); and (iv) source-based, which examines the trustworthiness of the source (the account) from which the news started propagating.
Another direction of research relevant to the current study deals with the use of chatbots as assistants for different applications (Lin et al., 2020; Radziwill and Benton, 2017; Følstad et al., 2018). Though chatbots originated in the 1960s, their use as AI assistants and their widespread acceptance are recent phenomena (Shum et al., 2018). The use of chatbots for fake news detection is quite a recent development. The approach closest to this paper is by the International Fact-Checking Network (IFCN) (IFCN, 2020), which has tied up with WhatsApp to develop a fact-checking chatbot on that platform for people to fact-check news articles. In this paper, we provide case studies and a comparison of outputs of various queries of our chatbot vis-a-vis the IFCN chatbot.
TruthBot is quite different from all chatbot approaches to true information retrieval, since it is not limited to fact-checking alone, but rather provides complete information on a topic in multiple languages.
Architecture of TruthBot
The objective of TruthBot is to bring all relevant truth-seeking activities on a topic within the scope of a single chatbot. Users can interact with TruthBot to get (a) general information regarding the topic, (b) fact-check potentially fake news, (c) general information on search queries from Google search, and (d) response from other custom information modules. The current setup of TruthBot is tuned to provide all such information regarding COVID-19 pandemic, with the custom information module being the infection and death statistics of a city, district, state, or country. However, it is perfectly general to be tuned to any other topic with a little customization. We present the basic architecture of TruthBot in this section. Fig. 1 shows the graphical outline of the working units of TruthBot.
User queries
The beta version of the bot has been operational since June 2020, and has attracted 696 unique users (who have either directly used the bot or been informed about it till August 2020) on three platforms: WhatsApp, Facebook Messenger, and Telegram. For the analyses presented in this paper, we used about 1300 queries which were received till July 2020. We manually classified these queries into six classes: frequently asked questions (FAQ) on a specialized topic (COVID-19 in our case), fake information verification (FAKE), general search for a news article (GEN), some custom information (area-wise infection and death statistics, AREASTAT, in our case of COVID-19), greeting, and spam. Some of the queries, e.g., "warm saline water gargling can cure coronavirus", were classified into multiple classes, as this example can be classified as FAQ, FAKE, and GEN. This manual labeling task took about 30 person-hours, since one needed to read every query and use human judgement to classify them. This process generated the training set (the JSON file with these classifications is available in the supplementary material) which we used to classify new queries.
Conversational AI approach
TruthBot is designed to help users check the truth of a news item or a message. The focus is to impose the least cognitive load on the user and understand the intent of her queries before proceeding to search for the information. This is important since the trajectory of the search for the different tasks TruthBot is capable of doing is quite different. The query is tokenized and stemmed into root words to create a bag of words. The collection of such words and the query classes are used to train the Query Classification Unit (QCU). This unit classifies the intents of the queries into the six classes discussed in Section 3.1 using a deep neural network (DNN). Any new query is soft-classified into these classes using the softmax scores. If the scores are significant (the threshold has been tuned based on manual inspection of the classification), then the action corresponding to that class is invoked (e.g., the Fact Check module is called if the query is classified as a fake information verification query). If the scores of the classification are not significant, then the bot returns to the user and provides a selection choice of the intent, to make a focused truth-search of the query. A couple of examples of how the conversation unfolds in this method are provided in Fig. 3. Fig. 2 shows the schematic representation of the QCU. The performance metrics of this classification for the different classes are shown in Table 2.
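As an illustration of the QCU, the following is a minimal PyTorch sketch of a bag-of-words intent classifier with soft-classification and a confidence-threshold fallback; the layer sizes, the exact class list, and the threshold value are assumptions for illustration, not the deployed configuration.

    import torch
    import torch.nn as nn

    CLASSES = ["FAQ", "FAKE", "GEN", "AREASTAT", "GREETING", "SPAM"]

    class QCU(nn.Module):
        def __init__(self, vocab_size, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(vocab_size, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, len(CLASSES)),
            )

        def forward(self, bow):          # bow: (batch, vocab_size) bag-of-words vector
            return self.net(bow)         # raw logits; softmax is applied by the caller

    def classify(model, bow_vec, threshold=0.7):
        # bow_vec: (1, vocab_size); threshold is an illustrative value
        probs = torch.softmax(model(bow_vec), dim=-1)
        conf, idx = probs.max(dim=-1)
        if conf.item() < threshold:      # low confidence: ask the user to pick the intent
            return None
        return CLASSES[idx.item()]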
Topic-specific frequently asked questions (FAQs)
The purpose of this module is to answer some standard questions users may have on the topic TruthBot is tuned to. Since standard answers to these FAQs are already available, it is possible to create a knowledge-base from which these questions can be directly answered and reduce the latency of the responses. In the following subsection, we discuss the case study of the FAQs on COVID-19 that this bot is currently tuned to.
Case study: COVID-19.
For COVID-19, websites like WHO (https://covid19.who.int/), CDC (https://www.cdc.gov/), and several government health departments (e.g., the Indian Ministry of Health and Family Welfare (https://www.mohfw.gov.in/)) provide a large collection of FAQs and myth-busters. We first create a knowledge-base by web-scraping those articles from these websites. Fig. 4 shows the flowchart of this module. The Text Scraping Unit (TSU) is built from several Python libraries. The TSU downloads the HTML content of the target webpage (e.g., the FAQ page of CDC) by sending an HTTP request to the associated URL and uses it to construct a parse tree using html.parser. The useful FAQs and the paragraphs answering each FAQ are extracted from the parse tree and stored in the local knowledge-base.
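A minimal sketch of the TSU's scraping step using requests and BeautifulSoup with html.parser; the CSS selectors below are hypothetical placeholders, since each source site needs its own extraction rules.

    import requests
    from bs4 import BeautifulSoup

    def scrape_faqs(url):
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")   # build the parse tree
        faqs = {}
        for item in soup.select("div.faq-item"):    # hypothetical selector
            q = item.select_one(".question")        # hypothetical selector
            a = item.select_one(".answer")          # hypothetical selector
            if q and a:
                faqs[q.get_text(strip=True)] = a.get_text(" ", strip=True)
        return faqs                                 # question -> answer paragraph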
To augment the local knowledge-base, we also employ automatic question generation from the answer texts. This is done such that a slightly rephrased question gets a better similarity with one of the questions in our knowledge-base. The answer, however, remains the same for all the questions generated from that text. The most straightforward means for this task is answer-aware question generation (Lopez et al., 2020). In answer-aware question generation, the model is presented with the answer and a passage to generate a question for that answer, considering the passage as the context. As the answer-aware models require answers for generating questions, we first split the passage into sentences and select a subset based on named-entity recognition and noun-phrase extraction. We have used a text-to-text transformer model (Raffel et al., 2019) for question generation. The model is fine-tuned in a multitask way using task prefixes with the help of the SQuADv1 dataset (Rajpurkar et al., 2016), along with pre-trained transformers (specifically seq-2-seq models), on paragraphs that provide answers to COVID-19 related FAQs extracted from online databases.
When a user sends the query, we need to find the appropriate question that matches this query. We use BERT (Devlin et al., 2018) fine-tuned on the CORD-19 dataset to generate the contextual embedding for each question sentence in the corpus. Then, we use cosine similarity to match a query with one of the questions in the local knowledge-base. A cosine similarity score of 0.85 is used as an empirically determined threshold for the correctness of the matchings. If a query scores above this threshold, we return the answer corresponding to the matched question in the corpus. Otherwise, the query is transferred to the Fact Check module.
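A sketch of the query-matching step; the sentence-transformers library is used here as a stand-in for the fine-tuned BERT encoder described above, and the 0.85 threshold is the one reported in the text.

    from sentence_transformers import SentenceTransformer, util

    encoder = SentenceTransformer("all-MiniLM-L6-v2")   # stand-in for fine-tuned BERT

    def match_faq(query, questions, answers, threshold=0.85):
        q_emb = encoder.encode(query, convert_to_tensor=True)
        kb_emb = encoder.encode(questions, convert_to_tensor=True)
        scores = util.cos_sim(q_emb, kb_emb)[0]         # cosine similarity to each question
        best = int(scores.argmax())
        if float(scores[best]) >= threshold:
            return answers[best]
        return None   # below threshold: hand the query to the Fact Check module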
Custom information query
Several topic-specific custom modules can be added to TruthBot. The purpose of these modules varies with the kind of topic we tune the chatbot to. As an example, we explain one such custom information module for COVID-19 below.
Case study: COVID-19.
Currently, TruthBot is running a custom information module, called AREASTAT. The goal of this module is to provide the user with the infection, death, recovery statistics of a city, state, or country. Currently, the bot responds to the finer AREASTAT queries on Indian states and cities. For other countries, it provides country-level aggregate information. However, this can be easily extended with an appropriate database providing the finer information for the cities and states of other countries.
After the intent of the query is classified as AREASTAT, the bot queries the databases with the name of the city, state, or country extracted from the query. The following workflow is repeated at most twice until a result is found (a minimal code sketch is given after the list).
1. The name of the place is extracted from the query.
2. The place is looked up in three databases (state-wise, district-wise, and country-wise) in order.
3. If a match is found in a database, the results of the number of COVID-19 cases, partitioned into confirmed, recovered, active, and deceased, are sent to the user.
4. If no match is found in any database, the user is sent a message asking to enter only the name of the place, to ensure that there is no spelling mistake.

Figure 5: Custom information query (AREASTAT) layer.
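The sketch below assumes each database is a plain dictionary keyed by place name; the field names are illustrative.

    def areastat(place, state_db, district_db, country_db):
        key = place.strip().title()
        for db in (state_db, district_db, country_db):   # look up in order
            record = db.get(key)
            if record:
                return ("Confirmed: {confirmed} | Active: {active} | "
                        "Recovered: {recovered} | Deceased: {deceased}").format(**record)
        return None   # caller asks the user to re-enter only the place name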
Verifying if the query is about AREASTAT.
The AREASTAT queries follow a pattern, e.g., "what are the number of cases in <place name>", "number of COVID-19 cases in <place name>", "how many people are infected with covid in <place name>", etc. The query is first processed for common phrases such as "number of cases" or "how many" followed by the place name.
However, since the database for area-wise statistics available to us is only for COVID-19, we need to distinguish similar queries that ask the statistics for a different topic, e.g., the death in a cyclone. We first identify if the query has any COVID-19 related keywords along with the phrases discussed above. Once the query is confirmed to be a COVID-19 statistics query, we detect the place name using the usual patterns of such AREASTAT questions.
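A simplified sketch of this pattern check; the actual phrase list used by the bot is larger, and the regular expressions here are illustrative.

    import re

    COVID = re.compile(r"covid|corona", re.I)
    STAT = re.compile(
        r"(?:number of .*?cases|how many .*?(?:infected|cases))"
        r".*?\bin\s+(?P<place>[\w .]+?)\s*\??$", re.I)

    def extract_place(query):
        if not COVID.search(query):   # statistics about some other topic
            return None               # hand over to the GEN (Google search) module
        m = STAT.search(query)
        return m.group("place").strip() if m else None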
There are two ways in which the statistics query may not be responded to: (a) if the statistics asked for are not related to COVID-19, or (b) if the place name is not available in the database. In the former case, the query is transferred to the GEN module for a Google search of the query. In the latter, the user is responded to with a message asking to re-enter the place correctly.
Statistics databases. The covid19india API crowdsources the latest number of COVID-19 cases from various sources pertaining to each state and city of India. For countries other than India, the data is scraped from the Worldometers website at the time when the query is received.
Fake news alerting
Several responsible news agencies bust fake news and post such analyses on the dedicated fact-check sections of their websites. If a query is classified as FAKE, it is processed through the Fact Check module, which uses the Google Fact Check Claim Search API to search for already fact-checked claims by sending an HTTP request. A typical URL for calling the API consists of the query, language code, page size, and an API key. An API call can return a nested dictionary (denoted by FCR) of multiple objects corresponding to the number of fact-checked claims available on different fact-checker websites (e.g., Alt News). For each of the retrieved results, we store the TEXT, TEXTUAL RATING, and URL attributes. For each result FCR_i, we assign a relevance score rs_i that computes the semantic matching similarity (based on cosine similarity) between the contextual embedding of the TEXT attribute of FCR_i and the contextual embedding of the query. The results in FCR are sorted based on their relevance scores, and the top-k (k <= 3) results are displayed to the user if their relevance scores exceed a predefined (empirically determined) cutoff. If the query language is a language other than English, the results are displayed after translating to the original language using the Language Processor module. If the API call does not return any result, or the relevance scores of all the returned results are below the cutoff, the query is forwarded to the Google search module. A regional language query is forwarded to the Google search module in its original language. The flow-chart for the Fact Check module is shown in Fig. 6.
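A sketch of the API call; the endpoint and parameter names below follow the public Google Fact Check Tools claims:search API, while the request/response handling of the deployed bot may differ in detail.

    import requests

    ENDPOINT = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

    def fact_check(query, api_key, lang="en", page_size=10):
        params = {"query": query, "languageCode": lang,
                  "pageSize": page_size, "key": api_key}
        claims = requests.get(ENDPOINT, params=params, timeout=10).json().get("claims", [])
        results = []
        for claim in claims:
            review = (claim.get("claimReview") or [{}])[0]   # first fact-checker review
            results.append({"text": claim.get("text", ""),
                            "rating": review.get("textualRating", ""),
                            "url": review.get("url", "")})
        return results   # to be ranked by cosine similarity against the query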
Google search articles
When a query is classified as GEN, or the FAQ and Fact Check modules do not produce any response to a query in the FAQ or FAKE category respectively, the query is forwarded to the Google Search module, which retrieves results by performing a Google search using the Google search library. The query is fed to the search() method of the Google search library, and the top two URLs (already sorted based on their relevance to the query) are retrieved. For each of the two URLs obtained in the previous step, we scrape the webpage so that we can perform summarization on the scraped data. After scraping the URL, the content is summarized using TextRank (Mihalcea and Tarau, 2004), a graph-based summarization algorithm. To compute the edge weights of the graph used in TextRank, we use three different similarity metrics:
1. Cosine similarity between sentences using the TF-IDF model,
2. Longest common substring between any two sentences,
3. BM25 / Okapi-BM25 (Robertson and Zaragoza, 2009), a bag-of-words ranking function used for information retrieval tasks.
The obtained summary and the URL are displayed to the user. If the original query is in a low-resource language and the scraped webpage is in the same language, no translation is required. However, if the obtained webpages are in English, then the language processor (LP) module (see Fig. 1) translates the summary into the language of the original query before displaying it to the user.
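A compact sketch of the TF-IDF cosine-similarity variant of TextRank (the first of the three edge weightings above), built on scikit-learn and networkx; the deployed bot may use a library implementation instead.

    import networkx as nx
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def textrank_summary(sentences, n=3):
        # edge weights: TF-IDF cosine similarity between sentence pairs
        sim = cosine_similarity(TfidfVectorizer().fit_transform(sentences))
        scores = nx.pagerank(nx.from_numpy_array(sim))
        top = sorted(scores, key=scores.get, reverse=True)[:n]
        return " ".join(sentences[i] for i in sorted(top))   # keep document order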
Extension to low-resource languages
One distinguishing factor of TruthBot is that it is capable of responding in multiple languages. This is particularly helpful for low-resource language users, since social media and instant messengers have significant user bases in these languages, but technologies for various applications (including true-information-finding applications) are not well developed for them. We use the language detection and translation service of Google Translate in the QPU and language processor modules in Fig. 1. In the QPU, if the detected language is different from English, then that information is stored, and the query is translated into English and sent to the later modules. At the language processor module, the response of the bot is translated back into the original language of the query and passed through the conversational AI module to be displayed on the user's screen. Fig. 8 shows the responses to a typical query in Hindi in the IFCN chatbot and in TruthBot.
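A sketch of the language round-trip using the unofficial googletrans client; the deployed bot relies on Google's translation service, and the exact client library is an assumption here.

    from googletrans import Translator   # unofficial Google Translate client

    translator = Translator()

    def to_english(text):
        lang = translator.detect(text).lang          # e.g., 'hi' for Hindi
        if lang == "en":
            return text, lang
        return translator.translate(text, dest="en").text, lang

    def back_to_user(text, lang):
        # translate the bot's English response back to the query language
        return text if lang == "en" else translator.translate(text, dest=lang).text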
Evaluation dataset
To evaluate TruthBot, we curated a dataset consisting of 79 queries. The queries belonged to four categories: FAQ, FAKE, GEN, and AREASTAT. The FAQ queries were manually curated from WHO and CDC, which provide general information on COVID-19. For FAKE and GEN queries, we collected the most recent (as of August 10, 2020) fake information and true news articles on COVID-19 from different media outlets (e.g., https://www.indiatoday.in/). Single-sentence queries that conveyed the main message were curated from the collected articles. We curated the AREASTAT queries manually. The list of queries and the responses of the two chatbots are provided in the supplementary materials. In the quantitative comparison of the response accuracy, we used the first three categories, since the textual similarity between AREASTAT queries and responses is not informative about accuracy.
Evaluating response accuracy
First, we wanted to evaluate the accuracy of TruthBot in retrieving the correct article available in the online or offline databases in response to a specific query. For this, we designed a metric called 'response accuracy', which measures the similarity of the retrieved response against the actual query. We used cosine similarity between the main text of the retrieved response and the actual query for computing response accuracy. Results of TruthBot were compared against those of the chatbot by IFCN. For the IFCN chatbot, the sentence following the phrase "Claim rated false" or "Claim rated misleading" was selected as the main text of the response. For TruthBot, for FAKE queries, the sentence following the phrase "Claim" was selected as the main text, whereas for FAQ and GEN queries, the complete response was compared against the actual query. The results are shown in Fig. 9. Since the IFCN chatbot is specialized for fact-checking, the responses that it produces for FAQ and GEN queries may not be meaningful. However, since the response accuracy measures the textual similarity between the response and the query, it also indicates whether relevant articles were retrieved by the chatbot. For FAQ and GEN queries, TruthBot's average response accuracy was 35.64% and 68.2% better than that of the IFCN chatbot.
Chatbot satisfaction survey
We conducted a human-centric evaluation to determine how satisfied a user would be with the responses received from TruthBot compared to the IFCN chatbot. The subjects were chosen through an open call sent via email, and 64 users participated in this survey. The age groups of the participants were: 10-30 years (80.3%), 31-50 years (16.3%), and above 50 years (3.4%). Among them, 20.2% were female and 79.2% were male. Each user was assigned 10 queries (2 AREASTAT, 3 FAKE, 2 FAQ, and 3 GEN) and the corresponding responses from TruthBot and the IFCN chatbot. The queries were randomly chosen from the set of queries described in Section 4.1 so that we obtained multiple ratings (by multiple users) for a single query. Each user was instructed to rate the response of each bot based on how satisfied they were with it (1 for least and 5 for most) for each of their assigned queries. The result of the user survey is shown in Fig. 10. For each of the query classes, TruthBot achieved a much better user rating than the IFCN chatbot. For AREASTAT, FAQ, and GEN, the median user rating of TruthBot was 5 compared to a median rating of 2 for the IFCN chatbot. Even for FAKE queries (for which the IFCN chatbot is specialized), TruthBot achieved a better rating (median rating 4 compared to 3 for the IFCN chatbot).
User interface study
Finally, we also compared TruthBot against the IFCN bot based on user feedback regarding different aspects of the user interface: Usefulness, Ease of Use, Credibility, Precision, and Value (interaction design.org, 2020). We obtained responses from 51 users for this survey. In the survey, the users had to rate the bots on each of the above aspects, with possible ratings from 1 (worst) to 5 (best). The result of the user survey is shown in Fig. 11. For each of the above aspects, TruthBot obtained better ratings than the IFCN bot. Specifically, for Usefulness and Precision, the mean rating for TruthBot was, on average, 2 times better than that of the IFCN bot. The p-values of the t-test for statistical significance on all the aspects were < 0.00001.
Summary and Future Work
We presented a multilingual and multipurpose chatbot that provides almost all aspects of the information on a topic that a user may be interested in. Such chatbots can serve as a single-point destination for users during situations such as COVID-19. Performance evaluation and user surveys (as exemplified for COVID-19) also demonstrate the superiority of the bot over existing options (the IFCN chatbot) in providing multi-faceted, well-searched information on a specific topic. The bot can be customized for other topics too. A primary goal of this paper is to introduce and exhibit the concept of such holistic information dissemination through a chatbot and to provide evaluations that show its efficacy. The chatbot still depends on a number of external databases. While this may not be completely avoidable, in our future work we plan to cache some of this information in local databases to reduce the dependency on such external factors, which will help with both response latency and reliability. We also plan to expand the scope of the chatbot to image and multimedia messages, which are often sources of misinformation.
Figure 1: Schematic workflow of TruthBot.
Figure 2: Deep neural network for query classification (number of nodes is representative).
Figure 3: Examples of the conversation with the user when TruthBot is confident (left) and not completely confident (right).
https://www.cdc.gov/coronavirus/2019-ncov/faq.html
Figure 4: Architecture of the FAQ layer.
Figure 6: Architecture of the Fact Check module.
Figure 7: Consolidated architecture of the Google search and summarizer layers.
Figure 8: Response of IFCN chatbot (left) and TruthBot (right) in Hindi.
Figure 9: Comparison of response accuracy for FAKE, FAQ, and GEN queries. The numbers in the square brackets on top of the plots denote the p-values of the t-test for statistical significance.
Figure 10: Comparison of user satisfaction while using TruthBot and the IFCN chatbot for AREASTAT, FAKE, FAQ, and GEN queries. The results are based on a user survey. The numbers in the square brackets on top of the plots denote the p-values of the t-test for statistical significance.
Figure 11: Comparison of the GUIs of the two chatbots along different metrics.
Table 1: An example query and response obtained from two chatbots - IFCN (a baseline) and TruthBot (our method).

Query: monkeys are being tortured in laboratories to find COVID-19 vaccine

IFCN chatbot (click https://wa.me/17272912606?text=hi) response:
(1) Claim rated False: A vaccine for coronavirus was available since 2001. Fact checked by Newsmeter.in on 04/26/2020. This is a dog vaccine wrongly claimed as being a vaccine for COVID-19. (provides the link)
(2) Claim rated False: A WhatsApp forward claims that the COVID-19 vaccine is ready, was announced by U.S. President Donald Trump, and will be launched next Sunday ... Fact checked by Newschecker on 03/23/2020. (provides the link)

TruthBot response:
(1) (provides link to news which reads) Fact check: This disturbing viral video of animal cruelty is not of COVID-19 vaccine test. A disturbing video purportedly showing animal cruelty circulating on social media claims: this is how monkeys are being tortured in laboratories to test the COVID-19 vaccine. Viral video showing animal torture is from last year, reportedly from a German laboratory ...
(2) (provides link to news which reads) Ferrets are among the animals researchers are infecting with the Covid-19 virus as they search for a suitable species to use in tests ...

Table 2: Performance metrics of the DNN classification.
https://g.co/trends/7tkFb
https://www.poynter.org/fact-checking/2020/power-in-your-pocket-how-to-use-the-ifcns-new-whatsapp-chatbot/
Collectively, we received 2,270 different queries to our bot up to September 9, 2020. The performance of the bot will improve as this training set becomes richer.
https://allenai.org/data/cord-19
https://api.covid19india.org/
https://www.worldometers.info/coronavirus/
https://www.altnews.in/
https://translate.google.com
Experimental Results

The experiments were performed to validate the efficacy of TruthBot against its closest competitor, the IFCN chatbot. The comparison considers three metrics. First, we perform a quantitative comparison of response accuracy, which compares the textual similarity of the query and the response provided by each chatbot. Second, we run a user survey to capture users' qualitative satisfaction with the responses of the two chatbots. Finally, we conduct a user survey about the user interface and the presentation of the curated information in the two chatbots.
Tian Bian, Xi Xiao, Tingyang Xu, Peilin Zhao, Wenbing Huang, Yu Rong, and Junzhou Huang. Rumor detection on social media with bi-directional graph convolutional networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 549-556, 2020.
Alessandro Bondielli and Francesco Marcelloni. A survey on fake news and rumour detection techniques. Information Sciences, 497:38-55, 2019.
Petter Bae Brandtzaeg and Asbjørn Følstad. Why people use chatbots. In International Conference on Internet Science, pages 377-392. Springer, 2017.
Aengus Bridgman, Eric Merkley, Peter John Loewen, Taylor Owen, Derek Ruths, Lisa Teichmann, and Oleg Zhilin. The causes and consequences of COVID-19 misperceptions: Understanding the role of news and social media. 2020.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
Asbjørn Følstad, Petter Bae Brandtzaeg, Tom Feltwell, Effie L-C Law, Manfred Tscheligi, and Ewa Luger. Chatbots for social good. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems, 2018.
Forbes.com. Electronic, 2019. URL https://www.forbes.com/sites/gilpress/2019/10/02/ai-stats-news-86-of-consumers-prefer-to-interact-with-a-human-agent-rather-than-a-ch#2d2cb9432d3b.
GrandViewResearch. Electronic, 2017. URL https://www.grandviewresearch.com/industry-analysis/chatbot-market.
IFCN. IFCN fact checking organizations on WhatsApp. Online, 2020. URL https://faq.whatsapp.com/general/ifc-n-fact-checking-organizations-on-whatsapp.
interaction-design.org. The 7 factors that influence user experience. Online, 2020. URL https://www.interaction-design.org/literature/article/the-7-factors-that-influence-user-experience.
Andrew Kohut, Carroll Doherty, Michael Dimock, and Scott Keeter. Americans spending more time following the news. Pew Research Center, 2010.
Zhaojiang Lin, Peng Xu, Genta Indra Winata, Farhad Bin Siddique, Zihan Liu, Jamin Shin, and Pascale Fung. CAiRE: An end-to-end empathetic chatbot. In AAAI, pages 13622-13623, 2020.
Luis Enrico Lopez, Diane Kathryn Cruz, Jan Christian Blaise Cruz, and Charibeth Cheng. Transformer-based end-to-end question generation. 2020.
Rada Mihalcea and Paul Tarau. TextRank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 404-411, Barcelona, Spain, July 2004. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/W04-3252.
Jeff Z. Pan, Siyana Pavlova, Chenxi Li, Ningxi Li, Yangmei Li, and Jinshuo Liu. Content based fake news detection using knowledge graphs. In International Semantic Web Conference, pages 669-683. Springer, 2018.
Nicole M. Radziwill and Morgan C. Benton. Evaluating quality of chatbots and intelligent conversational agents. arXiv preprint arXiv:1704.04579, 2017.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. 2019.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392, Austin, Texas, November 2016. Association for Computational Linguistics. doi: 10.18653/v1/D16-1264. URL https://www.aclweb.org/anthology/D16-1264.
Stephen Robertson and Hugo Zaragoza. The probabilistic relevance framework: BM25 and beyond. Now Publishers Inc, 2009.
Kai Shu, Amy Sliva, Suhang Wang, Jiliang Tang, and Huan Liu. Fake news detection on social media: A data mining perspective. ACM SIGKDD Explorations Newsletter, 19(1):22-36, 2017.
Heung-Yeung Shum, Xiao-dong He, and Di Li. From Eliza to XiaoIce: challenges and opportunities with social chatbots. Frontiers of Information Technology & Electronic Engineering, 19(1):10-26, 2018.
Emily K. Vraga and Melissa Tully. News literacy, social media behaviors, and skepticism toward information on social media. Information, Communication & Society, pages 1-17, 2019.
Xinyi Zhou and Reza Zafarani. A survey of fake news: Fundamental theories, detection methods, and opportunities. arXiv preprint arXiv:1812.00315, 2018.
Xinyi Zhou and Reza Zafarani. Network-based fake news detection: A pattern-driven approach. ACM SIGKDD Explorations Newsletter, 21(2):48-60, 2019.
Xinyi Zhou, Reza Zafarani, Kai Shu, and Huan Liu. Fake news: Fundamental theories, detection strategies and challenges. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pages 836-837, 2019.
| [] |
[
"Identifying Relationships Among Sentences in Court Case Transcripts using Discourse Relations",
"Identifying Relationships Among Sentences in Court Case Transcripts using Discourse Relations"
] | [
"Gathika Ratnayaka gathika.14@cse.mrt.ac.lk \nDepartment of Computer Science & Engineering\nUniversity of Moratuwa\n\n",
"Thejan Rupasinghe \nDepartment of Computer Science & Engineering\nUniversity of Moratuwa\n\n",
"Nisansa De Silva \nDepartment of Computer Science & Engineering\nUniversity of Moratuwa\n\n",
"Menuka Warushavithana \nDepartment of Computer Science & Engineering\nUniversity of Moratuwa\n\n",
"Viraj Gamage \nDepartment of Computer Science & Engineering\nUniversity of Moratuwa\n\n",
"Amal Shehan Perera \nDepartment of Computer Science & Engineering\nUniversity of Moratuwa\n\n"
] | [
"Department of Computer Science & Engineering\nUniversity of Moratuwa\n",
"Department of Computer Science & Engineering\nUniversity of Moratuwa\n",
"Department of Computer Science & Engineering\nUniversity of Moratuwa\n",
"Department of Computer Science & Engineering\nUniversity of Moratuwa\n",
"Department of Computer Science & Engineering\nUniversity of Moratuwa\n",
"Department of Computer Science & Engineering\nUniversity of Moratuwa\n"
] | [] | Case Law has a significant impact on the proceedings of legal cases. Therefore, the information that can be obtained from previous court cases is valuable to lawyers and other legal officials when performing their duties. This paper describes a methodology of applying discourse relations between sentences when processing text documents related to the legal domain. In this study, we developed a mechanism to classify the relationships that can be observed among sentences in transcripts of United States court cases. First, we defined relationship types that can be observed between sentences in court case transcripts. Then we classified pairs of sentences according to the relationship type by combining a machine learning model and a rule-based approach. The results obtained through our system were evaluated using human judges. To the best of our knowledge, this is the first study where discourse relationships between sentences have been used to determine relationships among sentences in legal court case transcripts.Index Terms-discourse relations, natural language processing, machine learning | 10.1109/icter.2018.8615485 | [
"https://arxiv.org/pdf/1809.03416v2.pdf"
] | 52,181,489 | 1809.03416 | c097446a67f5f2ee2445211e76cd0b598b08870d |
Identifying Relationships Among Sentences in Court Case Transcripts using Discourse Relations
15 Sep 2018
Gathika Ratnayaka gathika.14@cse.mrt.ac.lk
Department of Computer Science & Engineering
University of Moratuwa
Thejan Rupasinghe
Department of Computer Science & Engineering
University of Moratuwa
Nisansa De Silva
Department of Computer Science & Engineering
University of Moratuwa
Menuka Warushavithana
Department of Computer Science & Engineering
University of Moratuwa
Viraj Gamage
Department of Computer Science & Engineering
University of Moratuwa
Amal Shehan Perera
Department of Computer Science & Engineering
University of Moratuwa
Identifying Relationships Among Sentences in Court Case Transcripts using Discourse Relations
15 Sep 2018
Index Terms: discourse relations, natural language processing, machine learning
Case Law has a significant impact on the proceedings of legal cases. Therefore, the information that can be obtained from previous court cases is valuable to lawyers and other legal officials when performing their duties. This paper describes a methodology of applying discourse relations between sentences when processing text documents related to the legal domain. In this study, we developed a mechanism to classify the relationships that can be observed among sentences in transcripts of United States court cases. First, we defined relationship types that can be observed between sentences in court case transcripts. Then we classified pairs of sentences according to the relationship type by combining a machine learning model and a rule-based approach. The results obtained through our system were evaluated using human judges. To the best of our knowledge, this is the first study where discourse relationships between sentences have been used to determine relationships among sentences in legal court case transcripts.Index Terms-discourse relations, natural language processing, machine learning
I. INTRODUCTION
Case Law can be described as a part of common law, consisting of judgments given by higher (appellate) courts in interpreting the statutes (or the provisions of a constitution) applicable in cases brought before them [1]. In order to make use of the case law, lawyers and other legal officials have to manually go through related court cases to find relevant information. This task requires a significant amount of effort and time. Therefore, automatic extraction of Information from legal court case transcripts would generate numerous benefits to the people working in the legal domain.
From this point onwards we are referring to the court case transcripts as court cases. In the process of extracting information from legal court cases, it is important to identify how arguments and facts are related to one another. The objective of this study is to automatically determine the relationships between sentences which can be found in documents related to previous court cases of United States Supreme Court. Transcripts of U.S. court cases were obtained from FindLaw 1 following a method similar to numerous other artificial intelligence applications in the legal domain [2]- [6].
When a sentence in a court case is considered, it may provide details on arguments or facts related to a particular legal situation. Some sentences may elaborate on the details provided in the previous sentence. It is also possible that the following sentence has no relationship with the details in the previous sentence and instead provides details about a completely new topic. Another type of relationship is observed when a sentence provides details that contradict those in the previous sentence. Determining these relationships among sentences is vital to identifying the information flow within a court case. To that end, it is important to consider the way in which clauses, phrases, and text are related to each other. It can be argued that identifying relationships between sentences would make the process of information extraction from court cases more systematic, given that it provides a better picture of the information flow of a particular court case. To achieve this objective, we used a discourse-relations-based approach to determine the relationships between sentences in legal documents.
Several theories related to discourse structures have been proposed in recent years. Cross-document Structure Theory (CST) [7], the Penn Discourse TreeBank (PDTB) [8], Rhetorical Structure Theory (RST) [9], [10], and the Discourse Graph Bank [11] can be considered prominent discourse structures. The main difference among these discourse structures is that each defines relation types in a different manner, mainly because different discourse structures are intended for different purposes. In this study, we base our discourse structure on the one proposed by CST.
A sentence in a court case transcript can contain different types of details, such as descriptions of a scenario, legal arguments, legal facts, or legal conditions. The main objective of identifying relationships between sentences is to determine which sentences are connected together within a single flow. If there is a weak relation or no relation between two sentences, it probably implies that those two sentences provide details on different topics. Consider the sentence pair taken from Lee v. United States [12] shown in Example 1.

Example 1
• Sentence 1.1: The Government makes two errors in urging the adoption of a per se rule that a defendant with no viable defense cannot show prejudice from the denial of his right to trial.
• Sentence 1.2: First, it forgets that categorical rules are ill suited to an inquiry that demands a "case-by-case examination" of the "totality of the evidence".
It can be seen that sentence 1.2 elaborates further on the details provided by sentence 1.1 to give a more comprehensive idea of the topic discussed in sentence 1.1. These two sentences are connected to each other within the same flow of information. This can be considered an Elaboration relationship, which is a relation type described in CST. Now, consider the sentence pair shown in Example 2, which was also taken from Lee v. United States [12]. In Example 2, it can be seen that the two sentences have the Follow Up relationship as defined in CST, but the two sentences are still connected together within the same information flow of the court case. There are also situations where sentences show characteristics common to multiple discourse relations. Therefore, several discourse relations can be grouped together based on their properties to make the process of determining relationships between sentences in court case transcripts more systematic.
The two sentences for Example 3 were also taken from Lee v. United States [12]:
Example 3
• Sentence 3.1: The question is whether Lee can show he was prejudiced by that erroneous advice.
• Sentence 3.2: A claim of ineffective assistance of counsel will often involve a claim of attorney error "during the course of a legal proceeding"-for example, that counsel failed to raise an objection at trial or to present an argument on appeal.
The sentence 3.2 follows sentence 3.1. A significant connection between these two sentences cannot be observed. It can also be seen that sentence 3.2 starts a new flow by deviating from the topic discussed in sentence 3.1. These observations which were provided by analyzing court cases emphasize the importance of identifying relationships between sentences.
In this study, we defined the relationship types that are important to be considered when it comes to information extraction from court cases. Next, for each of the relationship type we defined, we identified the relevant CST relations [7]. Finally, we developed a system to predict the relationship between given two sentences of a court case transcript by combining a machine learning model and a rule-based component.
The next section provides an overview of how discourse relations have been applied in different domains including the legal domain. Section III describes the methodology which was followed when developing our system. Section IV describes the approaches we took to evaluate the system. The results obtained by evaluating the system are analyzed in Section V. Finally, we conclude our discussion in Section VI.
II. BACKGROUND
Understanding how information is related to each other in machine-readable texts has always been a challenge when it comes to Natural Language Processing. Determining the way in which two textual units are connected to each other is helpful in different applications such as text classification, text summarization, understanding the context, evaluating answers provided for a question. Analyzing of discourse relationships or rhetorical relationships between sentences can be considered as an effective approach to understanding the way how two textual units are connected with each other.
Discourse relations have been applied in different application domains related to NLP. [13] describes a CST [7] based text summarization approach that involves mechanisms such as identifying and removing redundancy in a text by analyzing discourse relations among sentences. [14] compares and evaluates different methods of text summarization based on RST [10]. In another study [15], text summarization was carried out by ranking sentences based on the number of discourse relations existing between sentences. [16]-[18] are some other studies where discourse analysis has been used for text summarization. These studies suggest that discourse relationships are useful when it comes to identifying information that discusses the same topic or entity and to capturing information redundancy. Analysis of discourse relations has also been used in question answering systems [19], [20] and for natural language generation [21].
In the study [22], discourse relations existing between sentences are used to generate clusters of similar sentences from document sets. This study shows that a pair of sentences can show properties of multiple relation types which are defined in CST [7]. In order to facilitate text clustering process, discourse relations have been redefined in this study by categorizing overlapping or closely related CST relations together. In [23], the discourse relationships which are defined in [22] have been used for text summarization based on text clustering. The studies [22], [23] emphasize how discourse relationships can be defined according to the purpose and objective of the study in order to enhance the effectiveness.
When it comes to the legal domain, [24] discusses the potential of discourse analysis for extracting information from legal texts. [25] describes a classifier which determines the rhetorical status of a sentence from a corpus of legal judgments. In this study, rhetorical annotation scheme is defined for legal judgments. The study [26] provides details on summarization of legal texts using rhetorical annotation schemes. The studies [25], [26] focus mainly on the rhetorical status in a sentence, but not on the relationships between sentences. An approach which can be used to detect the arguments in legal text using lexical, syntactic, semantic and discourse properties of the text is described in [27].
In contrast to other studies, this study is intended to identify relationships among sentences in court case transcripts by analyzing discourse relationships between sentences. Identifying relationships among sentences will be useful in the task of determining how information is flowed within a court case.
III. METHODOLOGY
A. Defining Discourse Relationships in Court Cases
Five major relationship types were defined by examining the nature of relationships that can be observed between sentences in court case transcripts.
• Elaboration -One sentence adds more details to the information provided in the preceding sentence or one sentence develops further on the topic discussed in the previous sentence.
• Redundancy -Two sentences provide the same information without any difference or additional information.
• Citation -A sentence provides references relevant to the details provided in the previous sentence.
• Shift in View -Two sentences are providing conflicting information or different opinions on the same topic or entity.
• No Relation - No relationship can be observed between the two sentences. One sentence discusses a topic which is different from the topic discussed in the other sentence.

After defining these relationships, we adopted the rhetorical relations provided by CST [7] to align with our definitions, as shown in Table I. It is very difficult to observe the same sentence appearing more than once within nearby sentences in court case transcripts. However, we have included Redundancy as a relationship type in order to identify redundant information in cases where the two sentences of a sentence pair are the same.
B. Expanding the Dataset
A Machine Learning model was developed in order to determine the relationship between two sentences in court cases. We used the publicly available dataset of the CST Bank [28] to train the model. The dataset obtained from the CST Bank contains sentence pairs annotated according to the CST relation types. Since we have a labeled dataset [28], we performed supervised learning to develop the machine learning model. A Support Vector Machine (SVM) was used because it has shown promising results in previous studies where discourse relations have been used to identify relationships between sentences [22], [23]. Table II provides details on the number of sentence pairs in the dataset for each relationship type. By examining the CST relationship types available in the dataset (Table II), it can be observed that there is no relationship type indicating that two sentences have no relationship. However, No Relation is a fundamental relation type that can be observed between two sentences in court case transcripts. Therefore, we expanded the dataset by manually annotating 50 pairs of sentences in which no relationship between the two sentences can be found. This new class was named No Relation. The 50 annotated sentence pairs were obtained from previous court case transcripts.
A sentence pair is made up of a source sentence and a target sentence. The source sentence is compared with the target sentence when determining the relationship that is present in the sentence pair. For example, if the source sentence contains all the information in target sentence with some additional information, the sentence pair is said to have the subsumption relationship. Similarly, if the source sentence elaborates the target sentence, the sentence pair is said to have the elaboration relationship.
C. Determining the relationship between sentences using SVM Model
In order to train the SVM model with annotated data, features based on the properties that can be observed in a pair of sentences were defined. Before calculating the features related to words, we removed stop words in sentences to eliminate the effect of less significant words. Also, coreference resolution was performed on a given pair of sentences using the Stanford CoreNLP CorefAnnotator (coref) [29] in order to make the feature calculation more effective. The two sentences in Example 4 are also taken from Lee v. United States [12].

Example 4
• Sentence 4.1 (Target): Petitioner Jae Lee moved to the United States from South Korea with his parents when he was 13.
• Sentence 4.2 (Source): In the 35 years he has spent in this country, he has never returned to South Korea, nor has he become a U. S. citizen, living instead as a lawful permanent resident.

Here, "Petitioner Jae Lee" in the target sentence is referred to using the pronouns "he" and "his" in both sentences; the system replaces "he" and "his" with their representative mention, "Petitioner Jae Lee". By resolving coreferences, the calculation of the Noun Similarity, Verb Similarity, Adjective Similarity, Subject Overlap Ratio, Object Overlap Ratio, Subject Noun Overlap Ratio, and Semantic Similarity between Sentences features was made more effective.
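A minimal sketch of this coreference step is shown below, accessing the Stanford CoreNLP CorefAnnotator through the stanza client; the wrapper choice is our assumption, since the paper does not specify how the annotator is invoked.

```python
# Sketch of the coreference-resolution step using the Stanford CoreNLP
# CorefAnnotator, accessed via the stanza client (one possible wrapper).
# Requires a local CoreNLP download (CORENLP_HOME must be set).
from stanza.server import CoreNLPClient

text = ("Petitioner Jae Lee moved to the United States from South Korea "
        "with his parents when he was 13. In the 35 years he has spent "
        "in this country, he has never returned to South Korea.")

with CoreNLPClient(annotators=["tokenize", "ssplit", "pos", "lemma",
                               "ner", "parse", "coref"],
                   timeout=60000, memory="4G") as client:
    ann = client.annotate(text)
    for chain in ann.corefChain:
        # Each chain lists the mentions that co-refer, e.g. 'Petitioner
        # Jae Lee', 'his', 'he'; replacing pronouns with the representative
        # mention yields the substituted sentences described above.
        mentions = [" ".join(tok.word for tok in
                             ann.sentence[m.sentenceIndex]
                                .token[m.beginIndex:m.endIndex])
                    for m in chain.mention]
        print(mentions)
```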
All the features were calculated and normalized such that their values fall into the [0, 1] range. We defined 9 feature categories based on the properties that can be observed in a pair of sentences.

The following 5 feature categories were adopted mainly from [22], though we have made changes in implementation, such as the use of coreference resolution.
Cosine Similarities
The following cosine similarity values are calculated for a given sentence pair:
• Word Similarity
• Noun Similarity
• Verb Similarity
• Adjective Similarity

The following equation is used to calculate the above-mentioned cosine similarities:

$CosineSimilarity = \frac{\sum_{i=1}^{n} FV_{S,i} \times FV_{T,i}}{\sqrt{\sum_{i=1}^{n} (FV_{S,i})^2} \times \sqrt{\sum_{i=1}^{n} (FV_{T,i})^2}}$  (1)

Here, $FV_{S,i}$ and $FV_{T,i}$ represent the frequency vectors of the source sentence and the target sentence respectively. The Stanford CoreNLP POS Tagger (pos) [30] is used to identify nouns, verbs, and adjectives in sentences.
In calculating the Noun Similarity feature, singular and plural nouns, proper nouns, personal pronouns, and possessive pronouns are considered. Both superlative and comparative adjectives are considered when calculating the Adjective Similarity. The system ignores verbs that are lemmatized into be, do, or has when calculating the Verb Similarity feature, as priority should be given to effective verbs in sentences.
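A sketch of how these POS-filtered cosine features (Eq. 1) can be computed is given below; the (word, tag) pairs are assumed to come from a POS tagger such as the one the paper uses, and the tiny tagged sentences are invented for illustration.

```python
# Sketch of the POS-filtered cosine-similarity features (Eq. 1).
from collections import Counter
from math import sqrt

def cosine(fv_s: Counter, fv_t: Counter) -> float:
    dot = sum(fv_s[w] * fv_t[w] for w in fv_s)
    norm = (sqrt(sum(v * v for v in fv_s.values())) *
            sqrt(sum(v * v for v in fv_t.values())))
    return dot / norm if norm else 0.0

def pos_filtered_vector(tagged, prefixes) -> Counter:
    # e.g. prefixes=("NN",) keeps NN/NNS/NNP/NNPS for Noun Similarity
    return Counter(w.lower() for w, tag in tagged if tag.startswith(prefixes))

source = [("court", "NN"), ("rejected", "VBD"), ("the", "DT"), ("claim", "NN")]
target = [("the", "DT"), ("claim", "NN"), ("was", "VBD"), ("rejected", "VBN")]
noun_similarity = cosine(pos_filtered_vector(source, ("NN",)),
                         pos_filtered_vector(target, ("NN",)))
```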
Word Overlap Ratios
Two ratios are considered based on word overlapping: one measured in relation to the target sentence and one measured in relation to the source sentence. These ratios provide an indication of the equivalence of the two sentences. For example, in a relationship like Subsumption, the source sentence usually contains all the words in the target sentence. This property is also useful in determining relations such as Identity and Overlap (Partial Equivalence), which are based on the equivalence of two sentences.
$WOR(T) = \frac{Comm(T, S)}{Distinct(T)}$  (2)

$WOR(S) = \frac{Comm(T, S)}{Distinct(S)}$  (3)
WOR(T) and WOR(S) represent the word overlap ratios measured in relation to the target and source sentences respectively. Distinct(T) and Distinct(S) represent the numbers of distinct words in the target sentence and the source sentence respectively. The number of distinct common words between the two sentences is given by Comm(T, S).
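A direct transcription of Equations (2) and (3) follows, assuming stop words have already been removed as described earlier in this section.

```python
# Word overlap ratios of Eqs. (2)-(3): distinct common words divided by
# the distinct words of each sentence.
def word_overlap_ratios(target_words, source_words):
    t, s = set(target_words), set(source_words)
    common = len(t & s)
    wor_t = common / len(t) if t else 0.0   # Eq. (2)
    wor_s = common / len(s) if s else 0.0   # Eq. (3)
    return wor_t, wor_s
```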
Grammatical Relationship Overlap Ratios
Three ratios which represent the grammatical relationship between target and source sentences are considered.
• Subject Overlap Ratio

$SubjOverlap = \frac{Comm(Subj(S), Subj(T))}{Subj(S)}$  (4)

• Object Overlap Ratio

$ObjOverlap = \frac{Comm(Obj(S), Obj(T))}{Obj(S)}$  (5)

• Subject Noun Overlap Ratio

$SubjNounOverlap = \frac{Comm(Subj(S), Noun(T))}{Subj(S)}$  (6)

All these features are calculated with respect to the source sentence. Subj, Obj, and Noun represent the numbers of subjects, objects, and nouns respectively, and Comm gives the number of common elements.
The Stanford CoreNLP Dependency Parse Annotator (depparse) [31] is used here to identify subjects and objects. All subject types, including nominal subjects, clausal subjects, their passive forms, and controlling subjects, are taken into account in calculating the number of subjects. Direct and indirect objects are considered when calculating the number of objects. All subject and object types are referred from the Stanford typed dependencies manual [32].
Longest Common Substring Ratio
Longest Common Substring is the maximum length word sequence which is common to both sentences. When the number of characters in longest common substring is taken as n(LCS) and number of characters in source sentence is taken as n(S), Longest Common Substring Ratio (LCSR) can be calculated as,
$LCSR = \frac{n(LCS)}{n(S)}$  (7)
This value indicates, as a fraction, the part of the target sentence which is present in the source sentence. Thus, this feature will be useful especially in determining discourse relations such as Overlap, Attribution, and Paraphrase.
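A character-level dynamic-programming sketch of the LCSR feature (Eq. 7) is shown below; the quadratic-time formulation is one standard way to compute the longest common substring.

```python
# LCSR (Eq. 7): length of the longest common substring divided by the
# character length of the source sentence.
def lcs_ratio(source: str, target: str) -> float:
    best, prev = 0, [0] * (len(target) + 1)
    for i in range(1, len(source) + 1):
        curr = [0] * (len(target) + 1)
        for j in range(1, len(target) + 1):
            if source[i - 1] == target[j - 1]:
                curr[j] = prev[j - 1] + 1   # extend the common substring
                best = max(best, curr[j])
        prev = curr
    return best / len(source) if source else 0.0
```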
Number of Entities
The ratio between the numbers of named entities can be used as a measurement of the relationship between two sentences.
$NERatio = \frac{NE(S)}{\max(NE(S), NE(T))}$  (8)
NE(X) represents the number of named entities in a given sentence X. The Stanford CoreNLP Named Entity Recognizer (NER) [33] was used to identify named entities belonging to 7 types: PERSON, ORGANIZATION, LOCATION, MONEY, PERCENT, DATE, and TIME.
In addition to the above-mentioned features, the following features have been introduced to the system.
Semantic Similarity between Sentences
This feature is useful in determining the closeness between two sentences. For a given pair of words, semantic similarity provides the closeness between those two words. A method described in [34] is adopted when calculating the semantic similarity between two sentences, and the semantic similarity score for a pair of sentences is calculated using WordNet::Similarity [35].
$score = \text{Average}\left(\sum_{i=1}^{n} NounScore_i + \sum_{i=1}^{n} VerbScore_i\right)$  (9)
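The paper computes this score with the WordNet::Similarity package; the sketch below approximates the same idea with NLTK's WordNet interface (a stand-in, not the authors' tool), pairing each noun and verb with its best match in the other sentence.

```python
# Approximation of the semantic-similarity score (Eq. 9) using NLTK's
# WordNet interface. Requires: import nltk; nltk.download('wordnet').
from nltk.corpus import wordnet as wn

def best_match(word, candidates, pos):
    # Best path similarity between any senses of `word` and any candidate.
    scores = [s1.path_similarity(s2) or 0.0
              for s1 in wn.synsets(word, pos=pos)
              for c in candidates
              for s2 in wn.synsets(c, pos=pos)]
    return max(scores, default=0.0)

def sentence_semantic_score(nouns_s, nouns_t, verbs_s, verbs_t) -> float:
    scores = ([best_match(n, nouns_t, wn.NOUN) for n in nouns_s] +
              [best_match(v, verbs_t, wn.VERB) for v in verbs_s])
    return sum(scores) / len(scores) if scores else 0.0
```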
Transition Words and Phrases
Availability of a transition word or a transition phrase at the start of a sentence indicates that there is a high probability of a strong relationship with the previous sentence. For example, sentences beginning with transition words such as and or thus usually elaborate on the previous sentence. Phrases such as to make that or in addition at the beginning of a sentence also imply that the sentence elaborates on the details provided in the previous sentence. Considering these linguistic properties, two boolean features were defined.

1) Elaboration Transition: If the first word of the source sentence is a transition word which implies elaboration, such as and, thus, or therefore, or if a transition phrase is found within the first six words of the source sentence, this feature outputs 1. If both of the above conditions are false, the feature returns 0. We maintain two lists containing 59 transition words and 91 transition phrases which imply elaboration. Though it is difficult to include all transition phrases in the English language which imply the elaboration relationship, we can clearly say that if these phrases are present at the beginning of a sentence, the sentence is more than likely to elaborate on the previous sentence.

2) Follow Up Transition: If the source sentence begins with a word like however or although, or with phrases like in contrast or on the contrary, which imply that the source sentence follows up the target sentence, this feature outputs 1. Otherwise, the feature outputs 0.
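A sketch of the two boolean features follows; the word and phrase sets shown are tiny invented samples standing in for the 59-word and 91-phrase lists the system maintains.

```python
# Sketch of the two boolean transition features (illustrative lists only).
ELAB_WORDS = {"and", "thus", "therefore"}
ELAB_PHRASES = ("in addition", "to make that")
FOLLOWUP_WORDS = {"however", "although"}
FOLLOWUP_PHRASES = ("in contrast", "on the contrary")

def elaboration_transition(source: str) -> int:
    words = source.lower().split()
    if words and words[0].strip(",") in ELAB_WORDS:
        return 1
    window = " ".join(words[:6])             # phrase within first six words
    return int(any(p in window for p in ELAB_PHRASES))

def follow_up_transition(source: str) -> int:
    words = source.lower().split()
    if words and words[0].strip(",") in FOLLOWUP_WORDS:
        return 1
    return int(any(source.lower().startswith(p) for p in FOLLOWUP_PHRASES))
```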
Length Difference Ratio
This feature considers the difference of lengths between the source sentence and the target sentence. When length(S) and length(T ) represent the number of words in source sentence and target sentence respectively, Length Difference Ratio (LDR) is calculated as shown below.
$LDR = 0.5 + \frac{length(S) - length(T)}{2 \times \max(length(S), length(T))}$  (10)
In a relationship like Subsumption, the length of the source sentence has to be more than the length of the target sentence. In Identity relationship, both sentences are usually of the same length. These properties can be identified using this feature.
Attribution
This feature checks whether one sentence describes a detail in another sentence in a more descriptive manner. To determine this property, we check whether a word or phrase in one sentence is cited in the other sentence using quotation marks. This is also a boolean feature. The source sentence and target sentence in Example 5 were obtained from Turner v. United States [36]:

Example 5
• Sentence 5.1 (Target): Such evidence is 'material' . . . when there is a reasonable probability that, had the evidence been disclosed, the result of the proceeding would have been different.
• Sentence 5.2 (Source): A 'reasonable probability' of a different result is one in which the suppressed evidence 'undermines confidence in the outcome of the trial.'

It can be seen that the source sentence defines, or provides more details on, what is meant by "reasonable probability" in the target sentence. Such properties can be identified using this feature.
D. Determining Explicit Citation Relationships in Court Case Transcripts
In legal court case documents, several standard ways are used to indicate whence a particular fact or condition was obtained. The target sentence and source sentence in Example 6 are obtained from Lee v. United States [12].

Example 6
• Sentence 6.1 (Target): The decision whether to plead guilty also involves assessing the respective consequences of a conviction after trial and by plea.
• Sentence 6.2 (Source): See INS v. St. Cyr, 533 U. S. 289, 322-323 (2001).

The two sentences given in Example 6 are adjacent to each other. It can be clearly seen that the source sentence provides a citation for the target sentence. This is only one of the many ways of providing citations in court case transcripts.
After observing different ways of providing citations in court case transcripts, a rule-based mechanism to detect such citations was developed. If this rule-based system detects a citation relationship, the pair of sentences is assigned the Citation relationship, and such a pair is not input to the SVM model for further processing.
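The paper does not publish its exact rules, but the following hypothetical pattern illustrates what such a rule-based detector can look for; the regular expression targets U.S. reporter citations like the one in Example 6.

```python
# Hypothetical illustration of rule-based citation detection; the exact
# rules used in the paper are not published. The pattern targets U.S.
# reporter citations such as "See INS v. St. Cyr, 533 U. S. 289,
# 322-323 (2001)".
import re

CITATION = re.compile(
    r"(See\s+)?[A-Z][\w.&' ]+\s+v\.\s+[A-Z][\w.&' ]+,\s*"  # party names
    r"\d+\s+U\.?\s*S\.?\s+\d+(,\s*\d+(-\d+)?)?\s*"         # volume and pages
    r"\(\d{4}\)"                                           # decision year
)

def is_citation(source_sentence: str) -> bool:
    return bool(CITATION.search(source_sentence))

assert is_citation("See INS v. St. Cyr, 533 U. S. 289, 322-323 (2001).")
```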
IV. EXPERIMENTS
In order to determine the effectiveness of our system, it is important to carry out evaluations using legal court case transcripts, as that is the domain in which this system is intended to be used. Court case transcripts related to the United States Supreme Court were obtained from FindLaw. The transcripts were then preprocessed in order to remove unnecessary data and text. Court case titles and section titles are some examples of the details removed in this step, as they are irrelevant when it comes to determining relationships between sentences.
The relationship types of sentence pairs were assigned using the system. First, the pairs were checked for citation relationship using the rule-based approach. The relationship types of the sentence pairs where citation relationship couldn't be detected using the rule-based approach were determined using the Support Vector Machine model.
The results obtained using the system for the sentence pairs extracted from the court case transcripts were then stored in a database. From those sentence pairs, 200 sentence pairs were selected to be annotated by human judges. Before selecting 200 sentence pairs, the sentence pairs were shuffled to eliminate the potential bias that could have been existent due to a particular court case. Shuffling was helpful in making sure that the sentence pairs to be annotated by human judges were related to different court case transcripts.
Then the selected 200 pairs of sentences to be annotated were grouped together as clusters of five sentence pairs. Each cluster was annotated by two human judges who were trained to identify the relationships between sentence pairs as defined in this study.
V. RESULTS
As expected, the Redundancy relationship could not be observed among the sentence pairs annotated by the human judges. Of the 200 sentence pairs that were examined, our system did not predict the Redundancy relationship for any pair, and similarly, the human judges did not annotate any pair with the Redundancy relationship.
The confusion matrix generated from the results obtained is given in Table III. The details provided in the matrix are based only on the sentence pairs for which both human judges agreed on the same relationship type. The reasoning behind this approach is to eliminate sentence pairs where the relationship type is ambiguous.
The same approach was used to obtain the results presented in Table IV. In contrast, Table V contains results obtained by considering sentence pairs where at least one of the two judges who annotated the pair agreed upon a particular relationship type. The Recall results given in Table IV are of significant importance, as all the sentence pairs in that result set are annotated with a relationship type agreed upon by both human judges. The Precision results provided in Table V indicate the probability of at least one human judge agreeing with the system's prediction for each relationship type. The evaluation results in Table IV and Table V show that the system works well when identifying the Elaboration, No Relation, and Citation relationship types, with F-measure values above 75% in all cases. The Shift in View relationship type was not assigned by the system to any of the 200 sentence pairs considered in the evaluation.
Human vs Human correlation and Human vs System correlation when it comes to identifying these relationship types were also analyzed. First, we calculated these correlations without considering the relationship type using the following approach. For a given sentence pair P , m(P ) is the value assigned to the pair. n is the number of sentence pairs.
Human vs Human Correlation (Cor(H, H))
When both human judges agree on a single relationship type for the pair P, we assign m(P) = 1. Otherwise, we assign m(P) = 0.
$Cor(H, H) = \frac{\sum_{P=1}^{n} m(P)}{n}$  (11)
Human vs System Correlation (Cor(H, S))
When both human judges agree with the relationship type predicted by the system for the sentence pair P, we assign m(P) = 1.0. If only one human judge agrees with the relationship type predicted by the system for P, we assign m(P) = 0.5. If both human judges disagree with the relationship type predicted by the system for P, we assign m(P) = 0.0.
$Cor(H, S) = \frac{\sum_{P=1}^{n} m(P)}{n}$  (12)
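A small sketch of how Equations (11) and (12) can be computed from per-pair labels is given below; the tuple layout (judge 1, judge 2, system prediction) is our assumption for illustration.

```python
# Sketch of Eqs. (11)-(12) computed from per-pair annotation labels.
def correlations(annotations):
    n = len(annotations)
    cor_hh = sum(1 for j1, j2, _ in annotations if j1 == j2) / n   # Eq. (11)
    cor_hs = sum((j1 == s) * 0.5 + (j2 == s) * 0.5                 # Eq. (12)
                 for j1, j2, s in annotations) / n
    return cor_hh, cor_hs

pairs = [("Elaboration", "Elaboration", "Elaboration"),
         ("No Relation", "Elaboration", "Elaboration"),
         ("Citation", "Citation", "Citation")]
print(correlations(pairs))  # (0.666..., 0.833...)
```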
It was observed that the correlation between a human judge and another human judge was calculated to be 0.805 while the correlation between a human judge and the system was calculated to be 0.813. When analyzing these two correlations, it can be seen that our system performs with a capability which is close to the human capability.
The results obtained by calculating the Human vs. Human and Human vs. System correlations in relation to each relationship type are given in Table VI. These results were obtained using Equation 13, which calculates the Human vs. Human correlation (Corr(H, H)), and Equation 14, which calculates the Human vs. System correlation (Corr(H, S)), where, for a given set A, n(A) indicates the number of elements in set A; and, for a relationship type R, S denotes the set of all sentence pairs predicted by the system as having the relationship type R, U denotes the set of all sentence pairs annotated by at least one human judge as having the relationship type R, and V denotes the set of all sentence pairs annotated by both human judges as having the relationship type R.
$Corr(H, H) = \frac{n(V)}{n(U)}$  (13)
$Corr(H, S) = \frac{n(S \cap U)}{n(S \cup U)}$  (14)
The results in Table VI suggest that the system performs with a capability close to human capability when it comes to identifying relationships such as Elaboration, No Relation, and Citation in court case transcripts. Enhancing the system's ability to identify the Shift in View relationship is one of the major future challenges. At the same time, the Human vs. Human correlation for identifying the Shift in View relationship type is only 0.188, which indicates that humans also face ambiguities when identifying Shift in View relationships between sentences in court case transcripts.
Either the Elaboration or the Shift in View relationship occurs when two sentences discuss the same topic or entity. The Shift in View relationship occurs instead of Elaboration when the two sentences provide different views or conflicting facts on the same topic or entity. The No Relation relationship can be observed when two sentences are no longer discussing the same topic or entity; in other words, No Relation suggests that there is a shift in the information flow of the court case. As shown in Table III, the sentence pairs with the Shift in View relationship were always predicted by the system as having the Elaboration relationship. From these results, it can be seen that in most cases the system is able to identify whether two sentences are discussing the same topic or not.
VI. CONCLUSIONS
The primary research contribution of this study was the use of discourse relationships between sentences to identify the relationships among sentences in transcripts of United States court cases. Five discourse relationship types were defined in this study in order to automatically identify the flow of information within a court case transcript. This study describes how a machine learning model and a rule-based system can be combined together to enhance the accuracy of identifying relationships between sentences in court case transcripts. Features based on the properties that can be observed between sentences have been introduced to enhance the accuracy of the machine learning model.
The proposed methodology can be successfully applied to identify sentences which develop the same discussion topic or entity. In addition, the system is capable of identifying situations in court cases where the discussion topic changes, and it is highly successful in the identification of legal citations. These outcomes demonstrate that the approach described in this study has promising potential for tasks related to systematic information extraction from court case transcripts. One such task is the identification of supporting facts and citations related to a particular legal argument; another is the identification of changes in discussion topics within a court case.
The system has difficulties in detecting the occasions where the two sentences are providing different opinions on the same discussion topic. Enhancing this capability in the system can be considered as the major future work.
TABLE I: ADOPTING CST RELATIONSHIPS

Definition | CST Relationships
Elaboration | Paraphrase, Modality, Subsumption, Elaboration, Indirect Speech, Follow-up, Overlap, Fulfillment, Description, Historical Background, Reader Profile, Attribution
Redundancy | Identity
Citation | Citation
Shift in View | Change of Perspective, Contradiction
No Relation | -
TABLE II: NUMBER OF SENTENCE PAIRS FOR EACH RELATIONSHIP TYPE

CST Relationship | Number of Sentence Pairs
Identity | 99
Equivalent | 101
Subsumption | 590
Contradiction | 48
Historical Background | 245
Modality | 17
Attribution | 134
Summary | 11
Follow-up | 159
Indirect Speech | 4
Elaboration | 305
Fulfillment | 10
Description | 244
Overlap (Partial Equivalence) | 429
TABLE III: CONFUSION MATRIX

Actual \ Predicted | Elaboration | No Relation | Citation | Shift In View | Σ
Elaboration | 93.9% | 6.1% | 0.0% | 0.0% | 99
No Relation | 11.9% | 88.1% | 0.0% | 0.0% | 42
Citation | 0.0% | 4.8% | 95.2% | 0.0% | 21
Shift In View | 100.0% | 0.0% | 0.0% | 0.0% | 3
Σ | 101 | 44 | 20 | 0 | 165
TABLE IV: RESULTS COMPARISON OF PAIRS WHERE BOTH JUDGES AGREE

Discourse Class | Precision | Recall | F-Measure
Elaboration | 0.921 | 0.939 | 0.930
No Relation | 0.841 | 0.881 | 0.861
Citation | 1.000 | 0.952 | 0.975
Shift in View | - | 0 | -
TABLE V: RESULTS COMPARISON OF PAIRS WHERE AT LEAST ONE JUDGE AGREES

Discourse Class | Precision | Recall | F-Measure
Elaboration | 0.930 | 0.902 | 0.916
No Relation | 0.846 | 0.677 | 0.752
Citation | 1.000 | 0.910 | 0.953
Shift in View | - | 0 | -
TABLE VI: CORRELATIONS BY TYPE

Discourse Class | Human-Human | Human-System | Human-System / Human-Human
Elaboration | 0.75 | 0.843 | 1.124
No Relation | 0.646 | 0.603 | 0.933
Citation | 1.0 | 0.955 | 0.955
Shift in View | 0.188 | 0.0 | 0.0
| [] |
[
"Virtual Event, Canada",
"Virtual Event, Canada"
] | [
"Sebastian Hofstätter 1s.hofstaetter@tuwien.ac.at \nTU Wien\n\n",
"Bhaskar Mitra \nMicrosoft\n\n",
"Hamed Zamani 3zamani@cs.umass.edu \nACM Reference Format\nUniversity of Massachusetts Amherst\n\n",
"Nick Craswell nickcr@microsoft.com \nMicrosoft\n\n",
"Allan Hanbury hanbury@ifs.tuwien.ac.at \nTU Wien\n\n",
"Sebastian Hofstätter ",
"Bhaskar Mitra ",
"Hamed Zamani ",
"Nick Craswell ",
"Allan Hanbury "
] | [
"TU Wien\n",
"Microsoft\n",
"ACM Reference Format\nUniversity of Massachusetts Amherst\n",
"Microsoft\n",
"TU Wien\n"
] | [
"Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '21)"
] | An emerging recipe for achieving state-of-the-art effectiveness in neural document re-ranking involves utilizing large pre-trained language models-e.g., BERT-to evaluate all individual passages in the document and then aggregating the outputs by pooling or additional Transformer layers. A major drawback of this approach is high query latency due to the cost of evaluating every passage in the document with BERT. To make matters worse, this high inference cost and latency varies based on the length of the document, with longer documents requiring more time and computation. To address this challenge, we adopt an intra-document cascading strategy, which prunes passages of a candidate document using a less expensive model, called ESM, before running a scoring model that is more expensive and effective, called ETM. We found it best to train ESM (short for Efficient Student Model) via knowledge distillation from the ETM (short for Effective Teacher Model) e.g., BERT. This pruning allows us to only run the ETM model on a smaller set of passages whose size does not vary by document length. Our experiments on the MS MARCO and TREC Deep Learning Track benchmarks suggest that the proposed Intra-Document Cascaded Ranking Model (IDCM) leads to over 400% lower query latency by providing essentially the same effectiveness as the state-of-the-art BERT-based document ranking models. | 10.1145/3404835.3462889 | [
"https://arxiv.org/pdf/2105.09816v1.pdf"
] | 234,790,128 | 2105.09816 | ab64ea2c1a9a419b21e8b7ea16f5cf12323a5bc8 |
Virtual Event, Canada
July 11-15, 2021
Sebastian Hofstätter s.hofstaetter@tuwien.ac.at
TU Wien
Bhaskar Mitra
Microsoft
Hamed Zamani zamani@cs.umass.edu
University of Massachusetts Amherst
Nick Craswell nickcr@microsoft.com
Microsoft
Allan Hanbury hanbury@ifs.tuwien.ac.at
TU Wien
ACM Reference Format:
Sebastian Hofstätter, Bhaskar Mitra, Hamed Zamani, Nick Craswell, and Allan Hanbury. 2021. Intra-Document Cascading: Learning to Select Passages for Neural Document Ranking. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '21), July 11-15, 2021, Virtual Event, Canada. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3404835.3462889
ACM ISBN 978-1-4503-8037-9/21/07. $15.00
CCS CONCEPTS: • Information systems → Learning to rank
KEYWORDS: Neural Re-Ranking; Knowledge Distillation
An emerging recipe for achieving state-of-the-art effectiveness in neural document re-ranking involves utilizing large pre-trained language models, e.g., BERT, to evaluate all individual passages in the document and then aggregating the outputs by pooling or additional Transformer layers. A major drawback of this approach is high query latency due to the cost of evaluating every passage in the document with BERT. To make matters worse, this high inference cost and latency varies based on the length of the document, with longer documents requiring more time and computation. To address this challenge, we adopt an intra-document cascading strategy, which prunes passages of a candidate document using a less expensive model, called ESM, before running a scoring model that is more expensive and effective, called ETM. We found it best to train ESM (short for Efficient Student Model) via knowledge distillation from the ETM (short for Effective Teacher Model), e.g., BERT. This pruning allows us to only run the ETM model on a smaller set of passages whose size does not vary by document length. Our experiments on the MS MARCO and TREC Deep Learning Track benchmarks suggest that the proposed Intra-Document Cascaded Ranking Model (IDCM) leads to over 400% lower query latency by providing essentially the same effectiveness as the state-of-the-art BERT-based document ranking models.
INTRODUCTION
Ranking documents in response to a query is a core problem in information retrieval (IR). Many systems incorporate ranking as a core component, such as Web search engines and news search systems. Other systems build on document ranking, e.g., an agent capable of conversation and disambiguation may still have document ranking as a component [42]. Therefore, retrieval model improvements are likely to lead to "lifting all boats".
One difficulty in document ranking is that documents can vary significantly in length. In traditional IR, variation in length can be explained by (1) the verbosity hypothesis, that the author used more words to explain the topic, or (2) the scope hypothesis, that the author covered multiple topics [29]. In practice, each hypothesis is a partial explanation for document length, and retrieval models typically apply document length normalization. In other words, a document with some irrelevant parts can still be considered relevant overall, because having somewhat broader scope than the current query does not rule out a document from being a useful result.
One way to deal with documents of varying length with some irrelevant parts is to use passage-level evidence for document ranking. Early studies [6,18] found that a passage of text need not be defined based on document structure, such as paragraphs or sentences. A good approach was to divide the content into fixed-size windows of 150 words or more, compare the query to all passages, then score the document based on the score of its highest-scoring passage. This is consistent with the scope hypothesis, that we should focus on finding some relevant content, without penalizing the document for having some irrelevant content.
In neural IR, it is possible to significantly outperform classic retrieval systems [9,21]. This is often done by taking multiple fixed-size windows of text, applying a deep neural network to score each passage, and scoring the document based on the highest-scoring passages. This is similar to the classic IR approaches in [6,18], but works much better as the per-passage models are more effective.
However, the problem with the neural approaches is the cost of applying the per-passage model. Applying neural network inference for every passage in every document being ranked requires significant computation and leads to higher query latency. This limits the impact of the new neural approaches, since if the cost of inference is too high, they cannot be used in large-scale production systems. To avoid this problem, some models retrieve documents solely based on their first passage [9]; however, this is a sub-optimal solution.
In this work, we address this issue by proposing an Intra-Document Cascading Model (IDCM) that employs an in-document cascading mechanism with a fast selection module and a slower effective scoring module.¹ This simultaneously provides lower query latency and state-of-the-art ranking effectiveness. We evaluate our model on two document ranking query sets: (i) TREC DL 2019 [9], (ii) MSMARCO [2]. We study how to train the IDCM architecture in multiple stages to control the collaboration of the cascading sub-modules, and investigate: RQ1 Can IDCM achieve comparable effectiveness to the full BERT-based ranker at lower computation cost and query latency? Among the different variants we explore in this work, we found that training the selection module using knowledge distillation based on passage-level labels derived from the BERT score in conjunction with an intra-document ranking loss achieves the best overall reranking quality. Under this setting, the selection module is trained to approximate the passage ranking that would have been produced if ordered by their corresponding BERT scores, to filter out nonrelevant parts of the document.
An important hyperparameter in this setting is the number of passages per document that survive the selection stage for the subsequent BERT-based evaluation. Here we study:
RQ2 How is the effectiveness-efficiency trade-off influenced by the number of passages that the less expensive model selects from the document? In our baseline setting, the BERT model inspects up to the first n passages. We observe superior performance with n = 40 compared to smaller values of n. However, the proposed IDCM framework achieves roughly the same ranking effectiveness by applying the BERT model to only the top 4 passages pre-selected by the less-expensive preceding model. Consequently, this result comes at a much lower query latency.
The application of BERT to multiple in-document passages has the undesirable property of introducing large variance in query response time depending on the length of candidate documents. Under the IDCM settings, this variance is largely reduced because the expensive BERT model is applied to the top k passages for every document, unless the document is so short that it has fewer than k passages. This leads us to our next research question:
RQ3 How does IDCM compare to the baseline with respect to variance in query latency? We observe that our baseline setup of using BERT over all passages has a very high standard deviation and long tail w.r.t. query latency, whereas IDCM has much more predictable query latency centered around the mean. This is partly because our passage selection module is fast enough to triage up to 40 candidate passages in the same time as the BERT model takes to evaluate a single passage. Therefore, while the contribution of the selection stage to query latency is still a function of document length, the variance is largely reduced.
Finally, we study how the passage selection under IDCM compares to the highest-scoring passages by the BERT model.
RQ4 How often does the passage selection under IDCM recall the same passages as those scored highly by the BERT model?
Overall, we observe that the selection module recalls 60-85% of the top BERT-scored passages depending on the value of k. We also find that passages closer to the beginning of the document are more likely to be selected by both models, although there is a long tail of passage selection from lower positions within the document. Interestingly, we also observe that the selection module achieves a better recall in the case of relevant documents, which may be explained by the relevant passages being easier to distinguish from the nonrelevant passages in a relevant document, compared to the same in the case of a nonrelevant document.
¹ In our experiments, we used DistilBERT [30], an efficient variant of BERT, as the "slow effective scoring module". Throughout this paper, we refer to it as the BERT model.
To summarize, this paper proposes a cascaded architecture and training regime to address the high and variable query latency associated with applying BERT to evaluate all in-document passages for the document ranking task. We demonstrate that employing our approach leads to lower mean and variance for query latency, and enables broader scope of application for BERT-based ranking models in real-world retrieval systems.
• We propose IDCM, an intra-document cascade ranking model, including a training workflow using knowledge distillation. • We evaluate our approach on TREC-DL'19 and MSMARCO DEV; and show that IDCM achieves similar effectiveness on average at four times lower latency compared to the BERT-only ranking model. • We perform extensive ablation studies to validate our multi-stage training approach and the benefits of knowledge distillation for optimizing the selection module of IDCM.
To improve the reproducibility of our work, we open-source our implementation and release the trained models at: https://github.com/sebastian-hofstaetter/intra-document-cascade
RELATED WORK
Since demonstrating impressive performance on passage ranking [25], several successful efforts are underway to employ BERT [13] and other Transformer [33] based models to the document ranking task [9,16]. However, the quadratic memory complexity of the self-attention layer with respect to the input sequence length poses a unique challenge in the way of scaling these models to documents that may contain thousands of terms. While some efforts are underway to directly address the scalability of the self-attention layer in these ranking models [1,3,24], a majority of applications of Transformer-based architectures to document ranking involves segmenting the document text into smaller chunks of text that can be processed efficiently [16,20,39].
The idea of using passage-level evidence for document ranking is not unique to neural methods. Several classical probabilistic [6] and language model [4,22] based retrieval methods-as well as machine learning based approaches [31]-incorporate passage-based relevance signals for document ranking. An underlying assumption across many of these approaches is that all passages from the document are inspected by the model. However, as our models become more computation and memory intensive, it quickly becomes expensive and even infeasible to evaluate every passage from the document. This is exactly where text selection using light-weight approaches becomes necessary so that the more complex model only has to inspect the most important parts of the document, which is the main motivation for this work.
One could draw parallels between our approach of using cheaper models to detect interesting regions of the document to the highly influential work of Viola and Jones [34] in computer vision for fast object detection using cascades of models of increasing complexity. Similarly, cascaded approaches have also been employed extensively [7,14,26,35] in IR to progressively rank-and-prune the set of candidate documents, from the full collection all the way to the final ten or so results presented to the user. Unlike these approaches that employ a cascade of models to prune the candidate set of documents, we use the cascaded approach to prune the set of regions within the document that needs to be inspected by the expensive and effective ranking model (i.e., intra-document cascading).
In a cascaded setting, typically the models are trained progressively, starting with the simplest model that is exposed to the largest number of candidates, and then subsequent models are trained using a data distribution that reflects the candidates that survive the earlier stages of pruning. This sequential training strategy is sometimes referred to as telescoping [23]. Joint optimization of the different rankers within a multi-stage cascaded search architecture has also been explored [14].
In our work, we adopt a different training strategy. We begin by training the costly ranking model and then apply knowledge distillation to train the model for the preceding selection stage. Our approach is motivated by the strong empirical performance observed in recent work exploring knowledge distillation from larger BERT to more effective models [15,17,30,32].
THE INTRA-DOCUMENT CASCADE MODEL
Neural ranking models have resulted in significant improvements in a wide variety of information retrieval tasks. However, there exists an efficiency-effectiveness trade-off in these models. In other words, state-of-the-art neural ranking models mostly suffer from high query latency and high GPU memory requirements. A popular solution to address the GPU memory issue is to divide documents into multiple passages and compute the retrieval scores for each passage [27,39,41]. For instance, Dai and Callan [10] compare considering only the first passage of the document with scoring every passage in the document and use the highest passage score as the document score. We believe that the existing approaches lead to either sub-optimal ranking or high computational cost.
Inspired by previous work on passage-level evidence for document retrieval [6,22], in this section we propose an alternative efficient and effective solution by introducing the Intra-Document Cascade Model, named IDCM. In the following, we first introduce the IDCM architecture and optimization, and then describe its implementation details.
The IDCM Architecture
IDCM is designed based on a cascade architecture within documents. For each query-document pair, the idea is to select a few passages from the document using an efficient model and then produce the retrieval score for the document by exploiting a more expensive and effective model on the selected passages. The high-level architecture of IDCM is presented in Figure 1.
IDCM takes a query q and a document d as input. It first divides the document into multiple units of partially overlapping windows of size w with an overlapping factor of o, where o < w. This results in ⌈|d|/w⌉ passages as follows:

P = \big[ (d_{1-o\,:\,w+o});\; (d_{w-o\,:\,2w+o});\; (d_{2w-o\,:\,3w+o});\; \ldots \big] \qquad (1)

Each passage contains w + 2o tokens, with exactly 2o + 1 tokens in common with its previous and next passages respectively. The first and the last passages are padded. A key practical impact of padding the windows (as opposed to padding the document) is the possibility to compact batched tensor representations and skip padding-only windows entirely for batches that contain documents of different lengths.
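For illustration, a minimal Python sketch of this windowing step follows; the function name and padding details are assumptions, with w and o corresponding to the window size and overlap defined above.

```python
from typing import List

def split_into_windows(doc_tokens: List[int], w: int = 50, o: int = 7,
                       pad_id: int = 0) -> List[List[int]]:
    """Split a tokenized document into partially overlapping windows.

    Each window covers w core tokens plus o tokens of context on each
    side, so every passage has w + 2*o tokens; the first and last
    windows are padded symmetrically.
    """
    padded = [pad_id] * o + doc_tokens + [pad_id] * o
    windows = []
    for start in range(0, len(doc_tokens), w):
        # In padded coordinates the core begins at start + o; the slice
        # below additionally extends o tokens to each side of the core.
        window = padded[start : start + w + 2 * o]
        if len(window) < w + 2 * o:  # pad the final, shorter window
            window += [pad_id] * (w + 2 * o - len(window))
        windows.append(window)
    return windows

# A 120-token document yields ceil(120 / 50) = 3 windows of 64 tokens each.
passages = split_into_windows(list(range(1, 121)), w=50, o=7)
assert len(passages) == 3 and all(len(p) == 64 for p in passages)
```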
IDCM uses an efficient model to score each passage p ∈ P with respect to the query. This model is called ESM. To cascade the scoring decision and discard a large number of passages, IDCM selects the top k passages with the highest scores produced by the ESM model, as follows:

\hat{P} = \operatorname*{arg\,max}_{\hat{P} \subseteq P,\; |\hat{P}| = k} \; \sum_{p \in \hat{P}} \mathrm{ESM}(q, p) \qquad (2)

It is important to keep the size of \hat{P} as small as possible to realize the efficiency advantages brought by the cascading approach. The selected passages are then scored by a more expensive and effective ranking model, called ETM:
s_p^{\mathrm{ETM}} = \mathrm{ETM}(q, p) \quad \forall\, p \in \hat{P} \qquad (3)
Note that the token embedding parameters are shared between the ESM and ETM models. We compute the document relevance score using a weighted linear interpolation. In other words, we feed the top m sorted ETM scores to a fully-connected layer to produce the document relevance score as follows:

\mathrm{IDCM}(q, d) = \operatorname{top}_m\big(s^{\mathrm{ETM}}\big) \cdot W_m \qquad (4)

where W_m is an m × 1 weight matrix for linear interpolation of the passage scores.
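To make the cascaded selection and aggregation of Eqs. (2)-(4) concrete, here is a hedged PyTorch sketch of an inference-time forward pass; esm, etm, aggregator, and the tensor shapes are illustrative stand-ins for the CK and BERT modules rather than the released implementation.

```python
import torch

def idcm_forward(esm, etm, aggregator, query, passages, k: int = 4, m: int = 3):
    """Cascaded scoring: ESM pre-selects k passages, ETM scores them,
    and a linear layer over the top-m ETM scores yields the document score.

    Assumed interfaces:
      esm(query, passages) -> tensor of shape [num_passages]
      etm(query, selected) -> tensor of shape [k]
      aggregator: torch.nn.Linear(m, 1), i.e. the weight matrix W_m.
    """
    with torch.no_grad():  # the selection is a hard pruning decision
        esm_scores = esm(query, passages)
        k = min(k, esm_scores.shape[0])
        top_idx = torch.topk(esm_scores, k).indices  # Eq. (2)

    selected = [passages[i] for i in top_idx]  # only k passages reach BERT
    etm_scores = etm(query, selected)          # Eq. (3)

    m = min(m, etm_scores.shape[0])
    top_m = torch.topk(etm_scores, m).values   # sorted top-m passage scores
    return aggregator(top_m)                   # Eq. (4): document score
```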
The IDCM Optimization
The IDCM framework consists of multiple non-differentiable operations (e.g., passage selection) that make it difficult to use gradient descent-based methods for end-to-end optimization. Therefore, we split the IDCM optimization into three steps with different objectives, as shown in Figure 2. These three steps include: (1) optimizing the ETM model for passage ranking, (2) extending the ETM optimization to full document ranking, and (3) optimizing the ESM model for passage selection using knowledge distillation. Each step completes with early stopping based on the performance on a held-out validation set, and the best model checkpoint is used as the initialization for the following step(s). The first step involves training the passage ranking model.
Step I: Optimizing ETM for Passage Ranking. The first training step is to train the ETM model on a passage collection. To this aim, we adopt the pairwise ranking loss function used in RankNet [5]. In more detail, for a given pair of negative and positive passages p^- and p^+ for the query q in the training set, we use a binary cross-entropy loss function for pairwise passage ranking optimization as follows:

\mathcal{L}_{\mathrm{Pas.}}(q, p^+, p^-) = -\log \sigma\big(\mathrm{ETM}(q, p^+) - \mathrm{ETM}(q, p^-)\big) \qquad (5)

where σ(·) is the sigmoid function.
This step prepares the ETM model for the document retrieval task. Such pre-training has been successfully employed in recent models, such as PARADE [20]. The parallel MSMARCO passage and document collections make this pre-training possible, albeit it remains optional if passage relevance is not available, as the BERT module is also trained in the next step.
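Eq. (5) is the standard RankNet-style pairwise objective; a minimal PyTorch version, assuming the model returns one scalar score per query-passage pair, could look as follows.

```python
import torch
import torch.nn.functional as F

def pairwise_rank_loss(model, query, pos_passage, neg_passage):
    """RankNet-style pairwise loss of Eq. (5):
    -log(sigmoid(score(q, p+) - score(q, p-)))."""
    margin = model(query, pos_passage) - model(query, neg_passage)
    # Numerically stable equivalent of -log(sigmoid(margin)).
    return F.binary_cross_entropy_with_logits(margin, torch.ones_like(margin))
```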
Step II: Extending the ETM Optimization to Full-Document Ranking. Optimizing the model for passage ranking is not sufficient for document retrieval, mainly because of the following two reasons: First, the passage aggregation parameters (i.e., W_m) need to be optimized; second, the passage and document collections may exhibit different assumptions on what constitutes relevance. Therefore, in the second optimization step, we train the ETM model in addition to the passage aggregation layer in a full document ranking setting, in which there is no passage selection: all passages are scored and the top m passages are chosen for the aggregation layer (i.e., W_m). We initialize the ETM parameters with the best checkpoint obtained from early stopping in the previous optimization step. We again use the binary cross-entropy loss function, this time for a query and a pair of positive and negative documents.
This optimization step further fine-tunes the ETM parameters and learns the W_m parameters.
Step III: Optimizing ESM for Passage Selection using Knowledge Distillation. The last two optimization steps give us an effective document ranking model that runs ETM on every passage in the document and aggregates the scores. In this step, we optimize the ESM parameters. Given the fact that the goal of ESM is to select passages to be consumed by ETM in a cascade setting, we use knowledge distillation for training the ESM model. In other words, we optimize the ESM parameters such that it mimics the ETM behavior using a teacher-student paradigm, where ESM and ETM play the roles of student and teacher, respectively. Therefore, the output of ETM provides labels for the ESM model. A similar idea has been employed in the weak supervision literature [12].
Formally, the loss function for this optimization step is defined as:
\mathcal{L}_{\mathrm{Selection}}(q, d) = \mathcal{L}_{\mathrm{KD}}\Big( \big[\mathrm{ETM}(q, p)\big]_{p \in P},\; \big[\mathrm{ESM}(q, p)\big]_{p \in P} \Big) \qquad (6)

where P denotes all the passages in the document and \mathcal{L}_{\mathrm{KD}} denotes a knowledge distillation loss function, which is responsible for computing the average across passages. A unique feature of our distillation approach is that the teacher signals created by ETM are unsupervised. It is important to train the less capable ESM on the exact distribution on which it is later used. There are no passage-level labels for all MSMARCO documents that we could use to train the ESM; therefore, the ETM is the only source of training signal.
In our experiments, we study multiple loss functions for knowledge distillation. They include distribution losses, such as mean square error (MSE) and cross entropy, and in-document passage ranking loss functions, such as nDCG2, introduced as part of the LambdaLoss framework [36]. The nDCG2 loss function is a gain-based loss that tightly binds the loss to NDCG-like metrics. For the exact formulation we refer to Wang et al. [36]. In the nDCG2 loss, we assign gain values only to the top k passages sorted by ETM, and all other passages receive no gain. The nDCG2 loss, focusing on moving the correct passages into the top positions, is a great fit for our problem: the ESM is only used for pruning or filtering, which means that the ordering inside and outside the top-k set does not matter. The only thing that matters is to find the right set of passages, as the ETM then creates our final fine-grained scores for each of the passages. This is a crucial difference between our knowledge distillation approach and concurrent works, which try to fit every decision from the more powerful ranking model to a smaller model [20]. Not surprisingly, we find that using the nDCG2 ranking loss outperforms other loss functions, as discussed in Section 5.1.
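As an illustration of the distribution-matching variants evaluated here, the sketch below implements the MSE and cross-entropy distillation losses over per-document passage score lists, plus the gain assignment that the nDCG2 loss builds on (top-k teacher passages receive gain 1, all others 0). The full LambdaLoss nDCG2 machinery is omitted, so treat this as a simplified stand-in rather than the exact training code.

```python
import torch
import torch.nn.functional as F

def kd_mse(teacher_scores: torch.Tensor, student_scores: torch.Tensor):
    """MSE between ETM (teacher) and ESM (student) scores of all
    passages in one document."""
    return F.mse_loss(student_scores, teacher_scores)

def kd_cross_entropy(teacher_scores, student_scores):
    """Match the in-document score distributions via soft cross-entropy."""
    teacher_dist = F.softmax(teacher_scores, dim=-1)
    student_log_dist = F.log_softmax(student_scores, dim=-1)
    return -(teacher_dist * student_log_dist).sum(dim=-1).mean()

def ndcg2_gains(teacher_scores: torch.Tensor, k: int = 3) -> torch.Tensor:
    """Gain assignment underlying the nDCG2 loss: only the top-k
    teacher-ranked passages carry gain, everything else is zero."""
    gains = torch.zeros_like(teacher_scores)
    top_idx = torch.topk(teacher_scores, min(k, teacher_scores.shape[-1])).indices
    gains.scatter_(-1, top_idx, 1.0)
    return gains
```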
The IDCM Implementation
In this section, we describe the implementation details for the ESM and ETM models used in our experiments.
ESM: The CK Model. The ESM model is the first model in our cascaded architecture and is expected to be extremely efficient. In our experiments, we use CK, an efficient variation of the Conv-KNRM model [11] that combines convolutional neural networks (CNNs) with the kernel-pooling approach of Xiong et al. [38]. Unlike Conv-KNRM, which uses multiple convolutional layers with different window sizes for soft-matching of n-grams in the query and document, the CK model uses a single convolutional layer to provide local contextualization to the passage representations, without the quadratic time or memory complexity required by Transformer models. In more detail, the CK model transforms the query and passage representations using a CNN layer and uses the cosine function to compute their similarities, which are then activated by Gaussian kernels with different distribution parameters:

K^k_{i,j} = \exp\left( -\frac{\big(\cos(\mathrm{CNN}(q_i), \mathrm{CNN}(p_j)) - \mu_k\big)^2}{2\sigma_k^2} \right) \qquad (7)

where q_i and p_j respectively denote the i-th token in the query and the j-th token in the passage, and μ_k and σ_k are the Gaussian kernel parameters. Each kernel k represents a feature extractor, which is followed by a pooling layer that sums up the individual activations, first over the passage dimension j and then, log-activated, over the query token dimension i. The kernel results are weighted and summed with a single linear layer (W_s) as follows:

\mathrm{CK}(q, p) = \sum_{k} W_s^k \sum_{i=1}^{|q|} \log \sum_{j=1}^{|p|} K^k_{i,j} \qquad (8)
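A hedged PyTorch sketch of the CK scorer of Eqs. (7)-(8) follows; the kernel placement (mu_k, sigma_k) and convolution configuration are illustrative defaults, not the exact settings of the released model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CKScorer(nn.Module):
    """Single-convolution kernel-pooling scorer (Eqs. 7-8)."""

    def __init__(self, emb_dim: int = 768, n_kernels: int = 11, context: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(emb_dim, emb_dim, kernel_size=context, padding=1)
        # Evenly spaced kernel means mu_k in [-1, 1]; fixed widths sigma_k.
        self.mu = nn.Parameter(torch.linspace(-1.0, 1.0, n_kernels), requires_grad=False)
        self.sigma = nn.Parameter(torch.full((n_kernels,), 0.1), requires_grad=False)
        self.w_s = nn.Linear(n_kernels, 1, bias=False)  # kernel weighting W_s

    def forward(self, q_emb: torch.Tensor, p_emb: torch.Tensor) -> torch.Tensor:
        # q_emb: [B, |q|, emb_dim], p_emb: [B, |p|, emb_dim]
        q = self.conv(q_emb.transpose(1, 2)).transpose(1, 2)
        p = self.conv(p_emb.transpose(1, 2)).transpose(1, 2)
        sim = torch.einsum("bqe,bpe->bqp",
                           F.normalize(q, dim=-1), F.normalize(p, dim=-1))
        # Eq. (7): Gaussian kernel activations, shape [B, |q|, |p|, n_kernels].
        k_act = torch.exp(-((sim.unsqueeze(-1) - self.mu) ** 2) / (2 * self.sigma ** 2))
        # Eq. (8): sum over passage dim, log-activate, sum over query dim.
        per_kernel = torch.log(k_act.sum(dim=2).clamp(min=1e-10)).sum(dim=1)
        return self.w_s(per_kernel).squeeze(-1)  # [B] passage scores
```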
ETM: The BERT Ranking Model. Large-scale pre-trained language models, such as BERT [13], have led to state-of-the-art results in a number of tasks, including passage retrieval [25]. The BERT passage ranking model takes sequences representing a query and a passage and concatenates them using a separation token. The obtained BERT representation of the first token of the query-passage pair (i.e., the [CLS] token) is then fed to a fully-connected layer (W_s) to produce the ranking score:

\mathrm{BERT}(q, p) = \mathrm{BERT}_{[\mathrm{CLS}]}\big([\mathrm{CLS}];\, q;\, [\mathrm{SEP}];\, p\big) \cdot W_s \qquad (9)

where ; denotes the concatenation operation.
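A minimal HuggingFace-style sketch of Eq. (9); the checkpoint name and tokenization details are assumptions for illustration, not the exact released configuration.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertPassageScorer(nn.Module):
    """[CLS]-based query-passage scoring (Eq. 9)."""

    def __init__(self, model_name: str = "distilbert-base-uncased"):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.w_s = nn.Linear(self.bert.config.hidden_size, 1, bias=False)

    def forward(self, query: str, passage: str) -> torch.Tensor:
        # Pair input is encoded as [CLS] query [SEP] passage [SEP].
        enc = self.tokenizer(query, passage, return_tensors="pt", truncation=True)
        hidden = self.bert(**enc).last_hidden_state  # [1, seq_len, hidden]
        cls_vec = hidden[:, 0, :]                    # representation of [CLS]
        return self.w_s(cls_vec).squeeze(-1)         # scalar relevance score
```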
EXPERIMENT DESIGN
In this section, we describe our experiment setup. We implemented our models using the HuggingFace Transformers library [37] and PyTorch [28]. We employed PyTorch's mixed precision training and inference throughout our experiments for efficiency. In our experiments, we re-rank the documents retrieved by BM25 as implemented in Anserini [40]. The query latency measurements are all conducted on the same single TITAN RTX GPU.
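The mixed-precision inference mentioned above can be wrapped as in the following sketch; model, query_batch, and passage_batch are placeholders.

```python
import torch

# Half-precision autocasting speeds up GPU inference; gradients are not
# needed when re-ranking candidate documents.
with torch.no_grad(), torch.cuda.amp.autocast():
    scores = model(query_batch, passage_batch)  # placeholder scoring call
```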
Document Collections and Query Sets
For our first passage-training step we utilize the MSMARCO-Passage collection and training data released by Bajaj et al. [2]. We follow the setup of Hofstätter et al. [15] for training the BERT passage ranker; for passage results of the BERT ranking model we refer to that previous work. In all datasets, we limit the query length to 30 tokens, which removes only very few outliers, and re-rank 100 documents from the BM25 candidates.
TREC DL 2019 and MS MARCO benchmarks. We use the 2019 TREC Deep Learning Track Document Collection [8] that contains 3.2 million documents, with a mean document length of 1,600 words and the 80th percentile at 1,900 words. We aim to include them with a 2,000-token limit in our experiments. We selected 5,000 queries from the training set as a validation set for early stopping and removed those queries from the training data. We use the following two query sets in our evaluation:
• TREC DL 2019: 43 queries used in the 2019 TREC Deep Learning Track for the document ranking task. A proper pooling methodology was used to create a complete set of relevance judgments for these topics [8]. For evaluation metrics that require binary relevance labels (i.e., MAP and MRR), we use a binarization point of 2. • MS MARCO: 5,193 queries sampled from the Bing query logs contained in the MS MARCO Development set [2]. This query set is larger than the previous one, but suffers from incomplete relevance judgments.
Training Configuration
We use the Adam optimizer [19] with a learning rate of 7 × 10^-6 for all BERT layers. The CK layers contain far fewer, randomly initialized parameters and are therefore trained with a higher learning rate of 10^-5. We employ early stopping, based on the best nDCG@10 value on the validation set.
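The two learning rates can be realized with Adam parameter groups, for example as below; model.bert and model.ck are placeholder module names.

```python
import torch

optimizer = torch.optim.Adam([
    {"params": model.bert.parameters(), "lr": 7e-6},  # pre-trained BERT layers
    {"params": model.ck.parameters(), "lr": 1e-5},    # randomly initialized CK layers
])
```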
Model Parameters
We use a 6-layer DistilBERT [30], knowledge distilled from BERT-Base on the MSMARCO-Passage collection [15], as initialization for our document training. We chose DistilBERT over BERT-Base, as it has been shown to provide a close lower bound on the results at half the runtime for training and testing [15,30]. In general, our approach is agnostic to the BERT variant used; when using a language model with more layers and dimensions, the relative improvements of our cascade become stronger, at the cost of higher training time and GPU memory. For the passage windows we set a base size of 50 and an overlap of 7, for a total window size of 64. In a pilot study, we confirmed that a larger window size does not improve the All-BERT effectiveness results. For the BERT passage aggregation, we use the top 3 BERT scores to form the final document score, independent of the cascade selection count. This allows us to base all selection models on the same All-BERT instance.
We set the token context size for CK to 3 and evaluate two different CK dimensions: first, a full 768-channel convolution corresponding to the dimensions of the BERT embeddings; and second, a smaller variant with a projection to 384 dimensions before the convolution and a reduction of the convolution output dimension to 128 per term, which we refer to as CKS.
RESULTS
In this section, we address the research questions raised in Section 1.
RQ1: Knowledge Distillation and Effectiveness Study
Our first research question centers around training techniques and we investigate:
RQ1 Can IDCM achieve comparable effectiveness to the full BERTbased ranker at lower computation cost and query latency?
To address this research question, we compare the proposed method to a number of strong retrieval baselines, including BM25, TKL [16], and PARADE [20]. TKL is a non-BERT local self-attention ranking model with kernel-pooling; PARADE Max-Pool is very close to our All-BERT baseline, in that it scores every passage with a BERT ranker and aggregates passage representations in a lightweight layer; PARADE TF uses an additional Transformer block to aggregate passage representations. The results are reported in Table 1. The first observation on the All-BERT setting of IDCM, without cascading, is the strong difference in effectiveness across collections between 512 document tokens and 2K tokens (Line 5 vs. 6). The All-BERT 2K model (Line 6) outperforms all previously published single-model re-ranking baselines on the TREC DL 2019 dataset, except for PARADE TF (Line 4) on MAP. We see how training and evaluating with longer document input improves the effectiveness. This is also a strong argument for our cascading approach, which makes it possible to process long inputs. We chose the All-BERT 2K setting (Line 6) as the base model for all following results.
Furthermore, we use static cascade selectors as baselines that apply BERT scoring after static selections. One selects the first 3 passages by position, to allow a direct comparison to CK selections (Line 7). Another option we benchmark is the use of raw term-frequency matches, where we take the three passages with the highest frequency of direct term matches, without semantic or trained matching (Line 8). Both approaches fail to improve the scoring over the baselines or our CK selection model; the term-frequency selection even underperforms the first-positions selector. Our main IDCM configuration (Lines 9 & 10) with a knowledge-distilled CK shows strong results, which are not statistically different from their base model of All-BERT (2K, Line 6). Across both query sets, the select 3 setting (Line 9) is already close to the reachable results, and with select 4 (Line 10) we obtain better MRR and MAP results on TREC-DL'19, even though they are not significantly different.
We further compare two training strategies for the efficient CK model: (1) knowledge distillation as described in Section 3.2, and (2) standalone training on relevance judgments. The results are reported in Table 2. We observe that the knowledge-distilled CK model leads to significantly higher performance on both TREC DL 2019 and MS MARCO datasets in comparison to the standalone training. The improvements on the TREC DL 2019 query set are much larger. As expected, CK alone shows substantially lower effectiveness compared to BERT. Overall, we show that even though CK alone is ineffective, combined with BERT in the IDCM model it produces very effective and efficient results.
We extend our analysis by inspecting different knowledge distillation losses and their effect on the full cascaded setting in Table 3. We probe knowledge distillation with the MSE, Cross Entropy, and LambdaLoss nDCG2 losses for two different cascade selection counts, three and four. When four passages are selected, all three knowledge distillation losses perform on par with the All-BERT model. Overall, using nDCG2 as the knowledge distillation loss outperforms the other two loss functions and is the most stable across the two evaluated query sets. Therefore, in the following experiments, we only focus on nDCG2 as the default knowledge distillation loss.
RQ2: Efficiency-Effectiveness Analysis
To study our next research question we turn to the viewpoint of both efficiency and effectiveness of different IDCM model configurations to answer: RQ2 How is the effectiveness-efficiency trade-off influenced by the number of passages that the less expensive model selects from the document?
The selection variants of IDCM are all based on the All-BERT setting and therefore this model instance sets our potential for cascaded effectiveness results, as we do not train the BERT module during selection training. In Figures 3a and 3b we compare the model throughput of documents per second (y-axes) with their effectiveness (x-axes). Akin to a precision-recall plot, the best result would be situated in the upper right corner.
We evaluated IDCM's selection parameter - the number of passages scored with the costly BERT model - from 1 to 6, using the full convolution CK and a smaller dimensional CKS setting. We find that selecting too few passages strongly reduces the effectiveness; however, starting with a selection of 4 or more passages, IDCM results are very similar to All-BERT results, while providing a much higher throughput. On the nDCG@10 metric of TREC-DL'19 in Figure 3a, IDCM reaches the All-BERT effectiveness starting with 4 selected passages. In Figure 3b the IDCM select 4 setting is already close to the reachable effectiveness, and taking 5 and 6 passages closes the gap to All-BERT further, to a point of almost no differences.
A simple efficiency improvement is to use a lower document length on All-BERT passage scoring, such as limiting the document to the first 512 tokens. We find that this works both more slowly and less effectively than IDCM.
It is important to note that in all our experiments presented here we utilize the 6-layer DistilBERT encoder. When we compare with related work, which commonly uses larger BERT-style models, such as the original BERT-Base or BERT-Large, we show even better efficiency improvements. A 12-layer BERT-Base model in the All-BERT (2,000) configuration can only process 85 documents per second, and the 24-layer BERT-Large only manages to score 30 documents per second. Applying our IDCM technique to these larger encoders brings even larger performance improvements.
In summary, with this analysis we show how IDCM can be as effective as an All-BERT model, while maintaining a four times higher throughput. In addition, users of the IDCM model have the option to trade-off effectiveness for more efficiency, along a clear curve.
RQ3: Query Latency Analysis
The mean or median aggregations of query latencies hide crucial information about the tail of queries that require more processing time. In the case of neural ranking models this tail is heavily dependent on the total document length of all documents in a batch. We now study efficiency in detail with:
RQ3 How does IDCM compare to the baseline with respect to variance in query latency? In Figure 4 we plot the fraction of queries (y-axis) that can be processed by the neural models in the given time (x-axis). The All-BERT baseline (dash-dotted black line) for documents up to 2,000 tokens has a large range in the time required to re-rank documents. The All-BERT setting of IDCM already is a strong baseline for query latency, as we compact passage representations, to skip padding-only passages. However, it still requires BERT to run on up to 40 passages. Our IDCM model configurations with two CK sizes (dotted and full line) show a much lower variance and overall faster query response. Naturally, the more passages we select for cascading to the BERT module, the longer IDCM takes to compute. Now, we turn to a more focused query latency view of the different IDCM configurations with boxplots of the query latency distributions in Figure 5. Again, we report timings for a full CK (red; right side) and smaller CKS variant (blue; left side). The first entry with a selection of 0 shows only the computational cost of the CK module for 2,000 tokens without running BERT. When we compare the latency differences between 0 and 1 selection we can see that computing BERT for a single passage per document adds 25 ms to the median latency. This shows the efficiency of CK(S): roughly 40 passages processed with CK have an equal runtime compared to a single passage that BERT requires.
RQ4: Passage Selection Analysis
As presented in Section 3.2 we aim to train the CK module to imitate the passage scoring of the BERT module. To understand how well the CK model is able to do just that we evaluate the intra-document passage recall of the CK scores in comparison to the top-3 BERT passages selected to form the final document score and study:
RQ4 How often does the passage selection under IDCM recall the same passages as those scored highly by the BERT model? In Figure 6 we plot the CK recall for different numbers of selected CK-passages and split the reporting by document relevance grades on the TREC-DL'19 query set. We find an interesting pattern among the different relevance classes: CK is able to provide more accurate passage selections the more relevant a document is.
A recall of 1 would guarantee the same document score; however, as we showed, a perfect recall is not necessary for the IDCM model to provide effectiveness results very close to the original ranking.
Finally, we inspect the behavior of both BERT and CK modules with respect to the positions of the highest-scoring passages. In Figure 7 we investigate the top selections of both CK and BERT along the positions of the passages. In gray in the background are the available passages per position of the top-100 documents of the TREC-DL'19 query set; their number decreases with position, as not all documents are long enough to fill the maximum used length of 2,000 tokens. The All-BERT setting needs to compute BERT scores for all available passages, whereas in IDCM the selected passages in blue are the only passages scored by BERT. The top-3 passages from BERT are furthermore visualized by the shaded dark blue bars.
We see a strong focus on the first and last possible passages selected by the modules. A focus on the first passages is to be expected, as the title and introduction of web pages are situated there; the focus on the last passages is more curious. Because of our symmetric padding, the first and last passages have an empty (padded) overlap. This is the only indicator of passage position in a document available to the BERT model: the absence of 7 more tokens. It seems this is the signal the BERT model picks up on, and it subsequently trains the CK model to follow along. We leave deeper investigations of the behavior of BERT in different document parts for future work and conclude this section with the finding that IDCM learns to use all available passages, including the end of a document input.
CONCLUSION
Applying a BERT model many times per document, once for each passage, yields state-of-the-art ranking performance. The trade-off is that inference cost is high due to the size of the model, potentially affecting both the mean and variance of query processing latency. Typically in neural retrieval systems, one is forced to make a clear decision between reaching efficiency or effectiveness goals. In this work we presented IDCM, an intra-document cascaded ranking model that provides state-of-the-art effectiveness, while at the same time improving the median query latency by more than four times compared to a non-cascaded full BERT ranking model. Our two-module combination allows us to efficiently filter passages and only provide the most promising candidates to a slow but effective BERT ranker. We show how a key step in achieving the same effectiveness as a full BERT model is a knowledge-distilled training using the BERT passage scores to train the more efficient selection module. Our knowledge distillation provides self-supervised teacher signals for all passages, without the need for manual annotation. Our novel distillation technique not only improves the query latency of our model in a deployment scenario, it also provides efficiency by replacing manual annotation labor and cost with a step-wise trained teacher model. In the future we plan to extend the concept of intra-document cascading for document ranking to a dynamic number of passages selected and more cascading stages.
Figure 1: The IDCM architecture that consists of two cascading stages: ➊ To allow for long-document input, the first stage is a lightweight and fast selection model. ➋ Only the top k passages from the selection model are scored with a costly BERT-based scoring module to form the final document score.
Figure 2: The staged training workflow of IDCM: ➊ Training the ETM (BERT) passage module. ➋ Training the full model on a document collection without selection (all available passages of a document are scored with ETM). ➌ The ESM (CK) selection module is now trained via knowledge distillation, using the ETM (BERT) scores as labels.
Figure 3: Throughput and ranking effectiveness trade-off results. The vertical line shows the achievable effectiveness for all IDCM models based on All-BERT (2,000). The number next to the marker indicates the selection count. (Panel (b): Throughput and MRR@10 results on MS MARCO Dev.)
Figure 4: Fraction of queries that can be answered in the given time-frame for re-ranking 100 documents with up to 2,000 tokens on MSMARCO. Select 0 means only CK timing without BERT cascading.
Figure 5: Query latency for different IDCM cascade selection configurations for re-ranking 100 documents with up to 2,000 tokens on MSMARCO. Selection 0 only measures the time for the CK module, without routing passages to BERT.
Figure 6: Intra-document passage selection recall of the CK selection module in comparison to the top-3 BERT selection, split by document relevance grade on TREC-DL'19. The higher the label, the more relevant the document.
Figure 7: Passages selected by the CK top-4 module and subsequently scored by BERT for the top-3 scoring.
Table 1: Effectiveness results for TREC-DL'19 and MSMARCO DEV query sets. Our aim is to hold the effectiveness of an All-BERT configuration with a cascaded efficiency improvement. * is a stat. sig. difference to All-BERT (2K); paired t-test (p < 0.05).

   Model                  Cascade         # BERT  Doc.    TREC DL 2019                 MSMARCO DEV
                                          Scored  Length  nDCG@10  MRR@10  MAP@100     nDCG@10  MRR@10  MAP@100
   Baselines
 1 BM25                   -               -       -       0.488    0.661   0.292       0.311    0.252   0.265
 2 TKL [16]               -               -       2K      0.634    0.795   0.332       0.403    0.338   0.345
 3 PARADE Max-Pool [20]   -               All     2K      0.666    0.807   0.343       0.445    0.378   0.385
 4 PARADE TF [20]         -               All     2K      0.680    0.820   0.375       0.446    0.382   0.387
   Ours
 5 IDCM                   -               All     512     0.667    0.815   0.348       *0.440   *0.374  *0.383
 6 IDCM                   -               All     2K      0.688    0.867   0.364       0.450    0.384   0.390
 7 IDCM                   Static First    3       2K      0.638    0.785   0.309       *0.394   *0.330  *0.338
 8 IDCM                   Static Top-TF   3       2K      *0.624   0.778   0.324       *0.393   *0.329  *0.337
 9 IDCM                   CK (nDCG2)      3       2K      0.671    0.876   0.361       0.438    0.375   0.380
10 IDCM                   CK (nDCG2)      4       2K      0.688    0.916   0.365       0.446    0.380   0.387
Table 2: Impact of knowledge distillation, with measures using a cutoff at 10. * indicates a stat. sig. difference; paired t-test (p < 0.05).

Scoring Model  Training     TREC-DL'19        MSMARCO
                            nDCG     MRR      nDCG     MRR
CK             Standalone   0.551    0.677    0.353    0.287
CK             BERT-KD      *0.595   0.749    *0.363   *0.299
Table 3: Knowledge distillation loss study of the IDCM cascade training, with measures using a cutoff at 10. Stat. sig. is indicated with the superscript to the underlined character; paired t-test (p < 0.05).

# BERT Scored  KD-Loss         TREC-DL'19       MSMARCO
                               nDCG    MRR      nDCG    MRR
3              MSE             0.664   0.816    0.426   0.362
3              Cross Entropy   0.667   0.851    0.437   0.373
3              nDCG2           0.671   0.876    0.438   0.375
4              MSE             0.683   0.870    0.437   0.374
4              Cross Entropy   0.675   0.889    0.446   0.381
4              nDCG2           0.688   0.916    0.446   0.380
ACKNOWLEDGMENTS
This work was supported in part by the Center for Intelligent Information Retrieval. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor.
REFERENCES

[1] Joshua Ainslie, Santiago Ontanon, Chris Alberti, Vaclav Cvicek, Zachary Fisher, Philip Pham, Anirudh Ravula, Sumit Sanghai, Qifan Wang, and Li Yang. 2020. ETC: Encoding Long and Structured Inputs in Transformers. In Proc. of EMNLP.
[2] Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, and Tri Nguyen. 2016. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset. In Proc. of NIPS.
[3] Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150.
[4] Michael Bendersky and Oren Kurland. 2008. Utilizing passage-based language models for document retrieval. In Proc. of ECIR.
[5] Christopher J.C. Burges. 2010. From RankNet to LambdaRank to LambdaMART: An overview. MSR Tech Report.
[6] James P. Callan. 1994. Passage-level evidence in document retrieval. In Proc. of SIGIR.
[7] Ruey-Cheng Chen, Luke Gallagher, Roi Blanco, and J. Shane Culpepper. 2017. Efficient cost-aware cascade ranking in multi-stage retrieval. In Proc. of SIGIR.
[8] Nick Craswell, Bhaskar Mitra, Emine Yilmaz, and Daniel Campos. 2019. Overview of the TREC 2019 deep learning track. In TREC.
[9] Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, and Ellen M. Voorhees. 2019. Overview of the TREC 2019 deep learning track. In TREC.
[10] Zhuyun Dai and Jamie Callan. 2019. Deeper text understanding for IR with contextual neural language modeling. In Proc. of SIGIR.
[11] Zhuyun Dai, Chenyan Xiong, Jamie Callan, and Zhiyuan Liu. 2018. Convolutional Neural Networks for Soft-Matching N-Grams in Ad-hoc Search. In Proc. of WSDM.
[12] Mostafa Dehghani, Arash Mehrjou, Stephan Gouws, Jaap Kamps, and Bernhard Schölkopf. 2018. Fidelity-weighted learning. In Proc. of ICLR.
[13] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proc. of NAACL.
[14] Luke Gallagher, Ruey-Cheng Chen, Roi Blanco, and J. Shane Culpepper. 2019. Joint optimization of cascade ranking models. In Proc. of WSDM.
[15] Sebastian Hofstätter, Sophia Althammer, Michael Schröder, Mete Sertkan, and Allan Hanbury. 2020. Improving Efficient Neural Ranking Models with Cross-Architecture Knowledge Distillation. arXiv:cs.IR/2010.02666.
[16] Sebastian Hofstätter, Hamed Zamani, Bhaskar Mitra, Nick Craswell, and Allan Hanbury. 2020. Local Self-Attention over Long Text for Efficient Document Retrieval. In Proc. of SIGIR.
[17] Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2019. TinyBERT: Distilling BERT for natural language understanding. arXiv preprint arXiv:1909.10351.
[18] Marcin Kaszkiel and Justin Zobel. 1997. Passage retrieval revisited. In ACM SIGIR Forum, Vol. 31, pages 178-185.
[19] Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
[20] Canjia Li, Andrew Yates, Sean MacAvaney, Ben He, and Yingfei Sun. 2020. PARADE: Passage Representation Aggregation for Document Reranking. arXiv preprint arXiv:2008.09093.
[21] Jimmy Lin, Rodrigo Nogueira, and Andrew Yates. 2020. Pretrained Transformers for Text Ranking: BERT and Beyond. arXiv preprint arXiv:2010.06467.
[22] Xiaoyong Liu and W. Bruce Croft. 2002. Passage retrieval based on language models. In Proc. of CIKM.
[23] Irina Matveeva, Chris Burges, Timo Burkard, Andy Laucius, and Leon Wong. 2006. High accuracy retrieval with multiple nested ranker. In Proc. of SIGIR.
[24] Bhaskar Mitra, Sebastian Hofstätter, Hamed Zamani, and Nick Craswell. 2020. Conformer-Kernel with Query Term Independence for Document Retrieval. arXiv preprint arXiv:2007.10434.
[25] Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage Re-ranking with BERT. arXiv preprint arXiv:1901.04085.
[26] Rodrigo Nogueira, Wei Yang, Kyunghyun Cho, and Jimmy Lin. 2019. Multi-stage document ranking with BERT. arXiv preprint arXiv:1910.14424.
[27] Rodrigo Nogueira, Wei Yang, Jimmy Lin, and Kyunghyun Cho. 2019. Document Expansion by Query Prediction. arXiv preprint arXiv:1904.08375.
[28] Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. In Proc. of NIPS-W.
[29] Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: BM25 and beyond. Now Publishers Inc.
[30] Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
[31] Eilon Sheetrit, Anna Shtok, and Oren Kurland. 2020. A passage-based approach to learning to rank documents. Information Retrieval Journal, pages 1-28.
[32] Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, and Jimmy Lin. 2019. Distilling task-specific knowledge from BERT into simple neural networks. arXiv preprint arXiv:1903.12136.
[33] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, et al. 2017. Attention is all you need. In Proc. of NIPS.
[34] Paul Viola and Michael Jones. 2001. Rapid object detection using a boosted cascade of simple features. In Proc. of CVPR.
[35] Lidan Wang, Jimmy Lin, and Donald Metzler. 2011. A cascade ranking model for efficient ranked retrieval. In Proc. of SIGIR.
[36] Xuanhui Wang, Cheng Li, Nadav Golbandi, Mike Bendersky, and Marc Najork. 2018. The LambdaLoss Framework for Ranking Metric Optimization. In Proc. of CIKM.
[37] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. HuggingFace's Transformers: State-of-the-art Natural Language Processing. arXiv preprint (2019).
[38] Chenyan Xiong, Zhuyun Dai, Jamie Callan, Zhiyuan Liu, and Russell Power. 2017. End-to-End Neural Ad-hoc Ranking with Kernel Pooling. In Proc. of SIGIR.
[39] Ming Yan, Chenliang Li, Chen Wu, Bin Bi, Wei Wang, Jiangnan Xia, and Luo Si. 2019. IDST at TREC 2019 Deep Learning Track: Deep Cascade Ranking with Generation-based Document Expansion and Pre-trained Language Modeling. In TREC.
[40] Peilin Yang, Hui Fang, and Jimmy Lin. 2017. Anserini: Enabling the use of Lucene for information retrieval research. In Proc. of SIGIR.
[41] Zeynep Akkalyoncu Yilmaz, Wei Yang, Haotian Zhang, and Jimmy Lin. 2019. Cross-domain modeling of sentence-level evidence for document retrieval. In Proc. of EMNLP-IJCNLP.
[42] Hamed Zamani and Nick Craswell. 2020. Macaw: An Extensible Conversational Information Seeking Platform. In Proc. of SIGIR.
| [
"https://github.com/sebastian-hofstaetter/intra-document-cascade"
] |
[
"ENCONTER: Entity Constrained Progressive Sequence Generation via Insertion-based Transformer",
"ENCONTER: Entity Constrained Progressive Sequence Generation via Insertion-based Transformer"
] | [
"Lee-Hsun Hsieh \nSingapore Management University\nEe-Peng LimSingapore\n",
"Yang-Yin Lee yylee@smu.edu.sg \nSingapore Management University\nEe-Peng LimSingapore\n"
] | [
"Singapore Management University\nEe-Peng LimSingapore",
"Singapore Management University\nEe-Peng LimSingapore"
] | [
"Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics"
] | Pretrained using large amount of data, autoregressive language models are able to generate high quality sequences. However, these models do not perform well under hard lexical constraints as they lack fine control of content generation process. Progressive insertion-based transformers can overcome the above limitation and efficiently generate a sequence in parallel given some input tokens as constraint. These transformers however may fail to support hard lexical constraints as their generation process is more likely to terminate prematurely. The paper analyses such early termination problems and proposes the ENtity-CONstrained insertion TransformER (ENCONTER), a new insertion transformer that addresses the above pitfall without compromising much generation efficiency. We introduce a new training strategy that considers predefined hard lexical constraints (e.g., entities to be included in the generated sequence). Our experiments show that ENCONTER outperforms other baseline models in several performance metrics rendering it more suitable in practical applications. 1 | 10.18653/v1/2021.eacl-main.313 | [
"https://www.aclweb.org/anthology/2021.eacl-main.313.pdf"
] | 232,257,961 | 2103.09548 | 29d209880e47c0d630bcd094b8e1e3086f9d8030 |
ENCONTER: Entity Constrained Progressive Sequence Generation via Insertion-based Transformer
April 19 -23, 2021
Lee-Hsun Hsieh
Singapore Management University, Singapore
Yang-Yin Lee yylee@smu.edu.sg
Singapore Management University, Singapore
Ee-Peng Lim
Singapore Management University, Singapore
ENCONTER: Entity Constrained Progressive Sequence Generation via Insertion-based Transformer
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics
April 19-23, 2021, page 3590
Pretrained using large amount of data, autoregressive language models are able to generate high quality sequences. However, these models do not perform well under hard lexical constraints as they lack fine control of content generation process. Progressive insertion-based transformers can overcome the above limitation and efficiently generate a sequence in parallel given some input tokens as constraint. These transformers however may fail to support hard lexical constraints as their generation process is more likely to terminate prematurely. The paper analyses such early termination problems and proposes the ENtity-CONstrained insertion TransformER (ENCONTER), a new insertion transformer that addresses the above pitfall without compromising much generation efficiency. We introduce a new training strategy that considers predefined hard lexical constraints (e.g., entities to be included in the generated sequence). Our experiments show that ENCONTER outperforms other baseline models in several performance metrics rendering it more suitable in practical applications. 1
Introduction
The field of Natural Language Generation (NLG) (Gatt and Krahmer, 2018) has seen significant improvements in recent years across many applications such as neural machine translation (Bahdanau et al., 2015), text summarization (Chopra et al., 2016), poem generation (Zugarini et al., 2019) and recipe generation (H. Lee et al., 2020). Constrained text generation (CTG) is one of the challenging problems in NLG that is important to many real world applications but has not been well addressed. CTG imposes input constraints, which may be in the form of objects expected to exist in the generated text or rules over objects in the generated text (Hokamp and Liu, 2017). The objects here can be entities, phrases, predefined nouns, verbs, or sentence fragments. The constraints can be categorized into two types: (1) hard constraints, which require the mandatory inclusion of certain objects and complete compliance with given rules (Post and Vilar, 2018; Miao et al., 2019; Welleck et al., 2019; Zhang et al., 2020); and (2) soft constraints, which allow some constraint objects or rules to be not strictly enforced in the generated text (Qin et al., 2019; Tang et al., 2019). As autoregressive models generate tokens from left to right, they cannot easily support constraints involving multiple input objects; hard-constrained text generation therefore often requires non-autoregressive models.
Recently, Zhang et al. (2020) proposed a non-autoregressive hard-constrained text generation model (POINTER) that generates a text sequence in a progressive manner using an insertion transformer (Stern et al., 2019). To train an insertion transformer to generate a missing token between every two tokens in an input sequence, the training data is prepared by masking "less important" tokens in the original text sequence in an alternating manner. The process is then repeated using the masked input sequence as the new original sequence, further masking alternate tokens in it. The process ends when the masked sequence meets some length criteria.
While POINTER shows promising results, it does not consider hard constraints which involve entities that must be included in the generated sequence. Such entity constraint requirements are unfortunately prevalent in many applications. For example, we may want to generate a job description with some given skills, or a food recipe with some given ingredients.
A naive approach to the problem is to apply constraints on POINTER's masking strategy, forcing it to keep entity tokens. We call this modified model POINTER-E. Although this allows entity information to enter POINTER-E, another problem arises. POINTER-E suffers from a cold start problem, which refers to the inability to generate meaningful tokens at the early stages of inference, forcing the generation to end prematurely. This issue can be attributed to POINTER-E's top-down masking strategy for training the insertion transformer and to the tokens of input entities not being evenly spread out across the sequence.
To solve the cold start generation problem, we propose ENCONTER, which incorporates a bottom-up masking strategy. ENCONTER supports hard entity constraints and encourages more meaningful tokens to be generated in the early stages of generation, thus reducing cold start. On top of that, we further introduce a balanced binary tree scheme to reduce the number of stages in generation and to improve the efficiency of generation.
Entity Constrained Sequence Generation
In this section, we first describe the state-of-the-art POINTER model, its preprocessing of training data and its inference process. We highlight the pitfalls of the entity-constrained variant of POINTER, POINTER-E. We then present our proposed entity-constrained insertion transformer called ENCONTER.
POINTER
POINTER adopts a progressive masking approach to train an insertion transformer. Let X = {x 1 , x 2 , . . . , x T } denote a a sequence where x t ∈ V , where T is the sequence length and V is a finite vocabulary set. Suppose X is a training sequence, POINTER preprocesses it to obtain the training pairs S = (X k , Y k ) k ∈ {K, . . . , 0} using a progressive masking strategy. As shown in Figure 1a, in each stage X k represents the input sequence for stage k, and Y k represents the sequence of masked tokens to be inferred. X K is identical to the final training sequence X K = X, and there should not be any additional tokens to infer. X 0 on the other hand represents the initial lexical constraints. In stage k, Y k are the tokens to be predicted between adjacent tokens of X k . A special no-insertion token [N OI] is added to the vocabulary V and used where B,D, and F are the tokens forming the entity constraints. The stopping criteria for POINTER is set to n = 3.
in Y k to indicate that no token is to be generated between adjacent tokens. Y K is thus a sequence of all [N OI]'s indicating the end of generation. Word-Piece (Wu et al., 2016) tokenization is applied in POINTER, and tokens split from the same word share the same score.
Token importance scoring POINTER assigns each token x t ∈ X an importance score α t :
α t = α T F −IDF t + α P OS t + α Y AKE t , (1) where α T F −IDF t , α P OS t ,
and α Y AKE t denote term frequency-inverse document frequency (TF-IDF), POS tag scores and YAKE (Campos et al., 2020) keyword scores, respectively. These scores are normalized to [0,1]. α P OS t is defined such that the scores of nouns and verbs are higher than those of other POS tags. The token importance scores are used to derive the masking pattern Y k−1 of stage k − 1 from X k .
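A minimal sketch of Eq. (1) in Python follows, with the three component scorers left as inputs (in practice they could be backed by, e.g., a TF-IDF model, a POS tagger, and the YAKE keyword extractor); only the min-max normalization and summation are shown, and the per-token score dictionaries are hypothetical.

from typing import Dict, List

def min_max_normalize(scores: Dict[str, float]) -> Dict[str, float]:
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {tok: (s - lo) / span for tok, s in scores.items()}

def importance_scores(tokens: List[str],
                      tfidf: Dict[str, float],
                      pos: Dict[str, float],
                      yake_kw: Dict[str, float]) -> List[float]:
    # alpha_t = normalized TF-IDF + POS + YAKE score of each token (Eq. 1).
    tfidf, pos, yake_kw = (min_max_normalize(s) for s in (tfidf, pos, yake_kw))
    return [tfidf.get(t, 0.0) + pos.get(t, 0.0) + yake_kw.get(t, 0.0)
            for t in tokens]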
POINTER adopts four criteria to derive $Y^{k-1}$ from $X^k$: (1) $Y^{k-1}$ can only include non-adjacent tokens in $X^k$; (2) the number of tokens to be masked is maximized in each stage to make the model more efficient; (3) less important tokens are masked before more important ones; and (4) a stopping criterion $n$ is defined, and the algorithm stops when $|X^k| = n$. Kadane's algorithm (Gries, 1982) is used in POINTER to fulfill these criteria. Specifically, the algorithm selects as many unimportant tokens as possible to be masked while never masking two adjacent tokens. Since $X^0$ is automatically determined when $|X^k| = n$, it does not necessarily match the way the initial input sequence is provided by real-world applications or users, including the entity constraints.
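The selection of the masking pattern can be sketched as a dynamic program that, among all non-adjacent position subsets, maximizes the number of masked tokens and breaks ties toward lower total importance. This is an equivalent reformulation for illustration, not the paper's Kadane-style implementation.

from typing import List

def select_mask_positions(alpha: List[float]) -> List[int]:
    # Solutions are (masked_count, -total_importance, positions) tuples,
    # compared lexicographically: more masks first, then lower importance.
    incl = (0, 0.0, [])  # best solution that selects the current index
    excl = (0, 0.0, [])  # best solution that skips the current index
    for i, a in enumerate(alpha):
        new_incl = (excl[0] + 1, excl[1] - a, excl[2] + [i])
        incl, excl = new_incl, max(incl, excl)
    return max(incl, excl)[2]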
Inference. Given $X^0$ as the input sequence, POINTER infers $\hat{Y}^0$ and interleaves the two sequences to get $\hat{X}^1 = \{\hat{x}^0_1, \hat{y}^0_1, \hat{x}^0_2, \hat{y}^0_2, \dots, \hat{x}^0_{|X^0|}, \hat{y}^0_{|X^0|}\}$. If $\hat{y}^0_t$ happens to be $[NOI]$, it is deleted, leaving only non-$[NOI]$ tokens in $\hat{X}^1$. The process repeats until all the generated tokens in $\hat{Y}^k$ are $[NOI]$s.
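The iterative insertion inference can be sketched as follows, with predict_insertions as a hypothetical model call that returns one token (or [NOI]) for the gap after each current token, mirroring the interleaving of $\hat{X}^1$ above.

from typing import Callable, List

NOI = "[NOI]"

def insertion_inference(tokens: List[str],
                        predict_insertions: Callable[[List[str]], List[str]],
                        max_rounds: int = 32) -> List[str]:
    for _ in range(max_rounds):
        gaps = predict_insertions(tokens)  # one prediction per gap
        if all(g == NOI for g in gaps):
            break  # every slot predicted [NOI]: generation is complete
        merged = []
        for tok, gap in zip(tokens, gaps):
            merged.append(tok)
            if gap != NOI:  # drop [NOI] predictions, keep real insertions
                merged.append(gap)
        tokens = merged
    return tokens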
As shown in Figure 1a, entities may not be preserved during the preprocessing steps, and the lexical constraint $X^0$ is not guaranteed to cover the entity constraint $X^e$ even when entity tokens are assigned high importance scores. The trained POINTER therefore may not be able to generate a sequence successfully when given entity constraints during inference. We therefore propose some changes to POINTER to make it entity-aware.
Entity Aware POINTER (POINTER-E)
The entity-aware POINTER model, POINTER-E, adopts a different preprocessing approach. Let $X^e \subset X$ be an ordered sequence of entity tokens (e.g., the person names in a news document). As $X^e$ is likely to be used as the initial generation input (i.e., $X^0 = X^e$), POINTER-E's preprocessing does not mask these entity tokens over the different preprocessing stages. This way, the model is trained to focus on generating tokens around the entities. Such tokens form the context around the entities and the context relating one entity to others. We achieve this goal by ignoring the importance scores applied on entity tokens. That is, we only compute $\alpha_t$ for $x_t \notin X^e$. We then apply POINTER's masking strategy on the sub-sequence between every two entity tokens in $X$. Suppose $(x_i, x_j) \subset X$ is a subsequence spanned by two entity tokens $\{x^e_l = x_i, x^e_{l+1} = x_j\} \in X^e$ where $l \in \{0, \dots, |X^e| - 1\}$. Masking is applied on this subsequence iteratively until only $\{x^e_l, x^e_{l+1}\}$ are left:

$$S = \{(X^K = X, Y^K), \dots, (X^0 = X^e, Y^0)\}. \quad (2)$$
As shown in Figure 1b, POINTER-E always picks the optimal masking patterns while preserving the entities.
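One top-down POINTER-E masking step can be sketched by applying the non-adjacent selection from the earlier snippet independently to each span between consecutive entity positions, so entity tokens are never masked; treating the boundary spans before the first and after the last entity the same way is an assumption of this sketch.

from typing import List, Set

def pointer_e_mask_step(alpha: List[float],
                        entity_positions: Set[int]) -> List[int]:
    # One top-down masking step; entity positions are never candidates.
    boundaries = sorted(entity_positions)
    masked: List[int] = []
    for left, right in zip([-1] + boundaries, boundaries + [len(alpha)]):
        inner = list(range(left + 1, right))  # strictly between entities
        if inner:
            picks = select_mask_positions([alpha[i] for i in inner])
            masked.extend(inner[i] for i in picks)
    return masked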
Cold Start Problem. While POINTER-E is aware of entities, entities in $X^e$ may appear very close to or very far from one another in the full sequence $X$, i.e., the gap between entities in $X$ can vary a lot. Consider two sub-sequences $(x_i = x^e_l, x_j = x^e_{l+1})$ and $(x_u = x^e_w, x_v = x^e_{w+1}) \subset X$ where $w, l \in (0, T^e - 1)$ and $w \neq l$. Suppose $j - i \gg v - u$. The tokens between $(x_u, x_v)$ will then be masked out long before the tokens in $(x_i, x_j)$ during preprocessing and training. This results in POINTER-E being trained to generate a lot of $[NOI]$s in $Y^k$ for small $k$'s. Figure 1b depicts this cold start problem, as entity tokens B, D and F are near one another in $X$. As tokens between them are masked in early stages, the masked sequences in stages 0 and 1, $Y^0$ and $Y^1$, contain many $[NOI]$ tokens. POINTER-E trained with such data will therefore lack the ability to generate meaningful tokens in between these entity tokens. In the worst case, POINTER-E simply generates all $[NOI]$ tokens and ends the generation prematurely, which is known as the cold start problem.
To better show the problem, we define:

$$NOI\ ratio = \frac{\#[NOI]\ \text{tokens in}\ Y^k}{\#\text{tokens in}\ Y^k}. \quad (3)$$
A clear problem with a high NOI ratio is that $Y^k$ is very similar to $Y^{k+1}$. When the NOI ratio equals 1, the generation ends. In cases where the NOI ratio is very high for masked sequences in early stages, say $Y^0$, the trained POINTER-E will more likely infer all $[NOI]$s for $\hat{Y}^0$ from $X^0$ and end the generation process. To address this, we need to re-examine the top-down masking strategy used in POINTER and POINTER-E.
ENCONTER
In this section, we propose ENCONTER, which adopts a bottom-up masking strategy to overcome the cold start problem. There are two variants: GREEDY ENCONTER and BBT-ENCONTER.

GREEDY ENCONTER. Different from POINTER-E, we now construct training pairs $S$ from $X$ by setting $X^0$ to be $X^e$:

$$S = \{(X^0 = X^e, Y^0), (X^1, Y^1), \dots, (X^K = X, Y^K)\}, \quad (4)$$

where $Y^k$ represents the sequence of masked tokens to be inserted into $X^k$ to form $X^{k+1}$. Similar to POINTER, $Y^K$ contains $[NOI]$s only. For every two adjacent tokens $\{x^k_t, x^k_{t+1}\} \in X^k$ where $t \in \{0, \dots, |X^k| - 1\}$, we insert a mask token. Let $\{x^k_t = x_i, x^k_{t+1} = x_j\}$ and let $(x_i, x_j)$ be the corresponding span in $X$. If $i + 1 = j$, the mask token is $[NOI]$. Otherwise, we select the token $x_t$ with the maximum importance score $\alpha_t$ within $(x_i, x_j)$ as the mask token. The sequence $Y^k$ is formed after we go through all the $t$'s. By inserting $Y^k$ into $X^k$, we obtain the next sequence $X^{k+1}$. The iterative process stops when all the tokens to be inserted are $[NOI]$s. This method, GREEDY ENCONTER, greedily selects the token with the maximum importance score in the span to be generated, in a bottom-up insertion (or unmasking) process. By forcing more non-$[NOI]$ tokens to be included in $Y^0$ and the $Y^k$ of small $k$'s, GREEDY ENCONTER achieves a lower NOI ratio in the early stages of inference. Experimentally, we find that the cold start problem is eliminated.

Balanced binary tree ENCONTER (BBT-ENCONTER). To further improve the efficiency of GREEDY ENCONTER, we incorporate a balanced binary tree reward into ENCONTER to bias the masking toward tokens near the center of the unobserved subsequence. The BBT reward is added to the importance score function as follows. Suppose $x_i$ and $x_j$ are two adjacent tokens in $X^k$, and $(x_i, x_j)$ represents the corresponding subsequence in $X$. We define the distance $d_p$ for token $x_p \in (x_i, x_j)$ as:
$$d_p = \min(p - i,\ j - p). \quad (5)$$
We use a softmax function to compute the reward weight based on $d_p$:

$$w_p = \frac{\exp(d_p/\tau)}{\sum_{k=i}^{j} \exp(d_k/\tau)}. \quad (6)$$
The weights in the span are then normalized to [0, 1]. Then the importance score is defined as:
$$\alpha_p = w_p \cdot (\alpha_p^{TF\text{-}IDF} + \alpha_p^{POS} + \alpha_p^{YAKE}). \quad (7)$$
The construction of $S$ is almost the same as for GREEDY ENCONTER. The only difference is the new importance score function defined by Eq. (7). This proposed model, known as BBT-ENCONTER, will predict the central and semantically important token in $X$ between two adjacent tokens of $X^k$.
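A condensed sketch of the bottom-up construction of $S$ for both variants is given below. It assumes the token positions of the entities are known, treats the sequence boundaries as the SOS/EOS anchors of the outermost gaps, and omits WordPiece handling and the extra [0, 1] weight normalization; bbt=False gives GREEDY ENCONTER and bbt=True applies the centre-biased reweighting of Eqs. (5)-(7).

import math
from typing import List, Tuple

NOI = "[NOI]"

def pick_token(span: List[Tuple[int, float]], tau: float, bbt: bool) -> int:
    # span: (position-in-X, importance) pairs of the unobserved tokens.
    if not bbt:
        return max(span, key=lambda pa: pa[1])[0]  # GREEDY ENCONTER
    i, j = span[0][0] - 1, span[-1][0] + 1         # observed boundaries
    d = [min(p - i, j - p) for p, _ in span]       # Eq. (5)
    e = [math.exp(dp / tau) for dp in d]
    total = sum(e)
    w = [ei / total for ei in e]                   # Eq. (6)
    return max((wp * a, p) for wp, (p, a) in zip(w, span))[1]  # Eq. (7)

def build_training_pairs(x: List[str], alpha: List[float],
                         entity_pos: List[int], tau: float = 1.0,
                         bbt: bool = False):
    observed = sorted(entity_pos)  # X^0 = X^e
    pairs = []
    while True:
        y, inserts = [], []
        for left, right in zip([-1] + observed, observed + [len(x)]):
            if right - left <= 1:
                y.append(NOI)      # nothing left to insert in this gap
                continue
            p = pick_token([(q, alpha[q]) for q in range(left + 1, right)],
                           tau, bbt)
            y.append(x[p])
            inserts.append(p)
        pairs.append(([x[p] for p in observed], y))  # (X^k, Y^k)
        if not inserts:            # Y^k is all [NOI]: this was (X^K, Y^K)
            break
        observed = sorted(observed + inserts)
    return pairs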
Models with Entity Span Aware Inference Option (ESAI)
So far, all the above-mentioned models assume that each entity consists of a single token. In real world use cases, an entity may contain more than one token. Without any control during the inference process, it is possible for other tokens to be generated in between tokens of the same entity. For example, in Table 5, "Group Consolidation" may be split into "handling Group s project / Consolidation". To avoid inserting any tokens in between tokens of a multi-token entity, we introduce the entity span aware inference option to the inference process of POINTER-E and ENCONTER, forcing the inference of $\hat{Y}^k$ to always generate $[NOI]$ in between the tokens of multi-token entities. After applying ESAI, multi-token entities remain unbroken during the generation process. A minimal sketch of this option follows.
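In this sketch, any gap that falls strictly inside a multi-token entity span is forced to [NOI] before the insertion step; the set of such gap indices is assumed to be tracked alongside the current token sequence.

from typing import List, Set

NOI = "[NOI]"

def apply_esai(gap_predictions: List[str],
               inside_entity_gaps: Set[int]) -> List[str]:
    # gap_predictions[t] is the token predicted for the gap after position t;
    # gaps inside a multi-token entity span are forced to [NOI].
    return [NOI if t in inside_entity_gaps else g
            for t, g in enumerate(gap_predictions)]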
Empirical Analysis of POINTER-E and ENCONTER
In this section, we conduct an analysis of the data preprocessing step in POINTER-E, GREEDY ENCONTER and BBT-ENCONTER. Our objective is to empirically evaluate the characteristics of the training data generated for these models. We have left out POINTER as it is inherently not entity-aware and POINTER-E is its entity-aware variant. We first present the two datasets used in this study.
Datasets
CoNLL-2003 (Tjong Kim Sang and De Meulder, 2003): We select the English version, which contains 1,393 news articles labeled with four named entity types: persons, locations, organizations and names of miscellaneous entities. The training and development sets are used to train the model. Documents having more than 512 tokens under the WordPiece tokenizer used in BERT (Devlin et al., 2019) are discarded to ensure that the whole document can fit into the models.
Jobs: This is a job post dataset collected from Singapore's Jobsbank 2 . The dataset consists of 7,474 job posts under the software developer occupation (SD) and 7,768 job posts under the sales and marketing manager occupation (SM). We extract the requirement section of these job posts as the text sequences to be generated. For each requirement text sequence (or document), we use a dictionary of skills to annotate the skill and job related entities in the sequence.
The detailed information of the datasets can be found in Table 1.
Analysis of NOI ratio and Stage Counts
We first analyse the ratio of $[NOI]$ tokens inserted or masked in every stage of the training data. Figure 3 shows the mean together with one standard deviation for POINTER-E, GREEDY ENCONTER and BBT-ENCONTER on each dataset. The x-axis is in log scale. Note that we add 1 to the stage number for showing the log scale (e.g., $10^0$ in the figure indicates the ratio of $[NOI]$ tokens in $Y^0$). From Figure 3, we find all datasets share a few similar characteristics, namely: (1) for POINTER-E, the $[NOI]$ ratio is quite high in the first few stages and drops when the stage is higher; a sudden increase of the ratio to 1 is due to the ending sequence consisting of all $[NOI]$s; (2) for ENCONTER, the $[NOI]$ ratio is low in the first few stages and slowly increases to 1. The result shows ENCONTER can learn to generate a balanced proportion of $[NOI]$ and non-$[NOI]$ tokens in the first few stages, and also learn not to generate too many non-$[NOI]$ tokens when approaching the end of the generation process. Figure 2 shows the number of stages each training document requires under the different models. The numbers are sorted according to the following priority: GREEDY ENCONTER, POINTER-E, then BBT-ENCONTER. Since BBT-ENCONTER incorporates the binary tree reward scheme, it is able to perform insertion in the middle stages more efficiently compared to GREEDY ENCONTER. This helps to lower the total number of stages required to derive training pairs.
Experiment
Models for Comparison
GPT-2 (Radford et al., 2019): GPT-2 can be used to conduct conditional generation as well (soft constraints). For a training sequence $X$ together with its entities $X^e$, we concatenate $X^e$ with $X$ to form a training sequence $\{X^e, X\}$. $X^e$ then serves as a control code sequence to guide GPT-2 in the generation of $X$. We fine-tune the GPT-2 small model pretrained by huggingface 3 with a learning rate of $10^{-5}$. Warmup and weight decay are applied. 10 epochs are used for fine-tuning.

POINTER-E, GREEDY ENCONTER, and BBT-ENCONTER: We use BERT (Devlin et al., 2019) as the underlying insertion transformer for all these models, similar to POINTER. Specifically, we use the bert-base-cased model pretrained by huggingface. BERT with a language model head is fine-tuned on all the training pairs to obtain the models. The learning rate is set to $10^{-5}$ with warmup and weight decay. 10 epochs are used for fine-tuning.

For POINTER-E, GREEDY ENCONTER, and BBT-ENCONTER, top-k (top-20) sampling is used to derive $\hat{Y}^k$. For GPT-2, we feed in $X^e$ and let GPT-2 generate the following tokens until reaching the end-of-generation token.
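For illustration, the GPT-2 baseline's inference can be sketched with the huggingface transformers API as follows; the entity prefix mirrors the $\{X^e, X\}$ fine-tuning format, while the checkpoint name, example entities, and sampling settings are placeholders rather than the exact experimental configuration.

from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")  # a fine-tuned checkpoint in practice

entities = ["Group Consolidation", "IFRS", "Fixed Assets"]  # example constraint
prompt = " ".join(entities)  # entities concatenated as the control prefix
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(input_ids, do_sample=True, top_k=20, max_length=256,
                        eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))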
Evaluation Metrics
We evaluate the models using a few criteria, namely: recall of entities, quality with respect to human-crafted text, diversity, fluency, cold start, and generation efficiency. We measure recall of entity constraints by the proportion of entity tokens found in the generated text. Even without ESAI, the recall metric allows us to compare the recall ability of the models. Besides recall, we also consider BLEU (Papineni et al., 2002), METEOR (MTR) (Lavie and Agarwal, 2007) and NIST (Doddington, 2002), which are common metrics for evaluating the quality of generated text against human-crafted text. We compute BLEU-2 (B-2) and BLEU-4 (B-4), which are n-gram precision-based metrics. For the BLEU-based evaluation metric NIST, we compute NIST-2 (N-2) and NIST-4 (N-4). To measure the diversity of generation, Entropy (Zhang et al., 2018) and Distinction (Li et al., 2016) are used. Entropy-4 (E-4) is defined over the frequency distribution of unique 4-gram terms. Dist-1 (D-1) and Dist-2 (D-2) are used to derive distinct n-grams in the generated text. We also utilize a pretrained language model to measure fluency. Perplexity (PPL) is calculated using pretrained GPT-2 (Radford et al., 2019) without fine-tuning; the lower the perplexity, the more fluent the generation (according to GPT-2). "AvgLen" is the averaged word count of the generated sequences. "failure" indicates the proportion of test sequences that fail to be generated at the first step (i.e., $\hat{Y}^0$ are all $[NOI]$s). Finally, "AvgSteps" shows the average number of steps for the model to complete the generation. Note that for GPT-2, AvgSteps is based on tokens, while AvgLen is based on words. A sketch of the entity recall computation follows.
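This small sketch computes entity recall as the fraction of constraint entity tokens that appear in the generated token sequence; token-level matching is an assumption of the sketch.

from typing import List

def entity_recall(generated_tokens: List[str],
                  entity_tokens: List[str]) -> float:
    produced = set(generated_tokens)
    hits = sum(1 for t in entity_tokens if t in produced)
    return hits / len(entity_tokens) if entity_tokens else 1.0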
Experiment Results
Tables 2, 3, and 4 show the results of the different models on the different datasets. On recall, GPT-2, due to its inability to enforce hard lexical constraints, yields the worst recall. The non-autoregressive models without ESAI still achieve high recall. Nevertheless, the high recall of POINTER-E is "contributed by" its relatively high failure ratio ("failure"), as recall is 1 even when the model fails to generate anything in the first stage. In other words, POINTER-E suffers from the cold start problem. GREEDY ENCONTER and BBT-ENCONTER, in contrast, enjoy both good recall and a zero failure ratio. With the ESAI option, all non-autoregressive models can achieve perfect recall without many additional generation steps. However, this option does not reduce the high failure ratio of POINTER-E. On generation quality compared with human-crafted text, GREEDY ENCONTER and BBT-ENCONTER outperform all other models on NIST, BLEU, and MTR. This suggests that the ENCONTER models learn the context of entities better compared to the other models. On generation diversity, POINTER-E again has the highest diversity, largely due to its high failure ratio. Finally, we discuss the efficiency of the models measured by AvgSteps. The autoregressive nature of GPT-2 makes it the least efficient model among all. POINTER-E's ability to optimize masking patterns makes it the most efficient model. With the balanced binary tree reward, BBT-ENCONTER is able to finish its generation in fewer iterations than GREEDY ENCONTER.

Case example

Table 5 shows a case example from the Jobs SM dataset. The entities of the given constraint are underlined. Invalid entities generated are colored in red, while the remaining ones are colored in blue. There are three types of invalid cases. First, the case of the entity is not the same as specified. Second, the entity is not recalled in the generation. Third, the entity has its tokens separated by some other token(s). In this example, POINTER-E and POINTER-E ESAI terminate their generations prematurely: they fail to perform generation at the very first stage.

Related Work

Recent years have witnessed significant success in using autoregressive (Dai and Le, 2015; Peters et al., 2018; Radford, 2018) generative models to conduct conditional generation on various tasks. CTRL (Keskar et al., 2019) uses control codes trained together with a large amount of data to control the content to be generated. RecipeGPT (H. Lee et al., 2020) takes ingredients as a series of controls and trains the generation of recipe text. PPLM (Dathathri et al., 2020) directly steers the pretrained language model with a bag-of-words model or a simple linear discriminator. The above models in their own ways gain a certain level of control over the content generation process. However, they do not provide a mechanism to directly enforce lexical constraints in the final generation. Non-monotonic sequence generation (Welleck et al., 2019) is designed to perform hard lexically constrained generation based on a binary tree structure. By leveraging level-order and in-order traversals of a binary tree, the model allows text to be generated non-monotonically. Although the results from non-monotonic generation models seem promising, they do not perform token generation in parallel, and the tree structure governing the generation process may produce many unused tokens during generation. The emergence of non-autoregressive language models provides another approach to support hard lexical constraints. The insertion transformer (Stern et al., 2019) uses a transformer architecture with a balanced binary tree loss to perform insertion-based generation. KERMIT (Chan et al., 2019) is proposed as a structure to unify insertion transformers. The Levenshtein transformer (Gu et al., 2019) further introduces deletion as an action to take during generation. Our ENCONTER models differ from these previous models, which are not designed to support lexical constraints, including entity constraints.
Conclusions
Constrained text generation is an important task for many real world applications. In this paper, we focus on hard entity constraints and the challenges associated with enforcing them in text generation. Our analysis of the state-of-the-art insertion transformers reveals two issues, namely, the cold start problem and inefficient generation. We therefore propose two insertion transformer models, GREEDY ENCONTER and BBT-ENCONTER, that use a bottom-up preprocessing strategy to prepare training data so as to eliminate the cold start problem caused by the top-down preprocessing strategy. BBT-ENCONTER further incorporates a balanced binary tree reward scheme to make the generation process more efficient. Through experiments on real world datasets, we show that the two models outperform the strong baselines, POINTER-E and GPT-2, in recall, quality and failure rate while not compromising much generation efficiency. For future research, it will be interesting to consider more diverse constraints (e.g., soft constraints, rules, etc.) and user interaction in the generation process to expand the scope of applications that can benefit from this research.
Figure 1: POINTER, POINTER-E, and ENCONTER with original sequence X = {A, B, C, D, E, F, G, H, I, J, K, L, M, N}, where B, D, and F are the tokens forming the entity constraints. The stopping criterion for POINTER is set to n = 3.
Figure 2: Number of stages of each document in SD, SM, and CoNLL-2003 (sorted).

Figure 3: Mean and standard deviation of the ratio of inserted/masked [NOI] tokens in each stage. All x-axes are capped to 15 stages. The original maximum numbers of stages of (a), (b), and (c) are 67, 51, and 100, respectively.
[Figure 1 diagrams: preprocessing and inference token flows through the insertion transformer. Panels: (a) POINTER masking. (b) POINTER-E masking. (c) ENCONTER insertion.]
Table 1 reveals that POINTER-E has a much higher NOI ratio than ENCONTER in all the datasets.

2 https://www.mycareersfuture.sg/

Table 1: Summary of the datasets. #training pairs refers to the total number of training pairs derived from each dataset.

                          CoNLL    SM       SD
#training docs            1,004    6,715    7,006
#testing docs             231      754      761
Avg length                220.7    99.4     121.1
Avg entities              24.6     24.4     27.7
#training pairs
  POINTER-E               6,557    43,913   41,343
  GREEDY ENCONTER         17,694   83,587   79,467
  BBT-ENCONTER            8,492    52,609   48,625
NOI ratio of Y^0
  POINTER-E               0.820    0.904    0.936
  GREEDY ENCONTER         0.546    0.463    0.519
  BBT-ENCONTER            0.546    0.463    0.519
Table 2: CoNLL-2003 results.

Table 3: SD results.

Method              Recall  N-2   N-4   B-2   B-4   MTR   E-4   D-1   D-2   PPL     AvgLen  failure  AvgSteps
Baselines
GPT-2               0.72    1.50  1.51  0.15  0.10  0.23  4.40  0.05  0.32  101.0   96.4    0.00     127.48
POINTER-E           0.98    1.32  1.32  0.17  0.10  0.26  3.46  0.09  0.48  2447.7  52.7    0.34     4.89
POINTER-E (+ESAI)   1.00    1.26  1.26  0.16  0.09  0.25  3.42  0.09  0.48  2535.7  53.3    0.38     5.07
ENCONTER
Greedy              0.99    2.48  2.49  0.31  0.20  0.36  4.21  0.07  0.40  153.9   82.2    0.00     9.75
Greedy (+ESAI)      1.00    2.44  2.45  0.31  0.20  0.36  4.19  0.07  0.40  147.4   80.2    0.00     9.62
BBT                 0.98    2.73  2.74  0.34  0.24  0.38  4.26  0.07  0.41  161.1   83.8    0.00     6.04
BBT (+ESAI)         1.00    2.69  2.70  0.34  0.23  0.38  4.25  0.07  0.41  157.5   83.6    0.00     6.05
Human               -       -     -     -     -     -     4.45  0.08  0.43  104.3   101.6   -        -

Table 4: SM results.
GREEDY ENCONTER: Degree / ACCA / CIMA / CA / CFA / F & B / Group Consolidation experience Degree in General Accounting Good track record in IFRSRSM, Fixed Assets, IFRS

GREEDY ENCONTER ESAI: * * Degree / ACCA / CIMA / CA / CFA / CA / CAPA / Singapore Group Consolidation / Management / General Accounting experience * Experience in IFRS or preferred * Experience in Fixed Assets Management * Extensive experience in IFRS

BBT-ENCONTER: Degree or ACCA / CIMA or CFA qualifications YEARSPAN'experience in handling Group s project / Consolidation / Good Inteconor / General Accounting Knowledge of IFRSPAN, Fixed Assets ( IFRS, etc )

BBT-ENCONTER ESAI: Minimum Degree / ACCA / CIMA, CFA or equivalent Minimum of YEARSPAN of experience in Marketing and Group Consolidation and General Accounting Knowledge of IFRS ) and Fixed Assets ( IFRS )

GPT-2: (missing: IFRS, CFA, Group Consolidation, Fixed Assets) Job Requirements : -Degree in General Accounting / ACCA Qualification -At least YEARSPAN of applicable working experience in similar capacity -Must be able to multi-task and handle different priorities simultaneously -General accounting knowledge will be advantageous Interested applicant, kindly send in your CPA or CIMA reference number to EMAIL EA Licence number : LICENSENUM Registration number : REGNUM

Human: Professional Qualifications : Bachelors Degree Qualified with a professional financial body ( ICAEW / ICPA / ACCA / CIMA / CFA etc ) Specialist Knowledge / Skills : Group Consolidation General Accounting IFRS Fixed Assets Industry Experience Experience : YEARSPAN post qualified with extensive IFRS experience and industry experience

Table 5: A generated example from the SM dataset. POINTER-E and POINTER-E ESAI are not shown since they failed to generate at the first step.
1 Our code is available at https://github.com/LARC-CMU-SMU/Enconter
3 https://huggingface.co/
Acknowledgments

This research is supported by the National Research Foundation, Singapore under its International Research Centres in Singapore Funding Initiative. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore.
References

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. of ICLR 2015.
Ricardo Campos, Vítor Mangaravite, Arian Pasquali, Alípio Jorge, Célia Nunes, and Adam Jatowt. 2020. YAKE! Keyword extraction from single documents using multiple local features. Information Sciences, 509:257-289.
William Chan, Nikita Kitaev, Kelvin Guu, Mitchell Stern, and Jakob Uszkoreit. 2019. KERMIT: Generative insertion-based modeling for sequences. arXiv preprint arXiv:1906.01604.
Sumit Chopra, Michael Auli, and Alexander M. Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In Proc. of NAACL-HLT 2016, pages 93-98.
Andrew M. Dai and Quoc V. Le. 2015. Semi-supervised sequence learning. In Advances in Neural Information Processing Systems, pages 3079-3087.
Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, and Rosanne Liu. 2020. Plug and play language models: A simple approach to controlled text generation. In Proc. of ICLR.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proc. of NAACL 2019, pages 4171-4186.
George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. In Proc. of the Second International Conference on Human Language Technology Research, pages 138-145.
Albert Gatt and Emiel Krahmer. 2018. Survey of the state of the art in natural language generation: Core tasks, applications and evaluation. Journal of Artificial Intelligence Research, 61:65-170.
David Gries. 1982. A note on a standard strategy for developing loop invariants and loops. Science of Computer Programming, 2(3):207-214.
Jiatao Gu, Changhan Wang, and Junbo Zhao. 2019. Levenshtein transformer. In Advances in Neural Information Processing Systems, pages 11181-11191.
Helena H. Lee, Ke Shu, Palakorn Achananuparp, Philips Kokoh Prasetyo, Yue Liu, Ee-Peng Lim, and Lav R. Varshney. 2020. RecipeGPT: Generative pre-training based cooking recipe generation and evaluation system. In Companion Proceedings of the Web Conference 2020, pages 181-184.
Chris Hokamp and Qun Liu. 2017. Lexically constrained decoding for sequence generation using grid beam search. In Proc. of ACL (Volume 1: Long Papers), pages 1535-1546.
J. Edward Hu, Huda Khayrallah, Ryan Culkin, Patrick Xia, Tongfei Chen, Matt Post, and Benjamin Van Durme. 2019. Improved lexically constrained decoding for translation and monolingual rewriting. In Proc. of NAACL 2019 (Volume 1: Long and Short Papers), pages 839-850.
Nitish Shirish Keskar, Bryan McCann, Lav Varshney, Caiming Xiong, and Richard Socher. 2019. CTRL: A conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858.
Alon Lavie and Abhaya Agarwal. 2007. METEOR: An automatic metric for MT evaluation with high levels of correlation with human judgments. In Proc. of the Second Workshop on Statistical Machine Translation, pages 228-231.
Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A diversity-promoting objective function for neural conversation models. In Proc. of NAACL 2016, pages 110-119.
Ning Miao, Hao Zhou, Lili Mou, Rui Yan, and Lei Li. 2019. CGMH: Constrained sentence generation by Metropolis-Hastings sampling. In Proc. of AAAI, volume 33, pages 6834-6842.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proc. of ACL, pages 311-318.
Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proc. of NAACL 2018 (Volume 1: Long Papers), pages 2227-2237.
Matt Post and David Vilar. 2018. Fast lexically constrained decoding with dynamic beam allocation for neural machine translation. In Proc. of NAACL 2018 (Volume 1: Long Papers), pages 1314-1324.
Lianhui Qin, Michel Galley, Chris Brockett, Xiaodong Liu, Xiang Gao, Bill Dolan, Yejin Choi, and Jianfeng Gao. 2019. Conversing by reading: Contentful neural conversation with on-demand machine reading. In Proc. of ACL, pages 5427-5436.
A. Radford. 2018. Improving language understanding by generative pre-training.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Mitchell Stern, William Chan, Jamie Kiros, and Jakob Uszkoreit. 2019. Insertion transformer: Flexible sequence generation via insertion operations. In Proc. of ICML, pages 5976-5985.
Jianheng Tang, Tiancheng Zhao, Chenyan Xiong, Xiaodan Liang, Eric Xing, and Zhiting Hu. 2019. Target-guided open-domain conversation. In Proc. of ACL, pages 5624-5634.
Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proc. of CoNLL at HLT-NAACL 2003, pages 142-147.
Sean Welleck, Kianté Brantley, Hal Daumé, and Kyunghyun Cho. 2019. Non-monotonic sequential text generation. In Proc. of ICML 2019, pages 11656-11676.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
Yizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, Xiujun Li, Chris Brockett, and Bill Dolan. 2018. Generating informative and diverse conversational responses via adversarial information maximization. In Advances in Neural Information Processing Systems, pages 1810-1820.
Yizhe Zhang, Guoyin Wang, Chunyuan Li, Zhe Gan, Chris Brockett, and Bill Dolan. 2020. POINTER: Constrained text generation via insertion-based generative pre-training. arXiv preprint arXiv:2005.00558.
Neural poetry: Learning to generate poems using syllables. Andrea Zugarini, Stefano Melacci, Marco Maggini, International Conference on Artificial Neural Networks. SpringerAndrea Zugarini, Stefano Melacci, and Marco Maggini. 2019. Neural poetry: Learning to generate poems using syllables. In International Conference on Ar- tificial Neural Networks, pages 313-325. Springer.
| [] |
[
"Extending Word-Level Quality Estimation for Post-Editing Assistance",
"Extending Word-Level Quality Estimation for Post-Editing Assistance"
] | [
"Yizhen Wei \nUniversity of Tsukuba\nNTT Communication Science Laboratories\nNTT Corporation\nJapan\n",
"Takehito Utsuro \nUniversity of Tsukuba\nNTT Communication Science Laboratories\nNTT Corporation\nJapan\n",
"Masaaki Nagata \nUniversity of Tsukuba\nNTT Communication Science Laboratories\nNTT Corporation\nJapan\n",
"Deg Prog \nUniversity of Tsukuba\nNTT Communication Science Laboratories\nNTT Corporation\nJapan\n",
"Sys \nUniversity of Tsukuba\nNTT Communication Science Laboratories\nNTT Corporation\nJapan\n",
"&inf \nUniversity of Tsukuba\nNTT Communication Science Laboratories\nNTT Corporation\nJapan\n",
"Grad Eng \nUniversity of Tsukuba\nNTT Communication Science Laboratories\nNTT Corporation\nJapan\n",
"Sch \nUniversity of Tsukuba\nNTT Communication Science Laboratories\nNTT Corporation\nJapan\n",
"Sci \nUniversity of Tsukuba\nNTT Communication Science Laboratories\nNTT Corporation\nJapan\n",
"&tech \nUniversity of Tsukuba\nNTT Communication Science Laboratories\nNTT Corporation\nJapan\n"
] | [
"University of Tsukuba\nNTT Communication Science Laboratories\nNTT Corporation\nJapan",
"University of Tsukuba\nNTT Communication Science Laboratories\nNTT Corporation\nJapan",
"University of Tsukuba\nNTT Communication Science Laboratories\nNTT Corporation\nJapan",
"University of Tsukuba\nNTT Communication Science Laboratories\nNTT Corporation\nJapan",
"University of Tsukuba\nNTT Communication Science Laboratories\nNTT Corporation\nJapan",
"University of Tsukuba\nNTT Communication Science Laboratories\nNTT Corporation\nJapan",
"University of Tsukuba\nNTT Communication Science Laboratories\nNTT Corporation\nJapan",
"University of Tsukuba\nNTT Communication Science Laboratories\nNTT Corporation\nJapan",
"University of Tsukuba\nNTT Communication Science Laboratories\nNTT Corporation\nJapan",
"University of Tsukuba\nNTT Communication Science Laboratories\nNTT Corporation\nJapan"
] | [] | We define a novel concept called extended word alignment in order to improve post-editing assistance efficiency. Based on extended word alignment, we further propose a novel task called refined word-level QE that outputs refined tags and word-level correspondences. Compared to original word-level QE, the new task is able to directly point out editing operations, thus improving efficiency. To extract extended word alignment, we adopt a supervised method based on mBERT. To solve refined word-level QE, we firstly predict original QE tags by training a regression model for sequence tagging based on mBERT and XLM-R. Then, we refine original word tags with extended word alignment. In addition, we extract source-gap correspondences, meanwhile obtaining gap tags. Experiments on two language pairs show the feasibility of our method and give us inspiration for further improvement. | 10.48550/arxiv.2209.11378 | [
"https://export.arxiv.org/pdf/2209.11378v1.pdf"
] | 252,519,377 | 2209.11378 | 5c4305ec9c157474149057e9515bbbcb23b5da80 |
Extending Word-Level Quality Estimation for Post-Editing Assistance
Sep 2022
Yizhen Wei
Takehito Utsuro
Masaaki Nagata
University of Tsukuba
NTT Communication Science Laboratories
NTT Corporation
Japan
We define a novel concept called extended word alignment in order to improve post-editing assistance efficiency. Based on extended word alignment, we further propose a novel task called refined word-level QE that outputs refined tags and word-level correspondences. Compared to original word-level QE, the new task is able to directly point out editing operations, thus improving efficiency. To extract extended word alignment, we adopt a supervised method based on mBERT. To solve refined word-level QE, we firstly predict original QE tags by training a regression model for sequence tagging based on mBERT and XLM-R. Then, we refine original word tags with extended word alignment. In addition, we extract source-gap correspondences, meanwhile obtaining gap tags. Experiments on two language pairs show the feasibility of our method and give us inspiration for further improvement.
Introduction
Post-editing refers to the process of editing a rough machine-translated sentence (referred to as MT) into a correct one. Compared with conventional statistical machine translation (Koehn et al., 2003), neural machine translation (Cho et al., 2014; Sutskever et al., 2014; Vaswani et al., 2017) can generate translations with high accuracy. However, Yamada (2019) suggested that there is no significant difference in terms of cognitive load for one to post-edit an MT even if it has high quality. Therefore, post-editing assistance is profoundly needed.
Traditional post-editing assistance methods leave room for improvement. A typical method is word-level QE (Specia et al., 2020), which predicts tags expressed in the form of OK or BAD. However, such a dualistic judgement is not efficient enough because the meaning of BAD is ambiguous.
Word alignment is also proved to be helpful for post-editing assistance. Schwartz et al. (2015) demonstrated that displaying word alignment statistically significantly improves post-editing quality. However, unlike QE tags, word alignment cannot tell where translation errors are. Besides, it is non-trivial to extract word alignment between source sentence and MT. Schwartz et al. (2015) used a built-in function of Moses (Koehn et al., 2007), a decoder for statistical machine translation that is no longer suitable for neural models.
In this paper, we propose a novel concept called extended word alignment. In extended word alignment, we include incorrect word translations and null alignment between a source sentence and MT. We adopt a supervised method based on pre-trained language models to extract it. Based on extended word alignment, we further propose a novel task called refined word-level QE which outputs refined tags including REP, INS, and DEL along with word-level correspondences. By referring to that information, post-editors can immediately realize what operations to perform (replacement, insertion, and deletion in MT). Thus, we believe that refined word-level QE can significantly improve post-editing assistance efficiency. Methodologically, we firstly predict original word tags by training regression models for sequence tagging based on architectures such as multilingual BERT (Devlin et al., 2019) (mBERT) and XLM-RoBERTa (Conneau et al., 2020) (XLM-R). Then, we refine the original word tags by incorporating extended word alignment in a rule-based manner. In addition, we adopt a method similar to the one for extended word alignment to extract source-gap correspondences and then determine gap tags.
Experiments on En-De and En-Zh datasets are conducted. Results show that our method significantly outperforms the baseline. For En-De, our best performance outperforms the baseline by 12.9% and 6.0% respectively in terms of mean F1 scores for Source and MT word refined tags. For En-Zh, the gap reaches 48.9% and 16.9%. Furthermore, we discuss the effectiveness and limitations of our method with specific cases.
Related Work
Word Alignment Extraction. Methods based on statistical models (Brown et al., 1993; Och and Ney, 2003; Dyer et al., 2013) were the dominant approach to word alignment extraction. In recent years, neural-based methods have developed quickly. Garg et al. (2019) tried to obtain word alignment based on the attention inside a transformer (Vaswani et al., 2017), but their method performs only about as well as statistical tools like GIZA++ (Och and Ney, 2003). Dou and Neubig (2021) utilized multilingual BERT to extract embeddings of all words conditioned on context, aligning them under the restriction of optimal transport (Kusner et al., 2015). Nagata et al. (2020) utilized a pre-trained language model in a supervised manner and achieved a significant improvement over previous studies with only around 300 parallel sentence pairs for fine-tuning. In our work, we adapt their approach from ordinary word alignment to extended word alignment. Details will be introduced in Section 4.1.
Word-Level QE. One of the conventional architectures for word-level QE is the LSTM-based predictor-estimator (Kim and Lee, 2016; Zhang and Weiss, 2016; Kim et al., 2017). More recent research (Wang et al., 2020) adopted newer architectures such as the transformer (Vaswani et al., 2017).
Among more modern methods, a typical example is QE BERT (Kim et al., 2019). They built an mBERT for classification with explicit gap tokens in the input sequence, but we find that regression models with an adjustable threshold consistently outperform classification models and that explicit gap tokens harm final performance. A more recent study (Lee, 2020) adopted XLM-R rather than mBERT, but they did not explain their strategy for determining a threshold.
All the methods above require large-scale third-party parallel data for pre-training. In contrast, our method introduced in Section 4.2 achieves acceptable performance at small cost.
Post-Editing User Interface. Nayek et al. (2015) depicted an interface where words that need editing are displayed in different colors. Schwartz et al. (2015) emphasized the importance of displaying the word alignment. Neither interface indicates the correctness of the translation of the MT words. Compared to them, the interface we envisage provides information about translation quality (correctness) as well as suggestions of specific post-editing operations.
Some other studies (Herbig et al., 2020; Jamara et al., 2021) tried to introduce multiple modalities, including touch, speech and hand gestures, into the post-editing user interface, improving efficiency from another perspective.
Refined Word-Level QE for Post-Editing Assistance
Original Word-Level QE
According to Specia et al. (2020), word-level QE, shown in Figure 1(a), is a task that takes a source sentence and its machine-translated counterpart (MT) as input. It then outputs tags for source words, MT words and gaps between MT words (MT gaps). All those tags are expressed either as OK or BAD. BAD indicates potential translation errors that post-editors should correct. We refer to such a task as original word-level QE.
Original word-level QE is not efficient enough for post-editing assistance because BAD is ambiguous. For example, in Figure 1(a), the tag of "white" indicates a replacement of the mistranslation "黑" (black), but the tag of "dogs" indicates an insertion into the gap between "猫" and "吗". It is impossible to distinguish between these indications unless one attends to both entire sentences, which makes post-editing assistance meaningless.
Extended Word Alignment
We formally define a novel concept called extended word alignment between a source sentence and MT. Ordinary word alignment indicates word-to-word relations between a pair of semantically equivalent sentences in two languages. Any word can theoretically be aligned with another semantically equivalent word on the other side. In contrast, in extended word alignment, translation errors in MT are considered. Specifically, a source word is allowed to be aligned with its mistranslation (wrong word choice) and a word is allowed to be aligned with nothing, namely null-aligned.
Refined Word-Level QE
Extended word alignment can disambiguate BAD tags, overcoming the disadvantage of original word-level QE. When a BAD-tagged source word is aligned with a BAD-tagged MT word, it is clear that a replacement is needed. Likewise, a null-aligned BAD-tagged source word indicates an insertion and a null-aligned BAD-tagged MT word indicates a deletion.
To make our idea more user-friendly, we formally propose a novel task called refined word-level QE by incorporating extended word alignment into original word-level QE. Besides extended word alignment, the following refined tags are also included as objectives:
• REP is assigned to a source word and its mistranslation (wrong word choice) in MT, indicating a replacement.
• INS is assigned to a source word and the gap where its translation should be inserted, indicating an insertion.
• DEL is assigned to a redundant MT word, indicating a deletion.
In addition, we include correspondences between INS-tagged source words and MT gaps to express the insertion points. Those source-gap correspondences, along with extended word alignment, are collectively referred to as word-level correspondences. Figure 1(b) is an example of our proposal. Compared with Figure 1(a), post-editors can immediately identify the required operations: the replacement of "黑" (black), the insertions of "and" and "dogs" at the insertion point, and the deletion of "吗" (an interrogative voice auxiliary).
Methodology
Extended Word Alignment Extraction
Extracting extended word alignment is non-trivial. Traditional unsupervised statistical tools (Och and Ney, 2003; Dyer et al., 2013) cannot work well because they expect semantically equivalent sentence pairs as input. After trying several neural methods (Garg et al., 2019; Dou and Neubig, 2021), we empirically adopt the supervised method proposed by Nagata et al. (2020).
Specifically, extended word alignment extraction is regarded as a cross-lingual span prediction problem similar to the paradigm that utilizes BERT (Devlin et al., 2019) for SQuAD v2.0 (Rajpurkar et al., 2018). mBERT is used as the basic architecture. Given a source sentence with one word marked, $S = [s_1, s_2, \ldots, M, s_i, M, \ldots, s_m]$ ($M$ stands for a special mark token), and the MT $T = [t_1, t_2, \ldots, t_n]$, mBERT is trained to identify a span $T_{(j,k)} = [t_j, \ldots, t_k]$ $(1 \le j \le k \le n)$ that is aligned with the marked source word $s_i$.
$$L^{\mathrm{align}}_{s_i} = -\frac{1}{2}\left[\log(p^{\mathrm{start}}_j) + \log(p^{\mathrm{end}}_k)\right]$$
Because of the symmetry of word alignment, similar operations will be done again in the opposite direction. During testing, following Nagata et al. (2020), we recognize word pairs whose mean probability over both directions is greater than 0.4 as valid word alignments. The image of the model is illustrated in Figure 2. Nagata et al. (2020) demonstrated that the mBERT-based method significantly outperforms statistical methods in ordinary word alignment extraction. According to them, extracting word alignment for each word independently is the key to outperforming other methods. Traditional methods model word alignment on a joint distribution, so that an incorrect previous alignment might cause more incorrect alignments like dominoes. Our experiments show that their method consistently works for extended word alignment.
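As a minimal sketch of this bidirectional symmetrization (the 0.4 threshold follows the description above; the probability matrices and all names are our own illustration, not the authors' code, and each matrix entry is assumed to be the probability that one word aligns to the other in that direction):

# Keep word pairs whose mean probability over both directions exceeds the threshold.
def symmetrize_alignments(p_s2t, p_t2s, threshold=0.4):
    alignments = set()
    for i, row in enumerate(p_s2t):          # i indexes source words
        for j, p_forward in enumerate(row):  # j indexes target words
            p_backward = p_t2s[j][i]
            if (p_forward + p_backward) / 2 > threshold:
                alignments.add((i, j))
    return alignments

# Toy example with a 2-word source and a 2-word target:
p_s2t = [[0.9, 0.1], [0.2, 0.7]]
p_t2s = [[0.8, 0.1], [0.0, 0.6]]
print(symmetrize_alignments(p_s2t, p_t2s))   # {(0, 0), (1, 1)}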
Original Word Tag Prediction
For original tags, we conduct sequence tagging with multilingual pre-trained language models including mBERT and XLM-R. Figure 3 shows the image. The input sequence is organized in the format of "[CLS] source sentence [SEP] MT [SEP]" without any mark tokens. Two linear layers, each followed by a Sigmoid function, transform the output vectors into scalar values as the respective probabilities of being BAD for each token. Formally, for a source sentence $S = [s_1, s_2, \ldots, s_i, \ldots, s_m]$ and an MT $T = [t_1, t_2, \ldots, t_j, \ldots, t_n]$, the total loss is the mean of the binary cross entropy of all word tags.
$$L^{\mathrm{tag}}_{s_i} = -\left[y_{s_i}\log(p_{s_i}) + (1-y_{s_i})\log(1-p_{s_i})\right]$$
$$L^{\mathrm{tag}}_{t_j} = -\left[y_{t_j}\log(p_{t_j}) + (1-y_{t_j})\log(1-p_{t_j})\right]$$
$$L^{\mathrm{tag}} = \frac{1}{m+n}\left(\sum_{i=1}^{m} L^{\mathrm{tag}}_{s_i} + \sum_{j=1}^{n} L^{\mathrm{tag}}_{t_j}\right)$$
We have also implemented our models with classification top-layers, but we find that regression models are consistently better since we can adopt a flexible threshold to offset the bias caused by the imbalance of reference tags.
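As a rough PyTorch sketch of this regression setup (the encoder stands in for mBERT or XLM-R; the class and variable names are illustrative, not the authors' implementation):

import torch
import torch.nn as nn

class RegressionTagger(nn.Module):
    # One scalar BAD probability per token, on top of a pre-trained encoder.
    def __init__(self, encoder, hidden_size):
        super().__init__()
        self.encoder = encoder                 # e.g., mBERT or XLM-R
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask)[0]  # (batch, seq, hidden)
        return torch.sigmoid(self.head(hidden)).squeeze(-1)      # (batch, seq)

# Binary cross entropy over word positions, matching the loss above:
# probs = model(input_ids, attention_mask)
# loss = nn.BCELoss()(probs[word_mask], gold_bad_labels[word_mask])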
Word Tag Refinement and Gap Tag Prediction
We use extended word alignment to refine the original word tags. Following the rules described in Section 3.3, we can refine word tags as Figure 4 shows. In practical situations, some BAD-tagged words are likely to be aligned with OK-tagged words. In that case, we change OK into BAD, encouraging more generation of REP.
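The refinement rules can be sketched as a small function (tag names follow the task definition in Section 3.3; the function itself and its conventions are our illustration):

def refine_tags(src_tags, mt_tags, alignment):
    # alignment: set of (source_index, mt_index) pairs from extended word alignment.
    aligned_src = {i for i, _ in alignment}
    aligned_mt = {j for _, j in alignment}
    refined_src, refined_mt = list(src_tags), list(mt_tags)
    for i, j in alignment:
        # An aligned pair with a BAD side is treated as a mistranslation pair.
        if src_tags[i] == "BAD" or mt_tags[j] == "BAD":
            refined_src[i], refined_mt[j] = "REP", "REP"
    for i, tag in enumerate(src_tags):
        if tag == "BAD" and i not in aligned_src:
            refined_src[i] = "INS"   # untranslated source word: insertion needed
    for j, tag in enumerate(mt_tags):
        if tag == "BAD" and j not in aligned_mt:
            refined_mt[j] = "DEL"    # redundant MT word: deletion needed
    return refined_src, refined_mt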
For gap tags, we adopt a method similar to the one described in Section 4.1. Specifically, we model source-gap correspondences as alignments between source words and MT gaps. We train a model that aligns an INS-tagged source word with the two-word span in MT that surrounds the corresponding gap. Figure 5 illustrates our idea. During testing, when a valid source-gap correspondence is confirmed, we tag the MT gap as INS.
It would be natural to determine gap tags based on the INS-tagged source word predictions from the previous workflow. However, in experiments, we noticed that the absolute accuracy of INS-tagged source words is not high. In order not to be influenced by previous wrong predictions, we conducted this task independently instead of treating it as a downstream one.
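Converting predicted source-gap correspondences into gap tags is then straightforward; a sketch (we index the gap in front of MT word g as g, so an n-word MT has n + 1 gaps; names are illustrative):

def gap_tags_from_correspondences(num_mt_words, correspondences):
    # correspondences: set of (source_index, gap_index) pairs.
    tags = ["OK"] * (num_mt_words + 1)
    for _, gap in correspondences:
        tags[gap] = "INS"
    return tags

print(gap_tags_from_correspondences(3, {(0, 2)}))  # ['OK', 'OK', 'INS', 'OK']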
Experiment
Data and Experimental Setups
We take full advantage of the En-De and En-Zh datasets of the shared task on original word-level QE in WMT20. There are 7,000, 1,000, and 1,000 sentence pairs with tag annotations for the training, development, and test sets, respectively. Since the original datasets do not contain the refined objectives, we additionally annotate the original development sets with all the objectives for refined word-level QE. Those annotated 1,000 pairs are further divided into 200 pairs for evaluation and 800 pairs for fine-tuning.
All the experiments are conducted with modified scripts from transformers v3.3.1 on an NVIDIA TITAN RTX (24GB) with CUDA 10.1. For pre-trained models, we use bert-base-multilingual-cased for mBERT and xlm-roberta-large for XLM-R from Huggingface.
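For reference, loading these checkpoints with transformers v3.3.1 follows the standard Huggingface pattern (a minimal sketch, not the authors' exact script):

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
encoder = AutoModel.from_pretrained("bert-base-multilingual-cased")
# For the XLM-R experiments, use "xlm-roberta-large" instead.

inputs = tokenizer("source sentence", "machine translation", return_tensors="pt")
with torch.no_grad():
    hidden_states = encoder(**inputs)[0]   # (1, seq_len, hidden_size)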
To train the model for original tags described in Section 4.2, we use the 7,000-pair training set provided by WMT20. 800 pairs of manually annotated data, whose refined tags are degenerated into original tags, are used for further training. The learning rate is set to 3e-5 and 1e-5 for mBERT and XLM-R respectively, and both models are trained for 5 epochs. All the other configurations are kept at the defaults.
To train the models extracting extended word alignment described in Section 4.1, we utilize AWESoME (Dou and Neubig, 2021) to generate pseudo alignment data based on the 7,000-pair WMT20 training set. We also use the extra 800 sentence pairs of annotated alignment data for fine-tuning. Models are pre-trained for 2 epochs and fine-tuned for 5 epochs with a learning rate of 3e-5. Most configurations are kept at the defaults, but max_seq_length and max_ans_length are set to 160 and 15 following Nagata et al. (2020).
To train the model extracting source-gap correspondences described in Section 4.3, similar to what is described above, we firstly adopt the 7,000-pair WMT20 training set, generating pseudo data by randomly dropping out some target words in PE. Then we link the gaps where words are dropped with their source counterparts according to the source-PE alignment extracted by AWESoME. Also, 800 sentence pairs of gold source-gap correspondences are used for fine-tuning. All model configurations and training settings are kept identical to those of the model for extended word alignment extraction.
Experimental Results
Evaluation of Original Tags
We firstly compare our performance with that of other participants of WMT20. Therefore, we use identical test sets for evaluation and only use data from the original training set of WMT20 to train our models here. Following WMT20 (Specia et al., 2020), we adopt the Matthews correlation coefficient (MCC) as the metric. From the perspective of competition, we make every effort to boost the performance. Thus we set all gap tags to OK rather than predicting them, as we find such a strategy leads to the best MCC. The results are shown in Table 1.
In general, pre-trained language models consistently outperform the baseline, which is an LSTM-based predictor-estimator implemented with OpenKiwi. For En-De, our best source and MT MCC would have ranked sixth on the leaderboard of WMT20. For En-Zh, our best source and MT MCC would have ranked first and second on the leaderboard of WMT20.
It is also noteworthy that regression models consistently outperform classification models with the suffix "-cls". For regression models, we search for an optimized threshold that maximizes the sum of source and MT MCC on the development set and adopt it on the test set to determine tags. To exclude errors caused by a single optimized threshold, we further draw the ROC curves and AUC in Figure 6. The results demonstrate that our regression models based on mBERT and XLM-R statistically significantly outperform the baseline. For En-De, Wang et al. (2020) and Lee (2020) both used large-scale third-party data. Besides the top two, the third system (Rubino, 2020) is also pre-trained with 5 million sentence pairs but got 0.357 and 0.485 respectively. Therefore, we believe that we achieve acceptable performance at very small cost.
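The threshold search itself can be sketched as a simple grid over the development set (assuming NumPy arrays of predicted BAD probabilities and 0/1 gold labels; the 0.01 grid step is our own choice):

import numpy as np
from sklearn.metrics import matthews_corrcoef

def search_threshold(src_probs, src_gold, mt_probs, mt_gold):
    # Pick the threshold maximizing the sum of source and MT MCC on the dev set.
    best_t, best_score = 0.5, -2.0
    for t in np.arange(0.01, 1.0, 0.01):
        score = (matthews_corrcoef(src_gold, (src_probs > t).astype(int)) +
                 matthews_corrcoef(mt_gold, (mt_probs > t).astype(int)))
        if score > best_score:
            best_t, best_score = t, score
    return best_t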
Evaluation of Word-Level Correspondences
We evaluate extended word alignment and source-gap correspondences jointly as word-level correspondences. The results are shown in Table 2. Two baselines ("FastAlign" and "AWESoME") cannot predict source-gap correspondences since they are designed for ordinary word alignment. We combine their extended word alignment with the prediction of source-gap correspondences by "mBERT" for fair comparison. All predictions are evaluated by F1 score as well as precision and recall. Neural-based methods significantly outperform the statistical "FastAlign". The gap of 0.4% for En-De and 2.2% for En-Zh between "AWESoME" and "mBERT" is not significant, but it might imply that pre-trained language models like mBERT are able to filter noise in the pseudo data and produce high-quality word-level correspondences. Additionally, the better performance of "fine-tuned mBERT" indicates that the upper bound could be higher if more annotated data were available.
Evaluation of Refined Tags
As introduced, we combine the prediction of extended word alignment and original word tags to get refined word tags. Moreover, we deduce gap tags from source-gap correspondences. The origin of the source-gap correspondences used is kept consistent with Table 2 according to the extended word alignment. For the baselines, combinations of FastAlign/AWESoME and OpenKiwi are adopted. As for metrics, we use the F1 score of each type of tag along with a weighted mean of all those F1 scores, taking the proportion of each tag in the reference as the weight. The results are shown in Table 3.
Our best model outperforms the baseline by 12.9% and 6.0% respectively on source and MT refined tags in terms of mean F1 scores in En-De experiments. As for the En-Zh experiments, mean F1 scores are significantly improved by 48.9% and 16.9%.
We also notice that though fine-tuned mBERT extracts extended word alignment with good accuracy, the absolute value of the refined tag accuracy is still unsatisfactory (especially that of INS and DEL). We will discuss that in the next section.
Discussion on Specific Cases
Discussion on Refined Word Tags
In Figure 7(a), our system basically succeeds in detecting errors caused by the incorrect use of punctuation. Our system correctly suggests the replacements for the second comma and the half-width period. As for the first comma, the translation is still natural and acceptable if we delete the comma following the system's suggestion. Moreover, our system successfully detects the mistranslations of "passes" and "touchdowns". In MT, those football terminologies are respectively translated as "通行证" (a pass to enter somewhere) and "摔倒" (falling down). It is noteworthy that those two mistranslations are not revised in the post-edited corpus provided by WMT21. It implies that our system performs surprisingly well, as it even succeeds in detecting mistranslations that are not noticed by human annotators. In Figure 7(b), our system still works well in detecting the incorrect use of half-width punctuation. However, compared with the reference, "abdominal aneurysm" is mistranslated and our model failed to detect it because both words are tagged as OK during the prediction of original tags. A premature prediction of OK prevents a word from being refined into REP/INS/DEL later. We believe that an inappropriate threshold mainly leads to such an issue. The predicted probabilities of "腹部" and "动脉瘤" are respectively 0.103 and 0.134, but the optimized threshold used is 0.88, as we searched it to maximize the MCC on the whole set. Meanwhile, the probabilities of all other OK-tagged MT words are actually smaller than 0.01. As a result, if we had set a threshold between 0.01 and 0.10 for this sentence pair, we could have obtained the perfect result. In the future, we plan to investigate methods that can determine a fine-grained optimized threshold for each sentence pair.
Discussion on Gap Tags

Figure 7(c) shows a typical En-De case that our model handles well. In German, it is more natural to use the perfect tense rather than the past tense to indicate actions that took place in the past. In this case, the English verb "drafted" should be modified to "haben ... ausgewählt". Our model correctly suggests a correspondence between "drafted" and the MT gap in front of the period. As there are many cases needing a similar modification that inserts a particular word (like a particle or an infinitive for a clause) before the period in MT, it is easy for our model to learn such patterns. This probably explains the relatively good accuracy of INS in the En-De experiments. In contrast, Figure 7(d) is an En-Zh example showing that our model tends to align many source words with the gap right before or after their translation in MT even when the translation is correct and needs no extra insertion. The word "dissected" is unnecessarily aligned with the gap around its translation "解剖". Two human names are also unnecessarily aligned with gaps. As a result, four gaps are incorrectly tagged as INS. We observed the annotated dataset and noticed that many Chinese words in MT are slightly modified by adding prefixes and suffixes during post-editing. For example, "成年海龟" (adult sea turtle) is modified to "成年的海龟" (adding "的" as a suffix for the adjective), and "演讲" (the speech) is modified to "这一演讲" (emphasizing "this" speech). Generally, those modifications are not necessary because of the free Chinese grammar. However, the existence of those modifications might mislead the model into preferring to unnecessarily align a word with the gap around its translation, as in Figure 7(d). To address this issue, we plan to restrict the annotation rules to exclude meaningless modifications in the En-Zh training data in the future.
Conclusion and Future Work
To improve post-editing assistance efficiency, we define a novel concept called extended word alignment. By incorporating extended word alignment with original word-level QE, we formally propose a novel task called refined word-level QE. To solve the task, we firstly adopt a supervised method to extract extended word alignment and then predict original tags with pre-trained language models by conducting sequence tagging. We then refine word tags with extended word alignment. Additionally, we extract source-gap correspondences and determine gap tags. We perform experiments and discuss specific cases.
In the future, we would like to polish our work in the following respects. Firstly, we want to develop methods that determine fine-grained thresholds, as elaborated in Section 6. Moreover, we plan to conduct a human evaluation experiment to prove the superiority of refined word-level QE in terms of post-editing assistance efficiency.
Figure 1: A comparison between original word-level QE and our proposal. (a) Original word-level QE. (b) Refined word-level QE; correspondences between REP tags are drawn in red and those between INS tags in purple.

Figure 2: Extracting extended word alignment by mBERT. A word will be aligned with the [CLS] token if it is null-aligned.

Figure 3: Determining original word tags with pre-trained language models.

Figure 4: Refining the word tags by using extended word alignment.

Figure 5: Determining gap tags by extracting source-gap alignments with mBERT.

Figure 6: ROC curve and AUC of the baseline and our systems (* indicates that the model outperforms the baseline (OpenKiwi) with statistical significance (p < 0.01)).

Figure 7: Specific cases: (a) En-Zh case with correct refined word tag prediction; (b) En-Zh case with incorrect refined word tag prediction; (c) En-De case with correct prediction of source-gap correspondence and gap tag; (d) En-Zh case with incorrect prediction of source-gap correspondences and gap tags. For visual neatness, most OK tags are omitted and some continuous spans are merged.
Table 1: MCC of original tags. All MT gap tags of our systems are set to OK. For En-De, unlike the top systems that employ large-scale third-party resources, we achieve acceptable performance using only the QE dataset.
Extended Word Align.  Source-Gap Corr.  En-De (F1/P/R)      En-Zh (F1/P/R)
FastAlign             mBERT             0.828/0.812/0.844   0.739/0.773/0.709
AWESoME               mBERT             0.891/0.915/0.868   0.814/0.871/0.764
mBERT                 mBERT             0.895/0.917/0.875   0.836/0.888/0.790
ft-mBERT              ft-mBERT          0.916/0.913/0.918   0.888/0.887/0.889

Table 2: Evaluation of word-level correspondences. "mBERT" indicates mBERT trained with 7,000-pair pseudo data and "ft-mBERT" indicates mBERT further fine-tuned with 800-pair data.
(a) En-De Results

Extended Word Alignment  Original QE Tags  Source F1 Scores Mean (OK/REP/INS)  MT F1 Scores Mean (OK/REP/DEL/INS)
FastAlign                OpenKiwi          0.626 (0.696/0.492/0.174)           0.767 (0.847/0.477/0.124/0.156)
AWESoME                  OpenKiwi          0.708 (0.781/0.549/0.373)           0.807 (0.879/0.548/0.395/0.156)
mBERT                    mBERT             0.739 (0.825/0.540/0.421)           0.820 (0.895/0.544/0.389/0.156)
mBERT                    XLM-R             0.709 (0.781/0.548/0.410)           0.809 (0.879/0.522/0.415/0.156)
ft-mBERT                 rt-mBERT          0.755 (0.850/0.538/0.400)           0.827 (0.904/0.535/0.347/0.175)
ft-mBERT                 rt-XLM-R          0.685 (0.748/0.544/0.431)           0.805 (0.871/0.538/0.580/0.175)

(b) En-Zh Results

Extended Word Alignment  Original QE Tags  Mean Source F1 Scores (OK/REP/INS)  Mean MT F1 Scores (OK/REP/DEL/INS)
FastAlign                OpenKiwi          0.360 (0.379/0.280/0.071)           0.728 (0.781/0.276/0.173/0.042)
AWESoME                  OpenKiwi          0.371 (0.391/0.285/0.066)           0.733 (0.786/0.280/0.202/0.042)
mBERT                    mBERT             0.836 (0.914/0.446/0.020)           0.891 (0.947/0.441/0.316/0.042)
mBERT                    XLM-R             0.843 (0.929/0.410/0.018)           0.895 (0.955/0.402/0.275/0.042)
ft-mBERT                 rt-mBERT          0.848 (0.929/0.447/0.034)           0.897 (0.954/0.441/0.284/0.042)
ft-mBERT                 rt-XLM-R          0.849 (0.928/0.451/0.028)           0.897 (0.955/0.446/0.289/0.042)

Table 3: Evaluation of refined tags. The main metric is a weighted mean of F1 scores according to the ratio of each type of tag in the reference. "ft-" indicates that the model is fine-tuned with the extra 800-pair annotated alignment data. "rt-" indicates that the model is further trained with the extra 800-pair annotated tag data.
For convenience, source tags and MT word tags are collectively known as word tags. MT word tags and MT gap tags are collectively known as MT tags.
Besides the current method, we have also tried to use a unified model based on architectures like XLM-R to directly predict refined tags (OK/REP/INS/DEL) and word-level correspondences. However, due to lack of training data and complexity of the problem, direct approach did not work well. Therefore, we decided to adopt this multiple-phase approach.
Classification top-layers refers to a binary classification linear layer with Softmax.
As for the source words involved, we do not change their tags and trust the refinement based on extended word alignment, because we believe extended word alignment is easier to model.
http://www.statmt.org/wmt20/quality-estimation-task.html
https://github.com/huggingface/transformers
The provided [P]ost-[E]dited sentence from MT in the WMT20 dataset; it is regarded as the correct translation.
Wang et al. (2020) used parallel data from the WMT20 news translation task to pre-train a predictor, and Lee (2020) generated 11 million pairs of pseudo QE data with 23 million pairs of sentences.
While predicting the original tags, we did not directly use the optimized threshold determined in Section 5.2.1 since the test set here originates from the original development set. Instead, we took the original test set of WMT20 for development purposes and re-searched an optimized threshold on it.
P. F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263-311.
K. Cho, B. van Merrienboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proc. EMNLP, pages 1724-1734.
A. Conneau, K. Khandelwal, N. Goyal, V. Chaudhary, G. Wenzek, F. Guzmán, E. Grave, M. Ott, L. Zettlemoyer, and V. Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proc. 58th ACL, pages 8440-8451.
J. Devlin, M. Chang, K. Lee, and K. Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proc. 17th NAACL-HLT, pages 4171-4186.
Z. Dou and G. Neubig. 2021. Word alignment by fine-tuning embeddings on parallel corpora. In Proc. 16th EACL, pages 2112-2128.
C. Dyer, V. Chahuneau, and N. Smith. 2013. A simple, fast, and effective reparameterization of IBM model 2. In Proc. 11th NAACL-HLT, pages 644-648.
S. Garg, S. Peitz, U. Nallasamy, and M. Paulik. 2019. Jointly learning to align and translate with transformer models. Pages 4453-4462.
N. Herbig, T. Duwel, S. Pal, K. Meladaki, M. Monshizadeh, A. Kruger, and J. van Genabith. 2020. MMPE: A Multi-Modal Interface for Post-Editing Machine Translation. In Proc. 58th ACL, pages 327-334.
C. Hu, H. Liu, K. Feng, C. Xu, N. Xu, Z. Zhou, S. Yan, Y. Luo, C. Wang, X. Meng, T. Xiao, and J. Zhu. 2020. The NiuTrans system for the WMT20 quality estimation shared task. In Proc. 5th WMT, pages 1018-1023.
R. A. Jamara, N. Herbig, A. Kruger, and J. van Genabith. 2021. Mid-air hand gestures for post-editing of machine translation. In Proc. 59th ACL, pages 6763-6773.
H. Kim and J. Lee. 2016. Recurrent neural network based translation quality estimation. In Proc. 1st WMT, pages 787-792.
H. Kim, J. Lee, and S. Na. 2017. Predictor-estimator using multilevel task learning with stack propagation for neural quality estimation. In Proc. 2nd WMT, pages 562-568.
H. Kim, J. Lim, H. Kim, and S. Na. 2019. QE BERT: Bilingual BERT using multi-task learning for neural quality estimation. In Proc. 4th WMT, pages 85-89.
P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin, and E. Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proc. 45th ACL, pages 177-180.
P. Koehn, F. J. Och, and D. Marcu. 2003. Statistical phrase-based translation. In Proc. HLT-NAACL, pages 127-133.
M. Kusner, Y. Sun, N. Kolkin, and K. Weinberger. 2015. From word embeddings to document distances. In Proc. 32nd ICML, pages 957-966.
D. Lee. 2020. Two-phase cross-lingual language model fine-tuning for machine translation quality estimation. In Proc. 5th WMT, pages 1024-1028.
M. Nagata, K. Chousa, and M. Nishino. 2020. A supervised word alignment method based on cross-language span prediction using multilingual BERT. In Proc. EMNLP, pages 555-565.
T. Nayek, S. K. Naskar, S. Pal, M. Zampieri, M. Vela, and J. van Genabith. 2015. CATaLog: New approaches to TM and post editing interfaces. In Proc. Workshop NLP4TM, pages 36-42.
F. Och and H. Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51.
P. Rajpurkar, R. Jia, and P. Liang. 2018. Know what you don't know: Unanswerable questions for SQuAD. In Proc. 56th ACL, pages 784-789.
R. Rubino. 2020. NICT Kyoto submission for the WMT'20 quality estimation task: Intermediate training for domain and task adaptation. In Proc. 5th WMT, pages 1042-1048.
L. Schwartz, I. Lacruz, and T. Bystrova. 2015. Effects of word alignment visualization on post-editing quality & speed. In Proc. MT Summit XV.
L. Specia, F. Blain, M. Fomicheva, E. Fonseca, V. Chaudhary, F. Guzmán, and A. Martins. 2020. Findings of the WMT 2020 shared task on quality estimation. In Proc. 5th WMT, pages 741-762.
I. Sutskever, O. Vinyals, and Q. Le. 2014. Sequence to sequence learning with neural networks. In Proc. 27th NIPS, pages 3104-3112.
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. Gomez, Ł. Kaiser, and I. Polosukhin. 2017. Attention is all you need. In Proc. 31st NIPS, pages 5998-6008.
M. Wang, H. Yang, H. Shang, D. Wei, J. Guo, L. Lei, Y. Qin, S. Tao, S. Sun, Y. Chen, and L. Li. 2020. HW-TSC's participation at WMT 2020 automatic post editing shared task. In Proc. 5th WMT, pages 1054-1059.
M. Yamada. 2019. The impact of Google neural machine translation on post-editing by student translators. The Journal of Specialised Translation, 31:87-106.
Y. Zhang and D. Weiss. 2016. Stack-propagation: Improved representation learning for syntax. In Proc. 54th ACL, pages 1557-1566.
| [
"https://github.com/huggingface/transfo"
] |
[
"DISTRIBUTED READABILITY ANALYSIS OF TURKISH ELEMENTARY SCHOOL TEXTBOOKS",
"DISTRIBUTED READABILITY ANALYSIS OF TURKISH ELEMENTARY SCHOOL TEXTBOOKS"
] | [
"Betul Karakus \nComputer Engineering Department\nComputer Engineering Department\nComputer Engineering Department\nFirat University23100 Elazig, Firat University23100 Elazig, Firat University23100 ElazigTurkey, Turkey, Turkey\n",
"Galip Aydın \nComputer Engineering Department\nComputer Engineering Department\nComputer Engineering Department\nFirat University23100 Elazig, Firat University23100 Elazig, Firat University23100 ElazigTurkey, Turkey, Turkey\n",
"Ibrahim Rıza Hallac \nComputer Engineering Department\nComputer Engineering Department\nComputer Engineering Department\nFirat University23100 Elazig, Firat University23100 Elazig, Firat University23100 ElazigTurkey, Turkey, Turkey\n"
] | [
"Computer Engineering Department\nComputer Engineering Department\nComputer Engineering Department\nFirat University23100 Elazig, Firat University23100 Elazig, Firat University23100 ElazigTurkey, Turkey, Turkey",
"Computer Engineering Department\nComputer Engineering Department\nComputer Engineering Department\nFirat University23100 Elazig, Firat University23100 Elazig, Firat University23100 ElazigTurkey, Turkey, Turkey",
"Computer Engineering Department\nComputer Engineering Department\nComputer Engineering Department\nFirat University23100 Elazig, Firat University23100 Elazig, Firat University23100 ElazigTurkey, Turkey, Turkey"
] | [
"Proceedings of International Conference on Information Technology and Computer Science"
] | The readability assessment deals with estimating the level of difficulty in reading texts. Many readability tests, which do not indicate execution efficiency, have been applied on specific texts to measure the reading grade level in science textbooks. In this paper, we analyze the content covered in elementary school Turkish textbooks by employing a distributed parallel processing framework based on the popular MapReduce paradigm. We outline the architecture of a distributed Big Data processing system which uses Hadoop for full-text readability analysis. The readability scores of the textbooks and system performance measurements are also given in the paper. | null | [
"https://arxiv.org/pdf/1802.03821v1.pdf"
] | 2,994,045 | 1802.03821 | 851f1604e950145b7227d1a2eae7ef8fbc4a994e |
DISTRIBUTED READABILITY ANALYSIS OF TURKISH ELEMENTARY SCHOOL TEXTBOOKS
July 11-12, 2015
Betul Karakus
Computer Engineering Department
Firat University
23100 Elazig, Turkey
Galip Aydın
Computer Engineering Department
Firat University
23100 Elazig, Turkey
Ibrahim Rıza Hallac
Computer Engineering Department
Firat University
23100 Elazig, Turkey
Proceedings of International Conference on Information Technology and Computer Science, ISBN: 9788193137307, July 11-12, 2015
Keywords: Readability, Hadoop, Textbook
The readability assessment deals with estimating the level of difficulty in reading texts. Many readability tests, which do not indicate execution efficiency, have been applied on specific texts to measure the reading grade level in science textbooks. In this paper, we analyze the content covered in elementary school Turkish textbooks by employing a distributed parallel processing framework based on the popular MapReduce paradigm. We outline the architecture of a distributed Big Data processing system which uses Hadoop for full-text readability analysis. The readability scores of the textbooks and system performance measurements are also given in the paper.
I. INTRODUCTION
The difficulty (or ease) of a selected text can be expressed as its readability, which in turn can be used to classify reading materials into different grade levels. Instructors can make use of readability scores to find suitable teaching materials. Aside from education, readability applications have some potential usage areas such as business publications, complex financial reports, online media [3], health care [4], military agencies' enlistment applications and technical manuals, and web pages [5].
Various readability formulas have been proposed to measure the level of text difficulty. The Flesch Reading Ease [6], Flesch-Kincaid Grade Level [7], SMOG Index [8], Gunning Fog Index [9], Automated Readability Index [10] and Dale-Chall readability formula [11] are among the best known readability formulas. These popular formulas are widely used to improve textbooks, health literature, business and finance publications, military and governmental documents, web contents, and so forth. The readability formulas, developed by various researchers since the 1920s, aim to assess the difficulty of a given text.
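As a concrete illustration of the shape such formulas take, the Flesch Reading Ease score can be computed from three counts (a sketch in Python; a full implementation also needs a syllable counter, which is omitted here):

def flesch_reading_ease(total_words, total_sentences, total_syllables):
    # Higher scores mean easier text (roughly on a 0-100 scale).
    asl = total_words / total_sentences     # average sentence length
    asw = total_syllables / total_words     # average syllables per word
    return 206.835 - 1.015 * asl - 84.6 * asw

print(flesch_reading_ease(100, 8, 140))     # about 75.7, i.e. fairly easy text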
Before these studies, researchers tried to determine the difficulty of a text with methods based on reading comprehension. However, Fletcher [12] has emphasized the fact that the assessment of reading comprehension is difficult because it is not a process that can be directly observed.
Readability assessment plays an important role in document analysis. Web pages on the internet contain a large amount of valuable information. The number of online documents has been increasing with incredible growth rates. However, users do not usually have the means to find suitable reading materials according to their reading grade level. On the other hand, analyzing a large number of documents requires high computational power and storage space. English Wikipedia, which has 5 TB of data as a set of documents, is one of the best examples. Similarly, textbooks or their electronic copies require powerful computers for efficient analysis. Therefore, a big data solution is required to assess the full-text readability of textbooks.
Big data has many challenges in several aspects like variety, volume, velocity and veracity. Variety refers to unstructured data in different forms, velocity refers to how fast the data is generated and how fast it needs to be analyzed, and veracity refers to the trustworthiness of data to be reliable for crucial decisions [13]. There are several distributed solutions to handle big data. The most well-known is the MapReduce framework, which was published by Google [14]. Most of the research on readability analysis has not focused on the runtime performance of the implementations. In this study, we present our Distributed Readability Analysis System, which is based on the Hadoop Distributed Computing Framework, for analyzing the readability of Turkish elementary school textbooks used by students from grade 5 to 8.
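To make the MapReduce idea concrete, a Hadoop Streaming style mapper and reducer for collecting per-book readability counts might look like the sketch below (the tab-separated input format, the crude syllable counter and all names are our assumptions for illustration, not the system described later in the paper):

# mapper.py -- emit (book_id, words, sentences, syllables) for each input line
import sys

def count_syllables(word):
    # Crude vowel-group count as a stand-in for a real Turkish syllable counter.
    vowels = "aeiouıöü"
    w = word.lower()
    return max(1, sum(1 for i, c in enumerate(w)
                      if c in vowels and (i == 0 or w[i - 1] not in vowels)))

for line in sys.stdin:
    book_id, text = line.rstrip("\n").split("\t", 1)   # assumed input format
    words = text.split()
    sentences = max(1, text.count(".") + text.count("!") + text.count("?"))
    syllables = sum(count_syllables(w) for w in words)
    print(f"{book_id}\t{len(words)}\t{sentences}\t{syllables}")

# reducer.py -- sum the counts per book (Hadoop delivers input sorted by key)
import sys

current, totals = None, [0, 0, 0]
for line in sys.stdin:
    key, w, s, y = line.rstrip("\n").split("\t")
    if key != current:
        if current is not None:
            print(current, *totals, sep="\t")
        current, totals = key, [0, 0, 0]
    totals = [t + int(v) for t, v in zip(totals, (w, s, y))]
if current is not None:
    print(current, *totals, sep="\t")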
The rest of the paper is structured as follows. Section II presents the literature survey, and Section III gives a review of readability analysis as needed for the following discussion. Section IV presents our Distributed Readability Analysis System. Section V presents the performance evaluation. Finally, Section VI gives conclusions and explains future work.
II. RELATED WORK
Research on text readability began in the last quarter of the 19th century with "Analytics of Literature" by L. A. Sherman [15]. He drew attention to the importance of average sentence length and demonstrated that shorter sentences increase readability. Readability studies proposed in the following century primarily focused on the development of readability formulas to select appropriate textbooks according to the ability of the students. Lively and Pressey [16] developed the first readability formula to measure and reduce the vocabulary burden of textbooks. Flesch published his popular Reading Ease formula to measure reading materials after a number of studies on English readability, and other popular formulas were published from the 1940s up to the middle of the 1990s (see Table 1 for details).
More recently, readability formulas have been used for measuring the reading grade level of textbooks, and the validity of the formulas has been evaluated not only in English but also in many different languages. However, it has been observed that classic readability formulas such as Flesch Reading Ease are only effective for the English language [17]. On the other hand, most languages are similar to each other with respect to general lexical features such as average sentence length, average number of syllables, average number of words per sentence and average number of hard words (more than three syllables). Kuo et al. investigated the problem of readability analysis in Taiwanese texts and classified short essays for high school students using linguistic features [18]. The analysis results of [18] show that Flesch-Kincaid Grade Level, SMOG and Automated Readability Index scores could not achieve good results for predicting readability.
Another study on the Polish language [19] has implemented and evaluated the Gunning Fog Index and the Flesch-based Pisarek method using specific lexical items in Polish texts. In order to assess the readability of Thai texts for primary school students, Daowadung and Chan [20] have developed a technique to predict the readability of Thai textbooks using word segmentation and TF-IDF calculations. Similarly, TF-IDF vectors have been constructed to determine the readability of primary school Chinese textbooks in [21]. The experimental results showed that the proposed method is effective for lower grades, but not effective for middle grades.
François and Fairon [22] have implemented their French readability formula and classification model relative to a set of textual features including lexical, syntactic, and semantic features on a textbook corpus. Aluisio et al. [23] have also studied readability assessment for Portuguese texts using the linguistic structure of the texts. Other research on Japanese text readability has proposed a readability measurement method based on a textbook corpus, which consists of 1,478 sample passages extracted from 127 textbooks [24]. They use a simple Perl program to analyze the textbook readability.
The Läsbarhetsindex Swedish readability formula (LIX) is also among the popular readability formulas for measuring the difficulty of reading a foreign text [25]. While other popular formulas count the number of syllables for average word length, LIX uses the number of letters as a shallow feature, the traditional feature used in readability analysis. Sjöholm [26] developed a Java module that takes a Swedish text as input to evaluate and measure the LIX formula.
Although classic readability formulas are widely used on texts written in English, Semitic languages such as Arabic need new readability formulas for selecting appropriate textbooks [27]. Al-Khalifa and Al-Ajlan [28] developed a Java program to calculate a vector of values for four features: average sentence length, average word length, average syllables per word and word frequencies. Their corpus, collected from Saudi Arabian schools, consists of 150 texts with 57,089 words.
Early work on Turkish readability began in the 1990s. Two popular formulas have been proposed to measure Turkish text readability: Atesman [29] proposed the first, an adaptation of the Flesch Reading Ease formula (see Table I for details), and Cetinkaya [30] presented the second, with three readability levels: high-level reading (10th, 11th and 12th grade), intermediate reading (8th and 9th grade) and elementary reading (5th, 6th and 7th grade).
Okur and Arı [31] analyzed and compared 298 selected texts in Turkish textbooks for grades 6-8 using the Atesman and Cetinkaya formulas. In their study, the selected texts were divided into two categories, namely informative reading texts and narrative reading texts. They observed that the readability level of the informative reading texts is more difficult than that of the narrative ones, because the average sentence length and word length in narrative reading texts are shorter compared with informative reading texts. Guven [32] also studied the readability of texts in Turkish textbooks used by foreigners.
All of these readability studies point to the fact that most work has been done on sample passages or selected units of textbooks rather than on whole textbooks. Studies using text samples need to select the sample text carefully in order to avoid calculating incorrect readability scores; that is, the accuracy of the readability scores depends on the selected sample texts. Furthermore, due to the lack of a readability system that can analyze whole textbooks in a short time and the lack of a Turkish textbook corpus, readability implementations are not used by educational institutions in Turkey.
III. READABILITY ANALYSIS
The variables used in readability formulas show us the skeleton of a text [33]. Traditional readability formulas have been built on many variables that affect reading difficulty, including the following landmark features.
Readability is defined as "the ease of reading words and sentences" by Hargis [34]. That is, the readability of a text depends on the clarity of its words and sentences. The most common features used to predict word and sentence complexity are average sentence length, average word length, the number of easy/hard words and the number of simple sentences.
Vocabulary diversity, the number of different words in a text, is an important factor in reading difficulty. Dale-Chall (see Table I for the formula) compiled a list of 3,000 easy words by comparing the numbers of different words.
Readability challenges include not only lexical difficulty but also structural difficulty. Structural features used in various readability formulas are the number of propositions, the kinds of sentences and prepositional phrases. However, these formulas led Kintsch and Miller [35] to the conclusion that lexical features such as word and sentence length provide a stronger measurement of text difficulty compared to structural factors related to the mental properties of a reading text.
Developments in computer software have accelerated readability analysis, and today readability studies are more popular than in the past. Microsoft Word, the most popular word processing software, can automatically calculate the readability of a text using the Flesch Reading Ease and Flesch-Kincaid Grade Level formulas. Readability analyses have undergone a major change in the last twenty years. The ATOS readability formula [36], based on three key features (average sentence length, average word length and average word difficulty level), has been one of the most important changes. While the first graded vocabulary list consisted of almost 24,000 words in 2000, the list underwent a major update adding 75,000 new words in 2013. The study also emphasizes that past readability analyses led to incorrect results due to the use of only small samples of text rather than whole books.
As the number of web pages and electronic copies of textbooks on the Internet grows rapidly, it becomes difficult to analyze these documents using traditional applications to find suitable materials for individual readers and students at different grade levels [37].
The Hadoop-based electronic book conversion system proposed in [38] presents a distributed solution for processing large numbers of electronic books. The study indicates that processing takes very long when performed on a single personal computer.
The main requirements for analyzing large documents are high computing power and storage capacity. Big data technologies support distributed storage and data processing, which allows us to use commodity hardware for parallel and distributed computing. This study builds a distributed framework to analyze textbook readability using big data technologies.
IV. DISTRIBUTED READABILITY ANALYSIS SYSTEM
In this paper, we propose a Distributed Readability Analysis System (DRAS) for Turkish elementary school textbooks using the Hadoop framework. An overview of the proposed system is given in Fig. 2.
The system architecture is divided into three parts, namely data conversion, data pre-processing and data analysis. The main goal of the proposed system is to provide an efficient readability analysis system that helps educators and parents find appropriate reading materials for elementary school students. We use two key features, average sentence length and average word length, which are used by the Atesman readability formula; a sketch of this calculation is given below. Furthermore, we calculate the total number of distinct words in each textbook to test the accuracy of the readability formulas.
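As a minimal illustration of how the two key features feed the Atesman formula, consider the Python sketch below; the helper names and the vowel-counting syllable heuristic (every Turkish syllable contains exactly one vowel) are our own illustrative choices, not code from DRAS.

```python
import re

VOWELS = set("aeıioöuüAEIİOÖUÜ")

def count_syllables(word):
    # Every Turkish syllable contains exactly one vowel,
    # so counting vowels yields the syllable count.
    return sum(1 for ch in word if ch in VOWELS)

def atesman_score(text):
    # Naive sentence/word splitting; the real system uses Zemberek [40].
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"\w+", text)
    avg_words_per_sentence = len(words) / max(len(sentences), 1)
    avg_syllables_per_word = sum(count_syllables(w) for w in words) / max(len(words), 1)
    # Atesman formula: 198.825 - 40.175 * x1 - 2.610 * x2, where
    # x1 = average syllables per word and x2 = average words per sentence.
    return 198.825 - 40.175 * avg_syllables_per_word - 2.610 * avg_words_per_sentence
```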
A. Textbook Data Conversion
The official website of the Republic of Turkey's Ministry of National Education (MoNE) presents Turkish textbooks as PDF files. We first convert the PDF textbooks into text files using Apache Tika [39], which provides textual analysis software that runs on top of the Hadoop framework. MapReduce jobs were run to transform the PDF files into text files, which were stored locally. Religious culture textbooks could not be converted successfully due to the use of Arabic text, and 12 PDF files that only contained images were not converted. The execution time of the data conversion is detailed in Section V.
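For illustration only, the same conversion can be scripted locally with the tika-python bindings as below; this stands in for the Hadoop-based Tika jobs described above, and the file names are hypothetical.

```python
from tika import parser  # pip install tika; requires a Java runtime for the Tika server

def pdf_to_text(pdf_path, txt_path):
    # Parse the PDF and write the extracted plain text to disk.
    parsed = parser.from_file(pdf_path)
    content = parsed.get("content") or ""  # empty for image-only PDFs
    if not content.strip():
        raise ValueError(f"No extractable text in {pdf_path}")
    with open(txt_path, "w", encoding="utf-8") as out:
        out.write(content)

pdf_to_text("TurkishGrade5.pdf", "TurkishGrade5.txt")
```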
Fig. 2. An overview of the Distributed Readability Analysis System.
B. Textbook Data Pre-processing
Before analyzing the Turkish textbooks, we have performed the following pre-processing steps.
Normalization, which is the conversion of the character set to UTF-8 Unicode encoding.
Sentence segmentation, which is the segmentation of document content into sentences. We have used Zemberek, a Turkish NLP library [40], for this step.
Tokenization, which is the process of breaking up a text into tokens. In this step, multiple whitespace characters are replaced by a single whitespace character.
Filtering, which is the removal of special characters (+, -, *, #, /, \, $, =, &), numbers, stop words (i.e., meaningless words such as that, this, it, etc.) and author names.
Most Turkish readability studies in the literature pay no attention to the impact of stop words in their readability assessments. We eliminate the stop words in the selected textbooks to overcome this major shortcoming. Furthermore, we construct a set of Turkish stop words by determining the term frequency of each word. The general approach for creating a stop word list is to sort the words according to their term frequency, i.e., the number of times a word appears in the document. Stop words are high-frequency terms, that is, the most common words, which are less significant than other words; therefore, we remove them from the texts. A minimal sketch of these pre-processing steps is given below.
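The sketch below strings the tokenization, filtering and frequency-based stop-word steps together; the function names and the top-k cutoff are our own illustrative assumptions (sentence segmentation with Zemberek is omitted).

```python
import re
from collections import Counter

SPECIAL = set("+-*#/\\$=&")

def build_stop_words(corpus_tokens, top_k=100):
    # Stop words are taken as the top-k highest-frequency terms.
    freq = Counter(corpus_tokens)
    return {word for word, _ in freq.most_common(top_k)}

def preprocess(raw_text, stop_words):
    # Normalization: lowercase the already UTF-8-decoded text.
    text = raw_text.lower()
    # Tokenization: collapse runs of whitespace into single spaces.
    tokens = re.sub(r"\s+", " ", text).split()
    # Filtering: drop special characters, numbers and stop words.
    kept = []
    for tok in tokens:
        tok = "".join(ch for ch in tok if ch not in SPECIAL)
        if tok and not tok.isdigit() and tok not in stop_words:
            kept.append(tok)
    return kept
```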
C. Textbook Data Analysis
We use Hadoop to analyze the input data, i.e., the pre-processed textbooks stored in the Hadoop Distributed File System (HDFS). Hadoop [41] is the most widely used system in big data analysis. Traditional readability methods performed on a single machine typically use small and controlled datasets such as sample passages or units of textbooks. However, full textbook data are larger and noisier, and hard to process on small computational hardware. To avoid this problem, we use the Hadoop platform to analyze the readability of elementary school textbooks.
The Hadoop platform primarily provides two main abilities: distributed computing and parallel processing. For these, the Hadoop Distributed File System (HDFS) and MapReduce jobs are used. The data are stored in HDFS, and MapReduce uses these data as input. MapReduce jobs, controlled by a master node, are split into two functions, Map and Reduce. The Map function divides the data into a group of key-value pairs, and the outputs of the map tasks are sorted by their keys. The Reduce function merges the values into output data, which are stored in HDFS. In our Distributed Readability Analysis System, the MapReduce phases are described as follows:
1) Mapper Phase: The mapper parses each block of input data, i.e., the textbook contents from HDFS. The first mapper obtains the numbers of sentences, words and syllables (shown in Table II) and then writes the following key-value pair: <file name, textbook content>. The second mapper parses each word and counts how many times it occurs in a textbook file. The mapper emits the numeric value "1" for each word and writes the following key-value pairs to the reducer: <(word, filename), 1>. The keys comprise the distinct words (shown in Table II) in the textbook files, and the values comprise the list of emitted numeric values for each word. In the mapper phase, words containing special characters or digits, as well as stop words, are removed to filter the textbook data and keep them out of the distinct-word calculation.
2) Reducer Phase: The reducer receives the results from the mapper as input data and sums up the numbers of sentences, words and syllables. It then divides the number of words by the number of sentences to obtain the average sentence length, and the number of syllables by the number of words to obtain the average word length. In the next step, the reducer calculates the Atesman readability scores and writes the following key-value pairs to the output: <file name, list of (textbook content)>. The second reducer sums up the number of occurrences of the distinct words in each textbook file and writes to the output data using <filename> as the key and <list of (n)> as the value, which includes the total number of distinct words in each textbook. A Hadoop Streaming sketch of the first mapper/reducer pair is given below.
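The sketch below re-expresses the first mapper/reducer pair as Hadoop Streaming scripts in Python; the original system is a Java MapReduce job, so the role dispatch, the vowel-based syllable count and the environment-variable lookup are our own illustrative assumptions.

```python
#!/usr/bin/env python3
"""Hadoop Streaming sketch: run with ROLE=map or ROLE=reduce."""
import os
import re
import sys
from collections import defaultdict

VOWELS = set("aeıioöuü")

def mapper():
    # Hadoop Streaming exposes the split's input path via this env var.
    fname = os.environ.get("mapreduce_map_input_file", "unknown")
    for line in sys.stdin:
        words = re.findall(r"\w+", line.lower())
        sentences = len(re.findall(r"[.!?]", line))
        syllables = sum(1 for w in words for ch in w if ch in VOWELS)
        print(f"{fname}\t{sentences}\t{len(words)}\t{syllables}")

def reducer():
    totals = defaultdict(lambda: [0, 0, 0])  # file -> [sentences, words, syllables]
    for line in sys.stdin:
        fname, s, w, y = line.rstrip("\n").split("\t")
        for idx, val in enumerate((int(s), int(w), int(y))):
            totals[fname][idx] += val
    for fname, (s, w, y) in totals.items():
        avg_sentence_len = w / max(s, 1)  # words per sentence
        avg_word_len = y / max(w, 1)      # syllables per word
        score = 198.825 - 40.175 * avg_word_len - 2.610 * avg_sentence_len
        print(f"{fname}\t{score:.2f}")

if __name__ == "__main__":
    mapper() if os.environ.get("ROLE", "map") == "map" else reducer()
```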
After running the proposed system, the graphical results for average word length, average sentence length and the readability scores are shown in Fig. 3, Fig. 4 and Fig. 5, respectively. As the grade level of the textbooks increases, average sentence and word length also increase in proportion to the readability level. Overall, we achieved high accuracy in predicting the readability of Turkish elementary school textbooks. However, while the readability of the 4th-5th and 6th-7th grades varies between scores of 60 and 80, the readability score of the 8th grade surprisingly drops to 40.
V. PERFORMANCE EVALUATION
To analyze the readability of primary school Turkish textbooks as described in this paper, we created two different applications. The first is a Java console application for testing the performance of the traditional single-machine programming approach. The console application ran on a PC with a four-core 4.1 GHz Intel processor and 16 GB of main memory, using Ubuntu 14.04 LTS as the operating system.
The second application used the distributed Hadoop-based system. We used OpenStack to create a 10-node Hadoop cluster. A performance comparison of the two applications is given in Table III. The tests run on the Hadoop cluster demonstrate that the MapReduce application scales much better as the number of textbooks increases, and it offers better performance than the console application once the number of textbooks exceeds 2.
VI. CONCLUSIONS
This study focuses on how a readability system can provide high-performance distributed execution for elementary school textbooks based on key features such as average sentence length, average word length and the distribution of distinct words. For this purpose, the proposed system was evaluated and tested on the Hadoop MapReduce platform as the distributed execution engine. After building the application platform, we compared the performance of our distributed readability analysis. The performance results show that running our distributed readability analysis system on a large number of Turkish textbooks is feasible and efficient.
The main challenge for our study is the lack of a Turkish textbook corpus. We dynamically constructed the distributed readability analysis system on top of a Hadoop cluster because we intend to expand our textbook corpus in future work. Other future work will focus on improving Turkish readability measures.
Fig. 3. Average number of syllables per word.
Fig. 4. Average number of words per sentence.
Fig. 5. Predicting the readability of elementary school textbooks.
TABLE I. READABILITY FORMULAS IN THE LITERATURE

Readability Formula | Ref. | Formula (standard published form) | Shallow Features
Flesch Reading Ease | [6] | 206.835 - 1.015 (words/sentences) - 84.6 (syllables/words) | Average sentence length, syllables-based
Flesch-Kincaid Grade Level | [7] | 0.39 (words/sentences) + 11.8 (syllables/words) - 15.59 | Average sentence length, syllables-based
SMOG | [8] | 1.043 sqrt(30 x polysyllables/sentences) + 3.1291 | Hard words (more than three syllables)
Gunning Fog | [9] | 0.4 [(words/sentences) + 100 (hard words/words)] | Average sentence length, hard words
Automated Readability Index | [10] | 4.71 (characters/words) + 0.5 (words/sentences) - 21.43 | Average sentence length, character-based
Dale-Chall | [11] | 0.1579 (difficult word %) + 0.0496 (words/sentences) | Average sentence length, wordlist-based
Atesman | [29] | 198.825 - 40.175 (syllables/words) - 2.610 (words/sentences) | Average sentence length, syllables-based
TABLE II. FEATURE DISTRIBUTION OF TURKISH TEXTBOOKS

Textbook | Word Count | Sentence Count | Syllable Count | Distinct Words
4th Grade | 14917 | 1994 | 38396 | 4243
5th Grade | 17484 | 2309 | 45293 | 5095
6th Grade | 18232 | 2315 | 47726 | 5195
7th Grade | 20517 | 2482 | 52692 | 5835
8th Grade | 16895 | 1694 | 54680 | 5798
TABLE III. PERFORMANCE COMPARISON

Number of Textbooks | Console Application (sec) | MapReduce Application (sec)
1 | 1.5 | 2
2 | 6.5 | 3
5 | 141.692 | 5
10 | Out of memory | 10
100 | Out of memory | 57
TABLE IV. TEXTBOOK DATA CONVERSION ON THE HADOOP PLATFORM

Number of Textbooks | File Size (GB) | Running Time | Total Time (min)
10 | 1.3 | 15:21:24 - 15:25:39 | 4.25
20 | 2.5 | 15:33:09 - 15:39:25 | 6.26
50 | 5.6 | 15:45:32 - 15:54:19 | 8.78
100 | 11.2 | 16:00:21 - 16:14:35 | 14.23
REFERENCES

[1] S. M. Metev and V. P. Veiko, Laser Assisted Microtechnology, 2nd ed., R. M. Osgood, Jr., Ed. Berlin, Germany: Springer-Verlag, 1998.
[2] J. Breckling, Ed., The Analysis of Directional Time Series: Applications to Wind Speed and Direction, ser. Lecture Notes in Statistics, vol. 61. Berlin, Germany: Springer, 1989.
[3] Y. Sun, X. Wang, and Y. Yu, "Readability to financial report: A comparative study of Chinese and foreign countries," in Proc. 7th Int. Joint Conf. Computational Sciences and Optimization (CSO), IEEE, 2014.
[4] G. K. Berland et al., "Health information on the Internet: Accessibility, quality, and readability in English and Spanish," JAMA, vol. 285, no. 20, pp. 2612-2621, 2001.
[5] E. M. T. Leong, M. T. Ewing, and L. F. Pitt, "E-comprehension: Evaluating B2B websites using readability formulae," Industrial Marketing Management, vol. 31, pp. 125-131, 2002.
[6] R. Flesch, "A new readability yardstick," Journal of Applied Psychology, vol. 32, no. 3, p. 221, 1948.
[7] J. P. Kincaid et al., "Derivation of new readability formulas (Automated Readability Index, Fog Count and Flesch Reading Ease formula) for Navy enlisted personnel," Naval Technical Training Command, Millington, TN, Research Branch Rep. RBR-8-75, 1975.
[8] G. H. McLaughlin, "SMOG grading: A new readability formula," Journal of Reading, vol. 12, no. 8, pp. 639-646, 1969.
[9] R. Gunning, The Technique of Clear Writing, 1952.
[10] R. J. Senter and E. A. Smith, Automated Readability Index, Cincinnati Univ., OH, 1967.
[11] J. S. Chall and E. Dale, Readability Revisited: The New Dale-Chall Readability Formula. Brookline Books, 1995.
[12] J. M. Fletcher, "Measuring reading comprehension," Scientific Studies of Reading, vol. 10, no. 3, pp. 323-330, 2006.
[13] A. Katal, M. Wazid, and R. H. Goudar, "Big data: Issues, challenges, tools and good practices," in Proc. 6th Int. Conf. Contemporary Computing (IC3), IEEE, 2013.
[14] J. Dean and S. Ghemawat, "MapReduce: Simplified data processing on large clusters," Communications of the ACM, vol. 51, no. 1, pp. 107-113, 2008.
[15] L. A. Sherman, Analytics of Literature: A Manual for the Objective Study of English Prose and Poetry. Ginn, 1893.
[16] B. A. Lively and S. L. Pressey, "A method for measuring the vocabulary burden of textbooks," Educational Administration and Supervision, vol. 9, pp. 389-398, 1923.
[17] Y.-S. Lee et al., "Constructing a novel Chinese readability classification model using principal component analysis and genetic programming," in Proc. IEEE 12th Int. Conf. Advanced Learning Technologies (ICALT), 2012.
[18] W.-T. Kuo, C.-S. Huang, and C.-L. Liu, "Using linguistic features to predict readability of short essays for senior high school students in Taiwan," International Journal of Computational Linguistics & Chinese Language Processing, vol. 15, pp. 193-218, 2010.
[19] B. Broda et al., "Measuring readability of Polish texts: Baseline experiments," in Proc. 9th Int. Conf. Language Resources and Evaluation (LREC), 2014.
[20] P. Daowadung and Y.-H. Chen, "Using word segmentation and SVM to assess readability of Thai text for primary school students," in Proc. 8th Int. Joint Conf. Computer Science and Software Engineering (JCSSE), IEEE, 2011.
[21] Y.-H. Chen, Y.-H. Tsai, and Y.-T. Chen, "Chinese readability assessment using TF-IDF and SVM," in Proc. Int. Conf. Machine Learning and Cybernetics (ICMLC), vol. 2, IEEE, 2011.
[22] T. François and C. Fairon, "An AI readability formula for French as a foreign language," in Proc. 2012 Joint Conf. Empirical Methods in Natural Language Processing and Computational Natural Language Learning, ACL, 2012.
[23] S. Aluisio et al., "Readability assessment for text simplification," in Proc. NAACL HLT 2010 5th Workshop on Innovative Use of NLP for Building Educational Applications, ACL, 2010.
[24] S. Sato, S. Matsuyoshi, and Y. Kondoh, "Automatic assessment of Japanese text readability based on a textbook corpus," in Proc. LREC, 2008.
[25] C. H. Björnsson, Läsbarhet. Liber, 1968.
[26] J. Sjöholm, "Probability as readability: A new machine learning approach to readability assessment for written Swedish," 2012.
[27] N. Mat Daud, H. Hassan, and N. A. Aziz, "A corpus-based readability formula for estimate of Arabic texts reading difficulty," World Applied Sciences Journal, vol. 21, pp. 168-173, 2013.
[28] H. S. Al-Khalifa and A. Al-Ajlan, "Automatic readability measurements of the Arabic text: An exploratory study," 2010.
[29] E. Ateşman, "Measuring readability in Turkish," AU Tömer Language Journal, vol. 58, pp. 171-174, 1997.
[30] G. Çetinkaya and L. Uzun, "Readability features of texts in Turkish textbooks," pp. 141-155, 2010.
[31] A. Okur and G. Arı, "Readability of texts in Turkish textbooks in grades 6, 7, 8," Elementary Education Online, vol. 12, no. 1, pp. 202-226, 2013.
[32] A. Z. Guven, "Readability of texts in textbooks in teaching Turkish to foreigners," Anthropologist, vol. 18, no. 2, pp. 513-522, 2014.
[33] W. H. DuBay, "The principles of readability," Online Submission, 2004.
[34] G. Hargis, "Readability and computer documentation," ACM Journal of Computer Documentation, vol. 24, no. 3, pp. 122-131, 2000.
[35] W. Kintsch and J. R. Miller, "Readability: A view from cognitive psychology," Teaching Research Reviews, pp. 220-232, 1981.
[36] M. Milone, "Development of the ATOS readability formula," Renaissance Learning, 2014.
[37] T. P. Lau, "Chinese readability analysis and its applications on the Internet," Ph.D. dissertation, The Chinese University of Hong Kong, 2006.
[38] T. H. Hong et al., "Big data processing with MapReduce for e-book," Int. J. Multimedia and Ubiquitous Engineering, vol. 8, no. 1, pp. 151-162, 2013.
[39] Apache Tika, http://tika.apache.org.
[40] A. A. Akın and M. D. Akın, "Zemberek, an open source NLP framework for Turkic languages," Structure, vol. 10, 2007.
[41] T. White, Hadoop: The Definitive Guide. O'Reilly Media, Inc., 2012.
| [] |
[
"Neural Related Work Summarization with a Joint Context-driven Attention Mechanism",
"Neural Related Work Summarization with a Joint Context-driven Attention Mechanism"
] | [
"Yongzhen Wang \nSchool of Maritime Economics and Management\nDalian Maritime University\nDalianChina\n",
"Xiaozhong Liu \nSchool of Informatics, Computing and Engineering\nIndiana University Bloomington\nBloomingtonINUSA\n\nAlibaba Group\nHangzhouChina\n",
"Zheng Gao \nSchool of Informatics, Computing and Engineering\nIndiana University Bloomington\nBloomingtonINUSA\n"
] | [
"School of Maritime Economics and Management\nDalian Maritime University\nDalianChina",
"School of Informatics, Computing and Engineering\nIndiana University Bloomington\nBloomingtonINUSA",
"Alibaba Group\nHangzhouChina",
"School of Informatics, Computing and Engineering\nIndiana University Bloomington\nBloomingtonINUSA"
] | [] | Conventional solutions to automatic related work summarization rely heavily on humanengineered features. In this paper, we develop a neural data-driven summarizer by leveraging the seq2seq paradigm, in which a joint context-driven attention mechanism is proposed to measure the contextual relevance within full texts and a heterogeneous bibliography graph simultaneously. Our motivation is to maintain the topic coherency between a related work section and its target document, where both the textual and graphic contexts play a big role in characterizing the relationship among scientific publications accurately. Experimental results on a large dataset show that our approach achieves a considerable improvement over a typical seq2seq summarizer and five classical summarization baselines. | 10.18653/v1/d18-1204 | [
"https://arxiv.org/pdf/1901.09492v1.pdf"
] | 53,083,244 | 1901.09492 | 60ebf1bf9be1912a886608f2b15a19f2aa5cbd66 |
Neural Related Work Summarization with a Joint Context-driven Attention Mechanism
28 Jan 2019
Yongzhen Wang
School of Maritime Economics and Management
Dalian Maritime University
DalianChina
Xiaozhong Liu
School of Informatics, Computing and Engineering
Indiana University Bloomington
BloomingtonINUSA
Alibaba Group
HangzhouChina
Zheng Gao
School of Informatics, Computing and Engineering
Indiana University Bloomington
BloomingtonINUSA
Neural Related Work Summarization with a Joint Context-driven Attention Mechanism
28 Jan 2019
Conventional solutions to automatic related work summarization rely heavily on humanengineered features. In this paper, we develop a neural data-driven summarizer by leveraging the seq2seq paradigm, in which a joint context-driven attention mechanism is proposed to measure the contextual relevance within full texts and a heterogeneous bibliography graph simultaneously. Our motivation is to maintain the topic coherency between a related work section and its target document, where both the textual and graphic contexts play a big role in characterizing the relationship among scientific publications accurately. Experimental results on a large dataset show that our approach achieves a considerable improvement over a typical seq2seq summarizer and five classical summarization baselines.
Introduction
In scientific fields, scholars need to contextualize their contributions to help readers acquire an understanding of their research papers. For this purpose, the related work section of an article serves as a pivot to connect prior domain knowledge, in which the innovation and superiority of the current work are displayed by a comparison with previous studies. While citation prediction can assist in drafting a reference collection (Nallapati et al., 2008), consuming all these papers is still a laborious job, where authors must read every source document carefully and locate the most relevant content cautiously.

As a solution for saving authors' efforts, automatic related work summarization is essentially a topic-biased multi-document problem (Cong and Kan, 2010), which relies heavily on human-engineered features to retrieve snippets from the references. Most recently, neural networks have enabled a data-driven sequence-to-sequence (seq2seq) architecture for natural language generation (Bahdanau et al., 2014, 2016), where an encoder reads a sequence of words/sentences into a context vector, from which a decoder yields a sequence of specific outputs. Nonetheless, compared to scenarios like machine translation with an end-to-end nature, aligning a related work section to its source documents is far more challenging.
As a solution in saving authors' efforts, automatic related work summarization is essentially a topic-biased multi-document problem (Cong and Kan, 2010), which relies heavily on human-engineered features to retrieve snippets * Corresponding author from the references. Most recently, neural networks enable a data-driven architecture sequenceto-sequence (seq2seq) for natural language generation (Bahdanau et al., 2014(Bahdanau et al., , 2016, where an encoder reads a sequence of words/sentences into a context vector, from which a decoder yields a sequence of specific outputs. Nonetheless, compared to scenarios like machine translation with an end-to-end nature, aligning a related work section to its source documents is far more challenging.
To address the summarization alignment, former studies try to apply an attention mechanism to measure the saliency/novelty of each candidate word/sentence (Tan et al., 2017), with the aim of locating the most representative content to retain primary coverage. However, toward summarizing a related work section, authors should be more creative when organizing text streams from the reference collection, where the selected content ought to highlight the topic bias of current work, rather than retell each reference in a compressed but balanced fashion. This motivates us to introduce the contextual relevance and characterize the relationship among scientific publications accurately.
Generally speaking, for a pair of documents, a larger lexical overlap often implies a higher similarity in their research backgrounds. Yet such a hypothesis is not always true when sampling content from multiple relevant topics. Take "DSSM" 1 as an example, from viewpoint of the abstract similarity, those references investigating "Information Retrieval", "Latent Semantic Model" or "Clickthrough Data Mining" could be of more importance in correlation and should be greatly sampled for the related work section. But in reality, this article spends a bit larger chunk of texts (about 58%) to elaborate "Deep Learning" during the literature review, which is quite difficult for machines to grasp the contextual relevance therein. In addition, other situations like emerging new concepts also suffer from the terminology variation or paraphrasing in varying degrees.
In this study, we utilize a heterogeneous bibliography graph to embody the relationships within a scalable scholarly database. Over the recent past, there has been a surge of interest in exploiting diverse relations to analyze bibliometrics, ranging from literature recommendation (Yu et al., 2015) to topic evolvement (Jensen et al., 2016). In a graphical sense, interconnected papers transfer credit among each other directly or indirectly through various patterns, such as paper citation, author collaboration, keyword association and release at series of venues, which constitutes the graphic context for outlining the concerned topics. Unfortunately, the variety of edge types may pollute the information inquiry, since some edges are not as important as others for sampling content. Meanwhile, most existing solutions for mining heterogeneous graphs depend on human supervision, e.g., hyperedges (Bu et al., 2010) and metapaths (Swami et al., 2017), which is usually not easy to obtain due to the complexity of graph schemas.
Our contribution is threefold: First, we explore the edge-type usefulness distribution (EUD) on a heterogeneous bibliography graph, which enables the relationship discovery (between any pair of papers) for sampling the interested information. Second, we develop a novel seq2seq summarizer for the automatic related work summarization, where a joint context-driven attention mechanism is proposed to measure the contextual relevance within both textual and graphic contexts. Third, we conduct experiments on 8,080 papers with native related work sections, and experimental results show that our approach outperforms a typical seq2seq summarizer and five classical summarization baselines significantly.
Related Work
This study touches on several strands of research within automatic related work summarization and seq2seq summarizers, as follows.
The idea of creating a related work section automatically was pioneered by Cong and Kan (2010), who design two rule-based strategies to extract sentences for general and detailed topics respectively. Subsequently, Hu and Wan (2014) exploit probabilistic latent semantic indexing to split candidate texts into different topic-biased parts, then apply several regression models to learn the importance of each sentence. Similarly, Widyantoro and Amin (2014) transform the summarization problem into classifying rhetorical categories of sentences, where each sentence is represented as a feature vector containing word frequency, sentence length, etc. Most recently, Chen and Hai (2016) construct a graph of representative keywords, in which a minimum steiner tree is figured out to guide the summarization as finding the least number of sentences to cover the discriminated nodes. In general, compared to traditional summaries, automatic related work summarization has received less attention in the past. Moreover, these existing solutions cannot work without manual intervention, which limits their application scale to an extremely small size (see Table 1).

The earliest seq2seq summarizer stems from Rush et al. (2015), which utilizes a feed-forward network for compressing sentences and is later expanded by Chopra et al. (2016) with a recurrent neural network (RNN). On this basis, Nallapati et al. (2016a,c) present a set of RNN-based models to address various aspects of abstractive summarization. Typically, Cheng and Lapata (2016) propose a general seq2seq summarizer, where an encoder learns the representation of documents while a decoder generates each word/sentence using an attention mechanism. With further research, Nallapati et al. (2016b) extend the sentence compression by trying a hierarchical attention architecture and a limited vocabulary during the decoding phase. Next, Narayan et al. (2017) leverage side information as an attention cue to locate focus regions for summaries. Recently, inspired by PageRank, Tan et al. (2017) introduce a graph-based attention mechanism to tackle the saliency problem. Nonetheless, these methods all address the single-document scenario, which is far from the nature of automatic related work summarization.
In this study, derived from the general seq2seq summarizer of Cheng and Lapata (2016), we propose a joint context-driven attention mechanism to measure the contextual relevance within full texts and a heterogeneous bibliography graph simultaneously. To the best of our knowledge, we make the first attempt to develop a neural data-driven solution for automatic related work summarization, and the practice of using the joint context as an attention cue is also little explored to date. Besides, this study is conducted on a dataset with up to 8,080 papers, which is much larger than in previous studies and makes our results more convincing.
Since text summarization via word-by-word generation is not mature at present (Cheng and Lapata, 2016; Nallapati et al., 2016b; Tan et al., 2017), we adopt the extractive sentential fashion for our summarizer, where a related work section is created by extracting and linking sentences from a reference collection. Meanwhile, this study follows the mode of Cong and Kan (2010), who assume that the collection is given as part of the input, and does not consider the citation sentences of each reference.
Methodology
Problem Formulation
To adapt the seq2seq paradigm, we formulate the automatic related work summarization into a sequential text generation problem as follows.
Given an unedited paper t (target document) and its n-size reference collection R^t = {r^t_{1:n}}, we draw up a related work section for t by selecting sentences from R^t. To be specific, each reference (source document) is traversed once sequentially and, without loss of generality, in the descending order of their significance to t. Consequently, all sentences to be selected are concatenated into an m-length sequence S^t = {s^t_{1:m}} to feed the summarizer. For each candidate sentence s^t_j, once it is visited, a label y^t_j ∈ {0, 1} is determined synchronously based on whether or not this sentence should be covered in the output. Our objective is to maximize the log-likelihood probability of the observed labels Y^t = {y^t_{1:m}} under R^t, S^t and summarizer parameters θ, as shown below:

$$\max \sum_{j=1}^{m} \log \Pr(y_j^t \mid R^t; S^t; \theta) \quad (1)$$

Heterogeneous Bibliography Graph

Prior works have illustrated that one of the most promising channels for information recommendation is the community network (Guo and Liu, 2015). In this study, we verify this hypothesis toward the content sampling of scientific summarization, by investigating heterogeneous relations among different kinds of objects such as papers, authors, keywords and venues.
For measuring the relationship among scientific publications, we introduce a directed graph G = (V, E) to contain various bibliographical connections, as shown in Figure 1, which involves four objects and ten edge types in total. Each edge e_{j,i} ∈ E is assigned a value π(e_{j,i})·z ∈ [0, 1] to indicate the transition probability between two nodes v_j, v_i ∈ V, where π(e_{j,i}) ∈ R returns the unknown edge-type usefulness of e_{j,i} and z ∈ R is a normalizing weight. For most edge types, we model the weight as one divided by the number of outgoing links of the same kind, but for the "contribution" category, the weight modeling is accomplished by PageRank with Priors (White and Smyth, 2003). Note that different edge types usually take very uneven importance in one particular task (Yu et al., 2015), and it is quite difficult to enable classical heterogeneous graph mining without expert-defined paths for random walk (Bu et al., 2010; Swami et al., 2017). A sketch of the transition-probability assignment is given below.
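The following is a minimal sketch of this weighting, assuming the per-edge-type usefulness values π are given (in the paper they are tuned by the evolutionary procedure described next); the networkx usage and edge-type names are our own illustrative choices.

```python
import networkx as nx
from collections import defaultdict

def assign_transition_probs(G, usefulness):
    """Set each edge's transition probability to pi(edge type) * z, where z
    normalizes over the source node's outgoing links of the same type."""
    out_counts = defaultdict(int)
    for u, _, data in G.edges(data=True):
        out_counts[(u, data["etype"])] += 1
    for u, _, data in G.edges(data=True):
        z = 1.0 / out_counts[(u, data["etype"])]
        data["prob"] = usefulness[data["etype"]] * z

G = nx.DiGraph()
G.add_edge("paper_a", "paper_b", etype="cites")
G.add_edge("paper_a", "author_x", etype="written_by")
G.add_edge("paper_a", "venue_v", etype="published_at")
assign_transition_probs(G, {"cites": 0.9, "written_by": 0.5, "published_at": 0.2})
```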
In this study, we propose an unsupervised approach to capture the connectivity diversity by introducing an optimal EUD for navigating random walkers on the heterogeneous bibliography graph. Given a target document t, the optimized usefulness assignment can help those walkers lock onto a top-n recommendation R̂^t that best matches the reference collection R^t, as shown in Eq. 2. On this basis, the well-performing algorithm node2vec (Grover and Leskovec, 2016) is adopted to conduct an unsupervised random walk that vectorizes every node v* ∈ V into a d-dimensional embedding ϕ(v*) ∈ R^d, so that any edge e* ∈ E can be computed therefrom. Specifically, we employ an evolutionary algorithm (EA) to tune the EUD, which enjoys advantages over conventional gradient methods in both convergence speed and accuracy.
$$\arg\max_{\mathrm{EUD}} \sum_{t} \sum_{j=1}^{n} \log \Pr\big(r_j^t \in \hat{R}^t \mid \mathrm{EUD}\big) \quad (2)$$
EA Setup: We use an array of real numbers x_{1:10} to code an individual in the population, where x_j ∈ [0, 1] denotes the usefulness of the j-th edge type. Given an EUD, PageRank (Page, 1998) runs on the graph to infer the relative importance of each node for each target document, and a fitness function judges how well this EUD locates the ground-truth references, as in Eq. 3, in which α(r_j^t, R̂^t) ∈ N returns the ranking of r_j^t within R̂^t if r_j^t belongs to R̂^t, and otherwise a big penalty coefficient that prevents irrelevant references from being recommended. Like most other optimizations, this procedure starts with a randomly generated population.
$$\max \frac{1}{t} \sum_{j=1}^{n} \big(j - \alpha(r_j^t, \hat{R}^t)\big) \quad (3)$$
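A minimal sketch of this fitness evaluation is given below; the `recommend_top_n` helper, which is assumed to run PageRank under a given EUD and return a ranked top-n list, and the penalty constant are hypothetical.

```python
PENALTY = 10_000  # large penalty for references missing from the top-n list

def alpha(ref, recommended):
    # Rank of ref within the recommendation (1-based), or a big penalty.
    return recommended.index(ref) + 1 if ref in recommended else PENALTY

def fitness(eud, targets, recommend_top_n):
    """Average the Eq.-3 score over target documents; higher is better.
    targets maps each target document to its ground-truth reference list."""
    total = 0.0
    for target, refs in targets.items():
        recommended = recommend_top_n(target, eud)
        total += sum(j - alpha(r, recommended) for j, r in enumerate(refs, start=1))
    return total / len(targets)
```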
EA Operator: We choose the operator from differential evolution (Das and Suganthan, 2011) to generate offspring for each individual. The basic idea is to utilize the difference between individuals to disturb each trial object. First, three distinct individuals x^{r1}_{1:10}, x^{r2}_{1:10}, x^{r3}_{1:10} are sampled randomly from the current population to create a variant x^{var}_{1:10}, as shown in Eq. 4, where f ∈ R indicates the scaling factor. Next, x^{var}_{1:10} is crossed with a trial object x^{tri}_{1:10} to build a hybrid one x^{hyb}_{1:10}, as in Eq. 5, in which c ∈ [0, 1] denotes the crossover factor and u ∈ [0, 1] represents a uniform random number. At last, the fitnesses of x^{tri}_{1:10} and x^{hyb}_{1:10} are compared, and the better one is saved as the offspring into a new round of evolution.
$$x_j^{var} = x_j^{r_1} + f \times (x_j^{r_2} - x_j^{r_3}) \quad (4)$$

$$x_j^{hyb} = \begin{cases} x_j^{var}, & \text{if } u \le c \\ x_j^{tri}, & \text{otherwise} \end{cases} \quad (5)$$
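The sketch below instantiates Eqs. 4-5 for one individual; clipping the variant back into [0, 1] is our own assumption to keep the usefulness values valid.

```python
import random

def de_offspring(population, trial_idx, f=0.5, c=0.9):
    """One differential-evolution step (Eqs. 4-5) for a 10-dimensional EUD."""
    trial = population[trial_idx]
    # Sample three distinct individuals other than the trial one.
    candidates = [i for i in range(len(population)) if i != trial_idx]
    r1, r2, r3 = random.sample(candidates, 3)
    hybrid = []
    for j in range(len(trial)):
        variant_j = population[r1][j] + f * (population[r2][j] - population[r3][j])
        variant_j = min(max(variant_j, 0.0), 1.0)  # keep usefulness in [0, 1]
        hybrid.append(variant_j if random.random() <= c else trial[j])
    return hybrid  # compared against the trial object by the Eq.-3 fitness
```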
Neural Extractive Summarization
As Figure 2 shows, we model our seq2seq summarizer with a hierarchical encoder and an attention-based decoder, as described below.
Hierarchical Encoder: Our encoder consists of two major layers, namely a convolutional neural network (CNN) and a long short-term memory (LSTM)-based RNN. Specifically, the CNN deals with word-level texts to derive sentence-level meanings, which are then taken as inputs to the RNN for handling longer-range dependencies within larger units like a paragraph or even a whole paper. This conforms to the nature of a document, which is composed of words, sentences and higher levels of abstraction (Narayan et al., 2017). Consider a sentence of p words s_j^t = {w^t_{j,1:p}}, where each word w^t_{j,i} can be represented by a d-dimensional embedding φ(w^t_{j,i}) ∈ R^d. Previous studies have illustrated the strength of CNNs in representing sentences, owing to their capability to learn compressed expressions and handle sentences of variable length (Kim, 2014). First, a convolution kernel k ∈ R^{d×q×d} is applied to each possible window of q words to construct a list of feature maps as:
$$g_{j,i}^t = \tanh\big(k \times \phi(w_{j,i:i+q-1}^t) + b\big) \quad (6)$$
where b ∈ R^d denotes the bias term. Next, max-over-time pooling (Collobert et al., 2011) is performed on all generated features to obtain the sentence embedding as:
$$\phi(s_j^t) = \max_{1 \le i \le d} \; g_{j,1:p-q+1}^t[i, :] \quad (7)$$
where [i, :] denotes the i-th row of a matrix. Given a sequence of sentences S^t = {s^t_{1:m}}, we then use the RNN to yield an equal-length array of hidden states, where LSTM has been shown to alleviate the vanishing gradient problem when training long sequences (Hochreiter and Schmidhuber, 1997). Each hidden state can be viewed as a local representation focusing on the current and former sentences together, and is updated as:
$$h_j^t = \mathrm{LSTM}\big(\phi(s_j^t),\, h_{j-1}^t\big) \in \mathbb{R}^d.$$
In practice, we use multiple kernels with various widths to produce a group of embeddings for each sentence, and average them to capture the information inside different n-grams; a sketch of this encoder is given below. As Figure 2 (bottom) shows, the sentence s_j^t involves six words, and two kernels of widths two (orange) and three (green) abstract sets of five and four feature maps respectively. Meanwhile, since rhetorical structure theory (Mann and Thompson, 2009) points out that association must exist between any two parts of coherent text, the RNN is only applied to model sentence relations within a single document, because we cannot expect a dependency between two sections from different references.
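A minimal NumPy sketch of Eqs. 6-7 with multi-width kernel averaging follows; the random initialization and shapes are illustrative, not the trained TensorFlow model described in the experiments.

```python
import numpy as np

def encode_sentence(word_embs, kernels, biases):
    """CNN sentence encoder: word_embs is a (p, d) matrix, each kernel a
    (q, d, d) tensor; returns the averaged d-dimensional sentence embedding."""
    outputs = []
    for k, b in zip(kernels, biases):
        q = k.shape[0]
        p = word_embs.shape[0]
        feats = []
        for i in range(p - q + 1):                    # slide over q-word windows
            window = word_embs[i:i + q]               # (q, d)
            feats.append(np.tanh(np.einsum("qd,qde->e", window, k) + b))
        feats = np.stack(feats, axis=1)               # (d, p - q + 1)
        outputs.append(feats.max(axis=1))             # max-over-time pooling
    return np.mean(outputs, axis=0)                   # average across kernel widths

d = 128
embs = np.random.randn(6, d)                          # a six-word sentence
kernels = [np.random.randn(q, d, d) * 0.01 for q in (3, 4, 5)]
biases = [np.zeros(d) for _ in (3, 4, 5)]
sentence_vec = encode_sentence(embs, kernels, biases)  # shape (d,)
```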
Attention-based Decoder: Our decoder labels each sentence s_j^t as 0/1 sequentially, according to whether it is salient or novel enough, and whether it is relevant to the target document t. As shown in Figure 2 (top), the binary decision y_j^t is made from both the hidden state h_j^t and the context vector h̃_j^t produced by an attention mechanism (grey background). In particular, this attention (red dashed line) acts as an intermediate stage to determine which sentences to highlight so as to provide the contextual information for the current decision (Bahdanau et al., 2014). Given H^t = {h^t_{1:m}}, the decoder returns the probability of y_j^t = 1 as below:
$$\Pr(y_j^t = 1 \mid R^t; S^t; \theta) = \mathrm{sigmoid}\big(\delta(h_j^t, \tilde{h}_j^t)\big) \quad (8)$$

$$\tilde{h}_j^t = \sum_{i=1}^{m} a_{j,i}\, h_i^t \quad (9)$$
where δ(h_j^t, h̃_j^t) ∈ R denotes a fully connected layer taking as input the concatenation of h_j^t and h̃_j^t, and a_{j,i} ∈ [0, 1] is the attention weight indicating how much the supporting sentence s_i^t contributes to extracting the candidate one s_j^t. Apart from saliency and novelty, two traditional attention factors (Tan et al., 2017), we focus on the contextual relevance within both textual and graphic contexts to distinguish the relationship from near to far, as shown in Eq. 10 and Eq. 11. To be specific: 1) h_j^{tT} W_s h_i^t represents the saliency of s_i^t to s_j^t; 2) −d_j^{tT} W_n h_i^t indicates the novelty of s_i^t with respect to the dynamic output d_j^t; 3) φ(t)^T W_t h_i^t denotes the relevance of s_i^t to t from the textual context; 4) ϕ(t)^T W_g ϕ(h_i^t) refers to the relevance from the graphic context. More concretely, W_* ∈ R^{d×d} characterizes a learnable matrix, φ(t) returns the average of hidden states from t, and ϕ(t) and ϕ(h_i^t) return the node embeddings of t and of the source document that h_i^t belongs to, respectively. Note that φ(·) and ϕ(·) represent two distinct embedding spaces: the former reflects the lexical collocations of the corpus, and the latter embodies the connectivity patterns of the associated graph.
$$a_{j,i} = \underbrace{h_j^{tT} W_s h_i^t}_{\text{saliency}} \;-\; \underbrace{d_j^{tT} W_n h_i^t}_{\text{novelty}} \;+\; \underbrace{\phi(t)^T W_t h_i^t}_{\text{relevance 1}} \;+\; \underbrace{\varphi(t)^T W_g\, \varphi(h_i^t)}_{\text{relevance 2}} \quad (10)$$

$$d_j^t = \sum_{i=1}^{j-1} \Pr(y_i^t = 1 \mid R^t; S^t; \theta) \times h_i^t \quad (11)$$
The basic idea behind our attention mechanism is as follows: if a supporting sentence more closely resembles a candidate one, or overlaps less with the dynamic output, or is more relevant to the target document, then it can provide more contextual information to facilitate the current decision on being extracted or not, thereby taking a higher weight in the generated context vector; a sketch of this scoring is given below. This attention guides the goal related work section to maximize the representativeness of the selected sentences (saliency & novelty) while minimizing the semantic distance to the target document (relevance). This is consistent with the way scholars consume a reference collection, with a min-max objective in mind.
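The NumPy sketch below scores the supporting sentences for one candidate as in Eq. 10; normalizing the scores with a softmax so that the weights lie in [0, 1] is our own assumption, as is every variable name.

```python
import numpy as np

def attention_weights(H, d_j, j, phi_t, node_t, node_docs, Ws, Wn, Wt, Wg):
    """H: (m, d) hidden states; d_j: dynamic output (Eq. 11); phi_t: average
    hidden state of the target; node_t / node_docs[i]: graph embeddings of the
    target and of the paper that sentence i comes from."""
    scores = []
    for i in range(len(H)):
        s = (H[j] @ Ws @ H[i]               # saliency of sentence i to sentence j
             - d_j @ Wn @ H[i]              # novelty w.r.t. the dynamic output
             + phi_t @ Wt @ H[i]            # relevance from the textual context
             + node_t @ Wg @ node_docs[i])  # relevance from the graphic context
        scores.append(s)
    scores = np.array(scores)
    exp = np.exp(scores - scores.max())     # softmax normalization (assumed)
    return exp / exp.sum()
```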
Experiment
Experimental Setup
This section presents the experimental setup for assessing our approach, including 1) the dataset used for training and testing, 2) implementation details, and 3) contrast methods and evaluation metrics. Dataset: We conduct experiments on a dataset 2 created from the ACM digital library, where metadata and full texts are derived from PDF files. In detail, this dataset includes 371,891 papers, 779,810 authors, 9,204 keywords and 807 venues in total. Note that we ignore keywords with frequency below a certain threshold, and adopt the greedy matching of Guo et al. (2013) to generate pseudo keywords for papers lacking topic descriptions. For each target document, the references are traversed in descending order of the number of citations in the related work section (primary) and in the full paper (secondary). We first apply a series of pre-processing steps such as lowercasing and stemming to standardize candidate sentences, then remove those that are too short or too long (< 7 or > 80 words). On this basis, a total of 8,080 papers are selected to evaluate our approach, each containing more than 15 references found in the dataset and a related work section of at least 500 words. For the heterogeneous bibliography graph, however, all source data have to be imported to ensure the structural integrity of communities. Besides, this graph is constructed year-by-year to preclude the effect of later publications on earlier ones.
Implementation: We use TensorFlow for implementation, where the dimensions of both embeddings and hidden states are 128. For the CNN, word2vec (Mikolov et al., 2013) is utilized to initialize the word embeddings, which can be further tuned during the training phase. Meanwhile, we follow the work of Kim (2014) in applying a list of kernels with widths {3, 4, 5}. As for the RNN, each LSTM module is set to a single layer, and all input documents are padded to the same length, along with a mark indicating the real number of sentences. Based on these settings, we train our summarizer using Adam with the default settings of Kingma and Ba (2014), and perform mini-batch cross-entropy training with a batch of one target document for 20 epochs.
To create training data for our summarizer, each reference needs to be annotated with the ground truth in advance, i.e., candidate sentences are tagged with 0/1 to indicate whether they are summary-worthy or not. Specifically, we follow the heuristic practice of Cao et al. (2016) and Nallapati et al. (2016b) to compute the ROUGE-2 score (Lin and Hovy, 2003) of each sentence with respect to the native related work sections (gold standards). Next, the sentences with high scores are chosen as positive samples and the rest as negative ones, such that the total score of the selected sentences is maximized with respect to the gold standard; a sketch of this greedy selection is given below. As for testing, we relax the number of sentences to be selected, and focus on the classification probability from Eq. 8. In this study, cross validation is applied to split the dataset into ten equal parts at random, of which nine are used for training and the remaining one for testing.
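A minimal greedy variant of this labeling heuristic is sketched below; the bigram-recall approximation of ROUGE-2 and the fixed selection budget are our own simplifications.

```python
from collections import Counter

def bigrams(tokens):
    return Counter(zip(tokens, tokens[1:]))

def rouge2_recall(candidate_tokens, reference_bigrams):
    overlap = sum((bigrams(candidate_tokens) & reference_bigrams).values())
    total = sum(reference_bigrams.values())
    return overlap / total if total else 0.0

def greedy_label(sentences, gold_tokens, budget):
    """Greedily pick the sentences that most increase ROUGE-2 recall against
    the gold related work section; picked sentences are labeled 1, the rest 0."""
    ref = bigrams(gold_tokens)
    selected, summary = set(), []
    for _ in range(budget):
        base = rouge2_recall(summary, ref)
        best, best_gain = None, 0.0
        for idx, sent in enumerate(sentences):
            if idx in selected:
                continue
            gain = rouge2_recall(summary + sent, ref) - base
            if gain > best_gain:
                best, best_gain = idx, gain
        if best is None:
            break
        selected.add(best)
        summary += sentences[best]
    return [1 if i in selected else 0 for i in range(len(sentences))]
```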
Evaluation We adopt the widely used toolkit ROUGE (Lin and Hovy, 2003) to evaluate the summarization performance automatically. In particular, we report ROUGE-1 and ROUGE-2 (unigram and bigram overlapping) as a way to assess the informativeness, and ROUGE-L (the longest common subsequence) as a means to assess the fluency, in terms of fixed bytes of gold standards.
To validate the proposed attention mechanism, we compare our approach (denoted as P.S+N+Rteg+EUD) against six variants: 1) P.void: a plain seq2seq summarizer without attention; 2) P.S: using saliency as the only attention factor; 3) P.S+N: leveraging both saliency and novelty; 4) P.S+N+Rt: further incorporating the relevance from the textual context; 5) P.S+N+Rtog: gaining the relevance from the graphic context of a homogeneous citation graph; 6) P.S+N+Rteg: utilizing the heterogeneous bibliography graph, but with each edge type given the same usefulness.
In addition, we also select six representative summarization methods as a benchmark group. The first is the general seq2seq summarizer by Cheng and Lapata (2016), denoted as PointerNet, which employs an attention mechanism to extract sentences directly after reading them. Following are five classical generic solutions: 1) Luhn (Luhn, 1958): a heuristic summarization based on word frequency and distribution; 2) MMR (Carbonell and Goldstein, 1998): a diversity-based re-ranking to produce summaries; 3) LexRank (Erkan et al., 2004): a graph-based summary technique inspired by PageRank and HITS; 4) SumBasic (Nenkova and Vanderwende, 2005): a frequency-based summarizer with duplication removal; 5) NltkSum (Acanfora et al., 2014): a natural language toolkit (NLTK)-based implementation for summarization.
For clarity, Luhn, LexRank and SumBasic are analogous to the work of Hu and Wan (2014), which extracts sentences scoring the highest in significance, and they are also contrasted in the latest studies on neural summarizers (Tan et al., 2017). Meanwhile, MMR often serves as a part/post-processing step of existing techniques to avoid redundancy (Cohan and Goharian, 2017), and we introduce NltkSum to investigate the impact of grammatical/semantic analysis on automatic related work summarization. Note that former studies specifically targeting this task require extensive human involvement (see Table 1), thus we cannot apply them to such a large dataset as in this study.

Results and Discussion

Table 2 reports the evaluation comparison over ROUGE metrics. From the top half, all scores show a gradual upward trend as saliency, novelty, relevance (from both textual and graphic contexts) and EUD are incorporated one after another, which demonstrates the validity of our attention mechanism for summarizing related work sections. To be specific, we further reach the following conclusions: 1) P.void vs. P.S vs. P.S+N: Both saliency and novelty are effective factors for locating the required content for summaries, which is consistent with prior studies.
2) P.S+N vs. P.S+N+Rt: Contextual relevance does contribute to addressing the alignment between a related work section and its source documents.
3) P.S+N+Rt vs. P.S+N+Rtog: Textual context alone cannot provide full evidence to characterize the relationship among scientific publications exactly. 4) P.S+N+Rtog vs. P.S+N+Rteg: The heterogeneous bibliography graph involves richer contextual information than a homogeneous citation graph. 5) P.S+N+Rteg vs. P.S+N+Rteg+EUD: EUD plays an indispensable role in organizing accurate contextual relevance on a heterogeneous graph.

Continuing the "DSSM" example, Figure 3 visualizes the number of extracted words on each reference cluster (we pack the references cited in the same subsection of the related work section as one reference cluster) under different attention factors. It can be seen that only after adding the relevance, especially that from the graphic context, into the attention can our summarizer correctly sample the content from "Deep Learning" (yellow line) and eliminate that originating from "Other Sources" by a big margin (green line). As this example falls into methodology transferring, many of its word collocations are not yet idiomatic combinations; for instance, "Deep Neural Network" co-occurs with "Clickthrough Data", which at that time was more frequently related to "Latent Semantic Analysis", resulting in a somewhat biased textual context. By contrast, the graphic context suffers less from this bias because it characterizes the connectivity patterns (in a real-time setup) instead of n-gram statistics, thus offering a more robust measure for the contextual relevance.

The bottom half of Table 2 illustrates the superiority of our approach over six representative summarization methods. Above all, Luhn, LexRank and MMR, three summarizers that simply exploit shallow text features (word frequency and associated sentence similarity) to measure either significance or redundancy, fall far behind the plain variant P.void, which partly reflects the strength of the seq2seq paradigm in summarizing a related work section. Second, with a combination of significance and redundancy, SumBasic achieves a drastic increase on ROUGE-1 and a mild rise on ROUGE-2 respectively, but it still cannot improve ROUGE-L marginally. This is because simple text statistics cannot present deeper levels of natural language understanding to catch larger-grained units of co-occurrence. Third, NltkSum benefits from the NLTK library to access grammatical/semantic supports, thereby having the best informativeness (ROUGE-1 and ROUGE-2) among the five generic baselines, and meanwhile a fluency (ROUGE-L) comparable with our approach. Finally, as a deep learning solution, although PointerNet takes both hidden states and previously labeled sentences into account, at each decoding step it focuses on only the current and just one previous sentence, lacking a comprehensive consideration of saliency, novelty and, more importantly, the contextual relevance (< P.S+N).
To further verify the summarization performance, we also conduct a human evaluation on 35 papers containing more than 30 references in the dataset. We assign a number of raters to compare each generated related work section against the gold standard and to judge three independent aspects: 1) How compliant is the related work section with the target document? 2) How intuitive is the related work section for readers to grasp the key content? 3) How useful is the related work section for researchers preparing their final literature reviews? Note that we do not allow any ties during the comparison, and each property is assessed on a 5-point scale from 1 (worst) to 5 (best). Table 3 displays how often raters rank each summarizer as 1st, 2nd and so on, from best to worst. Specifically, our approach comes 1st 40% of the time, followed by NltkSum, which is considered best 21% of the time (about half of ours), and PointerNet with roughly equal proportions at each rank. The other four summarizers account for clearly lower ratings in general. To attain statistical significance, a one-way analysis of variance (ANOVA) is performed on the obtained ratings, and the results show that our approach is significantly better than all six contrast methods (p < 0.01), which means that the conclusion drawn from Table 2 is sustained.
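As an illustration of this significance test, a minimal sketch (not the authors' code, and with made-up rating samples) using SciPy's one-way ANOVA would look as follows:

```python
# Minimal sketch of a one-way ANOVA over per-method human ratings.
# The rating arrays below are hypothetical placeholders.
from scipy.stats import f_oneway

ratings = {
    "ours": [5, 4, 5, 4, 5],
    "NltkSum": [4, 3, 4, 3, 4],
    "PointerNet": [3, 3, 4, 2, 3],
}
f_stat, p_value = f_oneway(*ratings.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.01 indicates significance
```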
Conclusion
In this paper, we highlight contextual relevance for automatic related work summarization and analyze the graphic context to characterize the relationship among scientific publications accurately. We develop a neural data-driven summarizer by leveraging the seq2seq paradigm, where a joint context-driven attention mechanism is proposed to measure the contextual relevance within full texts and a heterogeneous bibliography graph simultaneously. Extensive experiments demonstrate the validity of the proposed attention mechanism and the superiority of our approach over six representative summarization baselines. In future work, an appealing direction is to organize the selected sentences in a logical fashion, e.g., by leveraging a topic hierarchy tree to determine the arrangement of the related work section (Cong and Kan, 2010). We would also like to take the citation sentences of each reference into consideration, which is another concise and universal data source for scientific summarization (Chen and Hai, 2016; Cohan and Goharian, 2017). Finally, we believe that extractive methods are by no means the final solution for literature review generation, due to plagiarism concerns, and we are going to put forward a fully abstractive version in further studies.
Figure 1: Heterogeneous bibliography graph.

Figure 2: Framework of our seq2seq summarizer.

Figure 3: Number of extracted words on each reference cluster under different attention factors.
Table 1: Data scales of previous studies on automatic related work summarization.

Authors                      Number of papers
Cong and Kan (2010)          20
Hu and Wan (2014)            1,050
Widyantoro and Amin (2014)   50
Chen and Hai (2016)          3

exploit probabilistic latent semantic indexing to split candidate texts into different topic-biased parts, then apply several regression models to learn the importance of each sentence. Similarly, Widyantoro and Amin (2014) transform the summarization problem into classifying rhetorical categories of sentences, where each sentence is represented as a feature vector containing word frequency, sentence length, etc. Most recently, Chen and Hai
Table 2: ROUGE evaluation (%) on 8,080 papers from the ACM digital library.
Table 3: Human evaluation (proportion) on 35 papers with more than 30 references in the dataset.

Methods           1st    2nd    3rd    4th    5th    6th    7th    Mean Ranking
Luhn              0.04   0.07   0.09   0.13   0.17   0.23   0.29   5.26
MMR               0.05   0.07   0.11   0.16   0.19   0.22   0.20   4.82
LexRank           0.06   0.09   0.11   0.14   0.17   0.19   0.27   4.93
SumBasic          0.09   0.13   0.18   0.18   0.18   0.15   0.10   4.10
NltkSum           0.21   0.21   0.20   0.15   0.10   0.07   0.04   3.00
PointerNet        0.14   0.20   0.18   0.15   0.13   0.11   0.08   3.54
P_S+N+Rteg+EUD    0.40   0.22   0.14   0.09   0.06   0.04   0.02   2.34
Learning deep structured semantic models for web search using clickthrough data (Huang et al., 2013).
To help readers reproduce the experimental results, we share part of the experiment data, with copyrighted information removed: https://github.com/kuadmu/2018EMNLP
Acknowledgement

We would like to thank the anonymous reviewers for their valuable comments. This work is partially supported by the National Science Foundation of China under grant No. 71271034.
References

Joseph Acanfora, Marc Evangelista, David Keimig, and Myron Su. 2014. Natural language processing: generating a summary of flood disasters. Cell, 41(2):383-394.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.

Dzmitry Bahdanau, Jan Chorowski, Dmitriy Serdyuk, Philemon Brakel, and Yoshua Bengio. 2016. End-to-end attention-based large vocabulary speech recognition. In Proceedings of the 41st IEEE ICASSP International Conference on Acoustics, Speech and Signal Processing, Shanghai, China, pages 4945-4949.

Jiajun Bu, Shulong Tan, Chun Chen, Can Wang, Hao Wu, Lijun Zhang, and Xiaofei He. 2010. Music recommendation by unified hypergraph: combining social media information and music content. In Proceedings of the ACM SIGMM International Conference on Multimedia, Amsterdam, Netherlands, pages 391-400.

Ziqiang Cao, Wenjie Li, Sujian Li, Furu Wei, and Yanran Li. 2016. AttSum: Joint learning of focusing and summarization with neural attention. arXiv preprint arXiv:1604.00125.

Jaime Carbonell and Jade Goldstein. 1998. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In Proceedings of the 21st International ACM SIGIR Conference on Research and Development in Information Retrieval, New York, USA, pages 335-336.

Jingqiang Chen and Zhuge Hai. 2016. Summarization of related work through citations. In Proceedings of the 12th IEEE SKG International Conference on Semantics, Knowledge and Grids, Beijing, China, pages 54-61.

Qian Chen, Xiaodan Zhu, Si Wei, and Hui Jiang. 2016. Distraction-based neural networks for modeling documents. In Proceedings of the ACM IJCAI International Joint Conference on Artificial Intelligence, New York, USA, pages 2754-2760.

Jianpeng Cheng and Mirella Lapata. 2016. Neural summarization by extracting sentences and words. In Proceedings of the 54th ACL Annual Meeting of the Association for Computational Linguistics, Berlin, Germany.

Sumit Chopra, Michael Auli, and Alexander M. Rush. 2016. Abstractive sentence summarization with attentive recurrent neural networks. In Proceedings of the NAACL Conference of the North American Chapter of the Association for Computational Linguistics, San Diego, USA, pages 93-98.

Arman Cohan and Nazli Goharian. 2017. Scientific article summarization using citation-context and article's discourse structure. arXiv preprint arXiv:1704.06619, pages 390-400.

Ronan Collobert, Jason Weston, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(1):2493-2537.

Duy Vu Hoang Cong and Min Yen Kan. 2010. Towards automated related work summarization. In Proceedings of the 23rd ACM COLING International Conference on Computational Linguistics, Beijing, China, pages 427-435.

Swagatam Das and Ponnuthurai Nagaratnam Suganthan. 2011. Differential evolution: A survey of the state-of-the-art. IEEE Transactions on Evolutionary Computation, 15(1):4-31.

Günes Erkan and Dragomir R. Radev. 2004. LexRank: graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research, 22:457-479.

Aditya Grover and Jure Leskovec. 2016. node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, USA, pages 855-864.

Chun Guo and Xiaozhong Liu. 2015. Automatic feature generation on heterogeneous graph for music recommendation. In Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval, Santiago, Chile, pages 807-810.

Chun Guo, Jinsong Zhang, and Xiaozhong Liu. 2013. Scientific metadata quality enhancement for scholarly publications. In iSchools.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.

Yue Hu and Xiaojun Wan. 2014. Automatic generation of related work sections in scientific papers: an optimization approach. In Proceedings of the ACL EMNLP Conference on Empirical Methods in Natural Language Processing, Doha, Qatar, pages 1624-1633.

Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the 22nd ACM CIKM International Conference on Information & Knowledge Management, San Francisco, USA, pages 2333-2338.

Scott Jensen, Xiaozhong Liu, Yingying Yu, and Stasa Milojevic. 2016. Generation of topic evolution trees from heterogeneous bibliographic networks. Journal of Informetrics, 10(2):606-621.

Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint.

Diederik Kingma and Jimmy Ba. 2014. Adam: a method for stochastic optimization. Computer Science.

Chin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram co-occurrence statistics. In Proceedings of the NAACL Annual Conference of the North American Chapter of the Association for Computational Linguistics, Stroudsburg, USA, pages 71-78.

H. P. Luhn. 1958. The automatic creation of literature abstracts. IBM Journal of Research and Development, 2(2):159-165.

William C. Mann and Sandra A. Thompson. 2009. Rhetorical structure theory: Toward a functional theory of text organization. Text & Talk, 8(3):243-281.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. Computer Science.

Ramesh Nallapati, Bing Xiang, and Bowen Zhou. 2016a. Sequence-to-sequence RNNs for text summarization. In Proceedings of the International Conference on Learning Representations, Workshop track, San Juan, Puerto Rico.

Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2016b. SummaRuNNer: A recurrent neural network based sequence model for extractive summarization of documents. arXiv preprint arXiv:1611.04230v1.

Ramesh Nallapati, Bowen Zhou, Cicero Nogueira dos Santos, Caglar Gulcehre, and Bing Xiang. 2016c. Abstractive text summarization using sequence-to-sequence RNNs and beyond. arXiv preprint arXiv:1602.06023v5.

Ramesh M. Nallapati, Amr Ahmed, Eric P. Xing, and William W. Cohen. 2008. Joint latent topic models for text and citations. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Las Vegas, USA, pages 542-550.

Shashi Narayan, Nikos Papasarantopoulos, Shay B. Cohen, and Mirella Lapata. 2017. Neural extractive summarization with side information. arXiv preprint arXiv:1704.04530.

Ani Nenkova and Lucy Vanderwende. 2005. The impact of frequency on summarization. Microsoft Research.

L. Page. 1998. The PageRank citation ranking: Bringing order to the web, online manuscript. Stanford Digital Libraries Working Paper, 9(1):1-14.

Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the ACL EMNLP Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, pages 379-389.

Yuxiao Dong, Nitesh V. Chawla, and Ananthram Swami. 2017. metapath2vec: Scalable representation learning for heterogeneous networks. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, Canada, pages 135-144.

Jiwei Tan, Xiaojun Wan, and Jianguo Xiao. 2017. Abstractive document summarization with a graph-based attentional neural model. In Proceedings of the 55th ACL Annual Meeting of the Association for Computational Linguistics, Vancouver, Canada, pages 1171-1181.

Scott White and Padhraic Smyth. 2003. Algorithms for estimating relative importance in networks. In Proceedings of the 9th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Washington, USA, pages 266-275.

Dwi H. Widyantoro and Imaduddin Amin. 2014. Citation sentence identification and classification for related work summarization. In Proceedings of the ICACSIS International Conference on Advanced Computer Science and Information Systems, pages 291-296.

Yingying Yu, Xiaozhong Liu, and Zhuoren Jiang. 2015. Random walk and feedback on scholarly network. In Proceedings of the 1st ACM GSB@SIGIR International Workshop on Graph Search and Beyond, Santiago, Chile, pages 33-37.
How Different are Pre-trained Transformers for Text Ranking?

David Rau, University of Amsterdam
Jaap Kamps (kamps@uva.nl), University of Amsterdam

Keywords: Neural IR · BERT · Sparse Retrieval · BM25 · Analysis

Abstract. In recent years, large pre-trained transformers have led to substantial gains in performance over traditional retrieval models and feedback approaches. However, these results are primarily based on the MS Marco/TREC Deep Learning Track setup, with its very particular setup, and our understanding of why and how these models work better is fragmented at best. We analyze effective BERT-based cross-encoders versus traditional BM25 ranking for the passage retrieval task where the largest gains have been observed, and investigate two main questions. On the one hand, what is similar? To what extent does the neural ranker already encompass the capacity of traditional rankers? Is the gain in performance due to a better ranking of the same documents (prioritizing precision)? On the other hand, what is different? Can it retrieve effectively documents missed by traditional systems (prioritizing recall)? We discover substantial differences in the notion of relevance identifying strengths and weaknesses of BERT that may inspire research for future improvement. Our results contribute to our understanding of (black-box) neural rankers relative to (well-understood) traditional rankers, help understand the particular experimental setting of MS-Marco-based test collections.

arXiv:2204.07233 (https://arxiv.org/pdf/2204.07233v1.pdf) · doi:10.48550/arxiv.2204.07233
Introduction
Neural information retrieval has recently seen impressive performance gains over traditional term-based methods such as BM25 or query likelihood [4,3]. Nevertheless, this success comes with the caveat of extremely complex models that are hard to interpret, making it difficult to pinpoint the source of their effectiveness.
With the arrival of the large-scale ranking dataset MS MARCO [1], massive models such as BERT [5] have found successful application in text ranking. Due to its large capacity (110M+ parameters), BERT can deal with long-range dependencies and complex sentence structures. When applied to ranking, BERT can build deep interactions between query and document that allow uncovering complex relevance patterns going beyond simple term matching. Up to this point, the large performance gains achieved by the BERT Cross-Encoder are not well understood. Little is known about the underlying matching principles that BERT bases its estimate of relevance on, what features are encoded in the model, and how the ranking relates to traditional sparse rankers such as BM25 [12]. In this work, we focus on the Cross-Encoder (CE) BERT, which captures relevance signals directly through term interactions between query and document, and refer to this BERT model as CE from now on. First, we aim to gain a deeper understanding of how CE and BM25 rankings relate to each other, particularly for different levels of relevance, by answering the following research questions:

RQ1: How do CE and BM25 rankings vary?
RQ1.2: Does CE better rank the same documents retrieved by BM25?
RQ1.3: Does CE better find documents missed by BM25?

Second, we isolate and quantify the contributions of exact and soft term matching to the overall performance. These are particularly interesting to examine, as they pose the most direct contrast between the matching paradigms of sparse and neural retrieval. More concretely, we investigate:

RQ2: Does CE incorporate "exact matching"?
RQ3: Can CE still find "impossible" relevant results?
Related Work
Although little research has been done to understand the ranking mechanism of BERT, previous work exists. [10], [9] and [19] have undertaken initial efforts to open ranking with BERT as a black box, empirically finding evidence that exact term matching and term importance play an important role. Others have tested and defined well-known IR axioms [2], [11], [7] or tried to enforce those axioms through regularization [13]. Another interesting direction is to enforce sparse encodings, making it possible to relate neural ranking to sparse retrieval [18], [6]. Although related, the work in [16] differs in two important aspects: first, they examine dense BERT retrievers, which encode queries and documents independently; second, they focus on the interpolation between BERT and BM25, whereas we specifically aim to understand how the two rankings relate to each other.
Experimental Setup
The vanilla BERT Cross-Encoder (CE) encodes queries and documents jointly. Given the input x = ([CLS], q_1, ..., q_n, [SEP], d_1, ..., d_m, [SEP]), where q denotes query tokens and d document tokens, the activations of the [CLS] token are fed to a binary classifier layer that classifies a passage as relevant or non-relevant; the relevance probability is then used as the score to re-rank the passages.
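For concreteness, the following is a minimal sketch of this cross-encoder scoring scheme, assuming a HuggingFace-style classification head; the checkpoint name is a placeholder, and in practice a model fine-tuned on MS MARCO (such as that of [8]) would be loaded.

```python
# Sketch of cross-encoder relevance scoring (checkpoint name is a placeholder).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)
model.eval()

def ce_score(query: str, passage: str) -> float:
    # Builds [CLS] q_1..q_n [SEP] d_1..d_m [SEP]; the classifier head over the
    # [CLS] activation yields the relevance probability used for re-ranking.
    enc = tok(query, passage, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()
```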
We conduct our experiments on the TREC 2020 Deep Learning Track's passage retrieval task on the MS MARCO dataset [1]. For our experiments, we use the pre-trained model released by [8]. To obtain the set of top-1000 documents, we use Anserini's [17] BM25 (default parameters) without stemming, following [4]. Table 1 shows the baseline performance of BM25 and of a vanilla BERT-based cross-encoder (CE) re-ranking the 1,000 passages.
Experiments
RQ1: How do CE and BM25 rankings vary?
CE outperforms BM25 by a large margin across all metrics (see Table 1). To understand the different nature of CE, we trace where documents were initially ranked in the BM25 ranking. For this, we split the ranking into four rank-ranges, 1-10, 11-100, 101-500 and 501-1000, which we refer to as ranges 10, 100, 500 and 1000, respectively, from now on. We observe in which rank-range the documents were positioned with respect to the initial BM25 ranking, and show the results in the form of heatmaps in Figure 1.
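The rank tracing behind these heatmaps reduces to simple bookkeeping; the sketch below is our own illustration and assumes doc-id-to-rank dictionaries for a single query (rows are non-empty for a full top-1000 re-ranking).

```python
# Sketch: composition matrix of where CE-ranked documents originated in the
# initial BM25 ranking (assumed data layout: doc_id -> 1-based rank per query).
import numpy as np

BINS = [(1, 10), (11, 100), (101, 500), (501, 1000)]

def bin_of(rank: int) -> int:
    # map a 1-based rank to its rank-range index
    return next(i for i, (lo, hi) in enumerate(BINS) if lo <= rank <= hi)

def origin_matrix(bm25_rank: dict, ce_rank: dict) -> np.ndarray:
    m = np.zeros((len(BINS), len(BINS)))
    for doc, r_ce in ce_rank.items():
        m[bin_of(r_ce), bin_of(bm25_rank[doc])] += 1.0
    return m / m.sum(axis=1, keepdims=True)  # rows become origin ratios
```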
Our initial goal is to identify general differences between the rankings of CE and BM25 by considering all documents of the test collection (see Fig. 1 (a)). First, we note that CE and BM25 vary substantially at the top of the ranking (33% overlap in CE@10), whereas the opposite holds at low ranks (60% in CE@1000). Second, we note that CE brings many documents up to higher ranks. Third, we observe that documents ranked high by BM25 are rarely ranked low by CE, suggesting that exact matching is an important underlying ranking strategy.
RQ1.2: Does CE better rank the same documents retrieved by BM25?
To answer RQ1.2, we consider documents that were judged highly relevant or relevant according to the NIST 2020 judgments. The results can be found in Fig. 1 (b) and (c), respectively. Most strikingly, both rankers exhibit a low agreement (40%) on the documents in CE@10 for highly relevant documents, hinting at a substantially different notion of relevance at the top of the two rankings. For relevant documents, CE and BM25 overlap 46% at the top of the ranking, and a large part (32%) comes from BM25@100, implying that BM25 underestimated the relevance of many documents. The highest agreement between CE and BM25 here is in CE@500 (91%).
Interestingly, some highly relevant documents that appear at lower ranks in CE originate from high ranks in BM25 (CE@100: 12%, CE@500: 5%). This is an interesting finding, as CE underestimates the relevance of those documents while BM25, a much simpler ranker, ranks them correctly. The same effect is also present for relevant documents. When considering documents that both methods ranked low, we find perfect agreement @1000, showing that the two methods identify the same (highly-)relevant documents as irrelevant.
What about non-relevant documents that end up high in the ranking? CE brings a large number of non-relevant documents up to CE@10 from low ranks (47% from BM25@100, 23% from BM25@500, and 5% from BM25@1000), thereby overestimating the relevance of many documents that BM25 correctly considered less relevant. We also note the low agreement on non-relevant documents @1000 (33%), hinting at a different notion of irrelevance.
RQ1.3: Does CE better find documents missed by BM25?
To answer RQ1.3, we again consider documents judged (b) highly relevant and (c) relevant in Fig. 1, focusing especially on CE@10. Since CE is too expensive to run on the whole corpus, we can only study recall effects within the top-1000 documents; hence, studying the top-10 results of CE informs us best about the recall dynamics at high ranks. According to the results in Fig. 1 (b), almost half (42%) of the highly relevant documents that are missed by BM25 are brought up from BM25@100, 13% from BM25@500, and 5% from BM25@1000. The same effect can be observed for relevant documents. This demonstrates the superior ability of CE to pull up (highly-)relevant documents that are missed by BM25, even from very low ranks. This is where the true potential of neural models over exact-matching techniques lies.
RQ2: Does CE incorporate "exact matching"?
The presence of query words in the document is one of the strongest relevance signals in ranking [15], [14]. Our goal is to isolate the exact term matching effect, quantify its contribution to performance, and relate it to sparse ranking. For this, we simply replace all non-query terms in the document with the [MASK] token, leaving the model with only a skeleton of the original document and thus forcing it to rely solely on exact term matches between query and document. We do not fine-tune the model on this input. Note that within the underlying BM25 top-1000 run there are no query-document pairs with no term overlap. Results can be found in Table 2 under Only Q. CE with only the query words performs significantly worse than BM25 on all metrics, providing clear evidence that CE does not leverage exact matches sufficiently. In view of potential ways to improve CE, our results suggest that its exact term matching can be improved.
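A token-level sketch of this masking condition (our own approximation; the exact tokenization and matching details of the paper are not specified here) is:

```python
# "Only Q": keep exact query-term matches in the passage, mask everything else.
def keep_only_query_terms(query_tokens, passage_tokens, mask_token="[MASK]"):
    qset = set(query_tokens)
    return [tok if tok in qset else mask_token for tok in passage_tokens]

# keep_only_query_terms(["cheap", "flights"], ["find", "cheap", "flights", "to", "rome"])
# -> ['[MASK]', 'cheap', 'flights', '[MASK]', '[MASK]']
```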
RQ3: Can CE still find "impossible" relevant results?
While CE can leverage both exact and "soft" matches, its biggest advantage over traditional sparse retrievers is the ability to overcome lexical mismatch and take context into account. Through "soft" matches, neural models can retrieve documents that are "impossible" to retrieve using traditional methods, potentially resulting in large recall gains. To isolate and quantify the effect of soft matches, we follow our previous experiment but this time mask the occurrences of the query words in the document, so the model has to rely on the surrounding context only. We do not fine-tune the model on this input. Note that in this setting BM25 would score randomly. Results can be found in Table 2 under Drop Q. We observe that CE can sensibly score documents with no overlapping query terms, largely outperforming ranking on query terms only (Only Q). The model scores 49.89 NDCG@10 points, losing only around 20 points with respect to non-manipulated input. CE might be able to fill in the masked tokens from the context, as this constitutes a main part of the masked-language-modeling pre-training task. The model demonstrates its true potential here by drawing on its ability to understand semantics through the contextualization of query and document and by leveraging its associative memory.
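The Drop Q condition is the mirror image of the previous sketch:

```python
# "Drop Q": mask the query terms in the passage, keep only the context.
def drop_query_terms(query_tokens, passage_tokens, mask_token="[MASK]"):
    qset = set(query_tokens)
    return [mask_token if tok in qset else tok for tok in passage_tokens]
```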
Conclusions and Discussion
Our experiments find evidence that documents at the top of the two rankings are generally ranked very differently, while stronger agreement is present at the bottom of the ranking. By investigating the rankings for different relevance levels, we gain further insight: even though there is a bigger consensus for (highly-)relevant documents at the top of the ranking than at the bottom, we find a discrepancy in the notion of high relevance for some documents, highlighting core differences between the two rankers.

We discover that CE dramatically underestimates some of the highly relevant documents that are correctly ranked by BM25. This sheds light on the sub-optimal ranking dynamics of CE and offers clues for overcoming these issues in future rankers. Our analysis finds further evidence that the main gain in precision stems from bringing (highly-)relevant documents up from lower ranks (early precision). On the other hand, CE overestimates the relevance of many non-relevant documents that BM25 correctly scored lower.

By masking all but the query words within the documents, we show that CE is unable to rank on the basis of exact term matches alone, scoring far below BM25. By masking the query words in the document, we demonstrate the ability of CE to score queries and documents without any lexical overlap at only a moderate loss of performance, thereby demonstrating in isolation the true strength of neural models over traditional methods, which would completely fail in this scenario.

We leave it to future research to qualitatively investigate the query-document pairs that BERT fails on but BM25 ranks correctly.
Fig. 1: Ranking differences between BERT Cross-Encoder (CE) and BM25: origin of documents in the CE ranking at different rank-ranges with respect to the initial BM25 ranking. More intuitively, each row indicates to what ratio documents stem from different rank-ranges. E.g., the top row can be read as: the documents in rank 1-10 of the CE re-ranking originate 33% from rank 1-10, 41% from rank 11-100, 19% from rank 101-500 and 6.1% from rank 501-1000 in the initial BM25 ranking. The rank compositions are shown for (a) all, (b) highly relevant, (c) relevant, and (d) non-relevant documents according to the NIST 2020 relevance judgments.
Table 1: Performance of BM25 and cross-encoder rankers on the NIST judgments of the TREC Deep Learning Task 2020.

Ranker                     NDCG@10   MAP     MRR
BM25                       49.59     27.47   67.06
BERT Cross-Encoder (CE)    69.33     45.99   80.85
Table 2: Performance of keeping only (Only Q) or removing (Drop Q) the query terms from the input.

Model input   NDCG@10   MAP     MRR
Only Q        31.70     18.56   44.38
Drop Q        49.89     29.08   65.12
The code for reproducing the heatmaps can be found at https://github.com/davidmrau/transformer-vs-bm25
References

[1] P. Bajaj, D. Campos, N. Craswell, L. Deng, J. Gao, X. Liu, R. Majumder, A. McNamara, B. Mitra, T. Nguyen, et al. MS MARCO: A human generated machine reading comprehension dataset (2016).
[2] A. Câmara, C. Hauff. Diagnosing BERT with retrieval heuristics. Advances in Information Retrieval 12035, 605 (2020).
[3] N. Craswell, B. Mitra, E. Yilmaz, D. Campos. Overview of the TREC 2020 deep learning track (2021).
[4] N. Craswell, B. Mitra, E. Yilmaz, D. Campos, E.M. Voorhees. Overview of the TREC 2019 deep learning track (2020).
[5] J. Devlin, M. Chang, K. Lee, K. Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding (2018).
[6] T. Formal, C. Lassance, B. Piwowarski, S. Clinchant. SPLADE v2: Sparse lexical and expansion model for information retrieval. arXiv preprint arXiv:2109.10086 (2021).
[7] T. Formal, B. Piwowarski, S. Clinchant. A white box analysis of ColBERT. In: European Conference on Information Retrieval, pp. 257-263. Springer (2021).
[8] R. Nogueira, K. Cho. Passage re-ranking with BERT (2019).
[9] H. Padigela, H. Zamani, W.B. Croft. Investigating the successes and failures of BERT for passage re-ranking (2019).
[10] Y. Qiao, C. Xiong, Z. Liu, Z. Liu. Understanding the behaviors of BERT in ranking. arXiv preprint arXiv:1904.07531 (2019).
[11] D. Rennings, F. Moraes, C. Hauff. An axiomatic approach to diagnosing neural IR models. In: European Conference on Information Retrieval, pp. 489-503. Springer (2019). https://doi.org/10.1007/978-3-030-15712-8_32
[12] S.E. Robertson, S. Walker. Some simple effective approximations to the 2-Poisson model for probabilistic weighted retrieval. In: SIGIR '94, pp. 232-241. Springer (1994).
[13] C. Rosset, B. Mitra, C. Xiong, N. Craswell, X. Song, S. Tiwary. An axiomatic approach to regularizing neural ranking models. In: Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 981-984 (2019).
[14] G. Salton, M.J. McGill. Introduction to Modern Information Retrieval. McGraw-Hill, Inc., USA (1986). https://sigir.org/resources/museum/
[15] T. Saracevic. Relevance: A review of and a framework for the thinking on the notion in information science. Journal of the American Society for Information Science 26, 321-343 (1975). https://doi.org/10.1002/asi.4630260604
[16] S. Wang, S. Zhuang, G. Zuccon. BERT-based dense retrievers require interpolation with BM25 for effective passage retrieval. In: Proceedings of the 2021 ACM SIGIR International Conference on Theory of Information Retrieval, pp. 317-324 (2021).
[17] P. Yang, H. Fang, J. Lin. Anserini: Reproducible ranking baselines using Lucene. J. Data and Information Quality 10(4) (2018). https://doi.org/10.1145/3239571
[18] H. Zamani, M. Dehghani, W.B. Croft, E. Learned-Miller, J. Kamps. From neural re-ranking to neural ranking: Learning a sparse representation for inverted indexing. In: Proceedings of the 27th ACM International Conference on Information and Knowledge Management, pp. 497-506 (2018).
[19] J. Zhan, J. Mao, Y. Liu, M. Zhang, S. Ma. An analysis of BERT in document ranking. In: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 1941-1944 (2020).
QUERY-BY-EXAMPLE ON-DEVICE KEYWORD SPOTTING

Byeonggeun Kim, Mingu Lee, Jinkyu Lee, Yeonseok Kim, Kyuwoong Hwang
Qualcomm AI Research, Hakdong-ro, Gangnam-gu, Seoul, Republic of Korea
(Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc.)

Index Terms: keyword spotting, user-specific, query-by-example, on-device, threshold prediction

Abstract. A keyword spotting (KWS) system determines the existence of, usually predefined, keyword in a continuous speech stream. This paper presents a query-by-example on-device KWS system which is user-specific. The proposed system consists of two main steps: query enrollment and testing. In query enrollment step, phonetic posteriors are output by a small-footprint automatic speech recognition model based on connectionist temporal classification. Using the phonetic-level posteriorgram, hypothesis graph of finite-state transducer (FST) is built, thus can enroll any keywords thus avoiding an out-of-vocabulary problem. In testing, a log-likelihood is scored for input audio using the FST. We propose a threshold prediction method while using the user-specific keyword hypothesis only. The system generates query-specific negatives by rearranging each query utterance in waveform. The threshold is decided based on the enrollment queries and generated negatives. We tested two keywords in English, and the proposed work shows promising performance while preserving simplicity.

arXiv:1910.05171 (https://arxiv.org/pdf/1910.05171v3.pdf) · doi:10.1109/asru46091.2019.9004014
INTRODUCTION
Keyword spotting (KWS) is widely used in personal devices like mobile phones and home appliances for detecting keywords that are usually composed of one or two words. The goal is to detect the keywords in a real-time audio stream. For practical use, the system is required to achieve a low false rejection rate (FRR) while keeping false alarms (FAs) per hour low.
Many previous works consider predefined keywords to reach promising performance; keywords such as "Alexa", "Okay/Hey Google", "Hey Siri" and "Xiaovi Xiaovi" are examples. They collect numerous variations of a specific keyword utterance and train neural networks (NNs), which have been the most promising method in the field. [1,2] have an acoustic encoder and a sequence-matching decoder as separate modules: the NN-based acoustic models (AMs) predict senone-level posteriors, while sequence matching, traditionally modeled by hidden Markov models (HMMs), interprets the AM outputs into keyword and background parts. Meanwhile, [3,4,5,6] use end-to-end NN architectures to directly determine the presence of keywords; they employ recurrent neural networks (RNNs) with attention layers [3,4], a dilated convolution network [5], or filters based on singular value decomposition [6].
On the other hand, there have been query-by-example approaches that detect query keywords of any kind. Early approaches use automatic speech recognition (ASR) phonetic posteriors as a posteriorgram and exploit dynamic time warping (DTW) to compare keyword samples and test utterances [7,8,9]. [10] also uses a posteriorgram and an edit distance metric with a long short-term memory (LSTM)-connectionist temporal classification (CTC) ASR. Furthermore, [11] computes simple similarity scores between LSTM output vectors of the enrollment and test utterances. Recently, end-to-end NN-based query-by-example systems have been suggested [12,13]: [12] uses a recurrent neural network transducer (RNN-T) model biased with attention over the keyword, and [13] suggests using a text query instead of audio.
Meanwhile, other groups have explored related keyword spotting problems. [14,15,16,17] address multiple-keyword detection, while [18,19] focus on KWS tasks with small datasets: [18] uses DTW to augment the data, and [19] suggests a few-shot meta-learning approach.
In this paper, we propose a simple yet powerful query-by-example on-device KWS approach using user-specific queries. Our system provides a user-specific model by utilizing a few keyword utterances spoken by a single user. The system uses a posteriorgram-based graph matching algorithm with a small-footprint ASR: a CTC-based ASR [20] outputs phonetic posteriors, from which we build a hypothesis graph of finite-state transducers (FST). Since the posteriorgram consists of phonetic outputs, any keyword can be enrolled, which frees the model from the out-of-vocabulary problem. At test time, the system determines whether an input audio contains the keyword through a log-likelihood score computed on the graph, which encodes the phonetic hypothesis as constraints. Despite score normalization, score-based query-by-example on-device KWS systems usually suffer from the threshold decision, because there are not enough negative examples in an on-device system. We predict a user-specific threshold using the keyword hypothesis graphs.

We generate query-specific negatives by rearranging the positives in the waveform domain, and then predict a threshold using the positives and the generated negatives. While keeping this simplicity, our approach shows performance comparable to recent KWS systems.

The rest of the paper is organized as follows. In Section 2, the KWS system is described, including the acoustic model, the FST in the decoder, and the threshold prediction method. The performance evaluation results are discussed in Section 3, followed by the conclusion in Section 4.
QUERY-BY-EXAMPLE KWS SYSTEM
Our system consists of three parts: an acoustic model, a decoder, and a threshold prediction module. In the following subsections, we denote the acoustic model input features as X = x_1, x_2, ..., x_T, where x_t ∈ R^M and t is a time frame index. The corresponding label sequence is Y = y_1, y_2, ..., y_K, and usually K < T.
Acoustic model
We exploit a CTC acoustic model [20]. We denote the ASR activations as O = o_1, o_2, ..., o_T, where o_t ∈ R^N, and let o_t^n denote the activation of unit n at time t; thus o_t^n is the probability of observing n at time t. CTC uses an extra blank output φ. We denote L' = L ∪ {φ, space}, where L is the set of 39 context-independent phonemes; the space output models a short pause between words. We let L'^(T) be the set of sequences of length T whose elements are in L'. Then the conditional probability of a path P given X is

$$p(P \mid X) = \prod_{t=1}^{T} o_t^{P_t}, \quad \forall P \in L'^{(T)}.$$

[20] suggests a many-to-one mapping B which maps a path to a label sequence Y. The mapping collapses repeats and removes the blank output φ, e.g., B(xφyyφz) = B(xφφyzφ) = xyz. The conditional probability p(Y|X) marginalizes over all possible paths for Y:

$$p(Y \mid X) = \sum_{P \in B^{-1}(Y)} p(P \mid X). \tag{1}$$
Keyword spotting decoder
The keyword spotting decoder operates in two phases: an enrollment step and testing. In the enrollment step, using the AM output of the query utterance, the model finds the hypothesis and builds FSTs for the path. At test time, the model calculates a score and determines, using the hypothesis, whether the input utterance contains the keyword.
Query enrollment
In the enrollment step, the system uses a few clean utterances of a keyword spoken by a single user. We use a simple, heuristic method, max-decoding: we follow the maximum-posterior component at each time frame. For each time step t, we choose argmax_n o_t^n, n = 1, ..., N, and obtain a path P. The hypothesis is then given by the mapping B as B(P). The keyword 'Hey Snapdragon' gives a hypothesis like 'HH.EY. .S.N.AE.P.T. .A.AE.G.AH.N'. (Snapdragon is a registered trademark of Qualcomm Incorporated.) With the hypothesis as a sequential phonetic constraint, we generate left-to-right FSTs.
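A minimal sketch of this max-decoding step (our own illustration, operating on a per-frame posterior matrix) is:

```python
# Sketch of "max-decoding" of a CTC posteriorgram followed by the collapsing
# map B (remove repeats, then blanks) to obtain the phonetic hypothesis.
import numpy as np

def max_decode(posteriors: np.ndarray, blank: int = 0) -> list:
    # posteriors: (T, N) per-frame phone posteriors from the CTC AM
    path = posteriors.argmax(axis=1)                      # best unit per frame
    collapsed = [p for i, p in enumerate(path)            # collapse repeats
                 if i == 0 or p != path[i - 1]]
    return [p for p in collapsed if p != blank]           # remove blanks
```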
Keyword spotting
In testing, the system calculates a score of a test utterance against the hypothesis FSTs. Assume that the FST has L distinct possible states S = {s^(i)}, i = 1, 2, ..., L, where s^(φ) denotes the blank state. The FST is left-to-right and therefore has an ordered label hypothesis Y = y_1, y_2, ..., y_K, where y_k ∈ S for all k. Given the hypothesis, the score is the log likelihood of a test input X = x_1, x_2, ..., x_T. At time step t, the AM activation is o_t, and we denote the corresponding FST state as q_t ∈ S. The transition probability a_ij is p(q_t = s^(j) | q_{t-1} = s^(i)). The hypothesis limits the transition probabilities as in Eq. (2), where q_{t-1} = y_{l-1}. If q_t = s^(φ), then q_t = q_{t-1}, i.e., the decoder remains in the previous state. The hypothesis Y is usually shorter than X because we use the mapping B to get Y; therefore, remaining in the current state is more likely than moving to the next. We naively choose the transition probabilities to reflect this scenario:
$$a_{ij}(t) = \begin{cases} 1/3, & \text{if } q_t \in \{y_l,\; y_{l-1},\; s^{(\phi)}\} \\ 0, & \text{otherwise.} \end{cases} \tag{2}$$
The log likelihood is

$$\log p(X \mid Y) = \log \Big\{ \sum_{q} p(q \mid Y)\, p(X \mid Y, q) \Big\} \approx \max_{q,\, t_0} \log \Big\{ \pi \prod_{t=t_0+1}^{T} a_{q_{t-1} q_t} \prod_{t=t_0}^{T} \frac{p(q_t \mid x_t)\, p(x_t)}{p(q_t)} \Big\} \propto \max_{q,\, t_0} \log \Big\{ \pi \prod_{t=t_0+1}^{T} a_{q_{t-1} q_t} \prod_{t=t_0}^{T} p(q_t \mid x_t) \Big\}, \tag{3}$$
where π denotes the initial state probability, with π = p(q_1 = y_1) = 1 for a given path. The term p(q|Y) is the product of the transition probabilities, and the likelihood p(X|Y, q) is proportional to the posteriors of the AM. Here p(x_t) and the state prior p(q_t) are assumed to be uniform.
We normalize the score by dividing Eq. (3) by the number of non-blank states, |{q_t | t = 1, ..., T, q_t ≠ s^(φ)}|. We find the q and t_0 that maximize Eq. (3) by beam search; during the search, we consider each time step t as a candidate initial time t_0. By doing this, the system can spot the keyword in a long audio stream.
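A much-simplified, Viterbi-style sketch of this scoring follows; it omits the blank-state bookkeeping and the non-blank normalization of the actual decoder, and normalizes by the hypothesis length instead:

```python
# Simplified Viterbi-style scoring of a test posteriorgram against the
# left-to-right hypothesis, allowing any start frame t0 and any end frame.
import numpy as np

def keyword_score(log_post: np.ndarray, hyp: list) -> float:
    # log_post: (T, N) log posteriors from the AM; hyp: phone indices y_1..y_K
    a = np.log(1.0 / 3.0)                       # log transition prob. (Eq. 2)
    T, K = log_post.shape[0], len(hyp)
    dp = np.full(K, -np.inf)                    # dp[k]: best score ending in state k
    best = -np.inf
    for t in range(T):
        prev = dp
        dp = np.full(K, -np.inf)
        for k in range(K):
            stay = prev[k]                      # remain in state y_k
            move = prev[k - 1] if k > 0 else -np.inf
            enter = 0.0 if k == 0 else -np.inf  # new start time t0 = t
            dp[k] = max(stay, move, enter) + a + log_post[t, hyp[k]]
        best = max(best, dp[-1] / K)            # length-normalized end score
    return best
```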
On-device threshold prediction
In this section, query set is Q = {X 1 , X 2 , · · · , X A }, and corresponding hypothesis set is
H = {Y 1 , Y 2 , · · · , Y A }. F Y (X)
is a mapping from a test utterance, X, to log likelihood score for a hypothesis Y . We denote negative utterances as Z 1 , Z 2 , · · · , Z B . The hypothesis computes positive scores from each other's query. A threshold δ is defined as, We generate query-specific negatives from queries. Figure 1 shows an example of a keyword, 'Hey Snapdragon'. Each positive is divided to sub-parts and shuffled in waveform. We overlap 16 samples of each part boundary and apply them one-sided triangular windows to guarantee smooth waveform transition and to prevent undesirable discontinuities, i.e. impulsive noises. Figure 2 plots an example of histograms of queries, negatives, and generated negatives of hypothesis FSTs from a single speaker. A probability distribution is drawn in histogram while assuming Gaussian distribution for better visualization. We used the generated negatives as {Z b }.
\delta^{(Q,H)} = \frac{\tau}{A(A-1)} \sum_{(a,a')} F_{Y_a}(X_{a'}) \Big|_{a' \neq a} + \frac{1-\tau}{A \cdot B} \sum_{(a,b)} F_{Y_a}(Z_b) \qquad (4)

where τ is a hyperparameter in [0, 1], a, a' ∈ [A], and b ∈ [B]. Eq. (4) defines the threshold as a score between the mean of the positive scores and the mean of the negative scores.

We generate query-specific negatives from the queries. Figure 1 shows an example for the keyword 'Hey Snapdragon'. Each positive is divided into sub-parts and shuffled in the waveform domain. We overlap 16 samples at each part boundary and apply one-sided triangular windows to guarantee smooth waveform transitions and to prevent undesirable discontinuities, i.e., impulsive noises. Figure 2 plots an example of histograms of queries, negatives, and generated negatives for the hypothesis FSTs of a single speaker. Probability distributions are drawn over the histograms, assuming Gaussian distributions, for better visualization. We use the generated negatives as {Z_b}.
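A minimal sketch of the two pieces just described: generating a query-specific negative by shuffling waveform parts with one-sided triangular crossfades, and computing the threshold of Eq. (4). The part count and 16-sample overlap come from the text; the function names and RNG handling are ours.

```python
import numpy as np

def generate_negative(wave, n_parts=3, overlap=16, rng=None):
    """Split a query waveform into equal parts, shuffle them, and
    crossfade `overlap` samples at each boundary with one-sided
    triangular windows to avoid impulsive discontinuities."""
    rng = rng or np.random.default_rng()
    parts = np.array_split(np.asarray(wave, dtype=np.float64), n_parts)
    order = rng.permutation(n_parts)
    while np.array_equal(order, np.arange(n_parts)):  # must differ from original
        order = rng.permutation(n_parts)
    fade = np.linspace(0.0, 1.0, overlap)
    out = parts[order[0]].copy()
    for idx in order[1:]:
        nxt = parts[idx]
        out[-overlap:] = out[-overlap:] * fade[::-1] + nxt[:overlap] * fade
        out = np.concatenate([out, nxt[overlap:]])
    return out

def predict_threshold(pos_scores, neg_scores, tau):
    """Eq. (4): interpolate between the mean positive and mean negative
    log-likelihood scores with hyperparameter tau in [0, 1]."""
    return tau * np.mean(pos_scores) + (1.0 - tau) * np.mean(neg_scores)
```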
EXPERIMENTS
Experimental setup
Query and testing data
Many previous works experiment with their own data, which are not accessible. In some of the literature, only relative performance is reported; thus, the results are hard to compare with each other and are not reproducible. To avoid this issue, we use public, well-known data.
We use two query keywords in English, 'Hey Snapdragon' and 'Hey Snips'. The audio data for 'Hey Snips' was introduced in [5]. We select 61 speakers who have at least 11 'Hey Snips' utterances each and use 993 utterances from the data. The 'Hey Snapdragon' utterances are from a publicly available dataset 1. There are 50 speakers, and each of them speaks the keyword 22 or 23 times; in total, there are 1,112 'Hey Snapdragon' utterances. In each user-specific test, three query utterances are randomly picked and the rest are used as positive test samples. We augment the positive utterances using five types of noise, {babble, car, music, office, typing}, at three signal-to-noise ratios (SNRs), {10 dB, 6 dB, 0 dB}. We use WSJ-SI200 [21] as negative samples: we sampled 24 hours of WSJ-SI200 and segmented the whole audio stream into 2-second-long pieces. We augment each segment with one of the five noise types and one SNR among {10 dB, 6 dB, 0 dB}, with the noise type and SNR selected randomly.
Acoustic model details
The model is trained on Librispeech [22] data. Noises, {babble, music, street}, are added at uniform random SNRs in the [−3, 15] dB range. For a more generalized model, we distorted the data by speech rate, power, and reverberation. We changed the speech rate with uniform random rates between 0.9 and 1.2. For reverberation, we used several room impulse responses measured in moderately reverberant meeting rooms in an office building. By 'power' we mean input-level augmentation, for which we changed the peak amplitudes of the input waveforms to a random value between 0 dB and −40 dB in the normalized full scale.
Input features are 40-dimensional per-channel energy normalization (PCEN) mel-filterbank energies [23] with a 30 ms window and a 10 ms frame shift. The model has two convolutional layers followed by five unidirectional LSTM layers. Each convolutional layer is followed by batch normalization and an activation function. Each LSTM layer has 256 LSTM cells. On top, there are a fully connected layer and a softmax layer. Trading off ASR performance against network size, the model has 211k parameters and shows a 16.61% phoneme error rate (PER) and a 48.04% word error rate (WER) on the Librispeech test-clean dataset without prior linguistic information.
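For illustration, a PyTorch sketch of such an architecture is given below. The paper only fixes the layer types and the 256 LSTM cells; kernel sizes, strides, and channel counts here are assumptions, and the sketch does not try to reproduce the 211k-parameter budget.

```python
import torch
import torch.nn as nn

class CTCAcousticModel(nn.Module):
    """Sketch of the AM: two conv layers (each with batch norm and an
    activation), five unidirectional LSTM layers of 256 cells, then a
    fully connected layer and log-softmax over phone posteriors."""
    def __init__(self, n_mels=40, n_phones=40, channels=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(),
        )
        self.lstm = nn.LSTM(channels * n_mels, 256, num_layers=5, batch_first=True)
        self.fc = nn.Linear(256, n_phones + 1)  # +1 for the CTC blank

    def forward(self, feats):                         # feats: (B, T, n_mels) PCEN
        x = self.conv(feats.unsqueeze(1))             # (B, C, T, n_mels)
        b, c, t, f = x.shape
        x = x.permute(0, 2, 1, 3).reshape(b, t, c * f)
        x, _ = self.lstm(x)
        return torch.log_softmax(self.fc(x), dim=-1)  # frame-level log posteriors
```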
Results
We tested 111 user-specific KWS systems: 50 from the query 'Hey Snapdragon' and the rest from 'Hey Snips'. We used three queries from a given speaker for enrollment. When we use one or two queries instead, the relative increase of FRR (%) at 0.5 FAs per hour is 222.05% or 2.47%, respectively, at 6 dB SNR. The scores from the three hypotheses are averaged for each test.
Baseline
Some previous works exploit DTW to compare the query and the test sample [7, 8, 9]. We adopt DTW as our baseline while using the CTC-based AM. We use KL-divergence as the DTW distance and allow a subsequence as an optimal path, which is known as subsequence DTW (S-DTW) [24]. The score is normalized by the DTW input length corresponding to the optimal path.
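A sketch of S-DTW scoring over posteriorgrams follows; the exact step pattern, the direction of the KL distance, and the sign convention are assumptions where the text does not pin them down.

```python
import numpy as np

def subsequence_dtw(query_post, test_post, eps=1e-8):
    """Subsequence DTW (S-DTW) sketch: align a query posteriorgram
    against any subsequence of a test posteriorgram, using a
    KL-divergence frame distance; the accumulated cost is normalized
    by the optimal path length, and higher scores are better."""
    def kl(p, q):
        return float(np.sum(p * np.log((p + eps) / (q + eps))))
    Q, T = len(query_post), len(test_post)
    D = np.full((Q + 1, T + 1), np.inf)
    L = np.zeros((Q + 1, T + 1))       # path lengths
    D[0, :] = 0.0                      # free start anywhere in the test
    for i in range(1, Q + 1):
        for j in range(1, T + 1):
            d = kl(query_post[i - 1], test_post[j - 1])
            steps = [(i - 1, j), (i, j - 1), (i - 1, j - 1)]
            pi, pj = min(steps, key=lambda s: D[s])
            D[i, j] = D[pi, pj] + d
            L[i, j] = L[pi, pj] + 1
    j_end = int(np.argmin(D[Q, 1:])) + 1   # free end anywhere in the test
    return -D[Q, j_end] / L[Q, j_end]
```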
FST constrained by phonetic hypothesis
We build three hypothesis FSTs for each system. We tested all 111 user-specific models and averaged the results by keyword. Table 1 compares the S-DTW baseline with the FST method; we average the performance over the four SNR levels to plot the ROC curves shown in Figure 3. The FST method consistently outperforms S-DTW on the same queries, and 'Hey Snapdragon' performs notably better than 'Hey Snips'. The query word 'Hey Snips' is short, so false alarms are more likely to occur. The performance is heavily influenced by the type of keyword, a result also noted in [12].
In Figure 4, we plot histograms of the FRR by user. Most user models show low FRR, except for some outliers.
Due to limited data access, direct comparison with previous works is difficult. Nevertheless, we compare our results with others in Table 2 to show that they are comparable to those of predefined KWS systems [3, 5, 6] and a query-by-example system [12]. Blanks in the table indicate unknown information.
On-device threshold prediction
We tested a naive threshold prediction approach as a baseline. The baseline assumes a scenario in which a device stores 100 randomly chosen general negatives: 50 negatives from clean data and the rest from the augmented data mentioned in Section 3.1.1, so A = 3 and B = 100 in Eq. (4).
The proposed method exploits query-specific negatives. For each query, we divide the waveform into three parts of equal length, so there are five ways to shuffle it such that it differs from the original signal. There are three queries for each enrollment; therefore, we have 15 generated negatives. Each hypothesis from a query uses the other two queries as positives and their generated negatives as negatives, so A = 3 and B = 10. Figure 5 shows the mean positive and mean negative scores for the 111 user-specific models. The baseline shows low and even negative correlation coefficient (R) values: R for 'Hey Snapdragon' and 'Hey Snips' is -0.04 and -0.21, respectively. Meanwhile, the proposed method shows positive R values: 0.25 for 'Hey Snapdragon' and 0.40 for 'Hey Snips'. If there are common tendencies between positives and negatives across keywords, we can expect useful threshold decision rules from them. Here we tried the simple linear interpolation introduced in Section 2.3. We search for τ in Eq. (4) by brute force to reach approximately 0.05 FAs per hour on average over the 111 models. We set τ to 0.82 for the baseline and 0.38 for the proposed method; the resulting average FAs per hour are 0.049 for the baseline and 0.050 for the proposed method.
Both methods find a τ that reaches the target FAs-per-hour level on average; however, the two methods differ dramatically across keywords. The inter-keyword difference should be small for a query-by-example system to work on any kind of keyword. For the baseline, 'Hey Snapdragon' shows 0.001 FAs per hour while 'Hey Snips' shows 0.088 FAs per hour. Despite using a 6 to 7 times smaller B, the proposed method shows exactly the same rate, 0.050 FAs per hour, for both 'Hey Snapdragon' and 'Hey Snips'. Due to its low FAs per hour, the baseline shows 17.77% FRR on 6 dB noisy positives for 'Hey Snapdragon', while the proposed method shows 3.95% FRR. These numbers differ from Table 2 because Table 2 uses a given FAs-per-hour level for each model, while this section uses the averaged FAs per hour.
CONCLUSIONS
In this paper, we suggest a simple and powerful approach for the query-by-example on-device keyword spotting task. Our system uses user-specific queries, and a CTC-based AM outputs phonetic posteriorgrams. We decode the output and build left-to-right FSTs as hypotheses. The log-likelihood is calculated as a score at test time. For the on-device test, we suggest a method to predict a proper user- and query-specific threshold with the hypotheses. We generate query-specific negatives by shuffling the query waveform. While many previous KWS approaches are not reproducible due to limited data access, we tested our methods on public, well-known data.
In the experiments, our approach showed promising performance, comparable to the latest predefined and query-by-example methods. This work is limited by the lack of public data, and we suggest only a naive approach for utilizing the generated negatives. As future work, we will study more advanced ways to predict thresholds using the query-specific negatives, and we will test various keywords.
Fig. 1: Example of a generated negative from a query utterance, 'Hey Snapdragon'. The query utterance is divided into three parts in the waveform domain and shuffled.
Fig. 2: A histogram of query, negative, and generated-negative log-likelihood scores for the hypothesis FSTs of a single speaker. Colored histograms show the generated negatives.
Fig. 3: Comparison of the S-DTW baseline with the FST constrained by the phonetic hypothesis.
Fig. 4: Histograms of FRRs (%) at 0.05 FAs per hour, per user model.
Fig. 5: Comparison of the baseline with query-specific generated negatives. The graphs show the relationship between the mean positives and the mean negatives, with their best-fit lines.
Table 1: FRR (%) at 0.05 FAs per hour for clean positives and SNR levels {10 dB, 6 dB, 0 dB}.

Method  Keyword          clean  10 dB  6 dB   0 dB   Avg.
S-DTW   Hey Snapdragon   1.35   3.84   8.01   21.6   8.70
S-DTW   Hey Snips        10.5   15.8   20.7   32.8   19.9
FST     Hey Snapdragon   0.53   0.83   3.22   12.2   4.19
FST     Hey Snips        1.85   5.36   8.59   24.7   10.13
Table 2: Comparison of FRR (%) of various KWS systems at given FAs-per-hour levels.

Method             Keyword           Params  SNR     FRR @ 1 FA/hr  FRR @ 0.5 FA/hr  FRR @ 0.05 FA/hr
Shan et al. [3]    Xiao ai tong xue  84 k    -       1.02           -                -
Coucke et al. [5]  Hey snips         222 k   5 dB 2  -              1.60             -
Wang et al. [6]    Hai xiao wen      -       -       4.17           -                -
He et al. [11]     Personal Name 3   -       -       -              -                8.9
S-DTW              Hey Snapdragon    211 k   6 dB    3.12           4.46             8.01
S-DTW              Hey Snips         211 k   6 dB    13.30          15.07            20.69
FST                Hey Snapdragon    211 k   6 dB    0.62           1.04             3.22
FST                Hey Snips         211 k   6 dB    2.79           3.77             8.58
Snapdragon is a registered trademark of Qualcomm Incorporated.
1 Will be published with the publication of this work in ASRU 2019.
2 Coucke et al. [5] augmented the positive dev and test datasets by only 5 dB, while our 6 dB is only for the positive dev set; our test dataset is augmented by {10, 6, 0} dB.
3 He et al. [12] used queries like 'Olivia' and 'Erica'.
J. Guo, K. Kumatani, M. Sun, M. Wu, A. Raju, N. Strom, and A. Mandal, "Time-delayed bottleneck highway networks using a DFT feature for keyword spotting," in Proc. ICASSP, 2018, pp. 5489-5493.
M. Chen, S. Zhang, M. Lei, Y. Liu, H. Yao, and J. Gao, "Compact feedforward sequential memory networks for small-footprint keyword spotting," in Proc. INTERSPEECH, 2018, pp. 2663-2667.
C. Shan, J. Zhang, Y. Wang, and L. Xie, "Attention-based end-to-end models for small-footprint keyword spotting," in Proc. INTERSPEECH, 2018, pp. 2037-2041.
X. Wang, S. Sun, C. Shan, J. Hou, L. Xie, S. Li, and X. Lei, "Adversarial examples for improving end-to-end attention-based small-footprint keyword spotting," in Proc. ICASSP, 2019.
A. Coucke, M. Chlieh, T. Gisselbrecht, D. Leroy, M. Poumeyrol, and T. Lavril, "Efficient keyword spotting using dilated convolutions and gating," arXiv preprint arXiv:1811.07684, 2018.
A. Raziel and H. Park, "End-to-end streaming keyword spotting," arXiv preprint arXiv:1812.02802, 2019.
T. J. Hazen, W. Shen, and C. White, "Query-by-example spoken term detection using phonetic posteriorgram templates," in Proc. IEEE ASRU, 2009, pp. 421-426.
Y. Zhang and J. R. Glass, "Unsupervised spoken keyword spotting via segmental DTW on Gaussian posteriorgrams," in Proc. IEEE ASRU, 2009, pp. 398-403.
X. Anguera and M. Ferrarons, "Memory efficient subsequence DTW for query-by-example spoken term detection," in Proc. IEEE ICME, 2013, pp. 1-6.
Y. Zhuang, X. Chang, Y. Qian, and K. Yu, "Unrestricted vocabulary keyword spotting using LSTM-CTC," in Proc. INTERSPEECH, 2016, pp. 938-942.
G. Chen, C. Parada, and T. N. Sainath, "Query-by-example keyword spotting using long short-term memory networks," in Proc. ICASSP, 2015, pp. 5236-5240.
Y. He, R. Prabhavalkar, K. Rao, W. Li, A. Bakhtin, and I. McGraw, "Streaming small-footprint keyword spotting using sequence-to-sequence models," in Proc. IEEE ASRU, 2017, pp. 474-481.
K. Audhkhasi, A. Rosenberg, A. Sethy, B. Ramabhadran, and B. Kingsbury, "End-to-end ASR-free keyword search from speech," IEEE Journal of Selected Topics in Signal Processing, vol. 11, no. 8, pp. 1351-1359, 2017.
S. Myer and V. S. Tomar, "Efficient keyword spotting using time delay neural networks," in Proc. INTERSPEECH, 2018, pp. 1264-1268.
R. Tang and J. Lin, "Deep residual learning for small-footprint keyword spotting," in Proc. ICASSP, 2018, pp. 5484-5488.
L. Pandey and K. Nathwani, "LSTM based attentive fusion of spectral and prosodic information for keyword spotting in Hindi language," in Proc. INTERSPEECH, 2018, pp. 112-116.
S. Fernández, A. Graves, and J. Schmidhuber, "An application of recurrent neural networks to discriminative keyword spotting," in Proc. International Conference on Artificial Neural Networks, Springer, 2007, pp. 220-229.
R. Menon, H. Kamper, J. Quinn, and T. Niesler, "Fast ASR-free and almost zero-resource keyword spotting using DTW and CNNs for humanitarian monitoring," in Proc. INTERSPEECH, 2018, pp. 2608-2612.
Y. Chen, T. Ko, L. Shang, X. Chen, X. Jiang, and Q. Li, "Meta learning for few-shot keyword spotting," arXiv preprint arXiv:1812.10233, 2018.
A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber, "Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks," in Proc. ICML, 2006, pp. 369-376.
D. B. Paul and J. M. Baker, "The design for the Wall Street Journal-based CSR corpus," in Proc. Workshop on Speech and Natural Language, 1992, pp. 357-362.
V. Panayotov, G. Chen, D. Povey, and S. Khudanpur, "Librispeech: An ASR corpus based on public domain audio books," in Proc. ICASSP, 2015, pp. 5206-5210.
Y. Wang, P. Getreuer, T. Hughes, R. F. Lyon, and R. A. Saurous, "Trainable frontend for robust and far-field keyword spotting," in Proc. ICASSP, 2017, pp. 5670-5674.
M. Müller, "Dynamic time warping," in Information Retrieval for Music and Motion, Springer, 2007, pp. 69-84.
[
"The Role of Context Types and Dimensionality in Learning Word Embeddings",
"The Role of Context Types and Dimensionality in Learning Word Embeddings"
] | [
"Oren Melamud melamuo@cs.biu.ac.il \nComputer Science Department\nBar-Ilan University\nIBM WatsonRamat-Gan, Google, New York, Yorktown HeightsNY, NYIsrael, USA, USA\n\nToyota Technological Institute at Chicago\n60637ChicagoILUSA\n",
"David Mcclosky ",
"Siddharth Patwardhan siddharth@us.ibm.com ",
"Mohit Bansal mbansal@ttic.edu "
] | [
"Computer Science Department\nBar-Ilan University\nIBM WatsonRamat-Gan, Google, New York, Yorktown HeightsNY, NYIsrael, USA, USA",
"Toyota Technological Institute at Chicago\n60637ChicagoILUSA"
] | [] | We provide the first extensive evaluation of how using different types of context to learn skip-gram word embeddings affects performance on a wide range of intrinsic and extrinsic NLP tasks. Our results suggest that while intrinsic tasks tend to exhibit a clear preference to particular types of contexts and higher dimensionality, more careful tuning is required for finding the optimal settings for most of the extrinsic tasks that we considered. Furthermore, for these extrinsic tasks, we find that once the benefit from increasing the embedding dimensionality is mostly exhausted, simple concatenation of word embeddings, learned with different context types, can yield further performance gains. As an additional contribution, we propose a new variant of the skip-gram model that learns word embeddings from weighted contexts of substitute words.3 Parses follow the Universal Dependencies formalism and were produced by Stanford CoreNLP, version 3.5.2 4 Our word embeddings are available at: www.cs.biu.ac.il/nlp/resources/downloads/embeddings-contexts/ 5 We used negative sampling = 5 and iterations = 3 in all of the experiments described in this paper. 6 For more details refer to Mikolov et al. (2013b). 7 http://bitbucket.org/yoavgo/word2vecf | 10.18653/v1/n16-1118 | [
"https://arxiv.org/pdf/1601.00893v2.pdf"
] | 5,527,031 | 1601.00893 | 5305b40d39d682854e4daa350e7619dacfd887ea |
The Role of Context Types and Dimensionality in Learning Word Embeddings
Oren Melamud melamuo@cs.biu.ac.il
Computer Science Department
Bar-Ilan University
Ramat-Gan, Israel; IBM Watson, Yorktown Heights, NY, USA; Google, New York, NY, USA
Toyota Technological Institute at Chicago
Chicago, IL 60637, USA
David McClosky
Siddharth Patwardhan siddharth@us.ibm.com
Mohit Bansal mbansal@ttic.edu
The Role of Context Types and Dimensionality in Learning Word Embeddings
We provide the first extensive evaluation of how using different types of context to learn skip-gram word embeddings affects performance on a wide range of intrinsic and extrinsic NLP tasks. Our results suggest that while intrinsic tasks tend to exhibit a clear preference to particular types of contexts and higher dimensionality, more careful tuning is required for finding the optimal settings for most of the extrinsic tasks that we considered. Furthermore, for these extrinsic tasks, we find that once the benefit from increasing the embedding dimensionality is mostly exhausted, simple concatenation of word embeddings, learned with different context types, can yield further performance gains. As an additional contribution, we propose a new variant of the skip-gram model that learns word embeddings from weighted contexts of substitute words.
3 Parses follow the Universal Dependencies formalism and were produced by Stanford CoreNLP, version 3.5.2.
4 Our word embeddings are available at: www.cs.biu.ac.il/nlp/resources/downloads/embeddings-contexts/
5 We used negative sampling = 5 and iterations = 3 in all of the experiments described in this paper.
6 For more details refer to Mikolov et al. (2013b).
7 http://bitbucket.org/yoavgo/word2vecf
Introduction
Word embeddings have become increasingly popular lately, proving to be valuable as a source of features in a broad range of NLP tasks with limited supervision (Turian et al., 2010; Collobert et al., 2011; Socher et al., 2013; Bansal et al., 2014). word2vec 1 skip-gram (Mikolov et al., 2013a) and GloVe 2 (Pennington et al., 2014) are among the most widely used word embedding models today. Their success is largely due to an efficient and user-friendly implementation that learns high-quality word embeddings from very large corpora.
* Majority of work performed while at IBM Watson.
1 http://code.google.com/p/word2vec/
Both word2vec and GloVe learn low-dimensional continuous vector representations for words by considering window-based contexts, i.e., context words within some fixed distance of each side of the target word. However, the underlying models are equally applicable to different choices of context types. For example, Bansal et al. (2014) and Levy and Goldberg (2014) showed that using syntactic contexts rather than window contexts in word2vec captures functional similarity (as in lion:cat) rather than topical similarity or relatedness (as in lion:zoo). Further, Bansal et al. (2014) and Melamud et al. (2015b) showed the benefits of such modified-context embeddings in dependency parsing and lexical substitution tasks. However, to the best of our knowledge, there has not been an extensive evaluation of the effect of multiple, diverse context types on a wide range of NLP tasks.
Word embeddings are typically evaluated on intrinsic and extrinsic tasks. Intrinsic tasks mostly include predicting human judgments of semantic relations between words, e.g., as in WordSim-353 (Finkelstein et al., 2001), while extrinsic tasks include various 'real' downstream NLP tasks, such as coreference resolution and sentiment analysis. Recent works have shown that while intrinsic evaluations are easier to perform, their correlation with results on extrinsic evaluations is not very reliable (Schnabel et al., 2015; Tsvetkov et al., 2015), stressing the importance of the latter.
In this work, we provide the first extensive evaluation of word embeddings learned with different types of context on a wide range of intrinsic similarity and relatedness tasks and extrinsic NLP tasks, namely dependency parsing, named entity recognition, coreference resolution, and sentiment analysis. We employ contexts based on different word window sizes, syntactic dependencies, and a lesser-known substitute-words approach (Yatbaz et al., 2012). Finally, we experiment with combinations of the above word embeddings, comparing two approaches: (1) simple vector concatenation, which offers a wider variety of features for a classifier to choose and learn weighted combinations from, and (2) dimensionality reduction via either Singular Value Decomposition or Canonical Correlation Analysis, which tries to find a smaller subset of features.
Our results suggest that it is worthwhile to carefully choose the right type of word embeddings for an extrinsic NLP task, rather than rely on intrinsic benchmark results. Specifically, picking the optimal context type and dimensionality is critical. Furthermore, once the benefit from increasing the embedding dimensionality is mostly exhausted, concatenation of word embeddings learned with different context types can yield further performance gains.
Word Embedding Context Types
Learning Corpus
We use a fixed learning corpus for a fair comparison of all embedding types: a concatenation of three large English corpora: (1) English Wikipedia 2015, (2) UMBC web corpus (Han et al., 2013), and (3) English Gigaword (LDC2011T07) newswire corpus (Parker et al., 2011). Our concatenated corpus is diverse and substantial in size with approximately 10B words. This allows us to learn high quality embeddings that cover a large vocabulary. After extracting clean text from these corpora, we used Stanford CoreNLP for sentence splitting, tokenization, part-of-speech tagging and dependency parsing. 3 Then, all tokens were lowercased, and sentences were shuffled to prevent structured bias. When learning word embeddings, we ignored words with corpus frequency lower than 100, yielding a vocabulary of about 500K words. 4
Window-based Word Embeddings
We used word2vec's skip-gram model with negative sampling (Mikolov et al., 2013b) to learn window-based word embeddings. 5 This popular method embeds both target words and contexts in the same low-dimensional space, where the embeddings of a target and context are pushed closer together the more frequently they co-occur in a learning corpus. Indirectly, this also results in similar embeddings for target words that co-occur with similar contexts. More formally, this method optimizes the following objective function:
L = \sum_{(t,c) \in \text{PAIRS}} L_{t,c} \qquad (1)

L_{t,c} = \log \sigma(v_c \cdot v_t) + \sum_{neg \in \text{NEGS}_{(t,c)}} \log \sigma(-v_{neg} \cdot v_t) \qquad (2)
where v_t and v_c are the vector representations of target word t and context word c. PAIRS is the set of window-based co-occurring target-context pairs considered by the model, which depends on the window size, and NEGS_(t,c) is a set of randomly sampled context words used with the pair (t, c). 6 We experimented with window sizes of 1, 5, and 10, and various dimensionalities. We denote a window-based word embedding with window size n and dimensionality m by Wn m. For example, W5 300 is a word embedding learned using a window size of 5 and a dimensionality of 300.
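For concreteness, here is a minimal NumPy sketch of the per-pair objective L_{t,c} of Eq. (2); the vectors and negative samples are placeholders, and real implementations update v_t, v_c, and each v_neg by gradient ascent on this quantity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_pair_loss(v_t, v_c, v_negs):
    """Skip-gram negative-sampling objective L_{t,c} of Eq. (2):
    log sigma(v_c . v_t) + sum over negatives of log sigma(-v_neg . v_t)."""
    loss = np.log(sigmoid(np.dot(v_c, v_t)))
    for v_neg in v_negs:
        loss += np.log(sigmoid(-np.dot(v_neg, v_t)))
    return loss  # maximized during training
```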
Dependency-based Word Embeddings
We used word2vecf 7 (Levy and Goldberg, 2014) to learn dependency-based word embeddings from the parsed version of our corpus, similar to the approach of Bansal et al. (2014). word2vecf accepts as its input arbitrary target-context pairs. In the case of dependency-based word embeddings, the context elements are the syntactic contexts of the target word, rather than the words in a window around it. Specifically, following Levy and Goldberg (2014), we first 'collapsed' prepositions (as implemented in word2vecf). Then, for a target word t with modifiers m_1, ..., m_k and head h, we paired the target word with the context elements (m_1, r_1), ..., (m_k, r_k), (h, r_h^{-1}), where r is the type of the dependency relation between the head and the modifier (e.g., dobj, prep_of) and r^{-1} denotes an inverse relation. We denote a dependency-based word embedding with dimensionality m by DEP m. We note that under this setting word2vecf optimizes the same objective function described in Equation (1), with PAIRS now comprising dependency-based pairs instead of window-based ones.
Substitute-based Word Embeddings
Substitute vectors are a recent approach to representing contexts of target words, proposed in Yatbaz et al. (2012). Instead of the neighboring words themselves, a substitute vector includes the potential filler words for the target word slot, weighted according to how 'fit' they are to fill the target slot given the neighboring words. For example, the substitute vector representing the context of the word love in "I love my job", could look like: [quit 0.5, love 0.3, hate 0.1, lost 0.1]. Substitute-based contexts are generated using a language model and were successfully used in distributional semantics models for part-of-speech induction (Yatbaz et al., 2012), word sense induction (Baskaya et al., 2013), functional semantic similarity (Melamud et al., 2014) and lexical substitution tasks (Melamud et al., 2015a).
Similar to Yatbaz et al. (2012), we consider the words in a substitute vector as a weighted set of contexts 'co-occurring' with the observed target word. For example, the above substitute vector is considered as the following set of weighted target-context pairs: {(love, quit, 0.5), (love, love, 0.3), (love, hate, 0.1), (love, lost, 0.1)}. To learn word embeddings from such weighted target-context pairs, we extended word2vecf by modifying the objective
L = \sum_{(t,c) \in \text{PAIRS}} \alpha_{t,c} \cdot L_{t,c} \qquad (3)
where α t,c is the weight of the target-context pair (t, c). With this simple modification, the effect of target-context pairs on the learned word representations becomes proportional to their weights.
To generate the substitute vectors we followed the methodology in (Yatbaz et al., 2012; Melamud et al., 2015a). We learned a 4-gram Kneser-Ney language model from our learning corpus using KenLM (Heafield et al., 2013). Then, we used FASTSUBS with this language model to efficiently generate substitute vectors, where the weight of each substitute s is the conditional probability p(s|C) of this substitute filling the target slot given the sentential context C. For efficiency, we pruned the substitute vectors to their top-10 substitutes, s_1..s_10, and normalized their probabilities such that Σ_{i=1..10} p(s_i|C) = 1. We also generated only up to 20,000 substitute vectors for each target word type. Finally, we converted each substitute vector into weighted target-substitute pairs and used our extended version of word2vecf to learn the substitute-based word embeddings, denoted SUB m.
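To illustrate how the weighted pairs of Eq. (3) are produced from a pruned substitute vector, here is a small sketch; the substitute vector below is the running example from the previous section, and the function name is ours.

```python
def substitute_pairs(target, substitute_vector):
    """Convert a substitute vector into weighted (target, context, alpha)
    training pairs, renormalizing the pruned probabilities to sum to 1."""
    total = sum(substitute_vector.values())
    return [(target, sub, p / total) for sub, p in substitute_vector.items()]

# Running example: context of 'love' in "I love my job".
pairs = substitute_pairs("love", {"quit": 0.5, "love": 0.3, "hate": 0.1, "lost": 0.1})
# -> [('love', 'quit', 0.5), ('love', 'love', 0.3),
#     ('love', 'hate', 0.1), ('love', 'lost', 0.1)]
```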
Qualitative Effect of Context Type
To motivate the rest of our work, we first qualitatively inspect the top most-similar words to some target words, using the cosine similarity of their respective embeddings. As illustrated in Table 1, in embeddings learned with large window contexts, we see both functionally similar words and topically similar words, sometimes with a different part-of-speech. With small windows and dependency contexts, we generally see far fewer topically similar words, which is consistent with previous findings (Bansal et al., 2014; Levy and Goldberg, 2014). Finally, with substitute-based contexts, there appears to be an even stronger preference for functional similarity, with a tendency to also strictly preserve verb tense.
Word Embedding Combinations
As different choices of context type yield word embeddings with different properties, we hypothesize that combinations of such embeddings could be more informative for some extrinsic tasks. We experimented with two alternative approaches to combine different sets of word embeddings: (1) Simple vector concatenation, which is a lossless combination that comes at the cost of increased dimensionality, and (2) SVD and CCA, which are lossy combinations that attempt to capture the most useful information from the different embeddings sets with lower dimensionality. The methods used are described in more detail next.
Concatenation
Perhaps the simplest way to combine two different sets of word embeddings (sharing the same vocabulary) is to concatenate their word vectors for every word type. We denote such a combination of word embedding set A with word embedding set B using the symbol (+). For example W10+DEP 600 is the concatenation of W10 300 with DEP 300 . Naturally, the dimensionality of the concatenated embeddings is the sum of the dimensionalities of the component embeddings. In our experiments, we only ever combine word embeddings of equal dimensionality.
The motivation behind concatenation relates primarily to supervised models in extrinsic tasks. In such settings, we hypothesize that using concatenated word embeddings as input features to a classifier could let it choose and combine (i.e., via learned weights) the most suitable features for the task. Consider a situation where the concatenated embedding W10+DEP 600 is used to represent the word inputs to a named entity recognition classifier. In this case, the classifier could choose, for instance, to represent entity words mostly with dependency-based embedding features (reflecting functional semantics), and surrounding words with large window-based embedding features (reflecting topical semantics).
Singular Value Decomposition
Singular Value Decomposition (SVD) has been shown to be effective in compressing sparse word representations (Levy et al., 2015). In this work, we use this technique in the same way to reduce the dimensionality of concatenated word embeddings.
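As a concrete sketch of these two combination operations, assuming the embedding sets are stored as row-aligned NumPy matrices over the same vocabulary (the function names and the target dimensionality k are ours):

```python
import numpy as np

def concatenate(emb_a, emb_b):
    """Lossless combination: per-word concatenation of two aligned
    (V, d) embedding matrices, e.g. W10 and DEP, giving (V, 2d)."""
    return np.hstack([emb_a, emb_b])

def svd_reduce(emb, k):
    """Lossy combination: project a (possibly concatenated) embedding
    matrix down to k dimensions with a truncated SVD."""
    U, S, _ = np.linalg.svd(emb, full_matrices=False)
    return U[:, :k] * S[:k]
```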
Canonical Correlation Analysis
Recent work used Canonical Correlation Analysis (CCA) to derive an improved set of word embeddings. The main idea is that two distinct sets of word embeddings, learned with different types of input data, are considered as multi-views of the same vocabulary. Then, CCA is used to project each onto a lower dimensional space, where correlation between the two is maximized. The correlated information is presumably more reliable. Dhillon et al. (2011) considered their two CCA views as embeddings learned from the left and from the right context of the target words, showing improvements on chunking and named entity recognition. Faruqui and Dyer (2014) and Lu et al. (2015) considered multilingual views, showing improvements in several intrinsic tasks, such as word and phrase similarity.
Inspired by this prior work, we consider pairs of word embedding sets, learned with different types of context, as different views and correlate them using linear CCA. 8 We use either the SimLex-999 or WordSim-353-R intrinsic benchmark (section 4.1) to tune the CCA hyperparameters 9 with the Spearmint Bayesian optimization tool 10 (Snoek et al., 2012). This results in different projections for each of these tuning objectives, where SimLex-999/WordSim-353-R is expected to give some bias towards functional/topical similarity, respectively.
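A sketch of this multi-view setup using scikit-learn's linear CCA is shown below; projecting both views and averaging them is one plausible reading of the combination, and the projection dimensionality is a placeholder rather than the Spearmint-tuned value.

```python
from sklearn.cross_decomposition import CCA

def cca_combine(emb_a, emb_b, k=100):
    """Project two aligned (V, d) embedding matrices onto a shared
    k-dimensional space where their correlation is maximal, and
    average the two projected views."""
    cca = CCA(n_components=k, max_iter=1000)
    proj_a, proj_b = cca.fit_transform(emb_a, emb_b)
    return (proj_a + proj_b) / 2.0
```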
Evaluation
Intrinsic Benchmarks
We employ several commonly used intrinsic benchmarks for assessing how well word embeddings mimic human judgements of semantic similarity of words. The popular WordSim-353 dataset (Finkelstein et al., 2001) includes 353 word pairs manually annotated with a degree of similarity. For example, computer:keyboard is annotated with 7.62, indicating a relatively high degree of similarity. While WordSim-353 does not make a distinction between different 'flavors' of similarity, Agirre et al. (2009) proposed two subsets of this dataset, WordSim-353-S and WordSim-353-R, which focus on functional and topical similarities, respectively. SimLex-999 (Hill et al., 2014) is a larger word pair similarity dataset with 999 annotated pairs, purposely built to focus on functional similarity. We evaluate our embeddings on these datasets by computing a score for each pair as the cosine similarity of two word vectors. The Spearman's correlation 11 between the ranking of word pairs induced from the human annotations and that from the embeddings is reported.
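The evaluation protocol just described can be summarized in a short sketch, assuming a dict from word to vector and (word1, word2, human_score) triples; out-of-vocabulary handling is a simplifying assumption.

```python
import numpy as np
from scipy.stats import spearmanr

def evaluate_similarity(embeddings, pairs):
    """Rank word pairs by cosine similarity of their embeddings and
    report Spearman's correlation with the human similarity ratings."""
    model_scores, human_scores = [], []
    for w1, w2, human in pairs:
        if w1 in embeddings and w2 in embeddings:  # skip OOV pairs
            v1, v2 = embeddings[w1], embeddings[w2]
            cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
            model_scores.append(cos)
            human_scores.append(human)
    return spearmanr(model_scores, human_scores).correlation
```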
The TOEFL task contains 80 synonym selection items, where a synonym of a target word is to be selected out of four possible choices. We report the overall accuracy of a system that uses cosine distance between the embeddings of the target word and each of the choices to select the one most similar to the target word as the answer.
Extrinsic Benchmarks
The following four diverse downstream NLP tasks serve as our extrinsic benchmarks. 12
1) Dependency Parsing (PARSE)
The Stanford Neural Network Dependency (NNDEP) parser uses dense continuous representations of words, parts-of-speech, and dependency labels. While it can learn these representations entirely during training on labeled data, Chen and Manning (2014) show that initialization with word embeddings, which were pre-trained on unlabeled data, yields improved performance. Hence, we used our different types of embeddings to initialize the NNDEP parser and compared their performance on a standard Penn Treebank benchmark. We used WSJ sections 2-21 for training and 22 for development. We used predicted tags produced via 20-fold jackknifing on sections 2-21 with the Stanford CoreNLP tagger.
2) Named Entity Recognition (NER)
We used the NER system of Turian et al. (2010), which allows adding word embedding features (on top of various other features) to a regularized averaged perceptron classifier, and achieves near state-of-the-art results using several off-the-shelf word representations. We varied the type of word embeddings used as features when training the NER model to evaluate their effect on NER benchmark results. Following Turian et al. (2010), we used the CoNLL-2003 shared task dataset (Tjong Kim Sang and De Meulder, 2003), with 204K/51K train/dev words, as our main benchmark. We also performed an out-of-domain evaluation, using CoNLL-2003 as the train set and the MUC7 formal run (59K words) as the test set. 13
3) Coreference Resolution (COREF)
We used the Berkeley Coreference System (Durrett and Klein, 2013), which achieves near state-of-the-art results with a log-linear supervised model. Most of the features in this model are associated with pairs of current and antecedent reference mentions, for which a coreference decision needs to be made. To evaluate the contribution of different word embedding types to this model, we extended it to support the following additional features:
{a_i}_{i=1..m}, {c_i}_{i=1..m}, and {a_i · c_i}_{i=1..m}, where a_i or c_i is the value of the i-th dimension in a word embedding vector representing the antecedent or current mention, respectively. We considered two different word embedding representations for a mention: (1) the embedding of the head word of the mention and (2) the average embedding of all words in the mention. The features of both types of representations were presented to the learning model as inputs at the same time. They were added on top of Berkeley's full feature list ('FINAL') as described in Durrett and Klein (2013). We evaluated our features on the CoNLL-2012 coreference shared task (Pradhan et al., 2012).
4) Sentiment Analysis (SENTI)
Following Faruqui et al. (2014), we used a sentence-level binary decision version of the sentiment analysis task from Socher et al. (2013). In this setting, neutral sentences were discarded and all remaining sentences were labeled coarsely as positive or negative. Maintaining the original train/dev split, we get a dataset containing 6920/872 sentences. To evaluate different types of word embeddings, we represented each sentence as an average of its word embeddings and then used an L2-regularized logistic regression classifier trained on these features to predict the sentiment labels.
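A minimal sketch of this sentiment setup, with tokenization and out-of-vocabulary handling simplified, and scikit-learn standing in for the L2-regularized logistic regression:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def sentence_vector(tokens, embeddings, dim):
    """Represent a sentence as the average of its word embeddings."""
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def train_sentiment(train_sents, train_labels, embeddings, dim):
    """L2-regularized logistic regression over averaged embeddings."""
    X = np.stack([sentence_vector(s, embeddings, dim) for s in train_sents])
    clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
    return clf.fit(X, np.asarray(train_labels))
```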
Results
Intrinsic Results for Context Types
The results on the intrinsic tasks are illustrated in Figure 1. First, we see that the performance on all tasks generally increases with the number of dimensions, reaching near-optimal performance at around 300 dimensions, for all types of contexts. This is in line with similar observations on skip-gram word embeddings (Mikolov et al., 2013a).
Looking further, we observe that there are significant differences in the results when using different types of contexts. The effect of context choice is perhaps most evident in the WordSim-353-R task, which captures topical similarity. As might be expected, in this benchmark, the largest-window word embeddings perform best. The performance decreases with the decrease in window size and then reaches significantly lower levels for dependency (DEP) and substitute-based (SUB) embeddings. Conversely, in WordSim-353-S and SimLex-999, both of which capture a more functional similarity, the DEP embeddings are the ones that perform best, strengthening similar observations in Levy and Goldberg (2014). Finally, in the TOEFL benchmark, all contexts except for SUB, perform comparably.
Extrinsic Results for Context Types
The extrinsic task results are illustrated in Figure 2. A first observation is that optimal extrinsic results may be reached with as few as 50 dimensions. Furthermore, performance may even degrade when using too many dimensions, as is most evident in the NER task. This behavior presumably depends on various factors, such as the size of the labeled training data or the type of classifier used, and highlights the importance of tuning the dimensionality of word embeddings in extrinsic tasks. This is in contrast to intrinsic tasks, where higher dimensionality typically yields better results.
Next, comparing the results of different types of contexts, we see, as might be expected, that dependency embeddings work best in the PARSE task. More generally, embeddings that do well in functional similarity intrinsic benchmarks and badly in topical ones (DEP, SUB and W1) work best for PARSE, while large window contexts perform worst, similar to observations in Bansal et al. (2014).
In the rest of the tasks, it is difficult to say which context works best for which task. One possible explanation for this, in the case of NER and COREF, is that the embedding features are used as add-ons to an already competitive learning system. Therefore, the total improvement on top of a 'no embedding' baseline is relatively small, leaving little room for significant differences between different contexts. We did find a more notable contribution of word embedding features to the overall system performance in the out-of-domain NER MUC evaluation, described in Table 2. In this out-of-domain setting, all types of contexts achieve at least five points of improvement over the baseline. Presumably, this is because continuous word embedding features are more robust to differences between train and test data, such as the typical vocabulary used. However, a detailed investigation of out-of-domain settings is out of scope for this paper and left for future work.
Extrinsic Results for Combinations
A comparison of the results obtained on the extrinsic tasks using the word embedding concatenations (concats) described in Section 3.1, versus the original single-context word embeddings (singles), appears in Table 3. To control for dimensionality, concats are always compared against singles of identical dimensionality. For example, the 200-dimensional concat W10+DEP 200, which is a concatenation of W10 100 and DEP 100, is compared against 200-dimensional singles, such as W10 200. Looking at the results, the benefit from concatenation seems to depend on the dimensionality and the task at hand, as also illustrated in Figure 3. Given task X and dimensionality d, if d/2 is in the range where increasing the dimensionality yields significant improvement on task X, then it is better to simply increase the dimensionality of singles from d/2 to d rather than concatenate. The most evident example of this is the results on the SENTI task with d = 50: the benefit from concatenating two 25-dimensional singles is notably lower than that of using a single 50-dimensional word embedding. On the other hand, if d/2 is in the range where near-optimal performance is reached on task X, then concatenation seems to pay off. This can be seen in SENTI with d = 600, PARSE with d = 200, and NER with d = 50. More concretely, looking at the best-performing concatenations, combinations of the topical W10 embedding with one of the more functional ones (SUB, DEP, or W1) typically perform best, suggesting that there is added value in combining embeddings of a different nature.
Finally, our experiments with the methods using SVD (section 3.2) and CCA (section 3.3) yielded degraded performance compared to single word embeddings for all extrinsic tasks and therefore are not reported for brevity. These results seem to further strengthen the hypothesis that the information captured with varied types of context is different and complementary, and therefore it is beneficial to preserve these differences as in our concatenation approach.
Related Work
There are a number of recent works whose goal is a broad evaluation of the performance of different word embeddings on a range of tasks. However, to the best of our knowledge, none of them focus on embeddings learned with diverse context types as we do. Levy et al. (2015), Lapesa and Evert (2014), and Lai et al. (2015) evaluate several design choices when learning word representations. However, Levy et al. (2015) and Lapesa and Evert (2014) perform only intrinsic evaluations and restrict context representation to word windows, while Lai et al. (2015) do perform extrinsic evaluations, but restrict their context representation to a word window with the default size of 5. Schnabel et al. (2015) and Tsvetkov et al. (2015) report low correlation between intrinsic and extrinsic results with different word embeddings (they did not evaluate different context types), which is consistent with the differences we found between intrinsic and extrinsic performance patterns in all tasks except parsing. Bansal et al. (2014) show that functional (dependency-based and small-window) embeddings yield higher parsing improvements than topical (large-window) embeddings, which is consistent with our findings.

Table 3: Extrinsic tasks' development set results obtained with word embedding concatenations. 'best' and 'best+' are the best results achieved across all single context types and context concatenations, respectively (best performing embedding indicated in parentheses). 'mean' and 'mean+' are the mean results for the same. Due to computational limitations of the employed systems, some of the evaluations were not performed.
Several works focus on particular types of contexts for learning word embeddings. Cirik and Yuret (2014) investigate S-CODE word embeddings based on substitute word contexts. Ling et al. (2015b) and Ling et al. (2015a) propose extensions to the standard window-based context modeling. Alternatively, another recent popular line of work (Faruqui et al., 2014; Kiela et al., 2015) attempts to improve word embeddings by using manually constructed resources, such as WordNet. These techniques could be complementary to our work. Finally, Yin and Schütze (2015) and Goikoetxea et al. (2016) propose word embedding combinations, using methods such as concatenation and CCA, but evaluate mostly on intrinsic tasks and do not consider different types of contexts.
Conclusions
In this paper we evaluated skip-gram word embeddings on multiple intrinsic and extrinsic NLP tasks, varying dimensionality and type of context. We show that while the best practices for setting skip-gram hyperparameters typically yield good results on intrinsic tasks, success on extrinsic tasks requires more careful thought. Specifically, we suggest that picking the optimal dimensionality and context type is critical for obtaining the best accuracy on extrinsic tasks and is typically task-specific. Further improvements can often be achieved by combining complementary word embeddings of different context types with the right dimensionality.
Figure 1: Intrinsic tasks' results for embeddings learned with different types of contexts.
Figure 2: Extrinsic tasks' development set results for embeddings learned with different types of contexts. 'base' denotes the results with no word embedding features. Due to computational limitations, we tested NER and PARSE with only up to 300-dimensional embeddings, and COREF with up to 100.
Figure 3: Mean development set results for the PARSE and SENTI tasks. 'mean' and 'mean+' stand for mean results across all single context types and context concatenations, respectively.
Table 2: NER MUC out-of-domain results for different embeddings with dimensionality = 25.
2 http://nlp.stanford.edu/projects/glove/
8 See Faruqui and Dyer (2014) and Lu et al. (2015) for details.
9 These are projection dimensionality and regularization.
10 github.com/JasperSnoek/spearmint
11 We used spearmanr, SciPy version 0.15.1.
12 Since our goal is to explore performance trends, we mostly experimented with the tasks' development sets.
13 See Turian et al. (2010) for more details on this setting.
Acknowledgments
We thank Do Kook Choe for providing us the jackknifed version of WSJ. We also wish to thank the IBM Watson team for helpful discussions and our anonymous reviewers for their comments. This work was partially supported by the Israel Science Foundation grant 880/12 and the German Research Foundation through the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1).
Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Paşca, and Aitor Soroa. 2009. A study on similarity and relatedness using distributional and WordNet-based approaches. In Proceedings of NAACL. Association for Computational Linguistics.
Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2014. Tailoring continuous word representations for dependency parsing. In Proceedings of ACL.
Osman Baskaya, Enis Sert, Volkan Cirik, and Deniz Yuret. 2013. AI-KU: Using substitute vectors and co-occurrence modeling for word sense induction and disambiguation. In Proceedings of SemEval.
Danqi Chen and Christopher D. Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of EMNLP, pages 740-750.
Volkan Cirik and Deniz Yuret. 2014. Substitute based SCODE word embeddings in supervised NLP tasks. arXiv preprint arXiv:1407.6853.
Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493-2537.
Paramveer Dhillon, Dean P. Foster, and Lyle H. Ungar. 2011. Multi-view learning of word embeddings via CCA. In Advances in Neural Information Processing Systems, pages 199-207.
Greg Durrett and Dan Klein. 2013. Easy victories and uphill battles in coreference resolution. In Proceedings of EMNLP.
Manaal Faruqui and Chris Dyer. 2014. Improving vector space word representations using multilingual correlation. In Proceedings of EACL.
Manaal Faruqui, Jesse Dodge, Sujay K. Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2014. Retrofitting word vectors to semantic lexicons. In Proceedings of the Deep Learning and Representation Learning Workshop, NIPS.
Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2001. Placing search in context: The concept revisited. In Proceedings of the 10th International Conference on World Wide Web, pages 406-414. ACM.
Josu Goikoetxea, Eneko Agirre, and Aitor Soroa. 2016. Single or multiple? Combining word representations independently learned from text and WordNet. In Proceedings of AAAI.
Lushan Han, Abhay L. Kashyap, Tim Finin, James Mayfield, and Johnathan Weese. 2013. UMBC EBIQUITY-CORE: Semantic textual similarity systems. In Proceedings of the Second Joint Conference on Lexical and Computational Semantics. Association for Computational Linguistics.
Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable modified Kneser-Ney language model estimation. In Proceedings of ACL.
Felix Hill, Roi Reichart, and Anna Korhonen. 2014. SimLex-999: Evaluating semantic models with (genuine) similarity estimation. arXiv preprint arXiv:1408.3456.
Douwe Kiela, Felix Hill, and Stephen Clark. 2015. Specializing word embeddings for similarity or relatedness.
Siwei Lai, Kang Liu, Liheng Xu, and Jun Zhao. 2015. How to generate a good word embedding? arXiv preprint arXiv:1507.05523.
Gabriella Lapesa and Stefan Evert. 2014. A large scale evaluation of distributional semantic models: Parameters, interactions and model selection. Transactions of the Association for Computational Linguistics, 2:531-545.
Omer Levy and Yoav Goldberg. 2014. Dependency-based word embeddings. In Proceedings of ACL.
Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211-225.
Wang Ling, Chu-Cheng Lin, Yulia Tsvetkov, Silvio Amir, Ramón Fernandez Astudillo, Chris Dyer, Alan W. Black, and Isabel Trancoso. 2015a. Not all contexts are created equal: Better word representations with variable attention. In Proceedings of EMNLP.
Wang Ling, Chris Dyer, Alan Black, and Isabel Trancoso. 2015b. Two/too simple adaptations of word2vec for syntax problems. In Proceedings of NAACL-HLT.
Ang Lu, Weiran Wang, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. Deep multilingual correlation for improved word embeddings. In Proceedings of NAACL.
Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55-60.
Oren Melamud, Ido Dagan, Jacob Goldberger, Idan Szpektor, and Deniz Yuret. 2014. Probabilistic modeling of joint-context in distributional similarity. In Proceedings of CoNLL.
Oren Melamud, Ido Dagan, and Jacob Goldberger. 2015a. Modeling word meaning in context with substitute vectors. In Proceedings of NAACL.
Oren Melamud, Omer Levy, and Ido Dagan. 2015b. A simple word embedding model for lexical substitution. In Proceedings of the Vector Space Modeling for NLP Workshop, NAACL.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. In Proceedings of ICLR.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS.
Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2011. English Gigaword Fifth Edition. Linguistic Data Consortium, LDC2011T07.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of EMNLP.
Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes. In Joint Conference on EMNLP and CoNLL - Shared Task, pages 1-40. Association for Computational Linguistics.
Evaluation methods for unsupervised word embeddings. Schnabel, Proc. of EMNLP. of EMNLP[Schnabel et al.2015] Tobias Schnabel, Igor Labutov, David Mimno, and Thorsten Joachims. 2015. Evalu- ation methods for unsupervised word embeddings. In Proc. of EMNLP.
Introduction to the conll-2003 shared task: Language-independent named entity recognition. Snoek, Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003. Tsvetkov et al.2015] Yulia Tsvetkov, Manaal Faruqui, Wang Ling, Guillaume Lample, and Chris Dyerthe seventh conference on Natural language learning at HLT-NAACL 2003Association for Computational Linguistics1631Proc. of ACLSnoek et al.2012] Jasper Snoek, Hugo Larochelle, and Ryan P Adams. 2012. Practical bayesian optimization of machine learning algorithms. In Advances in neural information processing systems, pages 2951-2959. [Socher et al.2013] Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recur- sive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the conference on empirical methods in natural language processing (EMNLP), volume 1631, page 1642. Citeseer. [Tjong Kim Sang and De Meulder2003] Erik F Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003-Volume 4, pages 142-147. Association for Computational Linguistics. [Tsvetkov et al.2015] Yulia Tsvetkov, Manaal Faruqui, Wang Ling, Guillaume Lample, and Chris Dyer. 2015. Evaluation of word vector representations by subspace alignment. In Proc. of EMNLP. [Turian et al.2010] J. Turian, L. Ratinov, and Y. Bengio. 2010. Word representations: A simple and general method for semisupervised learning. In Proc. of ACL, pages 384-394.
Learning syntactic categories using paradigmatic representations of word context. [ Yatbaz, arXiv:1508.04257Proceedings of EMNLP. [Yin and Schütze2015] Wenpeng Yin and Hinrich Schütze. 2015. Learning word meta-embeddings by using ensembles of embedding sets. EMNLP. [Yin and Schütze2015] Wenpeng Yin and Hinrich Schütze. 2015. Learning word meta-embeddings by using ensembles of embedding setsarXiv preprint[Yatbaz et al.2012] Mehmet Ali Yatbaz, Enis Sert, and Deniz Yuret. 2012. Learning syntactic categories us- ing paradigmatic representations of word context. In Proceedings of EMNLP. [Yin and Schütze2015] Wenpeng Yin and Hinrich Schütze. 2015. Learning word meta-embeddings by using ensembles of embedding sets. arXiv preprint arXiv:1508.04257.
FASTSUBS: An efficient and exact procedure for finding the most likely lexical substitutes based on an n-gram language model. Deniz Yuret, Signal Processing Letters. 1911IEEEDeniz Yuret. 2012. FASTSUBS: An ef- ficient and exact procedure for finding the most likely lexical substitutes based on an n-gram language model. Signal Processing Letters, IEEE, 19(11):725- 728.
| [] |
[
"Evaluating Long-form Text-to-Speech: Comparing the Ratings of Sentences and Paragraphs",
"Evaluating Long-form Text-to-Speech: Comparing the Ratings of Sentences and Paragraphs"
] | [
"Rob Clark rajclark@google.com ",
"Hanna Silen silen@google.com ",
"Tom Kenter tomkenter@google.com ",
"Ralph Leith Google ",
"U K "
] | [] | [] | Text-to-speech systems are typically evaluated on single sentences. When long-form content, such as data consisting of full paragraphs or dialogues is considered, evaluating sentences in isolation is not always appropriate as the context in which the sentences are synthesized is missing.In this paper, we investigate three different ways of evaluating the naturalness of long-form text-to-speech synthesis. We compare the results obtained from evaluating sentences in isolation, evaluating whole paragraphs of speech, and presenting a selection of speech or text as context and evaluating the subsequent speech. We find that, even though these three evaluations are based upon the same material, the outcomes differ per setting, and moreover that these outcomes do not necessarily correlate with each other. We show that our findings are consistent between a single speaker setting of read paragraphs and a two-speaker dialogue scenario. We conclude that to evaluate the quality of long-form speech, the traditional way of evaluating sentences in isolation does not suffice, and that multiple evaluations are required. | 10.21437/ssw.2019-18 | [
"https://arxiv.org/pdf/1909.03965v1.pdf"
] | 202,268,824 | 1909.03965 | 37cfe851a67c1fd975311e73afa11aa706d89e21 |
Evaluating Long-form Text-to-Speech: Comparing the Ratings of Sentences and Paragraphs
Rob Clark rajclark@google.com
Hanna Silen silen@google.com
Tom Kenter tomkenter@google.com
Ralph Leith
Google, UK
Evaluating Long-form Text-to-Speech: Comparing the Ratings of Sentences and Paragraphs
arXiv:1909.03965v1 [eess.AS] 9 Sep 2019
Text-to-speech systems are typically evaluated on single sentences. When long-form content, such as data consisting of full paragraphs or dialogues is considered, evaluating sentences in isolation is not always appropriate as the context in which the sentences are synthesized is missing.In this paper, we investigate three different ways of evaluating the naturalness of long-form text-to-speech synthesis. We compare the results obtained from evaluating sentences in isolation, evaluating whole paragraphs of speech, and presenting a selection of speech or text as context and evaluating the subsequent speech. We find that, even though these three evaluations are based upon the same material, the outcomes differ per setting, and moreover that these outcomes do not necessarily correlate with each other. We show that our findings are consistent between a single speaker setting of read paragraphs and a two-speaker dialogue scenario. We conclude that to evaluate the quality of long-form speech, the traditional way of evaluating sentences in isolation does not suffice, and that multiple evaluations are required.
Introduction
Traditionally, text-to-speech (TTS) systems are trained on corpora of isolated sentences. As such, their output is optimized, if only indirectly and inadvertently, for synthesizing isolated sentences. As the use of TTS proliferates and the application of TTS extends into domains where the required output is high quality discourse, long-form (multi-sentence) data is being used more frequently to build voices and to evaluate the quality of the long-form output.
The traditional evaluation approaches used in TTS are designed to assess the quality of synthesized sentences in isolation, using metrics such as mean opinion score (MOS) [1] and side-by-side (SxS, also referred to as AB) discriminative tasks. For long-form TTS, i.e., speech passages longer than one sentence, this evaluation scenario is limited in terms of what it can be used to evaluate; presenting sentences in isolation means that they are being evaluated out of their natural context. Long-form speech, which may consist either of single-speaker data (such as an audio book, a news article, or a public speech) or of multi-speaker data (such as a conversation between multiple participants), should ideally be evaluated as a whole, because evaluating the quality of isolated sentences will not inform us of the overall quality of the discourse experience, which includes factors such as the appropriateness of prosody in context and fluency at paragraph level.
The most obvious approach to evaluate long-form TTS is to use the existing standard evaluation techniques and simply present whole paragraphs or dialogues to raters. Doing so, however, raises questions about the impact of providing longer stimuli that vary in length, both from the perspective of increasing the cognitive load of the raters through presenting them with more material, and from the perspective of increasing the overall variability in the length of stimuli. Including paragraph length as a factor in any subsequent analysis is often impractical as it drastically increases the amount of evaluation material required to fully control for it and still obtain a meaningful result.
An additional scenario, which sits between evaluating isolated sentences and full long-form passages, would be to evaluate the quality of passages of speech in their immediate context. In this scenario, full long-form passages are divided into two parts to form a context part and a stimulus part. Raters are asked to evaluate the quality of the speech stimulus part as a continuation of a given context part, and are presented with the speech (or, potentially, just the text) of the context immediately before hearing the stimulus. Our hypothesis is that we can achieve a higher sentence-level precision in this scenario than we could if the sentences were presented in isolation (as listeners are explicitly asked to evaluate whether the stimulus is appropriate for a specific context, rather than being allowed to hypothesize a context for which the stimulus would be appropriate), while keeping the cognitive load for raters low compared to presenting them with full paragraphs.
To develop a better understanding of the potentials of the methods described above:
• We analyze three different ways of evaluating long-form TTS speech. To the best of our knowledge this is the first time a formal comparison based on a multitude of experiments has been performed;
• We show that both evaluating long-form TTS speech as paragraphs and as context-stimulus pairs yields results distinctly different from the traditional single sentence evaluation approach, which is remarkable given that the evaluations in all settings are based on the same material;
• We propose to combine these evaluations to get the most complete picture of long-form TTS quality.
As we are interested primarily in the relative differences of results between the various evaluation scenarios, rather than the relative differences between the TTS systems used, we focus on MOS tasks in this paper, and leave out SxS evaluations.
The remainder of this paper is organized as follows: Section 2 discusses related work and existing approaches to (long-form) TTS evaluation. Section 3 details the three ways of evaluating long-form TTS that we propose. Experimental details are presented in Section 4. Sections 5 and 6 present the results of the main and additional experiments, respectively. Section 7 concludes.
Related Work
The currently used MOS [1] and SxS tasks for evaluating TTS naturalness were established in [2,3]. Extensions and improvements to MOS evaluation have been made previously [4,5,6], but none of this work covers the long-form scenario.
In [7], the point is made that evaluating sentences in isolation when they are in fact part of a dialogue does not represent a real-world end-use scenario. An alternative evaluation setup is proposed in which raters interact with an avatar. The experiments on conversational data in Section 5.2 follow this work, in the sense that turns in the dialogue are presented in context rather than in isolation. A key difference is that we do not incorporate an interactive setting. This allows for comparison between the three different settings we propose, none of which involve interaction.
In [8] discourse structure is taken into account for improving prosody of longer passages of text. The focus in this work, however, is on the improvements of a supervised signal pertaining to rhetorical structure, rather than on the evaluation.
It is observed in [9] that evaluating sentences in isolation "may not be appropriate to measure the performance of intonation models." However, the objective in [9] is to show that when evaluating single sentences without providing context, multiple prosodic variants of the same sentence might be equally valid according to raters. No experiments were done to determine how those ratings change if a context is provided.
Lastly, an evaluation protocol for an audiobook reading task, adapted from the scales proposed by [10], is presented in [11]. The method is aimed at a fine-grained analysis of the audiobooks task in particular, and does not cover an analysis of different evaluation alternatives.
In short, to the best of our knowledge, no systematic analysis of the effect of different ways to evaluate long-form TTS context has been carried out before. The absence of such investigation is the primary motivation for this study.
Evaluating Long-form TTS
We present three ways to evaluate long-form material: as single sentences in isolation, as full paragraphs, and as context-stimulus pairs. We should note that, even though the discussion below is presented in terms of sentences in a paragraph, it applies equally to turns in a dialogue. Furthermore, although the discussion is applied to MOS, it is independent of what type of evaluation is performed and applies equally to SxS tests as well as other varieties of evaluation such as MUSHRA [12].
Evaluating sentences in isolation
Firstly, we can use the traditional TTS approach and evaluate individual sentences separately as if they were isolated sentences. As mentioned in Section 1, the obvious disadvantage of this approach is that in evaluating isolated sentences, we are not considering the fact that these sentences are part of a larger discourse, which may affect the way they should be synthesized. There are, however, advantages to this method of presentation. In this setting, for example, raters are less likely to be able to infer the content from context, so a lack of intelligibility is more likely to result in bad naturalness scores.
In the work presented here we treat this method of evaluation as a reference to compare other results to, which allows us to determine empirically whether we learn something different using alternative evaluation methods.
Evaluating full paragraphs
At the other end of the scale is the evaluation of full paragraphs. Evaluating full paragraphs imposes a higher cognitive load on raters, which may impact the responses obtained. Paragraph length becomes an issue in its own right, and we may get different results depending on how long the paragraphs are. An advantage of this setting, however, is that it is possible for raters to make judgments on the overall flow of the sentences in the paragraph, something they cannot do when they hear them in isolation.
Evaluating context-stimulus pairs
To compromise between evaluating isolated sentences and paragraphs we can present one or more sentences of the paragraph as context to the rater, and the subsequent sentence or sentences as the stimulus to be rated.
This approach raises questions regarding the amount of material that should be presented, both as context and as stimulus. Should we constrain the length of the context and stimulus in terms of the number of sentences or by overall length in words or syllables? E.g., a single long sentence may be longer than two short sentences. In the work presented here, we choose to control the variation in terms of number of sentences and length of paragraphs. We also evaluate whether paragraph length influences paragraph MOS scores (see Section 6.1). Figure 1 shows various options for contexts. To keep the figure clear, a single-sentence stimulus is shown, but we note that multiple sentences can be presented as a stimulus too.
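As a minimal sketch of how such pairs could be assembled (assuming paragraphs are represented as lists of sentence strings; the helper below is an illustrative assumption on our part, not the rating platform actually used):

```python
def context_stimulus_pair(paragraph, n_context=1, n_stimulus=1):
    """paragraph: list of sentence strings. Returns (context, stimulus),
    e.g. n_context=2, n_stimulus=1 corresponds to the T 2 T 1 condition;
    returns None if the paragraph is too short for the condition."""
    if len(paragraph) < n_context + n_stimulus:
        return None
    context = " ".join(paragraph[:n_context])
    stimulus = " ".join(paragraph[n_context:n_context + n_stimulus])
    return context, stimulus

para = ["First sentence.", "Second sentence.", "Third sentence."]
print(context_stimulus_pair(para, n_context=2, n_stimulus=1))
# -> ('First sentence. Second sentence.', 'Third sentence.')
```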
Experimental Setup
We compare the three approaches for long-form TTS evaluation outlined above: 1) sentences in isolation, 2) full paragraphs, and 3) context-stimulus pairs.
To test for consistency across different domains, we present results of evaluations in two distinctly different scenarios: news-reading and read conversations. We use a WaveNet [13] TTS voice that was built incorporating in-domain training data.
Data and TTS system
For our first series of evaluations we use a proprietary data set of read news articles. We select only paragraphs containing two sentences or more. Single sentence paragraphs are less interesting for our evaluations as any evaluation comparing single sentences to one-sentence paragraphs will come out even. In our final dataset, we have 103 paragraphs of news material. The longest paragraph contains 9 sentences, and the mean length is 3.0 sentences. To further compare the contexts of different lengths, we select a subset of paragraphs with a minimum length of three sentences resulting in a subset of 57 paragraphs with a mean length of 3.8 sentences.
The second data set consists of read conversations where two speakers take turns speaking. We use turns in conversation as the units making up our stimuli (similar to using sentences in paragraphs in the previous setting). An individual turn itself may consist of multiple sentences, which we keep together as a single turn. The conversation design determines the total amount of variation of length per turn and keeps the amount of material per turn reasonably balanced. We use two pairs of speakers. The first pair recorded 42 conversations and the second pair recorded 71. A key difference between this dataset and the news reading one is that the speaker changes between turns.
We should note that for the first dataset, a held-out set of passages was used for evaluation. In the conversation case we did not have sufficient data to do this, and the conversations used for evaluation were used as training examples for the WaveNet voice as well. This is suboptimal, but as we are not trying to assess how well a particular TTS model can generalize, this should not affect the results presented here; we have seen little evidence that WaveNet models over-fit in such a way that any one utterance can have a significant impact on the resulting voice.
To synthesize speech we use a two-step approach where one model is trained to produce prosodic parameters (F0, c0 and duration) [14] to be used by a version of WaveNet [13], trained separately to produce speech from linguistic features and the predicted prosodic parameters. The model is not context-aware; it synthesizes speech sentence by sentence.
Rating task
We use a crowd-sourced MOS rating task for evaluation, where raters are asked to rate naturalness for the settings that do not include context, and appropriateness where the stimulus follows a context. Stimuli are rated on a scale of 1-5. The whole number points of the scale are labeled 'poor', 'bad', 'fair', 'good' and 'excellent'. Raters are allowed to rate at 0.5 increments of the scale, as we find this gives slightly finer resolution in MOS scores at the top end of the scale. Stimuli are presented to raters in blocks of 10, except for the full paragraphs, which are presented in blocks of 5. Each stimulus is presented 8 times per experiment to randomly assigned raters and the MOS results presented are calculated from the averages of those 8 ratings for each stimulus. Raters not using headphones are omitted from the analysis. The number of raters per task varies due to the overall number of stimuli in the task, with the lowest number of raters in a task being 35.
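As a concrete illustration of this aggregation (the tuple layout below is an assumption for illustration, not the actual crowd-sourcing pipeline), per-stimulus MOS could be computed as follows:

```python
from collections import defaultdict

def mos_per_stimulus(ratings):
    """ratings: iterable of (stimulus_id, uses_headphones, score) tuples,
    with scores on the 1-5 scale in 0.5 increments.
    Returns {stimulus_id: mean score}, omitting non-headphone raters."""
    buckets = defaultdict(list)
    for stim_id, uses_headphones, score in ratings:
        if not uses_headphones:  # raters not using headphones are omitted
            continue
        buckets[stim_id].append(score)
    return {stim: sum(s) / len(s) for stim, s in buckets.items()}

ratings = [("utt1", True, 4.5), ("utt1", True, 4.0), ("utt1", False, 2.0),
           ("utt2", True, 3.5), ("utt2", True, 4.0)]
print(mos_per_stimulus(ratings))  # {'utt1': 4.25, 'utt2': 3.75}
```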
Evaluation tasks
For news reading the following evaluations are carried out:
1. Sentences in isolation: Both real speech and TTS versions of each sentence are presented as stimulus. Below, these results are referred to as R i (real speech, individual sentences) and T i (TTS, individual sentences).

2. Full paragraphs: The same data is used as above, but presented as full paragraphs. Both real speech and TTS versions are presented to the raters. These results are labelled R p and T p , respectively.

3. Context-stimulus pairs: The first and second lines of paragraphs are presented, where the first line is the context and the second line is the stimulus to be rated. We experiment using a combination of real speech, TTS and text as the context, and both real speech and TTS as the stimulus. Additionally, to evaluate varying the length of the context or stimulus, we provide two lines either as context or as stimulus. In this setting, only TTS is used as context:

R 1 R 1 : one sentence real speech as context, one sentence real speech as stimulus;
R 1 T 1 : one sentence real speech context, one sentence TTS as stimulus;
T 1 T 1 : one sentence TTS context, one sentence TTS stimulus;
Text 1 T 1 : one sentence textual context, one sentence TTS stimulus;
T 2 T 1 : two sentences TTS context, one sentence TTS stimulus;
T 1 T 2 : one sentence TTS context, two sentences TTS stimulus.
In the news reading tasks, real speech samples R i are cleaned of sentence-initial breathing noise and R p of paragraph-initial breathing noise. All real speech samples are downsampled to match the TTS sampling rate.
The conversational data includes two pairs of speakers: F1 paired with M1, and M2 paired with F2, where F and M denote female and male speakers, respectively. The evaluations use TTS samples and real speech samples from all four speakers:
T F1 , T F2 , T M1 , T M2 and R F1 , R F2 , R M1 , R M2 , respectively.
WaveNet voices were built for each of these speakers.
Results
In this section we discuss the results of the two sets of experiments performed. We use a two-tailed independent t-test with α = 0.05 for calculating significance between results.

News reading

Figure 2 shows the results for all MOS evaluations, ordered from high to low. The first block of results confirms the intuition that real speech scores higher than all settings involving TTS. The highest ratings are for appropriateness of a real speech stimulus in a real speech context (R 1 R 1 ). The scores are slightly higher than for naturalness ratings of both real speech paragraphs (R p ) and real speech isolated sentences (R i ). Real speech paragraphs (R p ) themselves are rated slightly higher than real speech isolated sentences (R i ). Within this grouping of real speech results, there are significant differences between all three conditions. These results alone already indicate that there is a difference between evaluating sentences in isolation and in context, even when only real speech is involved.
The next block of results in Figure 2 shows the results for context-stimulus pair evaluations. Presenting two sentences as context while rating one follow-on sentence (T 2 T 1 ) scores highest, followed by one sentence as context with one sentence rated (T 1 T 1 ). The lowest scores are obtained when one sentence is presented as context followed by two sentences being rated (T 1 T 2 ). T 1 T 2 is found to be significantly different from T 1 T 1 and T 2 T 1 . The final bar in this block shows the result of presenting the context as text rather than speech (Text 1 T 1 ), and this gives a score not significantly different from T 1 T 2 . These results indicate that the length of the context presented does not appear to have a significant effect on the MOS results, but increasing the length of the stimulus lowers the MOS result.
The next block holds the results for evaluating TTS sentence (T i ) and paragraph (T p ) naturalness in isolation. These results are significantly lower than the ones in the previous blocks. One potential explanation why raters would rate paragraphs lower than their individual sentences is that ratings are strongly influenced by the worst thing raters hear in a stimulus, and thus as the stimulus becomes longer the rating is likely to be lower. This interpretation is consistent with the result above, where a lower MOS was found when increasing the stimulus length in a context. It could suggest a (weak) correlation between the minimum sentence MOS and the paragraph MOS (cf. Table 1, discussed in Section 6.1). Alternatively, it may be that higher cognitive load simply results in lower ratings.
It is interesting to see that sentences with context are rated higher than when presented in isolation. As noted in Section 4.1, the TTS model used does not take any paragraph-level context into consideration, so the difference has to be attributed either to the task itself, or to the fact that the content of a paragraph-non-initial sentence sounds less natural when presented out of context.
The final and lowest result in Figure 2 is for TTS stimuli with real speech context (R 1 T 1 ). A key observation to point out is that these results are considerably lower than the ones where the same stimuli are presented with a TTS context. This seems to indicate an anchoring effect of the real speech lowering the perceived quality of the TTS, suggesting that when rating appropriateness in context, raters pay particular attention to whether the quality of the stimulus matches the quality of the context. Lastly, the fact that cases where the TTS context was used score higher than when sentences are rated in isolation suggests that part of the appropriateness judgment relates to similarity in quality compared to the context, and the rating does not just relate to overall naturalness and how well the prosody is suited in context to the paragraph. The implication here is that the context-stimulus setting cannot be considered an alternative to the sentence-in-isolation naturalness MOS task, because it will produce varying results depending on the quality of the context. The MOS result a context-stimulus evaluation yields can be substantially higher than one obtained for a sentence in isolation when there is a quality match between the context and stimulus, or lower when the quality of the context is higher than that of the stimulus.
Conversations
To determine if the differences observed between ratings for sentences presented in isolation versus sentences presented in context are consistent across domains, we perform evaluations on a distinctly different dataset that consists of conversations. We restrict the evaluation to using only the first and second turns of the dialogues, as we saw previously that the amount of context presented did not greatly affect the results. We evaluate both the first and second turns in isolation, and we evaluate the second turns using the first turns as context. Note that, different from the news data, the context in this scenario is uttered by a different speaker. In two separate tasks, we present the context either as real speech or as TTS.
The results of this experiment are shown in Figure 3. The MOS for the evaluations involving only recorded voices R F1 , R F2 , R M1 and R M2 range between 4.6 and 4.7, with no statistically significant difference between the scores for the second turns presented in isolation or in their recorded context; the only statistically significant difference in this group of evaluations is observed between R F1 and R F1 R M2 or R F2 . Furthermore, MOS scores for the synthesized turns in isolation range from 3.8 for voices T M1 , T M2 and T F2 to 4.0 for voice T F1 . Conversely, when the second turns of the dialogues are preceded by their context, the MOS for the TTS voices rises to the 4.3 - 4.4 range, mirroring the effect we saw for the read news data. Furthermore, using real speech as context (R F1 T M1 and R M2 T F2 ) decreases the resulting MOS for TTS stimuli, as again the raters appear to consider the quality of the context as an anchor. However, in this case these ratings do not drop below the ratings of the turns in isolation. We attribute this to the fact that, even if the context is presented as the natural speech of a different speaker, this still acts as an anchor, but a weaker one than the natural speech of the same speaker would be.
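All of the pairwise significance claims above rely on the two-tailed independent t-test with α = 0.05 introduced at the start of this section. A minimal sketch of such a comparison on two conditions' per-stimulus MOS values (the numbers are placeholders, not the data behind Figures 2 and 3):

```python
from scipy import stats

# Per-stimulus MOS values for two conditions (placeholder values).
mos_cond_a = [4.1, 4.4, 4.3, 4.0, 4.5, 4.2]
mos_cond_b = [3.8, 4.0, 3.9, 4.1, 3.7, 3.9]

t, p = stats.ttest_ind(mos_cond_a, mos_cond_b)  # two-tailed by default
print(f"t = {t:.2f}, p = {p:.3f}, significant at alpha = 0.05: {p < 0.05}")
```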
Further analysis
The results presented in the previous section show that rating a full paragraph gives different results than rating sentences in isolation does, regardless of how the task is set up. To gain more insight into this observation, we analyze correlations between full paragraph and sentence ratings. These tests are carried out on the news reading data set.

Correlating ratings of full paragraphs and single sentences

Table 1 shows the correlations (Pearson's r) between paragraph MOS scores and various sentence MOS ratings. We see significant correlations, at the 5% level, of around 0.3 between the MOS rating of the full paragraph and the 'Mean sentence MOS', the 'Last sentence MOS' and the 'Minimum sentence MOS'. Furthermore, there is a significant correlation of 0.345, at the 1% level, between paragraph MOS and the maximum MOS of the sentences the paragraph consists of. All of these r values are small, however, and only 12% of the variance can be accounted for by the maximum sentence MOS of r = 0.345. These correlations show that paragraph MOS is influenced by the individual sentences, both collectively through the means and individually through the extremes, yet less than half the variance can be accounted for this way. We conclude that, while paragraph and individual sentence ratings cannot be considered to be independent, the majority of the variance seen in paragraph MOS scores is not accounted for by the MOS ratings of the individual sentences.
Lastly, the rightmost columns of Table 1 show the correlations between the paragraph length (measured in sentences or words) and the paragraph MOS rating. There is a correlation, supporting the intuition that MOS ratings go down as paragraphs get longer in terms of sentences, but the r value is so small that it does not appear to be meaningful.
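A sketch of how Table-1-style correlations can be computed: each paragraph's per-sentence MOS values are reduced to an aggregate (mean, min, max, last) and paired with the paragraph-level MOS (the values below are placeholders, not the paper's data):

```python
from scipy.stats import pearsonr

paragraph_mos = [3.9, 4.2, 3.5, 4.0]
sentence_mos = [[4.0, 3.8], [4.3, 4.1, 4.0], [3.6, 3.2], [4.1, 3.9, 4.2]]

aggregates = [("mean", lambda s: sum(s) / len(s)),
              ("min", min),
              ("max", max),
              ("last", lambda s: s[-1])]
for name, agg in aggregates:
    feats = [agg(s) for s in sentence_mos]  # one aggregate per paragraph
    r, p = pearsonr(feats, paragraph_mos)
    print(f"{name:>4}: r = {r:.3f}, p = {p:.3f}")
```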
Correlating ratings of full paragraphs and single sentences and their positions
An alternative hypothesis is that, even if little correlation between the MOS ratings of paragraphs and the MOS ratings of each individual constituent sentence is found when these are analyzed all together, perhaps the former can be inferred from the latter if the order of sentences is taken into account. To test this hypothesis we create linear regression models predicting the paragraph MOS from the individual sentence MOS values, depending on their position in the paragraph. We restrict these experiments to paragraphs of length two, three and four sentences, as we do not have sufficient data in the current experiments to analyze longer paragraphs, and it is not immediately clear how models allowing for variable paragraph length should be designed.
The results of the regression experiments are shown in Table 2. First, we note that the only model with a significant R-squared value, i.e., that can account for the variance in a significant way, is the model for paragraphs of three sentences. For this model the only significantly non-zero contribution is made by the MOS of the first sentence. That trend is not repeated for the other models.
For the two sentence model only the constant term contributes in a non-zero way, i.e., the paragraph MOS is the same for all paragraphs under this model. Hence, it is unsurprising that this has an insignificant R-squared value.
For paragraphs of four sentences there are non-zero contributions from both the constant term and the first sentence in the paragraph, but the low and non-significant R-squared value for this model means this model does not fit the data well.
In short, we conclude from these results that the individual MOS ratings of sentences are bad predictors of paragraph MOS, and if a MOS which reflects the overall quality of the paragraph is required, it needs to be obtained directly.
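The per-length regressions of Table 2 amount to ordinary least squares with the positional sentence MOS values as predictors. A sketch for the three-sentence case using statsmodels (the data is placeholder, not the paper's ratings):

```python
import numpy as np
import statsmodels.api as sm

# Rows: paragraphs of exactly three sentences; columns: MOS of s1, s2, s3.
X = np.array([[4.0, 3.8, 4.1],
              [3.5, 3.9, 3.6],
              [4.2, 4.0, 4.3],
              [3.8, 3.7, 3.9],
              [4.1, 4.2, 4.0]])
y = np.array([3.9, 3.6, 4.2, 3.8, 4.1])  # paragraph MOS (placeholders)

model = sm.OLS(y, sm.add_constant(X)).fit()  # intercept + s1..s3
print(model.params)                    # the 'coef' column of Table 2
print(model.rsquared, model.f_pvalue)  # R-squared and the F-test p-value
```

model.summary() additionally reports the standard errors, t values and P>|t| columns shown in Table 2.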
Conclusions
Now that the performance of TTS systems has come to a level where voice quality itself is close to human level, interesting and challenging new tasks are being undertaken, like synthesizing speech for an entire audio book or in a multi-turn conversation. The experiments presented here suggest that, as these new tasks go beyond the scope of traditional TTS, new ways of evaluation should be considered, including task-based evaluations.
We demonstrated that long-form evaluation can be improved beyond evaluating isolated sentences by showing that different results are obtained when the material is presented in different ways. Asking raters to rate the paragraph as a whole does not give the same results as asking raters to rate the constituent sentences in isolation or asking raters to rate using the previous parts of the paragraph as context. Additionally, we showed that trying to predict paragraph MOS from the MOS of the individual sentences in it is difficult and inconclusive, which suggests that raters do pay attention to contextual cues when performing these different tasks.
We conclude, therefore, that to fully evaluate long-form paragraphs or dialogues, a combination of tests is necessary. In some circumstances it may be sufficient to only evaluate the paragraphs as a whole, and this is probably what should be done if resources are limited and the paragraphs are not too long. Yet, as observed above, this method gives lower scores than the scores for individual sentences when rated either in isolation or with discourse context. One potential reason is that although our TTS training data consists of multi-sentence data, no significant effort has been made to model paragraph level structures in a TTS system, for example varying the prosody of a sentence based on the content or realization of the previous sentence, and it will be interesting to see if successfully doing so can close the gap between the rating for paragraphs and sentences.
One shortcoming of the three different approaches to evaluating long-form TTS we presented is that they do not consider unbalanced numbers of sentences per paragraph in the data. That is, we have a lot more second sentences in a paragraph than we do fifth sentences in a paragraph. Future work could investigate how to handle unbalanced data in a rigorous way.
Lastly, evaluating sentences in context produced interesting results, with higher scores in general, specifically when the context was also TTS: with the same voice in the case of the read news experiments, but also when the context was a different TTS speaker, in the case of the conversation experiments. We attribute this effect to the raters including a similarity judgment between the quality of the context and stimulus in their scores. This is corroborated by the experiments with real speech context, which yielded lower ratings. Evaluating in context is therefore our recommended way to evaluate long-form material, as it allows sentences to be presented individually, while paragraph effect judgments can be considered in the rating.
Acknowledgments
We would like to acknowledge contributions to this work from the wider TTS research community within Google AI and DeepMind, your thirst for understanding lead to this study. Specific thanks to Xingyang Cai, Anna Greenwood, Mateusz Westa, Dina Kelesi and Leilani Kurtak-McDonald for help with evaluation tools and voice building.
Figure 1: Illustration of three ways to evaluate single sentences that are part of a three-sentence paragraph, using other parts of the paragraph as context. Green boxes contain the audio to be evaluated. Yellow boxes are sentences presented as context (text and/or audio), not to be evaluated. White boxes show sentences of the paragraph not used in the rating task. (a) and (b) present the single previous sentence as context, while (c) presents two previous sentences in the paragraph. (Text courtesy of BBC News)
Figure 2: MOS results on the news reading data set across evaluation strategies. 'R' refers to real speech, 'T' is for TTS (synthesized speech), 'Text' means no speech but text. For evaluations without context, superscript 'p' denotes a full paragraph and superscript 'i' denotes sentences in isolation. For evaluations with context, 'R 1 R 1 ' is a context-stimulus pair of one line of real speech context and one line of real speech stimulus, 'T 2 T 1 ' is two lines of TTS context, one line of TTS stimulus.
Figure 3: MOS results on the conversational data set, presented with context (F1 M1 and M2 F2) and in isolation (F1, M1, M2, F2).
Table 1: Correlations of sentence MOS scores with paragraph MOS (news reading data).

Correlate                        r        p
Mean sentence MOS                0.296    < 0.05
First sentence MOS               0.087    > 0.05
Second sentence MOS              0.114    > 0.05
Last sentence MOS                0.268    < 0.05
Min. sentence MOS                0.234    < 0.05
Max. sentence MOS                0.345    < 0.01
Paragraph no. of sentences      -0.020    < 0.05
Paragraph no. of words           0.029    > 0.05
Table 2: Regression model coefficients for predicting the paragraph MOS from individual sentence MOS for paragraphs of lengths two, three and four sentences long (news reading data).

Model for paragraphs of two sentences (46 paragraphs); R^2 = 0.04 (F = 0.95, p > 0.05)
            coef     std err    t       P>|t|
intercept   2.50     0.92       2.74    0.01
s1          0.15     0.15       0.99    0.33
s2          0.17     0.15       1.13    0.26

Model for paragraphs of three sentences (31 paragraphs); R^2 = 0.27 (F = 3.35, p < 0.05)
            coef     std err    t       P>|t|
intercept   1.30     0.95       1.37    0.18
s1          0.39     0.14       2.77    0.01
s2          0.01     0.12       0.11    0.92
s3          0.22     0.14       1.63    0.17

Model for paragraphs of four sentences (15 paragraphs); R^2 = 0.54 (F = 2.97, p > 0.05)
            coef     std err    t       P>|t|
intercept   4.23     2.03       2.08    0.06
s1         -0.53     0.25      -2.14    0.06
s2          0.12     0.16       0.73    0.48
s3          0.17     0.17       1.00    0.34
s4          0.11     0.22       0.51    0.62
[1] ITU-T P.800.1, "Mean opinion score (MOS) terminology," International Telecommunication Union, 2016.
[2] V. J. van Heuven and R. van Bezooijen, "Quality evaluation of synthesized speech," in Speech Coding and Synthesis, W. B. Kleijn and K. K. Paliwal, Eds. Elsevier, 1995.
[3] N. Campbell, "Evaluation of speech synthesis," in Evaluation of Text and Speech Systems. Springer, 2007.
[4] M. Viswanathan and M. Viswanathan, "Measuring speech quality for text-to-speech systems: development and assessment of a modified mean opinion score (MOS) scale," Computer Speech & Language, vol. 19, no. 1, pp. 55-83, 2005.
[5] M. Wester, C. Valentini-Botinhao, and G. E. Henter, "Are we using enough listeners? No! An empirically-supported critique of Interspeech 2014 TTS evaluations," in Interspeech, 2015.
[6] S. Shirali-Shahreza and G. Penn, "MOS Naturalness and the Quest for Human-Like Speech," in 2018 IEEE Spoken Language Technology Workshop (SLT), 2018.
[7] J. Mendelson and M. P. Aylett, "Beyond the Listening Test: An Interactive Approach to TTS Evaluation," in Interspeech, 2017.
[8] N. Hu, P. Shao, Y. Zu, Z. Wang, W. Huang, and S. Wang, "Discourse prosody and its application to speech synthesis," in 2016 10th International Symposium on Chinese Spoken Language Processing (ISCSLP), 2016.
[9] J. Latorre, K. Yanagisawa, V. Wan, B. Kolluru, and M. J. Gales, "Speech intonation for TTS: Study on evaluation methodology," in Interspeech, 2014.
[10] ITU-T Rec. P.85, "A method for subjective performance assessment of the quality of speech voice output devices," International Telecommunication Union, 1985.
[11] F. Hinterleitner, G. Neitzel, S. Möller, and C. Norrenbrock, "An Evaluation Protocol for the Subjective Assessment of Text-to-speech in Audiobook Reading Tasks," in Proceedings of the Blizzard Challenge Workshop (ISCA), 2011.
[12] ITU-R BS.1534-1, "Method for the subjective assessment of intermediate quality level of coding systems," International Telecommunication Union, 2003.
[13] A. van den Oord, Y. Li, I. Babuschkin, K. Simonyan, O. Vinyals, K. Kavukcuoglu, G. Driessche, E. Lockhart, L. Cobo, F. Stimberg et al., "Parallel WaveNet: Fast high-fidelity speech synthesis," in International Conference on Machine Learning, 2018, pp. 3915-3923.
[14] V. Wan, C. Chan, T. Kenter, J. Vit, and R. Clark, "CHiVE: Varying Prosody in Speech Synthesis with a Linguistically Driven Dynamic Hierarchical Conditional Variational Network," in International Conference on Machine Learning, 2019, pp. 3331-3340.
| [] |
[
"Bucking the Trend: Large-Scale Cost-Focused Active Learning for Statistical Machine Translation",
"Bucking the Trend: Large-Scale Cost-Focused Active Learning for Statistical Machine Translation"
] | [
"Michael Bloodgood bloodgood@jhu.edu \nHuman Language Technology Center of Excellence\nCenter for Language and Speech Processing\nJohns Hopkins University Baltimore\nJohns Hopkins University Baltimore\n21211, 21211MD, MD\n",
"Chris Callison-Burch \nHuman Language Technology Center of Excellence\nCenter for Language and Speech Processing\nJohns Hopkins University Baltimore\nJohns Hopkins University Baltimore\n21211, 21211MD, MD\n"
] | [
"Human Language Technology Center of Excellence\nCenter for Language and Speech Processing\nJohns Hopkins University Baltimore\nJohns Hopkins University Baltimore\n21211, 21211MD, MD",
"Human Language Technology Center of Excellence\nCenter for Language and Speech Processing\nJohns Hopkins University Baltimore\nJohns Hopkins University Baltimore\n21211, 21211MD, MD"
] | [
"Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics"
] | We explore how to improve machine translation systems by adding more translation data in situations where we already have substantial resources. The main challenge is how to buck the trend of diminishing returns that is commonly encountered. We present an active learning-style data solicitation algorithm to meet this challenge. We test it, gathering annotations via Amazon Mechanical Turk, and find that we get an order of magnitude increase in performance rates of improvement. | null | null | 1,302,329 | 1410.5877 | d05a158fd52eb996da34c62b6ddca4dc2012fade |
Bucking the Trend: Large-Scale Cost-Focused Active Learning for Statistical Machine Translation
Association for Computational Linguistics, July 2010. Copyright 2010 Association for Computational Linguistics.
Michael Bloodgood bloodgood@jhu.edu
Human Language Technology Center of Excellence
Center for Language and Speech Processing
Johns Hopkins University, Baltimore, MD 21211
Chris Callison-Burch
Human Language Technology Center of Excellence
Center for Language and Speech Processing
Johns Hopkins University, Baltimore, MD 21211
Bucking the Trend: Large-Scale Cost-Focused Active Learning for Statistical Machine Translation
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics
Uppsala, Sweden, July 2010. © 2010 Association for Computational Linguistics.
We explore how to improve machine translation systems by adding more translation data in situations where we already have substantial resources. The main challenge is how to buck the trend of diminishing returns that is commonly encountered. We present an active learning-style data solicitation algorithm to meet this challenge. We test it, gathering annotations via Amazon Mechanical Turk, and find that we get an order of magnitude increase in performance rates of improvement.
Introduction
Figure 1 shows the learning curves for two state-of-the-art statistical machine translation (SMT) systems for Urdu-English translation. Observe how the learning curves rise rapidly at first but then a trend of diminishing returns occurs: put simply, the curves flatten.
This paper investigates whether we can buck the trend of diminishing returns, and if so, how we can do it effectively. Active learning (AL) has been applied to SMT recently (Haffari et al., 2009; Haffari and Sarkar, 2009), but they were interested in starting with a tiny seed set of data, and they stopped their investigations after only adding a relatively tiny amount of data, as depicted in Figure 1.
In contrast, we are interested in applying AL when a large amount of data already exists, as is the case for many important language pairs. We develop an AL algorithm that focuses on keeping annotation costs (measured by time in seconds) low. It succeeds in doing this by only soliciting translations for parts of sentences. We show that this gets a savings in human annotation time above and beyond what the reduction in # words annotated would have indicated, by a factor of about three, and speculate as to why.

Figure 1: Learning curves for two state-of-the-art SMT systems for Urdu-English translation. The y-axis measures BLEU score. Note the diminishing returns as more data is added. Also note how relatively early on in the process previous studies were terminated. In contrast, the focus of our main experiments doesn't even begin until much higher performance has already been achieved, with a period of diminishing returns firmly established.
We conduct experiments for Urdu-English translation, gathering annotations via Amazon Mechanical Turk (MTurk) and show that we can indeed buck the trend of diminishing returns, achieving an order of magnitude increase in the rate of improvement in performance.
Section 2 discusses related work; Section 3 discusses preliminary experiments that show the guiding principles behind the algorithm we use; Section 4 explains our method for soliciting new translation data; Section 5 presents our main results; and Section 6 concludes.
Related Work
Active learning has been shown to be effective for improving NLP systems and reducing annotation burdens for a number of NLP tasks (see, e.g., (Hwa, 2000; Sassano, 2002; Bloodgood and Vijay-Shanker, 2008; Bloodgood and Vijay-Shanker, 2009b; Mairesse et al., 2010; Vickrey et al., 2010)). The current paper is most highly related to previous work falling into three main areas: use of AL when large corpora already exist; cost-focused AL; and AL for SMT.
In a sense, the work of Banko and Brill (2001) is closely related to ours. Though their focus is mainly on investigating the performance of learning methods on giant corpora many orders of magnitude larger than previously used, they do lay out how AL might be useful to apply to acquire data to augment a large set cheaply because they recognize the problem of diminishing returns that we discussed in Section 1.
The second area of work that is related to ours is previous work on AL that is cost-conscious. The vast majority of AL research has not focused on accurate cost accounting and a typical assumption is that each annotatable has equal annotation cost. An early exception in the AL for NLP field was the work of Hwa (2000), which makes a point of using # of brackets to measure cost for a syntactic analysis task instead of using # of sentences. Another relatively early work in our field along these lines was the work of Ngai and Yarowsky (2000), which measured actual times of annotation to compare the efficacy of rule writing versus annotation with AL for the task of BaseNP chunking. Osborne and Baldridge (2004) argued for the use of discriminant cost over unit cost for the task of Head Phrase Structure Grammar parse selection. King et al. (2004) design a robot that tests gene functions. The robot chooses which experiments to conduct by using AL and takes monetary costs (in pounds sterling) into account during AL selection and evaluation. Unlike our situation for SMT, their costs are all known beforehand because they are simply the cost of materials to conduct the experiments, which are already known to the robot. Hachey et al. (2005) showed that selectively sampled examples for an NER task took longer to annotate and had lower inter-annotator agreement. This work is related to ours because it shows that how examples are selected can impact the cost of annotation, an idea we turn around to use for our advantage when developing our data selection algorithm. Haertel et al. (2008) emphasize measuring costs carefully for AL for POS tagging. They develop a model based on a user study that can estimate the time required for POS annotating. Kapoor et al. (2007) assign costs for AL based on message length for a voicemail classification task. In contrast, we show for SMT that annotation times do not scale according to length in words and we show our method can achieve a speedup in annotation time above and beyond what the reduction in words would indicate. Tomanek and Hahn (2009) measure cost by # of tokens for an NER task. Their AL method only solicits labels for parts of sentences in the interest of reducing annotation effort. Along these lines, our method is similar in the respect that we also will only solicit annotation for parts of sentences, though we prefer to measure cost with time and we show that time doesn't track with token length for SMT.

The third area of related work is AL for SMT. Haffari et al. (2009), Haffari and Sarkar (2009), and Ambati et al. (2010) investigate AL for SMT. There are two major differences between our work and this previous work. One is that our intended use cases are very different. They deal with the more traditional AL setting of starting from an extremely small set of seed data. Also, by SMT standards, they only add a very tiny amount of data during AL. All their simulations top out at 10,000 sentences of labeled data and the models learned have relatively low translation quality compared to the state of the art.
On the other hand, in the current paper, we demonstrate how to apply AL in situations where we already have large corpora. Our goal is to buck the trend of diminishing returns and use AL to add data to build some of the highest-performing MT systems in the world while keeping annotation costs low. See Figure 1 from Section 1, which contrasts where these previous studies stop their investigations with where we begin our studies.
The other major difference is that this previous work measures annotation cost by # of sentences. In contrast, we bring to light some potential drawbacks of this practice, showing it can lead to different conclusions than if other annotation cost metrics are used, such as time and money, which are the metrics that we use.
Simulation Experiments
Here we report on results of simulation experiments that help to illustrate and motivate the design decisions of the algorithm we present in Section 4. We use the Urdu-English language pack1 from the Linguistic Data Consortium (LDC), which contains ≈ 88,000 Urdu-English sentence translation pairs, amounting to ≈ 1.7 million Urdu words translated into English. All experiments in this paper evaluate on a genre-balanced split of the NIST2008 Urdu-English test set. In addition, the language pack contains an Urdu-English dictionary consisting of ≈ 114,000 entries. In all the experiments, we use the dictionary at every iteration of training. This will make it harder for us to show our methods providing substantial gains since the dictionary will provide a higher base performance to begin with. However, it would be artificial to ignore dictionary resources when they exist.
We experiment with two translation models: hierarchical phrase-based translation (Chiang, 2007) and syntax augmented translation (Zollmann and Venugopal, 2006), both of which are implemented in the Joshua decoder (Li et al., 2009). We hereafter refer to these systems as jHier and jSyntax, respectively.
We will now present results of experiments with different methods for growing MT training data. The results are organized into three areas of investigations:
1. annotation costs; 2. managing uncertainty; and 3. how to automatically detect when to stop soliciting annotations from a pool of data.
Annotation Costs
We begin our cost investigations with four simple methods for growing MT training data: random, shortest, longest, and VocabGrowth sentence selection. The first three methods are self-explanatory. VocabGrowth (hereafter VG) selection is modeled after the best methods from previous work, which are based on preferring sentences that contain phrases that occur frequently in unlabeled data and infrequently in the so-far labeled data. Our VG method selects sentences for translation that contain n-grams (for n in {1,2,3,4}) that do not occur at all in our so-far labeled data. We call an n-gram "covered" if it occurs at least once in our so-far labeled data. VG has a preference for covering frequent n-grams before covering infrequent n-grams. The VG method is depicted in Figure 2 (a code sketch follows below).

1 LDC Catalog No.: LDC2006E110.

Init:
Go through all available training data (labeled and unlabeled) and obtain frequency counts for every n-gram (n in {1, 2, 3, 4}) that occurs.
sortedNGrams <- Sort n-grams by frequency in descending order.
Loop until stopping criterion (see Section 3.3) is met:
1. trigger <- Go down sortedNGrams list and find the first n-gram that isn't covered in the so-far labeled training data.
2. selectedSentence <- Find a sentence that contains trigger.
3. Remove selectedSentence from unlabeled data and add it to labeled training data.
End Loop

Figure 3 shows the learning curves for both jHier and jSyntax for VG selection and random selection. The y-axis measures BLEU score (Papineni et al., 2002), which is a fast automatic way of measuring translation quality that has been shown to correlate with human judgments and is perhaps the most widely used metric in the MT community. The x-axis measures the number of sentence translation pairs in the training data. The VG curves are cut off at the point at which the stopping criterion in Section 3.3 is met. From Figure 3 it might appear that VG selection is better than random selection, achieving higher-performing systems with fewer translations in the labeled data.
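To make the Figure 2 procedure concrete, here is a minimal Python sketch of VG selection. This is our own illustration, not the authors' released code; it assumes the labeled and unlabeled pools are lists of token lists on the foreign side.

from collections import Counter

def ngrams(tokens, n_max=4):
    """Yield every n-gram (n = 1..n_max) in a token sequence."""
    for n in range(1, n_max + 1):
        for i in range(len(tokens) - n + 1):
            yield tuple(tokens[i:i + n])

def vg_select(labeled, unlabeled):
    """VocabGrowth: repeatedly pick a sentence containing the most
    frequent n-gram not yet covered by the labeled data."""
    # Frequency counts over ALL available data, labeled and unlabeled.
    counts = Counter(g for sent in labeled + unlabeled for g in ngrams(sent))
    sorted_ngrams = [g for g, _ in counts.most_common()]
    covered = {g for sent in labeled for g in ngrams(sent)}
    while unlabeled:
        # Trigger: the most frequent n-gram that is not yet covered.
        trigger = next((g for g in sorted_ngrams if g not in covered), None)
        if trigger is None:
            break  # stopping criterion: every n-gram is covered (Sec. 3.3)
        selected = next((s for s in unlabeled if trigger in set(ngrams(s))), None)
        if selected is None:  # defensive: trigger occurs only in labeled data
            covered.add(trigger)
            continue
        unlabeled.remove(selected)
        labeled.append(selected)
        covered.update(ngrams(selected))
        yield trigger, selected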
However, it is important to take care when measuring annotation costs (especially for relatively complicated tasks such as translation). Figure 4 shows the learning curves for the same systems and selection methods as in Figure 3 but now the x-axis measures the number of foreign words in the training data. The difference between VG and random selection now appears smaller.
For an extreme case, to illustrate the ramifications of measuring translation annotation cost by # of sentences versus # of words, consider Figures 5 and 6. They both show the same three selection methods, but Figure 5 measures the x-axis by # of sentences and Figure 6 measures by # of words. In Figure 5, one would conclude that shortest is a far inferior selection method to longest, but in Figure 6 one would conclude the opposite.
Measuring annotation time and cost in dollars are probably the most important measures of annotation cost. We can't measure these for the simulated experiments, but we will use time (in seconds) and money (in US dollars) as cost measures in Section 5, which discusses our non-simulated AL experiments. If # sentences or # words tracked these other, more relevant costs in predictable, known relationships, then it would suffice to measure # sentences or # words instead. But it's clear that different sentences can have very different annotation time requirements according to how long and complicated they are, so we will not use # sentences as an annotation cost any more. It is not as clear how # words tracks with annotation time. In Section 5 we will present evidence showing that time per word can vary considerably, and also show a method for soliciting annotations that reduces time per word by nearly a factor of three.
As it is prudent to evaluate using accurate cost accounting, so it is also prudent to develop new AL algorithms that take costs carefully into account. Hence, reducing annotation time burdens instead of the # of sentences translated (which might be quite a different thing) will be a cornerstone of the algorithm we describe in Section 4.
Managing Uncertainty
One of the most successful of all AL methods developed to date is uncertainty sampling, and it has been applied successfully many times (e.g., Lewis and Gale, 1994; Tong and Koller, 2002). The intuition is clear: much can be learned (potentially) if there is great uncertainty. However, with MT being a relatively complicated task (compared with binary classification, for example), it might be the case that the uncertainty approach has to be reconsidered. If words have never occurred in the training data, then uncertainty can be expected to be high. But we are concerned that if a sentence is translated for which (almost) no words have been seen in training yet, though uncertainty will be high (which is usually considered good for AL), the word alignments may be incorrect and then subsequent learning from that translation pair will be severely hampered.
We tested this hypothesis, and Figure 7 shows empirical evidence that it is true. Along with VG, two other selection methods' learning curves are charted in Figure 7: mostNew, which prefers to select those sentences which have the largest # of unseen words in them; and moderateNew, which aims to prefer sentences that have a moderate # of unseen words, preferring sentences with ≈ ten unknown words in them. One can see that mostNew underperforms VG. This could have been due to VG's frequency component, which mostNew doesn't have. But moderateNew also doesn't have a frequency preference, so it is likely that mostNew winds up overwhelming the MT training system, word alignments are incorrect, and less is learned as a result. In light of this, the algorithm we develop in Section 4 will be designed to avoid this word alignment danger.
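As a sketch (function names are ours), the two heuristics can be written as follows, where known_vocab is assumed to be the set of foreign words seen in the so-far labeled data:

def unseen_count(sentence, known_vocab):
    """Number of word types in the sentence never seen in labeled data."""
    return sum(1 for w in set(sentence) if w not in known_vocab)

def most_new(unlabeled, known_vocab):
    """mostNew: pick the sentence with the largest # of unseen words."""
    return max(unlabeled, key=lambda s: unseen_count(s, known_vocab))

def moderate_new(unlabeled, known_vocab, target=10):
    """moderateNew: prefer sentences with a moderate # of unseen words
    (roughly ten unknown words per sentence, as in Section 3.2)."""
    return min(unlabeled, key=lambda s: abs(unseen_count(s, known_vocab) - target))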
Automatic Stopping
The problem of automatically detecting when to stop AL is a substantial one, discussed at length in the literature (e.g., Bloodgood and Vijay-Shanker, 2009a; Schohn and Cohn, 2000; Vlachos, 2008). In our simulation, we stop VG once all n-grams (n in {1,2,3,4}) have been covered. Though simple, this stopping criterion seems to work well, as can be seen by where the curve for VG is cut off in Figures 3 and 4. It stops after 1,293,093 words have been translated, with jHier's BLEU=21.92 and jSyntax's BLEU=26.10 at the stopping point. The ending BLEU scores (with the full corpus annotated) are 21.87 and 26.01 for jHier and jSyntax, respectively. So our stopping criterion saves 22.3% of the annotation (in terms of words) and actually achieves slightly higher BLEU scores than if all the data were used. Note: this "less is more" phenomenon has been commonly observed in AL settings (e.g., Bloodgood and Vijay-Shanker, 2009a; Schohn and Cohn, 2000).
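Reusing ngrams() from the VG sketch in Section 3.1, this coverage-based stopping check is a one-liner (our illustration, not the authors' code):

def should_stop(labeled, pool_ngrams):
    """Stop soliciting annotations once every n-gram (n = 1..4) that
    occurs in the data pool is covered by the so-far labeled data."""
    covered = {g for sent in labeled for g in ngrams(sent)}
    return pool_ngrams <= covered  # set inclusion: all n-grams covered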
Highlighted N-Gram Method
In this section we describe a method for soliciting human translations that we have applied successfully to improving translation quality in real (not simulated) conditions. We call the method the Highlighted N-Gram method, or HNG, for short. HNG solicits translations only for trigger n-grams and not for entire sentences. We provide sentential context, highlight the trigger n-gram that we want translated, and ask for a translation of just the highlighted trigger n-gram. HNG asks for translations for triggers in the same order that the triggers are encountered by the algorithm in Figure 2. A screenshot of our interface is depicted in Figure 8. The same stopping criterion is used as was used in the last section. When the stopping criterion becomes true, it is time to tap a new unlabeled pool of foreign text, if available.
Our motivations for soliciting translations for only parts of sentences are twofold, corresponding to two possible cases. Case one is that a translation model learned from the so-far labeled data will be able to translate most of the non-trigger words in the sentence correctly. Thus, by asking a human to translate only the trigger words, we avoid wasting human translation effort. (We will show in the next section that we even get a much larger speedup above and beyond what the reduction in number of translated words would give us.) Case two is that a translation model learned from the so-far labeled data will (in addition to not being able to translate the trigger words correctly) also not be able to translate most of the non-trigger words correctly. One might think then that this would be a great sentence to have translated because the machine can potentially learn a lot from the translation. Indeed, one of the overarching themes of AL research is to query examples where uncertainty is greatest. But, as we showed evidence for in the last section, for the case of SMT, too much uncertainty could in a sense overwhelm the machine and it might be better to provide new training data in a more gradual manner. A sentence with large #s of unseen words is likely to get word-aligned incorrectly, and then learning from that translation could be hampered. By asking for a translation of only the trigger words, we expect to be able to circumvent this problem in large part. The next section presents the results of experiments that show that the HNG algorithm is indeed practically effective. Also, the next section analyzes results regarding various aspects of HNG's behavior in more depth.
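A minimal sketch of HNG posting generation, reusing ngrams() from the VG sketch in Section 3.1. The posting dictionary format is our own naming, and the 60-word context cap reflects the restriction mentioned in footnote 2:

from collections import Counter

def hng_postings(labeled, unlabeled, max_len=60):
    """Yield translation requests for trigger n-grams only, in the same
    trigger order as the VG loop of Figure 2; the worker sees the full
    sentence with the trigger highlighted but translates just the trigger."""
    counts = Counter(g for sent in labeled + unlabeled for g in ngrams(sent))
    covered = {g for sent in labeled for g in ngrams(sent)}
    for trigger, _ in counts.most_common():
        if trigger in covered:
            continue
        # Context: any unlabeled sentence containing the trigger.
        context = next((s for s in unlabeled
                        if len(s) <= max_len and trigger in set(ngrams(s))), None)
        if context is None:
            continue
        covered.add(trigger)  # only the trigger itself gets translated/covered
        yield {"context": " ".join(context), "highlight": " ".join(trigger)}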
! " # $ % " & ' ( ) ' * + , - . / 0 ) 1 2 3 4 5 6 7 8 9 : - ! ! " # $ % $ & ' $ & ( ) * + ; < = ' $ > / ? @ 3 / A > . + B ! C D ) C E F G H ? I ' 3 " ) D ) + 0 ) + & . " J & " J & " $ K $ ! 1 2 L ) M 8 ' : ? 3 N ! O # ) P & G Q 6 - ' & R 7 @ * / & S T & S T & ! 9 , 8 U V ) W X ' 8 , " * ) - ! . ( / 0 . " 2 3 4 ! . C 2 3 4 ! D # 8 E Y ) . < 3 ' 8 M H 3 G : Z ! " - [ $ % ' 8 R 3 \ 5 # ) T = 5 # ) ] ' 3 E & > ' # ) P 8 > < & . : S _ < * ' ( * C + & + : Z ' * / $ > a U H $ G X " & 5 , - . b ' 8 " $ c 9 * S _ / & < * d H # $ ! < ) + & ? e ( @ ) f e 3 < g : # 1 . 2 # 1 . 2 " ( : Z . < * e @ * ' : ) K ) C + ) E # ) ' * + $ / H 0 ) + & < * G : I ' 3 . 4 5 ' ' & ' ) C ' , $ % " & 5 # : 6 8 + $ . ! 1 ' ) ' ( , ) 6 7 $ ! 8 ' ) 9 G Q ) P I ' & U I ' ) . h X + & ! . C ! 1 ' $ " i 3 ! " - ! f " ( : Z ' : ) K ) / H 0 ) ' .
Experiments and Discussion
General Setup
We set out to see whether we could use the HNG method to achieve translation quality improvements by gathering additional translations to add to the training data of the entire LDC language pack, including its dictionary. In particular, we wanted to see if we could achieve translation improvements on top of already state-of-the-art performing systems trained already on the entire LDC corpus. Note that at the outset this is an ambitious endeavor (recall the flattening of the curves in Figure 1 from Section 1). Snow et al. (2008) explored the use of the Amazon Mechanical Turk (MTurk) web service for gathering annotations for a variety of natural language processing tasks, and recently MTurk has been shown to be a quick, cost-effective way to gather Urdu-English translations (Bloodgood and Callison-Burch, 2010). We used the MTurk web service to gather our annotations. Specifically, we first crawled a large set of BBC articles on the internet in Urdu and used this as our unlabeled pool from which to gather annotations. We applied the HNG method from Section 4 to determine what to post on MTurk for workers to translate.2 We gathered 20,580 n-gram translations for which we paid $0.01 USD per translation, giving us a total cost of $205.80 USD. We also gathered 1,632 randomly chosen Urdu sentence translations as a control set, for which we paid $0.10 USD per sentence translation.3
Accounting for Translation Time
MTurk returns with each assignment the "WorkTimeInSeconds." This is the amount of time between when a worker accepts an assignment and when the worker submits the completed assignment. We use this value to estimate annotation times.4 Figure 9 shows HNG collection versus random collection from MTurk. The x-axis measures the number of seconds of annotation time. Note that HNG is more effective. A result that may be particularly interesting is that HNG results in a time speedup by more than just the reduction in translated words would indicate. The average time to translate a word of Urdu with the sentence postings to MTurk was 32.92 seconds. The average time to translate a word with the HNG postings to MTurk was 11.98 seconds. This is nearly three times faster. Figure 10 shows the distribution of speeds (in seconds per word) for HNG postings versus complete sentence postings. Note that the HNG postings consistently result in faster translation speeds than the sentence postings.5
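The per-word speeds above come directly from the MTurk metadata. A hypothetical sketch (the "urdu_text" key is our own naming; WorkTimeInSeconds is the field the paper describes MTurk returning):

import statistics

def seconds_per_word(assignments):
    """Translation speed per assignment: WorkTimeInSeconds divided by
    the number of Urdu words in the posted text."""
    return [a["WorkTimeInSeconds"] / len(a["urdu_text"].split())
            for a in assignments]

# Averages reported in this section:
#   statistics.mean(seconds_per_word(sentence_assignments))  ~ 32.92 s/word
#   statistics.mean(seconds_per_word(hng_assignments))       ~ 11.98 s/word
# A median would be more robust to the slow outliers noted in footnote 5.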
We hypothesize that this speedup comes about because when translating a full sentence, there is the time required to examine each word and translate it in some sense (even if not one-to-one), and then there is significant extra overhead time to put it all together and synthesize a larger sentence translation. The factor-of-three speedup is evidence that this overhead is significant effort compared to just quickly translating short n-grams from a sentence. This speedup is an additional benefit of the HNG approach.
Bucking the Trend
We gathered translations for ≈ 54,500 Urdu words via the use of HNG on MTurk. This is a relatively small amount, ≈ 3% of the LDC corpus. Figure 11 shows the performance when we add this training data to the LDC corpus. The rectangle around the last 700,000 words of the LDC data is wide and short (it has a height of 0.9 BLEU points and a width of 700,000 words) but the rectangle around the newly added translations is narrow and tall (a height of 1 BLEU point and a width of 54,500 words). Visually, it appears we are succeeding in bucking the trend of diminishing returns. We further confirmed this by running a least-squares linear regression on the points of the last 700,000 words annotated in the LDC data and also for the points in the new data that we acquired via MTurk for $205.80 USD. We find that the slope fit to our new data is 6.6245E-06 BLEU points per Urdu word, or 6.6245 BLEU points for a million Urdu words. The slope fit to the LDC data is only 7.4957E-07 BLEU points per word, or only 0.74957 BLEU points for a million words. This is already an order of magnitude difference that would make the difference between it being worth adding more data and not being worth it; and this is leaving aside the added time speedup that our method enjoys. Still, we wondered why we could not have raised BLEU scores even faster. The main hurdle seems to be one of coverage. Of the 20,580 n-grams we collected, only 571 (i.e., 2.77%) of them ever even occur in the test set.

4 It's imperfect because of network delays and if a person is multitasking or pausing between their accept and submit times. Nonetheless, the times ought to be better estimates as they are taken over larger samples.

5 The average speed for the HNG postings seems to be slower than the histogram indicates. This is because there were a few extremely slow outlier speeds for a handful of HNG postings. These are almost certainly not cases when the turker is working continuously on the task, and so the average speed we computed for the HNG postings might be slower than the actual speed, and hence the true speedup may even be faster than indicated by the difference between the average speeds we reported.
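The slope comparison above is an ordinary least-squares fit of BLEU against the number of annotated foreign words; a sketch with NumPy:

import numpy as np

def bleu_slope(words_annotated, bleu_scores):
    """Least-squares slope: BLEU points gained per annotated Urdu word."""
    slope, _intercept = np.polyfit(words_annotated, bleu_scores, deg=1)
    return slope

# Fits reported above (values from the paper, not recomputed here):
#   slope over the newly HNG-selected data ~ 6.6245e-06 BLEU/word
#   slope over the last 700k LDC words     ~ 7.4957e-07 BLEU/word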
Beyond BLEU Scores
BLEU is an imperfect metric (Callison-Burch et al., 2006). One reason is that it rates all n-gram mismatches equally, although some are much more important than others. Another reason is that it's not intuitive what a gain of x BLEU points means in practice. Here we show some concrete example translations to show the types of improvements we're achieving, and also some examples which suggest improvements we can make to our AL selection algorithm in the future.

Figure 12 shows a prototypical example of our system working. Figure 13 shows an example where the strategy is working partially but not as well as it might. The Urdu phrase was translated by turkers as "gowned veil". However, since the word aligner just aligns the word to "gowned", we only see "gowned" in our output. This prompts a number of discussion points. First, the 'after' system has better translations but they're not rewarded by BLEU scores because the references use the words 'burqah' or just 'veil' without 'gowned'. Second, we hypothesize that we may be able to see improvements by overriding the automatic alignment software whenever we obtain a many-to-one or one-to-many (in terms of words) translation for one of our trigger phrases. In such cases, we'd like to make sure that every word on the 'many' side is aligned to the single word on the 'one' side. For example, we would force both 'gowned' and 'veil' to be aligned to the single Urdu word instead of allowing the automatic aligner to only align 'gowned'.

Figure 14 shows an example where our "before" system already got the translation correct without the need for the additional phrase translation. This is because though the "before" system had never seen the Urdu expression for "12 May", it had seen the Urdu words for "12" and "May" in isolation and was able to successfully compose them. An area of future work is to use the "before" system to determine such cases automatically and avoid asking humans to provide translations in such cases.
Conclusions and Future Work
We succeeded in bucking the trend of diminishing returns, improving translation quality while keeping annotation costs low. In future work we would like to apply these ideas to domain adaptation (say, adapting a general-purpose MT system to a scientific domain such as chemistry). We would also like to test with more languages, increase the amount of data we can gather, and investigate stopping criteria further. Finally, we would like to investigate increasing the efficiency of the selection algorithm by addressing issues such as the one raised by the 12 May example presented earlier.
Figure 1: Syntax-based and Hierarchical Phrase-Based MT systems' learning curves on the LDC Urdu-English language pack. The x-axis measures the number of sentence pairs in the training data.
Figure 2: The VG sentence selection algorithm.
Figure 3: Random vs VG selection. The x-axis measures the number of sentence pairs in the training data. The y-axis measures BLEU score.
Figure 4: Random vs VG selection. The x-axis measures the number of foreign words in the training data. The y-axis measures BLEU score.
Figure 5: Random vs Shortest vs Longest selection. The x-axis measures the number of sentence pairs in the training data. The y-axis measures BLEU score.
Figure 6: Random vs Shortest vs Longest selection. The x-axis measures the number of foreign words in the training data. The y-axis measures BLEU score.
Figure 7: VG vs MostNew vs ModerateNew selection. The x-axis measures the number of sentence pairs in the training data. The y-axis measures BLEU score.
Figure 8: Screenshot of the interface we used for soliciting translations for triggers.
Figure 9: HNG vs Random collection of new data via MTurk. The y-axis measures BLEU. The x-axis measures annotation time in seconds.
Figure 10: Distribution of translation speeds (in seconds per word) for HNG postings versus complete sentence postings. The y-axis measures relative frequency. The x-axis measures translation speed in seconds per word (so farther to the left is faster).
Figure 11: Bucking the trend: performance of HNG-selected additional data from BBC web crawl data annotated via Amazon Mechanical Turk. The y-axis measures BLEU. The x-axis measures number of words annotated.
Figure 12: Example of strategy working.
Figure 13: Example showing where we can improve our selection strategy.
Figure 14: Example showing where we can improve our selection strategy.
[Figure 11 plot: "Bucking the Trend: JHiero Translation Quality versus Number of Foreign Words Annotated"; y-axis: BLEU Score; x-axis: Number of Foreign Words Annotated. In-plot annotations mark the approx. 54,500 selectively sampled foreign words (cost = $205.80) and the last approx. 700,000 foreign words annotated in the LDC data.]
2 For practical reasons we restricted ourselves to not considering sentences that were longer than 60 Urdu words, however.

3 The prices we paid were not market-driven. We just chose prices we thought were reasonable. In hindsight, given how much quicker the phrase translations are for people, we could have had a greater disparity in price.
Acknowledgements

This work was supported by the Johns Hopkins University Human Language Technology Center of Excellence. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsor.
References

Vamshi Ambati, Stephan Vogel, and Jaime Carbonell. 2010. Active learning and crowd-sourcing for machine translation. In Proceedings of the Seventh Conference on International Language Resources and Evaluation (LREC'10), Valletta, Malta, May. European Language Resources Association (ELRA).

Michele Banko and Eric Brill. 2001. Scaling to very very large corpora for natural language disambiguation. In Proceedings of 39th Annual Meeting of the Association for Computational Linguistics, pages 26-33, Toulouse, France, July. Association for Computational Linguistics.

Michael Bloodgood and Chris Callison-Burch. 2010. Using Mechanical Turk to build machine translation evaluation sets. In Proceedings of the Workshop on Creating Speech and Language Data With Amazon's Mechanical Turk, Los Angeles, California, June. Association for Computational Linguistics.

Michael Bloodgood and K Vijay-Shanker. 2008. An approach to reducing annotation costs for BioNLP. In Proceedings of the Workshop on Current Trends in Biomedical Natural Language Processing, pages 104-105, Columbus, Ohio, June. Association for Computational Linguistics.

Michael Bloodgood and K Vijay-Shanker. 2009a. A method for stopping active learning based on stabilizing predictions and the need for user-adjustable stopping. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning (CoNLL-2009), pages 39-47, Boulder, Colorado, June. Association for Computational Linguistics.

Michael Bloodgood and K Vijay-Shanker. 2009b. Taking into account the differences between actively and passively acquired data: The case of active learning with support vector machines for imbalanced datasets. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), pages 137-140, Boulder, Colorado, June. Association for Computational Linguistics.

Chris Callison-Burch, Miles Osborne, and Philipp Koehn. 2006. Re-evaluating the role of Bleu in machine translation research. In 11th Conference of the European Chapter of the Association for Computational Linguistics (EACL-2006), Trento, Italy.

David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201-228.

Ben Hachey, Beatrice Alex, and Markus Becker. 2005. Investigating the effects of selective sampling on the annotation task. In Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005), pages 144-151, Ann Arbor, Michigan, June. Association for Computational Linguistics.

Robbie Haertel, Eric Ringger, Kevin Seppi, James Carroll, and Peter McClanahan. 2008. Assessing the costs of sampling methods in active learning for annotation. In Proceedings of ACL-08: HLT, Short Papers, pages 65-68, Columbus, Ohio, June. Association for Computational Linguistics.

Gholamreza Haffari and Anoop Sarkar. 2009. Active learning for multilingual statistical machine translation. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 181-189, Suntec, Singapore, August. Association for Computational Linguistics.

Gholamreza Haffari, Maxim Roy, and Anoop Sarkar. 2009. Active learning for statistical phrase-based machine translation. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 415-423, Boulder, Colorado, June. Association for Computational Linguistics.

Rebecca Hwa. 2000. Sample selection for statistical grammar induction. In Hinrich Schütze and Keh-Yih Su, editors, Proceedings of the 2000 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing, pages 45-53. Association for Computational Linguistics, Somerset, New Jersey.

Ashish Kapoor, Eric Horvitz, and Sumit Basu. 2007. Selective supervision: Guiding supervised learning with decision-theoretic active learning. In Manuela M. Veloso, editor, IJCAI 2007, Proceedings of the 20th International Joint Conference on Artificial Intelligence, Hyderabad, India, January 6-12, 2007, pages 877-882.

Ross D. King, Kenneth E. Whelan, Ffion M. Jones, Philip G. K. Reiser, Christopher H. Bryant, Stephen H. Muggleton, Douglas B. Kell, and Stephen G. Oliver. 2004. Functional genomic hypothesis generation and experimentation by a robot scientist. Nature, 427:247-252, 15 January.

David D. Lewis and William A. Gale. 1994. A sequential algorithm for training text classifiers. In SIGIR '94: Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 3-12, New York, NY, USA. Springer-Verlag New York, Inc.

Zhifei Li, Chris Callison-Burch, Chris Dyer, Juri Ganitkevitch, Sanjeev Khudanpur, Lane Schwartz, Wren Thornton, Jonathan Weese, and Omar Zaidan. 2009. Joshua: An open source toolkit for parsing-based machine translation. In Proceedings of the Fourth Workshop on Statistical Machine Translation, pages 135-139, Athens, Greece, March. Association for Computational Linguistics.

Francois Mairesse, Milica Gasic, Filip Jurcicek, Simon Keizer, Jorge Prombonas, Blaise Thomson, Kai Yu, and Steve Young. 2010. Phrase-based statistical language generation using graphical models and active learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL), Uppsala, Sweden, July. Association for Computational Linguistics.

Grace Ngai and David Yarowsky. 2000. Rule writing or annotation: cost-efficient resource usage for base noun phrase chunking. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics.

Miles Osborne and Jason Baldridge. 2004. Ensemble-based active learning for parse selection. In Daniel Marcu, Susan Dumais, and Salim Roukos, editors, HLT-NAACL 2004: Main Proceedings, pages 89-96, Boston, Massachusetts, USA, May 2 - May 7. Association for Computational Linguistics.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA, July. Association for Computational Linguistics.

Manabu Sassano. 2002. An empirical study of active learning with support vector machines for Japanese word segmentation. In ACL '02: Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 505-512, Morristown, NJ, USA. Association for Computational Linguistics.

Greg Schohn and David Cohn. 2000. Less is more: Active learning with support vector machines. In Proc. 17th International Conf. on Machine Learning, pages 839-846. Morgan Kaufmann, San Francisco, CA.

Rion Snow, Brendan O'Connor, Daniel Jurafsky, and Andrew Ng. 2008. Cheap and fast - but is it good? Evaluating non-expert annotations for natural language tasks. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 254-263, Honolulu, Hawaii, October. Association for Computational Linguistics.

Katrin Tomanek and Udo Hahn. 2009. Semi-supervised active learning for sequence labeling. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 1039-1047, Suntec, Singapore, August. Association for Computational Linguistics.

Simon Tong and Daphne Koller. 2002. Support vector machine active learning with applications to text classification. Journal of Machine Learning Research (JMLR), 2:45-66.

David Vickrey, Oscar Kipersztok, and Daphne Koller. 2010. An active learning approach to finding related terms. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL), Uppsala, Sweden, July. Association for Computational Linguistics.

Andreas Vlachos. 2008. A stopping criterion for active learning. Computer Speech and Language, 22(3):295-312.

Andreas Zollmann and Ashish Venugopal. 2006. Syntax augmented machine translation via chart parsing. In Proceedings of the NAACL-2006 Workshop on Statistical Machine Translation (WMT06), New York, New York.
[
"Political Ideology and Polarization: A Multi-dimensional Approach",
"Political Ideology and Polarization: A Multi-dimensional Approach"
] | [
"Barea Sinno barea.sinno@gmail.com \nPolitical Science\nRutgers University\n\n",
"Bernardo Oviedo bernyoviedo@utexas.edu \nComputer Science\n\n",
"Katherine Atwell \nComputer Science\nUniversity of Pittsburgh\n\n",
"Malihe Alikhani malihe@pitt.edu \nLinguistics\nThe University of Texas at Austin\n\n\nComputer Science\nUniversity of Pittsburgh\n\n",
"Junyi Jessy Li "
] | [
"Political Science\nRutgers University\n",
"Computer Science\n",
"Computer Science\nUniversity of Pittsburgh\n",
"Linguistics\nThe University of Texas at Austin\n",
"Computer Science\nUniversity of Pittsburgh\n"
] | [] | Analyzing ideology and polarization is of critical importance in advancing our grasp of modern politics. Recent research has made great strides towards understanding the ideological bias (i.e., stance) of news media along the leftright spectrum. In this work, we instead take a novel and more nuanced approach for the study of ideology based on its left or right positions on the issue being discussed. Aligned with the theoretical accounts in political science, we treat ideology as a multi-dimensional construct, and introduce the first diachronic dataset of news articles whose ideological positions are annotated by trained political scientists and linguists at the paragraph level. We showcase that, by controlling for the author's stance, our method allows for the quantitative and temporal measurement and analysis of polarization as a multidimensional ideological distance. We further present baseline models for ideology prediction, outlining a challenging task distinct from stance detection. | null | [
"https://arxiv.org/pdf/2106.14387v2.pdf"
] | 235,659,085 | 2106.14387 | c831c63d139405d90e3847647d56ef00c90dec25 |
Political Ideology and Polarization: A Multi-dimensional Approach
Barea Sinno barea.sinno@gmail.com
Political Science
Rutgers University
Bernardo Oviedo bernyoviedo@utexas.edu
Computer Science
Katherine Atwell
Computer Science
University of Pittsburgh
Malihe Alikhani malihe@pitt.edu
Linguistics
The University of Texas at Austin
Computer Science
University of Pittsburgh
Junyi Jessy Li
Political Ideology and Polarization: A Multi-dimensional Approach
Analyzing ideology and polarization is of critical importance in advancing our grasp of modern politics. Recent research has made great strides towards understanding the ideological bias (i.e., stance) of news media along the left-right spectrum. In this work, we instead take a novel and more nuanced approach for the study of ideology based on its left or right positions on the issue being discussed. Aligned with the theoretical accounts in political science, we treat ideology as a multi-dimensional construct, and introduce the first diachronic dataset of news articles whose ideological positions are annotated by trained political scientists and linguists at the paragraph level. We showcase that, by controlling for the author's stance, our method allows for the quantitative and temporal measurement and analysis of polarization as a multidimensional ideological distance. We further present baseline models for ideology prediction, outlining a challenging task distinct from stance detection.
Introduction
Political ideology rests on a set of beliefs about the proper order of a society and ways to achieve this order (Jost et al., 2009; Adorno et al., 2019; Campbell et al., 1980). In Western politics, these worldviews translate into a multi-dimensional construct that includes: equal opportunity as opposed to economic individualism; general respect for tradition, hierarchy and stability as opposed to advocating for social change; and a belief in the un/fairness and in/efficiency of markets (Jost et al., 2009).
The divergence in ideology, i.e., polarization, is the undercurrent of propaganda and misinformation (Vicario et al., 2019; Bessi et al., 2016; Stanley, 2015). It can congest essential democratic functions with an increase in the divergence of political ideologies. Defined as a growing ideological distance between groups, polarization has waxed and waned since the advent of the American Republic (Pierson and Schickler, 2020).1 Two eras, post-1896 and post-1990s, have witnessed deleterious degrees of polarization (Jenkins et al., 2004; Jensen et al., 2012). More recently, COVID-19, the murder of George Floyd, and the Capitol riots have exposed ideological divergences in opinion in the US through news media and social media. With the hope of advancing our grasp of modern politics, we study ideology and polarization through the lens of computational linguistics by presenting a carefully annotated corpus and examining the efficacy of a set of computational and statistical analyses.

* Equal contribution ordered by first name.

Two dimensions: trade and economic liberalism
The U.S. aim is to create a monetary system with enough flexibility to prevent bargain-hungry money from rolling around the world like loose ballast on a ship disrupting normal trade and currency flows. Nixon goals: dollar, trade stability. This must be accompanied, Washington says, by reduction of [trade] barriers ...
One dimension: trade protectionism
The controls program, which Mr. Nixon inaugurated Aug. 15, 1971, has helped to reduce inflation to about 3 percent yearly, and to boost annual U.S. economic growth to more than 7 percent...
Table 1: Excerpts from news article #730567 in COHA (Davies, 2012). The first paragraph advocates for liberalism and the reduction of trade barriers. It also has a domestic economic dimension. The second paragraph, on the contrary, advocates for protectionism and a domestic controls program.
In contrast to studying the bias or the stance of the author of the text via linguistic framing (Kulkarni et al., 2018; Kiesel et al., 2019; Baly et al., 2019, 2020; Chen et al., 2020; Stefanov et al., 2020), we study the little explored angle that is nonetheless critical in political science research: ideology of the issue (e.g., policy or concept) under discussion. That is, in lieu of examining the author's stance, we focus on addressing the at-issue content of the text and the ideology that it represents in the implicit social context. The nuanced co-existence of stance and ideology can be illustrated in the following excerpt:
"Republicans and Joe Biden are making a huge mistake by focusing on cost. The implication is that government-run health care would be a good thing-a wonderful thing!-if only we could afford it." (The Federalist, 9/27/2019) The author is attacking a liberal social and economic policy; therefore, the ideology being discussed is liberal on two dimensions-social and economic, while the author's stance is conservative. Moreover, our novel approach acknowledges that ideology can also vary within one article. In Table 1, we show an example in which one part of an article advocates for trade liberalism, while another advocates for protectionism.
Together, author stance and ideology inform us not only that there is bias in the media, but also which beliefs are being supported and/or attacked. A full analysis of polarization (that reflects a growing distance of political ideology over time) can then be derived if diachronic data for both author stance and ideology were available. However, while there has been data for the former (with articles from recent years only) (Kiesel et al., 2019), to date, there has been no temporal data on the latter.
In this paper, we present a multi-dimensional framework, and an annotated, diachronic, stance-neutral corpus, for the analysis of ideology in text. This allows us to study polarization as a state of ideological groups with divergent positions on a political issue as well as polarization as a process whose magnitude grows over time (DiMaggio et al., 1996). We use proclaimed center, center-left and center-right media outlets who claim to be objective in order to focus exclusively and more objectively on the ideology of the issue being discussed, without the subjectivity of author stance annotation. We study ideology within every paragraph2 of an article and aim to answer the following question: which ideological dimension is present, and to which ideological position does it correspond on the liberal-conservative spectrum.
Our extensive annotation manual is developed by a political scientist, and the data then annotated by three linguists after an elaborate training phase (Section 3). After 150 hours of annotation, we present a dataset of 721 fully adjudicated annotated paragraphs, from 175 news articles and covering an average of 7.86 articles per year (excerpts shown in Tables 1, 2, and 3). These articles originate from 5 news outlets related to the US Federal Budget from 1947-1975, covering the center-left, center, center-right spectrum: Chicago Tribune (CT), Christian Science Monitor (CSM), the New York Times (NYT), Time Magazine (TM), and the Wall Street Journal (WSJ).
With this data, we reveal lexical insights on the language of ideology across the left-right spectrum and across dimensions. We observe that linguistic use even at word level can reveal the ideology behind liberal and conservative policies (Section 4). Our framework also enables fine-grained, quantitative analysis of polarization, which we demonstrate in Section 5. This type of analysis, if scaled up using accurate models for ideology prediction, has the potential to reveal impactful insights into the political context of our society as a whole.
Finally, we present baselines for the automatic identification of multi-dimensional ideology at the paragraph level (Section 6). We show that this is a challenging task, with our best baseline yielding an F measure of 0.55; exploring pretraining with existing data in news ideology/bias identification, we found that this task is distinct from, although correlated with, labels automatically derived from news outlets. We contribute our data and code at https://github.com/bernovie/political-polarization.
Setup
Many political scientists and political psychologists argue for the use of at least a bidimensional ideology for domestic politics that distinguishes between economic and social preferences (Carsey and Layman, 2006; Carmines et al., 2012; Feldman and Johnston, 2014).3 We start with these two dimensions while adding a third dimension, "Foreign", when the article tackles foreign issues. Specifically, our annotation task entails examining a news article and annotating each dimension (detailed below) along three levels (liberal, conservative, neutral) for each paragraph. The neutral level for every dimension is reserved for paragraphs related to a specific dimension that either (a) contain both conservative and liberal elements such that annotators were unable to ascertain an ideological position with confidence, or (b) do not portray any ideology. We additionally provide an irrelevant option if a dimension does not apply to the paragraph. The three dimensions are:

Social: While the (1) socially conservative aspect of this dimension is defined as respect for tradition, fear of threat and uncertainty, need for order and structure, concerns for personal and national security, and preference for conformity, its (2) socially liberal counterpart has been associated with a belief in the separation of church and state, and tolerance for uncertainty and change (Jost et al., 2009).

Economic: Similarly, while the (3) economically conservative aspect of this dimension refers to motivations to achieve social rewards, power, and prestige, such as deregulation of the economy, lower taxes, and privatization (i.e., being against deficit spending and advocating for a balanced budget), its (4) economically liberal counterpart refers to motivation for social justice and equality, such as issues related to higher taxes on rich individuals and businesses and more redistribution.

Foreign: After piloting the bidimensional approach on 300 articles, we find that using only 2 dimensions conflates two important aspects of ideology related to the domestic economy and foreign trade. Tariffs, import quotas, and other nontariff-based barriers to trade that are aimed at improving employment and the competitiveness of the US on the international market did not map well onto the bidimensional framework. After consulting several senior political scientists, we adopted a third dimension that deals with the markets as well as the relations of the US with the rest of the world.

3 ... morals, values and traits such as freedom, safety, harm, care, reciprocity, in-group loyalty, authority, and equality are formed. Since then, some scholars have used these traits to predict ideology, whereas others have attempted to understand what traits unite people with the same ideology. (3) Framing: frames are used in many ways in political science. They can refer to different ways scholars describe the same information or when scholars talk about different aspects of a single problem (Chong and Druckman, 2007).
With the annotated data, we demonstrate quantitative measures of polarization (Section 5) and introduce the modeling task (Section 6) of automatically identifying the ideology of the policy positions being discussed.
Data collection and annotation
Raw data Since polarization is a process that needs to be analyzed over time (DiMaggio et al., 1996), our annotated articles are sampled from a diachronic corpus of 1,749 news articles across nearly 3 decades (from 1947 till 1974). Articles in this corpus are from the political news articles of Desai et al. (2019) from the Corpus of Historical American English (COHA, Davies (2012)), covering years 1922-1986. These 1,749 articles are extracted such that: (1) they cover broad and politically relevant topics (ranging from education and health to the economy) but still share discussions related to the federal budget, to make our annotations tractable4; (2) they are balanced in the number of articles across 5 news outlets with center-left, central, and center-right ideology (cf. Section 5): Chicago Tribune (CT), Wall Street Journal (WSJ), Christian Science Monitor (CSM), the New York Times (NYT), and Time Magazine (TM). A detailed description of our curation process is in Appendix A.
The raw texts were not segmented into paragraphs, thus we used TopicTiling (Riedl and Biemann, 2012) for automatic segmentation. TopicTiling finds segment boundaries using LDA and thus identifies major subtopic changes within the same article. The segmentation resulted in articles with 1 to 6 paragraphs. The average number of paragraphs per article was 4.
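As a rough illustration of the idea behind TopicTiling (an intentional simplification, not the released implementation), boundaries can be placed at depth-score peaks of the topic-level coherence between adjacent sentence blocks; here each sentence is assumed to already have an LDA topic-count vector:

import numpy as np

def tile_boundaries(sent_topic_vecs, window=2):
    """Place segment boundaries where topical similarity between the
    blocks left/right of a gap dips (TextTiling-style depth scoring)."""
    if len(sent_topic_vecs) < 2:
        return []

    def cos(a, b):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b) / denom if denom else 0.0

    sims = []
    for gap in range(1, len(sent_topic_vecs)):
        left = np.sum(sent_topic_vecs[max(0, gap - window):gap], axis=0)
        right = np.sum(sent_topic_vecs[gap:gap + window], axis=0)
        sims.append(cos(left, right))
    sims = np.asarray(sims)

    # Depth of each valley relative to the highest peaks on either side
    # (a simplification; TextTiling uses the nearest local maxima).
    depths = np.array([(sims[:i + 1].max() - s) + (sims[i:].max() - s)
                       for i, s in enumerate(sims)])
    cutoff = depths.mean() - depths.std() / 2  # TextTiling-style cutoff
    return [i + 1 for i, d in enumerate(depths) if d > cutoff]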
Annotation process Our team (including a political science graduate student) developed an annotation protocol for expert annotators using the definitions in Section 2. The annotation process is independently reviewed by four political science professors from two universities in the US who are not authors of this paper; the research area of two of them is ideology and polarization in the US. We will release our full annotation interface, protocol, and procedure along with the data upon publication.

Two dimensions: socially and economically liberal
[McNamara] threw his full support today behind the Administration's drive against poverty. ... Mr. McNamara said: "It is the youth that we can expect to be the most immediate beneficiaries of the war on poverty." He said he was endorsing the "entire program" both as a citizen and as a member of the Cabinet. His endorsement came as his fellow Republicans in Congress continued to hammer away at parts of the Administration's antipoverty program. ...
Two dimensions: socially and economically conservative
The antipoverty program, the Republicans insisted, would undercut the authority of the Cabinet members by making Sargent Shriver a "poverty czar." "I don't see how you can lie down and be a doormat for this kind of operation."...
Table 2: Excerpts from article #723847 in COHA. Because the first paragraph calls for minimizing income inequality, it is socially liberal; and because advocating for such a program calls for a budgetary expenditure, it also has an economic liberal dimension. The second paragraph advocates for the exact opposites of the positions in the first paragraph. Therefore, it is socially and economically conservative. Sentences most relevant to these labels are highlighted.
We sampled on average 7.86 articles per year for annotation, for a total of 721 paragraphs across 175 articles. We divided the annotation task into two batches of 45 and 130 articles; the smaller batch was for training purposes.
In addition to the political science graduate student, we recruited three annotators, all of whom are recent Linguistics graduates in the US. The training sessions consisted of one general meeting (all annotators met) and six different one-on-one meetings (each annotator met with another annotator once). During initial training, the annotators were asked to highlight sentences based on which the annotation was performed.
After the annotations of this batch were finalized, the annotators met with the political science student to create ground truth labels in cases of disagreement. Then, the three annotators received the second batch and each article was annotated by 2 annotators. This annotation was composed of two stages to account for possible subjectivity. In stage 1, each annotator worked on a batch that overlapped with only one other annotator. In stage 2, the two annotators examined paragraphs on which they disagreed, and met with the third annotator acting as consolidator to adjudicate. Tables 1, 2 and 3 are examples of adjudicated annotations in the data.

[Zero dimension] ... The committee is holding public hearings on President Eisenhower's Economic Report, which he sent to Congress last week. The Secretary's [Humphrey] appearance before the group provided an opportunity for political exchanges. Senators Paul H. Douglas of Illinois, J. W. Fulbright of Arkansas, and Representative Wright Patman of Texas, all Democrats, were active in questioning Mr. Humphrey.

[One dimension: economically liberal] The Democrats asserted that the Administration's tax reduction program was loaded in favor of business enterprises and shareholders in industry and against the taxpayer in the lowest income brackets. ...

[One dimension: economically neutral] Senator Fulbright ... declare[d] that the problem was to expand consumption rather than production. ... "Production is the goose that lays the golden egg," Mr. Humphrey replied. "Payrolls make consumers."

Table 3: Excerpts from article #716033 in COHA. The first paragraph is void of ideology. In the second paragraph the topic is anti tax reduction on businesses, thus it is economically liberal. The third paragraph is simultaneously economically conservative and liberal because one speaker is advocating for decreasing tax on businesses and asserting that production gives an advantage to businesses, while the other is advocating for decreasing tax on the poor because they need the income and asserting that healthy businesses are the ones who pay salaries for the low-income-bracket worker.
Agreement To assess the inter-annotator agreement of stage 1, we report Krippendorff's α (Hayes and Krippendorff, 2007) for each dimension for the 135 articles after training and before any discussion/adjudication: economic (0.44), foreign (0.68), social (0.39). The agreements among annotators for the economic and foreign dimensions are moderate and substantial (Artstein and Poesio, 2008), respectively; for the social dimension, only 'fair' agreement was observed during annotation, so an additional discussion for each paragraph was then held. Afterwards, 25 more articles were independently annotated and assessed with an α of 0.53. Although the agreements were not perfect and reflect a degree of subjectivity in this task, all dimensional labels were adjudicated after discussions between annotators. In total, creating this dataset cost ∼150 hours of manual multi-dimensional labeling.

        #Docs  Econ.  Soc.  Fgn.  Total
CSM       37    115    63    82    260
CT        14     48    33    16     97
NYT       60    219   114   130    463
TM        52    134    60    89    283
WSJ       12     42    21    21     84
Total    175    558   291   338   1187

Table 4: Dimensional label counts across all 721 paragraphs in the adjudicated data (there can be multiple dimensions per paragraph).
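For reference, a minimal sketch of how agreement numbers like those above can be computed with the open-source krippendorff Python package; the label coding and the toy reliability matrix here are illustrative, not the actual annotation data:

```python
import numpy as np
import krippendorff

# Rows = annotators, columns = paragraphs for one dimension.
# Labels: -1 liberal, 0 neutral, 1 conservative; np.nan = not annotated.
reliability_data = np.array([
    [1, 0, -1, 1, np.nan, 0],   # annotator A (toy data)
    [1, 0, -1, 0, 1,      0],   # annotator B (toy data)
])
alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha: {alpha:.2f}")
```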
Qualitative analysis of text highlights For the 25 articles used in training, all annotators highlighted the sentences that are relevant to each dimension they annotated. This helped annotators focus on the sentences that drove their decisions, and provided insights into the language of ideology, which we discuss here. On average, 21%-54% of the sentences in a paragraph were highlighted. We found entities such as "President" and "Congress" were the most prevalent in the highlights, and they tackled social and economic issues combined. This is not surprising, as it suggests that when the media quotes or discusses the "President" and "Congress", they do so with reference to more complex policy issues. In contrast, individual congresspeople tackled mostly economic or social issues. This also is not surprising, as it suggests that individual congresspeople are more concerned with specific issues. Interestingly, "House" and "Senate" almost always figured more in social issues. This suggests that when news media speaks about a specific chamber, they do so associating this chamber with social issues. Finally, party affiliation was infrequent and was mostly associated with social issues.
Ideology analyses
The number of paragraphs per dimension in total is: Economic (558), Social (291), Foreign (338), across the 175 articles. In Table 4 we tabulate this for each of the news outlets. Figure 1 shows the dimensional label distributions per outlet for each dimension. Expectedly, the dimensional labels often diverge from the proclaimed ideology of the news outlet.
We also analyze the percentage of articles that contain at least one pair of paragraph labels that lean in different directions; for instance, a paragraph with a label of globalist (i.e., liberal) in the foreign dimension and another paragraph with a label of conservative or neutral in the fiscal dimension. The percentage of such articles is 78.3%. Out of these articles, we examine the average proportion of neutral, liberal, and conservative paragraph labels, and find neutral labels have the highest share (43.27%), followed by liberal (33.20%) and conservative (23.53%). In Figure 2 (right), we visualize the percentage of articles where two dimensional labels co-occur within the same article. The figure indicates that ideology varies frequently within an article, showing that a single article-level label will not be fine-grained enough to capture variances within an article.
In Figure 2 (left) we also show paragraph-level label co-occurrence. Unlike the article-level, the co-occurrences are less frequent and we are more likely to observe co-occurrences along the same side of ideology. Still, we see interesting nuances; for example, on both the paragraph and the article level, the economic dimension is often neutral, and this tends to co-occur with both liberal and conservative positions in other dimensions.
Lexical analysis To understand ways ideology is reflected in text, we also look into the top vocabulary that associates with conservative or liberal ideology. To do so, we train a logistic regression model for each dimension to predict whether a paragraph is labeled conservative or liberal on that dimension, using unigram frequency as features. In Table 5 we show the top most left-leaning (L) or right-leaning (R) vocabulary with their weights. The table intimately reproduces our annotation of ideology. For example, words like federal and Senator allude to the fact that the topic is at the federal level. The importance of education and labor to liberals is also evident in the economic and social dimensions in words like school, education, and wage. The importance of the topic of taxation and defense is evident in conservative ideology in words such as tax, business, missile, and force.
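A sketch of this lexical analysis using scikit-learn; variable names and the placeholder inputs are illustrative:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# paragraphs: texts labeled on one dimension; labels: 0 = liberal, 1 = conservative
vectorizer = CountVectorizer()                     # unigram frequency features
X = vectorizer.fit_transform(paragraphs)
clf = LogisticRegression(max_iter=1000).fit(X, labels)

vocab = vectorizer.get_feature_names_out()
weights = clf.coef_[0]
# Most positive weights lean conservative (R); most negative lean liberal (L).
top_right = sorted(zip(weights, vocab), reverse=True)[:10]
top_left = sorted(zip(weights, vocab))[:10]
```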
Polarization
In this section, we demonstrate how our framework can be used to analyze ideological polarization quantitatively. To say that two groups are polarized is to say that they are moving to opposite ends of an issue on the multi-dimensional ideological spectrum while, at the same time, their respective political views on ideological issues converge within a group, i.e., social liberals also become economically liberal (Fiorina and Abrams, 2008). In political science, when ideology is multi-dimensional, polarization is often quantified by considering three measures that capture complementary aspects (Lelkes, 2016): (1) sorting (Abramowitz and Saunders, 1998), the extent to which the annotated ideology deviates from an outlet's proclaimed ideological bias; (2) issue constraint (Baldassarri and Gelman, 2008), a correlational analysis between pairs of ideological dimensions; (3) ideological divergence (Fiorina and Abrams, 2008), the magnitude of the distance between two groups along a single dimension. Together these measures describe changes in the ideological environment over time: a concurrent increase in all three measures indicates polarization in media. Limitations: We use only the fully adjudicated data and refrain from using model predictions, since our baseline experiments in Section 6 show that predicting ideology is challenging. Hence, the analyses are demonstrations of what our framework enables, which we discuss at the end of this section, and the conclusions are drawn for our annotated articles only. We group our data into four-year periods to reduce sparsity.
Measure 1: Sorting We adapt the sorting principle of Abramowitz and Saunders (1998) to our data and investigate the difference between the proclaimed ideological bias of a news outlet and the ideology of annotated articles from the outlet. To obtain the bias B_j of a news outlet j, we average the ratings of each news outlet across common sites that rate media bias (Adfontes, Allsides, and MBFC), yielding: CSM (-0.07), CT (0.15), NYT (-0.36), TM (-0.4), WSJ (0.32) (c.f. Table 8 in Appendix B for ratings from each site).
To obtain the overall ideology $I_i^{(j)}$ of article i from outlet j, we take the average of the liberal (-1), neutral (0), and conservative (1) labels across its paragraphs in all three dimensions. Thus, for each 4-year time period with m articles for outlet j, the sorting measure is the absolute distance of article vs. outlet ideology, $|\mathrm{avg}_{i=1}^{m}(I_i^{(j)}) - B_j| / B_j$. In Figure 3, we plot the sorting measure, averaged across news outlets of the same proclaimed ideological bias. The figure shows that in our sample of articles, the left-leaning news outlets were closest to their proclaimed ideological bias measure over time, whereas the neutral outlets were more liberal before 1957 and after 1964. The right-leaning outlets were more conservative at that time than their proclaimed ideological bias.
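A minimal sketch of the sorting computation, assuming the per-article ideology scores have already been derived as described; dividing by B_j follows the formula above (in practice one might prefer |B_j| for outlets with negative bias scores):

```python
import numpy as np

def sorting_measure(article_ideologies, outlet_bias):
    """article_ideologies: the I_i^(j) scores of the m articles from outlet j
    in one 4-year period; outlet_bias: the composite bias B_j (nonzero)."""
    return abs(np.mean(article_ideologies) - outlet_bias) / outlet_bias
```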
Measure 2: Issue constraint This measure refers to the tightness between ideological dimensions over time (Baldassarri and Gelman, 2008), so as to assess, for example, whether socially liberal dimensions become more and more associated with economically liberal dimensions for the news outlets. Concretely, for each article we derive its ideology along a single dimension as the average of the paragraph annotations along that dimension. We then calculate the Pearson correlation between the article ideology of each pair of dimensions, over all articles from one outlet in the same period.

Results in Figure 4, again averaged across news outlets of the same ideological bias, show that for right-leaning media, the correlations between any two dimensions in the annotated data are largely positive (e.g., economically conservative articles were also socially conservative) until 1967 or 1970. However, for proclaimed left-leaning and neutral outlets, the correlations fluctuate, especially when considering the foreign dimension.
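The issue-constraint computation sketched minimally with SciPy; inputs are assumed to be equal-length lists of article-level ideology scores for two dimensions, from one outlet and period:

```python
from scipy.stats import pearsonr

def issue_constraint(dim_a_scores, dim_b_scores):
    """Pearson correlation between two dimensions' article-level ideology
    scores (e.g., economic vs. social) for one outlet in one period."""
    r, _ = pearsonr(dim_a_scores, dim_b_scores)
    return r
```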
Measure 3: Ideological divergence This measures the distance between two ideological groups on a single dimension (Fiorina and Abrams, 2008). We follow Lelkes (2016) and calculate the bimodality coefficient (Freeman and Dale, 2013; Pfister et al., 2013) per dimension over articles from all news outlets over the same time period. The bimodality coefficient ranges from 0 (unimodal, thus not at all polarized) to 1 (bimodal, thus completely polarized). Figure 5 shows the evolution of the ideological divergence measure of every dimension. A bimodality measure assesses whether this divergence attained the threshold for the cumulative distribution to be considered bimodal. Ideological distance, as a result, refers to the three bimodal coefficients. We note, for example, that the foreign dimension crossed this threshold between 1956 and 1968. This means that proclaimed left-leaning and right-leaning outlets grew further apart on foreign issues during this time period.

Figure 5: The evolution of the ideological divergence measure stratified by dimension. The dotted line refers to the bimodality threshold (Lelkes, 2016). Higher values mean the ideology of an article along that one dimension is bimodal.
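A sketch of the sample bimodality coefficient, following the formula popularized by Pfister et al. (2013), with bias-corrected skewness g and excess kurtosis k; values above roughly 0.555 are conventionally taken to indicate bimodality:

```python
from scipy.stats import skew, kurtosis

def bimodality_coefficient(x):
    """BC = (g^2 + 1) / (k + 3(n-1)^2 / ((n-2)(n-3))); requires n > 3."""
    n = len(x)
    g = skew(x, bias=False)
    k = kurtosis(x, bias=False)      # excess kurtosis (Fisher definition)
    return (g ** 2 + 1) / (k + 3 * (n - 1) ** 2 / ((n - 2) * (n - 3)))
```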
Discussion Taken together, the graphs indicate that the years between 1957 and 1967 are the most noteworthy. During this period, from our sample of articles, we see that polarization was only present in conservative news media because it (1) sorted, as it was significantly more conservative than its composite bias measure, (2) constrained its issues, as evidenced by high positive correlation values, and (3) became increasingly bimodal, as the ideological distance between its positions and those of its liberal counterpart on foreign issues increased over time. While this conclusion applies only to the set of articles in our dataset, the above analysis illustrates that our framework enables nuanced, quantitative analyses of polarization. We leave it for future work, potentially equipped with strong models for ideology prediction, to analyze the data at scale.
Experiments
We present political ideology detection experiments as classification tasks per-dimension on the paragraph level.
We performed an 80/10/10 split to create the train, development, and test sets. The development and test sets contain articles uniformly distributed from our time period (1947 to 1974) such that no particular decade is predominant. To ensure the integrity of the modeling task, all paragraphs belonging to the same article are present in a single split. The number of examples in the splits for each dimension for the adjudicated data are as follows: for the economic dimension, we had 450 training, 50 development, and 58 test examples. For the social dimension, we had 253 for training, 13 for development, and 25 for testing. For the foreign dimension, we had 266 for training, 33 for development, and 39 for testing.
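One way to realize such an article-grouped split is scikit-learn's GroupShuffleSplit; this is a sketch with illustrative variable names, not the exact procedure used:

```python
from sklearn.model_selection import GroupShuffleSplit

# paragraphs: list of paragraph texts; article_ids: article ID per paragraph
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, heldout_idx = next(splitter.split(paragraphs, groups=article_ids))
# split the held-out 20% once more (50/50, again grouped by article)
# to obtain the development and test sets
```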
Models
Recurrent neural networks We trained a 2-layer bidirectional LSTM (Hochreiter and Schmidhuber, 1997), with a sequence length and hidden size of 256, and 100D GloVe embeddings (Pennington et al., 2014).
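A minimal PyTorch sketch of this baseline; the mean-pooling and initialization details are assumptions, not taken from the paper:

```python
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    """2-layer bidirectional LSTM over 100D (GloVe-initialized) embeddings."""
    def __init__(self, vocab_size, num_classes, embed_dim=100, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, num_classes)

    def forward(self, tokens):                 # tokens: (batch, seq_len)
        h, _ = self.lstm(self.embed(tokens))   # (batch, seq_len, 2*hidden)
        return self.out(h.mean(dim=1))         # mean-pool over time
```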
Pre-trained language models We used BERT-base (Devlin et al., 2019) from HuggingFace (Wolf et al., 2020) and trained two versions, with and without fine-tuning. In both cases we used a custom classification head consisting of 2 linear layers with a hidden size of 768 and a ReLU between them. To extract the word embeddings we followed Devlin et al. (2019) and used the hidden states from the second-to-last layer. To obtain the embedding of the whole paragraph (99% of the paragraphs in the dataset have ≤512 tokens), we averaged the word embeddings and passed this vector to the classification head.
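A sketch of this architecture with HuggingFace Transformers; the checkpoint name and masked-mean pooling are assumptions consistent with, but not verbatim from, the description above:

```python
import torch.nn as nn
from transformers import BertModel

class BertParagraphClassifier(nn.Module):
    def __init__(self, num_classes, dropout=0.1):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.head = nn.Sequential(                 # 2-layer classification head
            nn.Linear(768, 768), nn.ReLU(),
            nn.Dropout(dropout), nn.Linear(768, num_classes))

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids, attention_mask=attention_mask,
                        output_hidden_states=True)
        h = out.hidden_states[-2]                  # second-to-last layer
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (h * mask).sum(1) / mask.sum(1)   # average word embeddings
        return self.head(pooled)
```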
To find the best hyperparameters we performed a grid search in each dimension. For the economic dimension, the best hyperparameters consisted of a learning rate of 2e-6, 6 epochs of training, a gamma value of 2, no freezing of the layers, a 768 hidden size, and 10% dropout. For the social dimension, the best hyperparameters were a learning rate of 2e-5, 12 epochs, a gamma of 4, no freezing of the layers, a 768 hidden size, and 10% dropout. Finally, for the foreign dimension the best hyperparameters consisted of a learning rate of 2e-5, 6 epochs, a gamma of 2, no freezing of the layers, a 768 hidden size, and a 10% dropout.
Focal loss. To better address the imbalanced label distribution of this task, we incorporated focal loss (Lin et al., 2017), originally proposed for dense object detection. Focal loss can be interpreted as a dynamically scaled cross-entropy loss, where the scaling factor is inversely proportional to the confidence on the correct prediction. This dynamic scaling, controlled by the hyperparameter γ, leads to a higher focus on the examples that have lower confidence on the correct predictions, which in turn leads to better predictions on the minority classes.
Since a γ of 0 essentially turns a focal loss into a cross entropy loss, it has less potential to hurt performance than to improve it. We found the best γ values to be 2 or 4 depending on the dimension.
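A sketch of the multi-class focal loss described above (Lin et al., 2017); class weighting is omitted for brevity:

```python
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Cross-entropy scaled by (1 - p_t)^gamma, down-weighting easy examples."""
    log_p = F.log_softmax(logits, dim=-1)
    ce = F.nll_loss(log_p, targets, reduction="none")      # per-example -log p_t
    p_t = log_p.gather(1, targets.unsqueeze(1)).squeeze(1).exp()
    return ((1.0 - p_t) ** gamma * ce).mean()
```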
Task-guided pre-training. We also explored supervised pre-training on two adjacent tasks that can give insights into the relationship between tasks. We used distant supervision that labeled the ideological bias of each article according to that of its news outlet from www.allsides.com (Kulkarni et al., 2018). This procedure allowed us to use the unannotated articles. 6

Results

Table 6 shows the macro F1 for each configuration, averaged across 10 runs with different random initializations. The fine-tuned BERT model, with no task-guided pre-training, shows the best performance across all 3 ideology dimensions. It is important to note that all the models do better than randomly guessing, and better than predicting the majority class. This shows that the models are capturing some of the complex underlying phenomena in the data. However, the classification tasks still remain challenging for neural models, leaving plenty of room for improvement in future work. The BERT_ft − focal loss setting ablates the effect of focal loss against a weighted cross-entropy loss with weights inversely proportional to the distribution of the classes in the dimension. This loss helped get a bump in the macro F1 score of around 0.1 for each dimension compared to an unweighted cross-entropy loss. However, the focal loss gave further improvements for 2 of the 3 dimensions. Although task-guided pre-training improved the BERT (no fine-tuning) model for 1 of the 3 dimensions, it led to worse performance than BERT (fine-tuned). The improvement in the no fine-tuning setting indicates that there is a potential correlation to be exploited from the ideology of the news outlet, but such labels are not that informative for multi-dimensional prediction. We hope that this dataset provides a testbed for future work to evaluate more distant supervision data/methods.
Related work
In contrast to our multi-dimensional approach that examines the ideology of the issue being discussed instead of the author stance, much of the recent work in computational linguistics has been dedicated to the latter (detection of ideological bias in news media) while collapsing ideology to one dimension (Budak et al., 2016; Kulkarni et al., 2018; Kiesel et al., 2019; Baly et al., 2019, 2020; Chen et al., 2020; Ganguly et al., 2020; Stefanov et al., 2020). The proposed computational models classify the partiality of media sources without quantifying their ideology (Elejalde et al., 2018).
Other researchers interested in the computational analysis of ideology have analyzed congressional text data at the legislative level (Sim et al., 2013; Gentzkow et al., 2016) and social media text at the electorate level (Saez-Trumper et al., 2013; Barberá, 2015).
In political science, the relationship between (news) media and polarization is also an active area of research. Prior work has studied media ideological bias in terms of coverage (George and Waldfogel, 2006; Valentino et al., 2009). Prior (2013) argues there is no firm evidence of a direct causal relationship between media and polarization and that this relationship depends on preexisting attitudes and political sophistication. On the other hand, Gentzkow et al. (2016) have established that polarized language snippets move from the legislature in the direction of the media, whereas Baumgartner et al. (1997) have shown that the media has an impact on the agenda setting of legislatures.
Conclusion
We take the first step in studying multi-dimensional ideology and polarization over time and in news articles relying on the major political science theories and tools of computational linguistics. Our work opens up new opportunities and invites researchers to use this corpus to study the spread of propaganda and misinformation in tandem with ideological shifts and polarization. The presented corpus also provides the opportunity for studying ways that social context determines interpretations in text while distinguishing author stance from content.
This work has several limitations. We only focus on news whereas these dynamics might be different in other forms of communication such as social media posts or online conversations, and the legislature. Further, our corpus is relatively small although carefully annotated by experts. Future work may explore semi-supervised models or active learning techniques for annotating and preparing a larger corpus that may be used in diverse applications.
A Data curation
A diachronic corpus is required to measure and analyze polarization over time (DiMaggio et al., 1996). We collect and annotate data across a long period to address the issue of distributional shifts across years (Desai et al., 2019;Rijhwani and Preotiuc-Pietro, 2020;Bender et al., 2021) and help build robust models that can generalize beyond certain periods.
Additionally, the raw data on top of which we annotate needs to satisfy the following constraints:
(1) for human annotation to be tractable, the articles should share some level of topical coherence; (2) for the data to be useful for the larger community, the content should also cover a range of common discussions in politics across the aisle; and (3) the articles should come from a consistent set of news outlets, forming a continuous and ideologically balanced corpus.
We start with the diachronic corpus of political news articles of Desai et al. (2019), which covers years 1922-1986, the longest-spanning dataset to our knowledge. This corpus is a subset of news articles from the Corpus of Historical American English (COHA, Davies (2012)). To extract topically coherent articles, we investigate the topics and articles across multiple LDA (Blei et al., 2003) runs, varying the number of topics (15, 20, 30, 50), aiming to arrive at a cluster of topics that share common points of discussion and that collectively yield a sizable number of articles each year from the same news outlets.
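A sketch of this exploration with scikit-learn's LDA implementation; the preprocessing choices here are illustrative assumptions:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# articles: raw texts of the 1922-1986 political news corpus
X = CountVectorizer(stop_words="english", min_df=5).fit_transform(articles)
for n_topics in (15, 20, 30, 50):
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(X)   # inspect top words per topic to judge topic coherence
```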
The LDA models consistently showed one prominent topic, the federal budget, across 5 news outlets with balanced ideology (c.f. Table 8): Chicago Tribune (CT), Wall Street Journal (WSJ), Christian Science Monitor (CSM), the New York Times (NYT), and Time Magazine (TM). Because federal budget stories touch on all aspects of federal activity, this topic appeals to both liberal and conservative media and thus can provide a good testing ground to showcase our proposed ideological annotation method. In addition to the core federal budget topic (topic 5 of Table 7), we also include other topics such as health and education that are integral parts of ideological beliefs in the United States, and when discussed at the federal government level, are typically related to the federal budget. The top vocabulary of the cluster is shown in Table 7. In an effort to purge articles unrelated to the federal budget, we selected only those that contain words such as "federal" and "congress", and excluded those that mention state budgets, as well as letters to editors. (Note that during annotation, we also discard articles that are unrelated to the federal budget.) After this curation, the total number of articles is 5,706 from the 5 outlets.

Table 7: Top words from topics selected in our cluster, from the 50-topic LDA model that yielded the most well-delimited topics.
Topic 1 (Trade): bank, market, farm, loan, export, agricultur, farmer, dollar, food, debt
Topic 2 (Business): incom, tax, revenu, profit, corpor, financ, treasuri, pay, sale, bond
Topic 3 (Education): school, univers, educ, student, colleg, professor, institut, teacher, research, graduat
Topic 4 (Defense): nuclear, missil, weapon, atom, test, energi, strateg, bomb, space, pentagon
Topic 5 (Economy): budget, billion, economi, inflat, economic, deficit, unemploy, cut, dollar, rate
Topic 6 (Health/Race): negro, hospit, medic, health, racial, southern, discrimin, doctor, contra, black
Topic 7 (Industry): compani, contract, plant, steel, coal, wage, railroad, corpor, manufactur, miner
To account for the sparsity of articles in the first decades and their density in later decades, we narrowed the articles down to the period from 1947 to 1974. We believe this period is fitting because it includes various ideological combinations of the composition of the American government across Congress and the presidency. 10 The total number of articles in the final corpus of political articles on the federal budget from 1947 to 1974 is 1,749.

B Proclaimed ideology of news outlets

Table 8: Ideological bias of news outlets from common references of media bias. We use the average in our analyses.
Figure 1: Dimensional label distribution per outlet.

Figure 2: Co-occurrence matrices on the paragraph level (left) and article level (right).
Figure 3: The evolution of the sorting measure, aggregating conservative/neutral/liberal outlets. Moving further away from zero means articles deviate more from the proclaimed ideology of their outlets.
Figure 4: The evolution of the issue constraint measure, stratified by pairs of dimensions. Higher values mean some dimensions correlate more strongly than others. Due to the lack of articles that simultaneously contain social & economic dimensions (1st graph) and economic & foreign dimensions (3rd graph) from conservative outlets, their respective blue lines start in 1958.
Table 5: Words with the most positive and negative weights from a logistic regression model trained to predict liberal/conservative ideology for each dimension.
Table 6: Macro F1 of the models averaged across 10 runs.
We use automatically segmented paragraphs since the raw texts were not paragraph-segmented.
It is important to distinguish between ideology and several other concepts. (1) Partisanship (party identity) (Campbell et al., 1980): a partisan person changes their ideology when their party changes its ideology, whereas an ideological person changes their party when their party changes its ideology. Partisanship is easily conflated with party ID using a unidimensional conceptualization of ideology, but not with a multi-dimensional one. (2) Moral foundations: Haidt et al. (2009) gave an evolutionary explanation of how human
Because federal budget stories touch on all aspects of the federal activity, this topic appeals to both liberal and conservative media and thus can provide a good testing ground to showcase our proposed ideological annotation method.
We also experimented with pre-training on the dataset from Chen et al. (2020). However, because their dataset starts from 2006 (outside of our time domain), this setting performed poorly.
For example, between 1947-49, Congress was Republican and the President was a Democrat while the story flipped between 1955-57.
1 We distinguish ourselves from work that considers other types of polarization, e.g., as a measure of emotional distance (Iyengar et al., 2019) or distance between political parties (Lauka et al., 2018).

Acknowledgements

This research is partially supported by NSF grants IIS-1850153, IIS-2107524, and Good Systems (http://goodsystems.utexas.edu), a UT Austin Grand Challenge to develop responsible AI technologies. We thank Cutter Dalton, Kathryn Slusarczyk and Ilana Torres for their help with complex data annotation. We thank Zachary Elkins, Richard Lau, Beth Leech, and Katherine McCabe for their feedback on this work and its annotation protocol, and Katrin Erk for her comments. Thanks to Pitt Cyber (https://www.cyber.pitt.edu) for supporting this project. We also acknowledge the Texas Advanced Computing Center (TACC, https://www.tacc.utexas.edu) at UT Austin and the Center for Research Computing at the University of Pittsburgh for providing the computational resources for many of the results within this paper.
References

Alan I Abramowitz and Kyle L Saunders. 1998. Ideological realignment in the US electorate. The Journal of Politics, 60(3):634-652.

Theodor Adorno, Else Frenkel-Brenswik, Daniel J Levinson, and R Nevitt Sanford. 2019. The Authoritarian Personality. Verso Books.

Ron Artstein and Massimo Poesio. 2008. Inter-coder agreement for computational linguistics. Computational Linguistics, 34(4):555-596.

Delia Baldassarri and Andrew Gelman. 2008. Partisans without constraint: Political polarization and trends in American public opinion. American Journal of Sociology, 114(2):408-446.

Ramy Baly, Giovanni Da San Martino, James Glass, and Preslav Nakov. 2020. We can detect your bias: Predicting the political ideology of news articles. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4982-4991.

Ramy Baly, Georgi Karadzhov, Abdelrhman Saleh, James Glass, and Preslav Nakov. 2019. Multi-task ordinal regression for jointly predicting the trustworthiness and the leading political ideology of news media. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2109-2116.

Pablo Barberá. 2015. Birds of the same feather tweet together: Bayesian ideal point estimation using Twitter data. Political Analysis, 23(1):76-91.

Frank R Baumgartner, Bryan D Jones, and Beth L Leech. 1997. Media attention and congressional agendas. Do the Media Govern, pages 349-363.

Emily M Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pages 610-623.

Alessandro Bessi, Fabio Petroni, Michela Del Vicario, Fabiana Zollo, Aris Anagnostopoulos, Antonio Scala, Guido Caldarelli, and Walter Quattrociocchi. 2016. Homophily and polarization in the age of misinformation. The European Physical Journal Special Topics, 225(10):2047-2059.

David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3(Jan):993-1022.

Ceren Budak, Sharad Goel, and Justin M Rao. 2016. Fair and balanced? Quantifying media bias through crowdsourced content analysis. Public Opinion Quarterly, 80(S1):250-271.

Angus Campbell, Philip E Converse, Warren E Miller, and Donald E Stokes. 1980. The American Voter. University of Chicago Press.

Edward G Carmines, Michael J Ensley, and Michael W Wagner. 2012. Who fits the left-right divide? Partisan polarization in the American electorate. American Behavioral Scientist, 56(12):1631-1653.

Thomas M Carsey and Geoffrey C Layman. 2006. Changing sides or changing minds? Party identification and policy preferences in the American electorate. American Journal of Political Science, 50(2):464-477.

Wei-Fan Chen, Khalid Al Khatib, Henning Wachsmuth, and Benno Stein. 2020. Analyzing political bias and unfairness in news articles at different levels of granularity. In Proceedings of the Fourth Workshop on Natural Language Processing and Computational Social Science, pages 149-154.

Dennis Chong and James N Druckman. 2007. Framing theory. Annual Review of Political Science, 10:103-126.

Mark Davies. 2012. Expanding horizons in historical linguistics with the 400-million word Corpus of Historical American English. Corpora, 7(2):121-157.

Shrey Desai, Barea Sinno, Alex Rosenfeld, and Junyi Jessy Li. 2019. Adaptive ensembling: Unsupervised domain adaptation for political document analysis. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4712-4724.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.

Paul DiMaggio, John Evans, and Bethany Bryson. 1996. Have Americans' social attitudes become more polarized? American Journal of Sociology, 102(3):690-755.

Erick Elejalde, Leo Ferres, and Eelco Herder. 2018. On the nature of real and perceived bias in the mainstream media. PloS One, 13(3):e0193765.

Stanley Feldman and Christopher Johnston. 2014. Understanding the determinants of political ideology: Implications of structural complexity. Political Psychology, 35(3):337-358.

Morris P Fiorina and Samuel J Abrams. 2008. Political polarization in the American public. Annual Review of Political Science, 11:563-588.

Jonathan B Freeman and Rick Dale. 2013. Assessing bimodality to detect the presence of a dual cognitive process. Behavior Research Methods, 45(1):83-97.

Soumen Ganguly, Juhi Kulshrestha, Jisun An, and Haewoon Kwak. 2020. Empirical evaluation of three common assumptions in building political media bias datasets. In Proceedings of the International AAAI Conference on Web and Social Media, volume 14, pages 939-943.

Matthew Gentzkow, Jesse M Shapiro, and Matt Taddy. 2016. Measuring polarization in high-dimensional data: Method and application to congressional speech. National Bureau of Economic Research.

Lisa M George and Joel Waldfogel. 2006. The New York Times and the market for local newspapers. American Economic Review, 96(1):435-447.

Jonathan Haidt, Jesse Graham, and Craig Joseph. 2009. Above and below left-right: Ideological narratives and moral foundations. Psychological Inquiry, 20(2-3):110-119.

Andrew F Hayes and Klaus Krippendorff. 2007. Answering the call for a standard reliability measure for coding data. Communication Methods and Measures, 1(1):77-89.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.

Shanto Iyengar, Yphtach Lelkes, Matthew Levendusky, Neil Malhotra, and Sean J Westwood. 2019. The origins and consequences of affective polarization in the United States. Annual Review of Political Science, 22:129-146.

Jeffery A Jenkins, Eric Schickler, and Jamie L Carson. 2004. Constituency cleavages and congressional parties: Measuring homogeneity and polarization, 1857-1913. Social Science History, 28(4):537-573.

Jacob Jensen, Suresh Naidu, Ethan Kaplan, Laurence Wilse-Samson, David Gergen, Michael Zuckerman, and Arthur Spirling. 2012. Political polarization and the dynamics of political language: Evidence from 130 years of partisan speech [with comments and discussion]. Brookings Papers on Economic Activity, pages 1-81.

John T Jost, Christopher M Federico, and Jaime L Napier. 2009. Political ideology: Its structure, functions, and elective affinities. Annual Review of Psychology, 60:307-337.

Johannes Kiesel, Maria Mestre, Rishabh Shukla, Emmanuel Vincent, Payam Adineh, David Corney, Benno Stein, and Martin Potthast. 2019. SemEval-2019 task 4: Hyperpartisan news detection. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 829-839.

Vivek Kulkarni, Junting Ye, Steven Skiena, and William Yang Wang. 2018. Multi-view models for political ideology detection of news articles. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3518-3527.

Alban Lauka, Jennifer McCoy, and Rengin B Firat. 2018. Mass partisan polarization: Measuring a relational concept. American Behavioral Scientist, 62(1):107-126.

Yphtach Lelkes. 2016. Mass polarization: Manifestations and measurements. Public Opinion Quarterly, 80(S1):392-410.

Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. 2017. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, pages 2980-2988.

Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.

Roland Pfister, Katharina A Schwarz, Markus Janczyk, Rick Dale, and Jon Freeman. 2013. Good things peak in pairs: A note on the bimodality coefficient. Frontiers in Psychology, 4:700.

Paul Pierson and Eric Schickler. 2020. Madison's constitution under stress: A developmental analysis of political polarization. Annual Review of Political Science, 23:37-58.

Markus Prior. 2013. Media and political polarization. Annual Review of Political Science, 16:101-127.

Martin Riedl and Chris Biemann. 2012. TopicTiling: A text segmentation algorithm based on LDA. In Proceedings of the ACL 2012 Student Research Workshop, pages 37-42.

Shruti Rijhwani and Daniel Preotiuc-Pietro. 2020. Temporally-informed analysis of named entity recognition. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7605-7617.

Diego Saez-Trumper, Carlos Castillo, and Mounia Lalmas. 2013. Social media news communities: Gatekeeping, coverage, and statement bias. In Proceedings of the 22nd ACM International Conference on Information & Knowledge Management, pages 1679-1684.

Yanchuan Sim, Brice DL Acree, Justin H Gross, and Noah A Smith. 2013. Measuring ideological proportions in political speeches. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 91-101.

Jason Stanley. 2015. How Propaganda Works. Princeton University Press.

Peter Stefanov, Kareem Darwish, Atanas Atanasov, and Preslav Nakov. 2020. Predicting the topical stance and political leaning of media using tweets. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 527-537.

Nicholas A Valentino, Antoine J Banks, Vincent L Hutchings, and Anne K Davis. 2009. Selective exposure in the internet age: The interaction between anxiety and information utility. Political Psychology, 30(4):591-613.

Michela Del Vicario, Walter Quattrociocchi, Antonio Scala, and Fabiana Zollo. 2019. Polarization and fake news: Early warning of potential misinformation targets. ACM Transactions on the Web (TWEB), 13(2):1-22.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45.
Bridging Music and Text with Crowdsourced Music Comments: A Sequence-to-Sequence Framework for Thematic Music Comments Generation

Peining Zhang, Junliang Guo, Linli Xu (linlixu@ustc.edu.cn), Mu You (University of Science and Technology of China) and Junming Yin (junmingy@arizona.edu, University of Arizona)

5 Sep 2022 · arXiv: 2209.01996 · DOI: 10.48550/arxiv.2209.01996 · PDF: https://export.arxiv.org/pdf/2209.01996v1.pdf

Keywords: Music to Text · Conditional Text Generation · Neural Networks · Generative Adversarial Network

We consider a novel task of automatically generating text descriptions of music. Compared with other well-established text generation tasks such as image caption, the scarcity of well-paired music and text datasets makes it a much more challenging task. In this paper, we exploit the crowdsourced music comments to construct a new dataset and propose a sequence-to-sequence model to generate text descriptions of music. More concretely, we use the dilated convolutional layer as the basic component of the encoder and a memory based recurrent neural network as the decoder. To enhance the authenticity and thematicity of generated texts, we further propose to fine-tune the model with a discriminator as well as a novel topic evaluator. To measure the quality of generated texts, we also propose two new evaluation metrics, which are more aligned with human evaluation than traditional metrics such as BLEU. Experimental results verify that our model is capable of generating fluent and meaningful comments while containing thematic and content information of the original music.
Introduction
Music is an art form that reflects the emotions of human beings, and is also a data form that people are frequently exposed to through various media platforms. Each day, a large number of comments are posted on music streaming applications. Most of them either express the emotions of users while they listen to music, or describe the content and background of the music, such as the genre and musician. Considering the natural correspondence between music and comments, in this work, we explore the possibility of generating meaningful and thematic text descriptions from music.
We formulate this problem as a task of natural language generation (NLG) from a heterogeneous data source, i.e., music. Such heterogeneous text generation tasks have important applications in human-computer interaction and recommendation systems. For example, image caption generation, an NLG task that takes images as input to generate text descriptions, has achieved remarkable progress with the advances of deep learning (Xu et al., 2015; Yang et al., 2016; Krause et al., 2017). However, the progress in text generation based on other data forms, especially music, still falls behind image caption generation, largely because of the lack of matched corpora, workable models, and reliable evaluation metrics. Specifically, current NLG models heavily rely on supervised training to explicitly learn a correspondence between the input data form and text. However, in our setting, such well-paired music and text datasets are unavailable. Regarding workable models, while generative adversarial networks (GANs) are widely adopted in text generation (Yu et al., 2017; Weili Nie and Patel, 2019; Guo et al., 2018), there still exist certain fundamental issues to be addressed, including unstable training and mode collapse. In terms of evaluation, the traditional n-gram based metrics of text generation (Papineni et al., 2002) are shown to primarily focus on exact matching, while other essential qualities of human languages such as coherence and semantic information are largely overlooked (Dai et al., 2017). Therefore, complementary metrics should also be used to provide a comprehensive evaluation of the generated text. In summary, generating text from music is viewed as a challenging task due to the aforementioned technical difficulties.
In this work, we formulate the music to text problem in the framework of sequence-to-sequence learning. To deal with the scarcity of music datasets along with the corresponding reliable text description, we exploit the crowdsourced music comments during training. The features of raw music are extracted by a WaveNet based encoder (Oord et al., 2016). To ensure that the generated comments contain the essential information of music and convey rich semantic meaning, the decoder is first trained by Maximum Likelihood Estimation (MLE) to learn a basic language model, and then is fine-tuned in a conditional GAN framework to make the generated text more coherent and informative. Furthermore, to address the issue of mode collapse, we introduce a topic evaluator to make the generated text more thematic and diverse. As for evaluation, because the traditional n-gram based metrics are often insufficient when evaluating the semantic and topic quality of the generated text, we introduce two new adversarial based metrics that are able to evaluate the semantic information (including content and theme) of the results. We empirically demonstrate that our model has significant advantages over baseline methods in terms of both traditional n-gram based metrics and new adversarial evaluation metrics, and we find that our proposed evaluation metrics are more aligned with the human evaluation than traditional metrics.
Our main contributions in this work can be summarized as follows:
- We explore a new direction of natural language generation with music as input, and formulate a new task called music to text generation.
- We build a sequence-to-sequence model for solving the task. To generate coherent and informative text, we propose a conditional GAN based framework with novel augmentations including a two-step training strategy and a topic evaluator.
- We propose complementary evaluation metrics to measure the coherence and themes of text. Extensive experiments are conducted to demonstrate the superior quality of the generated comments.
The rest of the paper is organized as follows. In Section 2, we give a brief review of the related work. Then we present the proposed sequence-to-sequence framework to solve the task of generating text from music in Section 3. Extensive experimental results are provided in Section 4 to demonstrate the effectiveness of the model. We finally conclude the paper in Section 5.
Related Work
Natural Language Generation
Generating text descriptions from music can be viewed as a task of conditional Natural Language Generation (NLG), which aims at generating corresponding text from other types of data. One of the most active and representative NLG problems is image captioning. The mainstream of image captioning models uses the sequence-to-sequence structure to embed images into a latent state, based on which text descriptions are generated. To alleviate the limited diversity and distinctiveness of models trained by traditional teacher forcing, the Generative Adversarial Network (GAN) training paradigm is applied. While GANs were originally proposed to generate continuous data such as images (Goodfellow et al., 2014), extending GAN training to the generation of discrete data, including text, has been an active research topic. To address the issue of the non-differentiability of discrete data, reinforcement learning methods have been employed (Yu et al., 2017; Guo et al., 2018; Fedus et al., 2018; Dai et al., 2017). Meanwhile, other GANs avoid RL by using an annealed softmax to approximate the argmax and operating in the continuous space of the discriminator (Zhang et al., 2017; Kusner and Hernández-Lobato, 2016; Weili Nie and Patel, 2019).
Along with the development of text generation methods, various evaluation metrics have been proposed to measure the quality of the generated sentences. Among them, the n-gram based metrics including BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), and METEOR (Banerjee and Lavie, 2005) are most widely used. CIDEr (Vedantam et al., 2015) also uses the weighted statistics over n-grams. These metrics mostly depend on matching n-grams with real texts, and as a consequence, sentences that contain frequent n-grams are scored higher in general than those with clear topics. Recently, a new metric based on the advanced neural network classifiers is applied to evaluate the descriptions for images (Dai et al., 2017). The metric is demonstrated to score the diverse and natural human descriptions higher, which shows more relevance to human evaluations. However, this metric relies on the training of a classifier based on the generated data, making the result heavily sensitive to the hyper-parameters such as the learning rate and its decay rate, which impairs the stability and reliability of the metric. In this paper, in addition to the traditional n-gram evaluation, we introduce two adversarial metrics to evaluate the content and theme of the generated text.
On the other hand, text generation from music can be viewed as a task analogous to domain translation, which is to map one artistic type to another such that the essential semantics are preserved. Among the domain translation works, the model in Harmon (2017) uses the semantic meaning extracted from an input text to generate ambient music, while in this paper, we consider the task in the reverse direction, i.e., generating comments from music.
Generation Involving Acoustic Data
Speech generation has been widely studied in recent years, where an audio wave is often generated based on other types of information such as text (text to speech) (Kalchbrenner et al., 2018; Wang et al., 2017), different speaker identities (multi-speaker speech generation) (Oord et al., 2016), and their combination (Jia et al., 2018). WaveNet (Oord et al., 2016) has been accepted as the baseline model for speech generation. As for music, another common type of acoustic data, while it often appears with text descriptions (such as lyrics and comments), few works have explored the direction of generating text descriptions from music. Most existing works study either pure music generation or pure text generation, without combining the two. Studies on music generation mainly focus on leveraging different instruments to compose a song (Zhu et al., 2018), or on generating music from recurring elements such as motifs and phrases through deep learning models (Briot et al., 2017; Huang et al., 2018). Works on text generation often utilize feature-based text generation models to generate lyrics within a particular pre-defined style (Malmi et al., 2016; Pudaruth et al., 2014), without considering the music audio. In this paper, we propose to generate text descriptions based on music audio by incorporating both types of information into a sequence-to-sequence framework.
Methodology
Music conveys rich information from acoustic signals to semantic topics that can stimulate thematic comments. To build a natural mapping from music to text, we propose a framework that exploits the acoustic information in music to generate textual comments.
Problem Definition. Given a raw music audio x ∈ R l as input where l is the length of the audio sequence, the proposed framework outputs a comment y = f (x) through a mapping function f (·). The comment y is supposed to be a text description generated from the perspective of the genre and content of the music audio x. To eliminate the effect of human voice and to ensure that the generated text is entirely based on the melody, we only consider and collect instrumental music as the input x.
Overview of the Framework
The proposed framework contains a feature extractor E, a generator G, a discriminator D, and an evaluator V , as illustrated in Figure 1. In our framework, given a music audio x ∈ R l , the feature extractor processes the raw audio into a feature vector e ∈ R d , where d is the dimension of the feature vector. The audio feature vector e is then concatenated with a random noise vector as the initial state of the generator to ensure the diversity of the generated texts. The generator G relies on a Relational Memory RNN, which generates a sentence s word by word in an autoregressive manner.
The discriminator D and the evaluator V are both convolutional neural networks (CNNs) with similar architectures, which process the text produced by the generator G. Given a sentence s, the discriminator D takes s as input and measures its quality by predicting whether it is a real comment. The evaluator V takes both s and the audio feature vector e as input, and measures the similarity between the comment and the music by computing their distance. In the following sections, we introduce all components in detail.
WaveNet Based Feature Extractor
WaveNet (Oord et al., 2016) has been widely applied in speech generation related tasks. Taking a waveform x = {x_1, ..., x_T} as input, the main component of WaveNet is a dilated causal convolution network that factorizes the probability of x as a product of conditional probabilities, each conditioning x_t on the previous steps x_1 to x_{t−1}. In this paper, we also employ a WaveNet based encoder, with novel adaptations because of the difference in the training objectives. Specifically, we use non-causal dilated convolution because audio serves as the input in our setting rather than the output. The standard WaveNet model with dilated convolution allows for the extraction of features at different scales, but it also makes the network too deep for gradients to propagate. To simplify the optimization and focus more on local features, we add up the output of each dilated convolution layer as z = Σ_i z_i, where z_i is the output feature at layer i. In addition, the length of the output audio feature z is much longer than the length of text descriptions, which is problematic for post-processing. To address this issue, we first introduce an additional average pooling layer to reduce the dimension of z from R^{l×d} to R^{T×d}, where T = l/8000 in our setting and d is the filter depth of the dilated convolution layers. Then we view the pooled z as a sequence of T hidden representations in R^d, and introduce an LSTM network to map it into a one-dimensional vector e ∈ R^d. We take e as the final hidden representation of the raw waveform x in the following modules.

Fig. 1 An overview of the proposed music to comment framework, where E, G, D, and V denote the feature extractor, text generator, text discriminator, and topic evaluator respectively. It illustrates the modules and data involved in the training process: a dilated CNN and an LSTM turn the audio wave x into the audio feature e; the relational memory RNN generator G, fed with e and a random noise vector, produces the generated sentence s via Gumbel-Softmax; CNNs with linear output layers implement the discriminator D and the evaluator V. Details are described in Section 3. Best viewed in color.
As for the loss function of the feature extractor, to explicitly emphasize the thematic information contained in the extracted features, we utilize the average cross entropy of the classification task on the music labels as our loss function of the feature extractor E. Specifically, suppose there are N different music songs in the training set, given the raw music audio x which belongs to the j-th song in the dataset, its label can be represented as an N dimensional one-hot vector o = (0, ..., 0, 1, 0, ...0) with 1 for the j-th coordinate and 0 elsewhere. Then the loss function of the feature extractor E can be written as:
$L_E(x, o; \Theta_E) = -\log P(o \mid x; \Theta_E)$  (1)
where Θ E denotes the parameters of the feature extractor. This is different from the autoregressive generation loss function used in the original WaveNet model, which relies on the inherent and local features of the audio waves.
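To make the data flow concrete, the following is a minimal Keras sketch of such an extractor. Only the feature width d = 128, the pooling factor of 8000, and the 83 song labels come from the paper; the kernel size, the six dilation rates, and the 20-second / 16 kHz input length (l = 320000) are illustrative assumptions.

```python
# Minimal sketch of the WaveNet-style feature extractor (assumed layer sizes).
import tensorflow as tf

def build_feature_extractor(seq_len=320000, d=128, num_songs=83):
    wave = tf.keras.Input(shape=(seq_len, 1))                # raw waveform x
    h, layer_outputs = wave, []
    for rate in (1, 2, 4, 8, 16, 32):                        # assumed dilation rates
        h = tf.keras.layers.Conv1D(d, kernel_size=3, dilation_rate=rate,
                                   padding="same", activation="relu")(h)
        layer_outputs.append(h)
    z = tf.keras.layers.Add()(layer_outputs)                 # z = sum_i z_i
    z = tf.keras.layers.AveragePooling1D(pool_size=8000)(z)  # T = l / 8000 time steps
    e = tf.keras.layers.LSTM(d)(z)                           # audio feature e in R^d
    logits = tf.keras.layers.Dense(num_songs)(e)             # song-label head for Eq. (1)
    return tf.keras.Model(wave, [e, logits])
```

Equation (1) then corresponds to a standard cross-entropy over the song-label logits.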
Text Generator and Discriminator
We introduce the proposed conditional text generation model in this section. We first discuss the training strategy and then move to the architecture details.
Training the text generator consists of two stages. First, we train the feature extractor E and the text generator G jointly in an end-to-end manner, using the negative log-likelihood word prediction as the loss function:
$L_{MLE}(x, s; \Theta_E, \Theta_G) = -\log P(s \mid x; \Theta_E, \Theta_G)$  (2)

with

$P(s \mid x) = \prod_{t=1}^{T_s} P(s_t \mid s_{<t}, x; \Theta_E, \Theta_G)$
where s indicates the golden text sequence with T_s tokens. We find that the generator trained by Equation (2) tends to generate general and low-quality descriptions which lack both music-specific information and textual coherence. Therefore, we then propose to fine-tune the generator G with a Generative Adversarial Network (GAN). We adopt a conditional GAN based framework to fine-tune the generator G. When applying GAN to sequence generation tasks, discriminating between real and fake samples is much easier than generating texts for the GAN system, resulting in an insufficient expression ability of the generator and mode collapse during inference. To address these issues, we propose an enhanced generator based on relational memory networks (Santoro et al., 2018). The basic idea of relational memory is to consider a fixed set of memory slots and allow for interactions between memory slots to implement the mechanism of self-attention (Vaswani et al., 2017). Intuitively, we utilize multiple memory slots and attention across them to increase the expressive power of the generator and its ability to generate longer sentences. The memory-based model also better accommodates the requirement of the conditional GAN to maintain the conditional information throughout the sentence generation process.
Specifically, we denote M_t as the memory slot at timestep t and v_t as the newly observed word embedding. The computation flow of updating the memory slots is formulated as follows, based on multi-head self-attention (Vaswani et al., 2017). We omit the subscripts of different attention heads and describe the computation of a single attention head:
$\tilde{M}_{t+1} = \mathrm{softmax}\left(\frac{Q_t K_t^{\top}}{\sqrt{d_k}}\right) V_t$  (3)

where $Q_t = M_t W_q$, $K_t = [M_t; v_t] W_k$, and $V_t = [M_t; v_t] W_v$ are the query, key, and value projections respectively. The updated memory and the output at timestep t are then computed as

$M_{t+1} = f_{\theta_1}(\tilde{M}_{t+1}, M_t)$  (4)

$o_t = f_{\theta_2}(\tilde{M}_{t+1}, M_t)$  (5)
where the two functions f_θ1 and f_θ2 are compositions of skip connections, multi-layer perceptrons, and gated operations. We denote the real and generated text descriptions as s_r and s_g respectively, which serve as the input to the discriminator and the evaluator.
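Equations (3)-(5) can be illustrated with a single-head NumPy sketch. The real model uses 2 heads with learned gating, so the residual blend and tanh read-out below are simplifying stand-ins for f_θ1 and f_θ2, and all sizes are toy values.

```python
# One single-head relational-memory update (Eqs. 3-5), illustrative only.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    ex = np.exp(x)
    return ex / ex.sum(axis=axis, keepdims=True)

def relational_memory_step(M, v, Wq, Wk, Wv, d_k):
    """M: (slots, d) memory; v: (1, d) newly observed word embedding."""
    Mv = np.concatenate([M, v], axis=0)              # [M_t; v_t]
    Q, K, V = M @ Wq, Mv @ Wk, Mv @ Wv               # projections of Eq. (3)
    M_tilde = softmax(Q @ K.T / np.sqrt(d_k)) @ V    # attended memory, Eq. (3)
    M_next = 0.5 * (M_tilde + M)                     # stand-in for f_theta1, Eq. (4)
    o = np.tanh(M_tilde + M).mean(axis=0)            # stand-in for f_theta2, Eq. (5)
    return M_next, o

# Toy usage: 4 memory slots of width 8
rng = np.random.default_rng(0)
d = 8
M = rng.normal(size=(4, d))
v = rng.normal(size=(1, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
M, o = relational_memory_step(M, v, Wq, Wk, Wv, d_k=d)
```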
As for the discriminator, we adopt a CNN based model, which is a common choice in related works (Kim, 2014; Yu et al., 2017), where the input sentence s is represented by an embedding matrix. To encourage the discriminator to provide more diverse and comprehensive guidance for the generator, we adopt the multi-representation trick (Weili Nie and Patel, 2019), which employs multiple embedded representations for each sentence, with each representation independently passing through the discriminator to obtain an individual score. The average of these individual scores serves as the guiding information to update the generator. The multiple embedded representations are expected to capture diverse information about the input sentence from different aspects.
Finally, the loss functions of the discriminator and generator can be written as:
$L_D(s_r, s_g; \Theta_D) = \mathbb{E}\big[\log(1 - D(s_r)) + \log D(s_g)\big]$  (6)

$L_G^1(s_g; \Theta_G) = \mathbb{E}\big[\log(1 - D(s_g))\big]$  (7)
where Θ D and Θ G are the parameters of the discriminator and the generator respectively.
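As a sanity check of the sign conventions, the two losses can be estimated over a batch as below, where D(·) returns the probability that a sentence is real; the batch values are toy numbers.

```python
# Monte-Carlo estimates of the adversarial losses in Eqs. (6)-(7).
import numpy as np

def d_loss(d_real, d_fake):
    # Eq. (6): minimized by the discriminator, pushing D(s_r) -> 1, D(s_g) -> 0
    return np.mean(np.log(1.0 - d_real) + np.log(d_fake))

def g_loss_adv(d_fake):
    # Eq. (7): minimized by the generator, pushing D(s_g) -> 1
    return np.mean(np.log(1.0 - d_fake))

d_real = np.array([0.8, 0.9, 0.7])   # toy discriminator outputs on real comments
d_fake = np.array([0.2, 0.1, 0.3])   # toy outputs on generated comments
print(d_loss(d_real, d_fake), g_loss_adv(d_fake))
```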
Topic Relevance Evaluator
In our preliminary studies, we find that a single discriminator encourages the generator to generate some natural but general comments such as "the song is excellent". We attribute this problem to the difficulty of learning the topic of comments. Therefore, we need to evaluate how well a generated comment is thematically related to the given music, in addition to its authenticity. Unfortunately, in our task, assessing the relevance and the realness of text are two independent sub-problems, which makes it hard to evaluate both authenticity and thematicity simultaneously with a single model. Furthermore, we empirically find that balancing the two loss functions impairs the training of both, which motivates us to adopt an independent CNN model as the evaluator V to assess the thematicity of generated comments.

Fig. 2 An illustration of the structure of the evaluator V. The text feature vector s^(i) is multiplied by the audio feature vector e^(i) to get their similarity. Linear layers are used for output. We adopt the multi-representation trick, which results in r distinct embedding matrices for each sentence. In addition, the structure of the evaluator is similar to the discriminator, except that it exploits additional information from the audios.
The functionality of the evaluator V is to take a pair of audio and text as input, and classify whether the pair is matched. As shown in Figure 2, the evaluator takes both the text description s and the audio feature e as input. s is encoded by a CNN based model equipped with the multi-representation trick, which has the same architecture as the discriminator. Then a multi-layer perceptron (MLP) is utilized to measure how well the text s matches the audio e:

$V(s, e) = \sigma(\mathrm{MLP}(\mathrm{CNN}(s), e))$  (8)
where V(s, e) is the output of the evaluator that measures how well the text s matches the audio e, and σ(·) is the sigmoid function. The evaluator is trained to correctly measure the topic relevance between s and e by negative sampling. Specifically, given the audio vector e_p, we sample a negative audio vector e_n, which is generated by first uniformly sampling another music piece from the dataset and then feeding it into the WaveNet encoder. Therefore, the topic-related losses of the evaluator and generator can be written as:
$L_V(s_r, e_p, e_n; \Theta_V) = \mathbb{E}\big[\log(1 - V(s_r, e_p)) + \log V(s_r, e_n)\big]$  (9)

$L_G^2(s_g, e_p; \Theta_G) = \mathbb{E}\big[\log(1 - V(s_g, e_p))\big]$  (10)
where Θ V indicates the parameters of the evaluator. The loss of the evaluator is only calculated on the real data, which makes it more objective and stable to alleviate the problem of hard training for GAN. The overall loss for the generator is composed of the losses from the discriminator and the evaluator:
$L_G(s_g, e_p; \Theta_G) = L_G^1(s_g; \Theta_G) + L_G^2(s_g, e_p; \Theta_G)$  (11)
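A compact sketch of how the evaluator-side losses in Equations (8)-(11) fit together is given below. The linear scorer is a stand-in for the CNN-plus-MLP evaluator, and all shapes and names are illustrative.

```python
# Sketch of the topic-evaluator losses; negative audio e_n comes from a
# uniformly sampled other song, as described in the text.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def evaluator_score(s_feat, e, W, b):
    # toy V(s, e): a linear stand-in for sigma(MLP(CNN(s), e)) of Eq. (8)
    return sigmoid(np.concatenate([s_feat, e]) @ W + b)

def v_loss(v_real_pos, v_real_neg):
    # Eq. (9): evaluator loss on real comments with matched / mismatched audio
    return np.mean(np.log(1.0 - v_real_pos) + np.log(v_real_neg))

def g_loss_topic(v_fake_pos):
    # Eq. (10): topic loss on generated comments paired with the matching audio
    return np.mean(np.log(1.0 - v_fake_pos))

def g_loss_total(d_fake, v_fake_pos):
    # Eq. (11): L_G = L_G^1 + L_G^2
    return np.mean(np.log(1.0 - d_fake)) + g_loss_topic(v_fake_pos)
```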
Training and Inference
Training our model consists of three stages, i.e., pre-training of the audio feature extractor, joint training of the text generator and the audio feature extractor with the MLE loss, and GAN fine-tuning. More specifically, we first pre-train the audio feature extractor E using the loss function L_E(x, o; Θ_E) in Equation (1). Then we train the feature extractor E and the text generator G using the loss function L_MLE(x, s; Θ_E, Θ_G) in Equation (2). Finally, we fine-tune the text generator G in an adversarial way with the discriminator D and topic evaluator V as introduced above:
$L_{GAN}(s_g, s_r, e_p, e_n; \Theta_E, \Theta_D, \Theta_G) = L_G(s_g, e_p; \Theta_G) + L_D(s_r, s_g; \Theta_D) + L_V(s_r, e_p, e_n; \Theta_V)$  (12)
To make our framework differentiable in training, we adopt the Gumbel-Softmax technique (Kusner and Hernández-Lobato, 2016) to reparameterize the sampling process into a continuous space. Specifically, let U be a categorical distribution with probabilities [π_1, π_2, ..., π_c], where c is the number of classes; samples from U are approximated as:
$u = \mathrm{softmax}(\beta(g_i + \log \pi_i))$  (13)
where g_i ∼ Gumbel(0, 1) and β > 0 is the inverse temperature. In our setting, let U be the distribution of the generator output logits {o_t}, t = 1, ..., T; then we can get a differentiable approximation of the generated sentence s_g. In general, larger values of β encourage more exploration for better sample diversity, while smaller values of β encourage more exploitation for better sample quality. We therefore increase the inverse temperature β via an exponential policy in the training process: $\beta_n = \beta_{max}^{\,n/N}$, where β_max denotes the maximum value of β, and n and N are the current iteration index and the total number of iterations respectively. The increasing inverse temperature strikes a balance between exploration and exploitation.
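The relaxation of Equation (13) together with the exponential annealing schedule can be sketched as follows; the vocabulary size, batch size, and iteration counts are toy values.

```python
# Gumbel-Softmax relaxation (Eq. 13) with beta_n = beta_max ** (n / N).
import numpy as np

def gumbel_softmax(logits, beta, rng):
    g = rng.gumbel(size=logits.shape)        # g_i ~ Gumbel(0, 1)
    x = beta * (g + logits)                  # unnormalized logits stand in for log pi_i
    x = x - x.max(axis=-1, keepdims=True)    # numerically stable softmax
    ex = np.exp(x)
    return ex / ex.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 5))             # toy batch of 2, vocabulary of 5
beta_max, N = 100.0, 1000
for n in (1, 500, 1000):
    beta_n = beta_max ** (n / N)             # exponential annealing policy
    u = gumbel_softmax(logits, beta_n, rng)  # approaches one-hot as beta_n grows
```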
Experiments
Dataset
In this paper, we are interested in the task of generating thematic comments from music. To investigate the performance of our proposed framework on this task, a dataset of music-comment pairs is required. Due to the scarcity of music datasets with corresponding reliable text descriptions, we exploit crowdsourced music comments to construct a new dataset of matched music-comment pairs. Specifically, we collect 83 music audios and the corresponding 1.2M comments from NetEase 1, one of the largest and most popular public music streaming platforms in China. On this platform, each comment is associated with a feature called "votes", which reflects how many users agree with this comment. We consider this feature as the confidence of each comment, i.e., comments with higher votes can better represent the content and theme of the music. Therefore, we manually increase the proportion of the comments with high confidence (more than 10 votes) by duplicating them 10 times in the dataset to enhance the contents and themes learned by the trained model.
As stated in Section 3, to ensure that the generated text is entirely based on the melody, we only consider instrumental music or accompanying music in our dataset. The music audios are between 3 and 6 minutes long, in mp3 format. We split each audio into clips with a duration of around 20 seconds and sample them at a rate of 16k samples per second. The maximum number of comments for a single music piece in the dataset is around 25.4k, and the minimum is 5.6k. We filter the comments by length such that the maximum length is 50 and the minimum length is 10, resulting in an average length of 19.4 in the dataset. To construct the training pairs, for every audio clip we randomly select one comment of this audio, which forms a pair of audio clip and comment as a sample in the dataset. We randomly select 80% of the samples as the training set, 10% as the validation set, and the rest as the test set. We implement our model in TensorFlow, and plan to release the code and dataset once the paper is made public.
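The construction steps described above can be summarized in the following sketch; the comment fields and the `audio_clips` mapping from song id to clip list are hypothetical names, not the released data format.

```python
# Sketch of dataset construction: length filtering, up-weighting of
# high-vote comments, audio-comment pairing, and the 80/10/10 split.
import random

def build_pairs(comments, audio_clips, seed=0):
    """comments: list of {"song": id, "text": str, "votes": int} (assumed fields);
    audio_clips: dict mapping song id to its list of ~20 s clips."""
    rng = random.Random(seed)
    kept = [c for c in comments if 10 <= len(c["text"]) <= 50]   # length filtering
    boosted = []
    for c in kept:                                               # duplicate high-vote
        boosted.extend([c] * (10 if c["votes"] > 10 else 1))     # comments 10 times
    by_song = {}
    for c in boosted:
        by_song.setdefault(c["song"], []).append(c["text"])
    pairs = [(clip, rng.choice(by_song[song]))                   # one comment per clip
             for song, clips in audio_clips.items() if song in by_song
             for clip in clips]
    rng.shuffle(pairs)
    n = len(pairs)
    return pairs[:int(0.8 * n)], pairs[int(0.8 * n):int(0.9 * n)], pairs[int(0.9 * n):]
```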
Model Settings
To train the model, we use a vocabulary with 6.7k characters in Chinese, while all English words and numbers are replaced by specific tokens. The number of attention heads in the relational memory cell is set to 2, and the dimensions of the attention heads, audio feature vectors, and word embeddings are all set to 128.
The model is trained with the Adam optimizer (Kingma and Ba, 2014). The batch size is set to 16, 512, and 64 for the audio feature extractor training, MLE training, and GAN fine-tuning respectively. The learning rates for the three parts are 1e-3, 1e-2, and 1e-4 respectively, and no decay is adopted. The word embedding matrix is initialized by Word2Vec (Mikolov et al., 2013).
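For reference, the stated settings can be collected into a single configuration sketch; the dictionary layout is ours, the values are from the paper.

```python
TRAIN_CONFIG = {
    "optimizer": "adam",
    "batch_size": {"extractor": 16, "mle": 512, "gan": 64},
    "learning_rate": {"extractor": 1e-3, "mle": 1e-2, "gan": 1e-4},
    "lr_decay": None,        # no decay is adopted
    "vocab_size": 6700,      # Chinese characters; English words/numbers as special tokens
    "attention_heads": 2,
    "hidden_dim": 128,       # attention heads, audio features, word embeddings
    "embedding_init": "word2vec",
}
```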
Training Details
Adaptive Length Adjustment
We find that current GAN-based models have a strong tendency to generate shorter sentences. One reason might be that a shorter sentence can fool the discriminator more easily, since it contains less information and involves a simpler structure. Another reason might be that most current GANs for text generation only use BLEU as a metric, where shorter sentences have an advantage, so the architecture is fundamentally inclined to generate shorter sentences. We propose a simple yet effective technique called adaptive length adjustment to deal with this problem: an adaptively adjusted parameter multiplies the probability of EOS during training, which keeps the expected lengths of the generated texts consistent with the texts in the training set.
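A possible rendition of this technique is sketched below. The paper only states that an adaptive parameter multiplies the EOS probability; the particular multiplicative update used here to track the target length is our own assumption.

```python
# Adaptive length adjustment: scale the EOS probability so that the expected
# generated length tracks the training-set average (update rule assumed).
import numpy as np

class EosAdjuster:
    def __init__(self, target_len, lr=0.05):
        self.alpha = 1.0              # adaptive multiplier on the EOS probability
        self.target_len = target_len
        self.lr = lr

    def adjust(self, probs, eos_id):
        probs = probs.copy()
        probs[eos_id] *= self.alpha   # scale EOS probability
        return probs / probs.sum()    # renormalize

    def update(self, avg_gen_len):
        # generated texts too long -> raise alpha (EOS more likely); too short -> lower it
        self.alpha *= np.exp(self.lr * (avg_gen_len - self.target_len) / self.target_len)
```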
Label Smoothing
We adopt label smoothing (Szegedy et al., 2016) to improve the stability of GAN training. We set the label of a real sentence to be a uniform random number between 0.9 and 1, and the label of a fake generated sentence between 0 and 0.1. It can make the discriminator converge more slowly and the training procedure more stable. This technique is also applied to the evaluator to make the losses corresponding to different music pieces more balanced and reduce the variance of the gradients.
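This smoothing scheme amounts to drawing targets from narrow uniform ranges, as in the sketch below:

```python
# Uniform-random label smoothing: real targets in [0.9, 1], fake in [0, 0.1].
import numpy as np

def smoothed_labels(batch_size, real, rng):
    lo, hi = (0.9, 1.0) if real else (0.0, 0.1)
    return rng.uniform(lo, hi, size=batch_size)

rng = np.random.default_rng(0)
real_targets = smoothed_labels(4, real=True, rng=rng)    # targets for real sentences
fake_targets = smoothed_labels(4, real=False, rng=rng)   # targets for generated ones
```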
Baselines
We consider the following models as our baselines.
- MLE: The basic sequence-to-sequence model optimized with the negative log-likelihood loss function in Equation (2). We compare with it to show the effect of the proposed GAN fine-tuning.
- RelRNN: The model with the same architecture as the relational memory based text generator, but without GAN fine-tuning and the audio feature. We compare with it to show the effect of the audio information.
We denote our model as Music-to-Comment Generator (MCG). We consider several variants of the proposed model to examine the influence of the inverse temperature by setting β max in Equation (13) to different numbers. We also remove the Evaluator to construct another variant of the model to illustrate the effectiveness of the Evaluator.
Evaluation Metrics
We conduct both automatic and human evaluation to ensure the objectivity and scalability. We propose evaluation metrics mainly based on the following aspects of the generated comments:
- Fluency: Does the comment read fluently and smoothly?
- Coherence: Is the comment coherent across its clauses?
- Meaning: Does the comment have a reasonable meaning?
- Consistency: Do the topics of the comment match the given music?
Automatic Evaluation
We consider conventional BLEU (Papineni et al., 2002) metrics including BLEU-{3,4,5} and their geometric mean BLEU, and propose two additional metrics relevant to the Evaluator V . We refer to the two metrics as V-Score and H-Score. The BLEU scores are used to evaluate the fluency and coherence. As for the proposed auxiliary metrics, V-Score refers to the difference between the average probability of the matched sentence-audio pairs and 0.5, where the probability is calculated by the Evaluator according to Equation (8):
$\text{V-Score} = \frac{1}{N}\sum_{i=1}^{N} V(s_i, e) - 0.5$  (14)
Here the Evaluator is trained independently on the same training set according to Equation (9), and s_i indicates the i-th of the N comments generated from the audio feature e. This metric measures the amount of information contained in the sentence that matches the music, as the well-trained evaluator gives an output larger than 0.5 when it predicts that the text and the audio are matched. Therefore, V-Score acts as a confidence score used to evaluate the meaning and consistency of the sentence. H-Score is the harmonic mean of BLEU and V-Score, which measures the general quality of the generated texts from both aspects.
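Both metrics are straightforward to compute once the independently trained evaluator is available; the sketch below assumes H-Score is the standard harmonic mean of BLEU and V-Score, which is consistent with the numbers reported in Table 1.

```python
# V-Score (Eq. 14) and H-Score over a set of generated comments.
import numpy as np

def v_score(v_outputs):
    # mean evaluator output over the N generated comments, minus 0.5
    return float(np.mean(v_outputs) - 0.5)

def h_score(bleu, v):
    # harmonic mean of BLEU and V-Score; e.g. 2*0.315*0.396/(0.315+0.396) = 0.351,
    # matching the MCG (beta_max=100) row of Table 1
    return 2.0 * bleu * v / (bleu + v)

print(h_score(0.315, v_score(np.array([0.92, 0.88, 0.89]))))
```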
Human Evaluation
We invite 8 volunteers who are daily users of music streaming applications and are sufficiently familiar with the test music pieces to conduct the human evaluation. We sample 5 songs from our test set and let each model generate 2 comments. Volunteers are asked to rate these comments with a score from 1 to 10 on the 4 aspects mentioned above. The scores averaged over the generated comments are taken as the final score of each model.
Results and Discussion
The results of the automatic evaluation are summarized in Table 1. When setting β_max = 1, the proposed model achieves much higher scores than the Golden baseline in terms of the n-gram based BLEU scores. This is not surprising, as the n-gram based metrics primarily focus on exact matching of words, while overlooking other important qualities including themes and diversity.
In Table 1, one can find that a smaller β_max always improves the results of our model on n-gram based metrics, as the model becomes more inclined to repeatedly generate high-frequency words. These results also clearly verify that n-gram based metrics are insufficient for evaluating the overall quality of the generated sentences. By contrast, the proposed metrics V-Score and H-Score rate the Golden baseline higher than the other models, and these two metrics exhibit more consistency with the human evaluation results in Table 2. We also provide a statistical analysis in Table 3, where we compute the Pearson correlation coefficient between the considered automatic evaluation metrics and the average human evaluation scores, showing that the proposed two scores have much higher correlations with human evaluation results than the BLEU scores. In terms of these metrics, our model outperforms all the baselines by a large margin, showing the effectiveness of the proposed GAN fine-tuning framework and the necessity of the audio feature, and demonstrating that the comments generated by our model can better represent the thematic and topic information of music. As for the human evaluation, from the results summarized in Table 2 we can observe that the proposed model outperforms all the baselines on each aspect. It is worth noting that our model is especially superior to the other baselines in terms of "Meaning" and "Consistency", which measure how well the meaningful content and thematic information of the music are preserved in the generated texts, illustrating the effectiveness of the topic evaluator. In addition, the comparison between the model with and without the evaluator also verifies the importance of the evaluator.

Fig. 3 Case studies of the proposed model, including five samples of comments generated by MCG (β_max=100) under different music conditions:

Song: The theme song of the movie Monkey King (悟空).
MCG (100): It is also unintentional to deceive the world. I really want to have a crush on you. Apologize to become the Monkey King. (隐瞒尘世也無意。真想跟你暗恋。挽歉成为美猴王。)

Song: The theme song of the movie The Myth (美丽的神话).
MCG (100): Hu Ge is very handsome. At that time, I fell in love with him in the first ten episodes of Qin Shi. (胡歌挺帅的,当时爱上的是秦时的前十集了。)

Song: Faded by Alan Walker.
MCG (100): We even don't know why. The tears fell down at that moment. Then I knew how good the song was. (我们都不知道为什么。那那一刻泪崩了。我就知道这曲多赞。)

Song: The theme song of the Legend of the Paladin (此生不换).
MCG (100): Finally, I was accompanied by Zi Xuan, and still was very bad. Only Zi Xuan was gone. (最后叫紫萱陪着我,又依依是很烂,只有紫萱走了。)

Song: The theme song of the TV series Ashes of Love (不染).
MCG (100): When Mao Buyi hears these two sentences, he tears up. Come on, because Kuang Lu loves these words. (毛不易听到这两句,流泪。加油,因为邝露热爱这词话。)
Case Studies
To illustrate the quality of the generated comments, we select five representative examples of comments generated by MCG (β_max=100) from music with different backgrounds (Figure 3). We can observe that the generated comments match very well with the given music in terms of both content and style. For example, the third song is the instrumental version of the hit song "Faded", which is known to be emotional and inspiring, and the corresponding generated comment well reflects how the audience feel when they first hear the song. Moreover, the generated comments can sometimes provide fine-grained information that may be a good explanation of the corresponding music. Take the second case as an example: Hu Ge is the main actor of the movie The Myth, while Qin Shi is a TV series in which Hu Ge is the leading actor. The results demonstrate that our model can capture not only the content background but also the thematic information of the music.
Conclusion
In this paper, we investigate a new task of text generation from music, and propose a sequence-to-sequence framework to solve it. Given a music piece, we first extract audio features as the input of the text generator, and propose a two-stage training paradigm to generate fluent and thematic comments. The text generator is first trained with the traditional MLE loss, and then fine-tuned in an adversarial manner with a discriminator and a topic evaluator. We employ both human and automatic evaluation metrics in the experiments, and the results show that the proposed MCG model can generate comments that capture the meaningful content and thematic information of the music from various aspects. In the future, we plan to extend our work in two directions. First, it would be interesting to encourage the model to generate structured text descriptions from music, such as lyrics or poems. Second, we plan to apply the idea of the topic relevance evaluator to other multi-modal tasks to facilitate the utilization of high-level semantic information.
Table 1 Automatic evaluation of all the compared models w.r.t. BLEU scores and the two proposed scores. "Golden" indicates the performance of real comments. For all scores, the higher the better.

Method              | BLEU-3 | BLEU-4 | BLEU-5 | BLEU  | V-Score | H-Score
Golden              | 0.473  | 0.334  | 0.229  | 0.330 | 0.423   | 0.371
MLE                 | 0.414  | 0.259  | 0.166  | 0.261 | 0.271   | 0.265
RelRNN              | 0.392  | 0.247  | 0.162  | 0.250 | 0.014   | 0.026
MCG (w/o evaluator) | 0.459  | 0.304  | 0.205  | 0.305 | 0.265   | 0.284
MCG (βmax=1)        | 0.646  | 0.480  | 0.485  | 0.209 | 0.091   | 0.153
MCG (βmax=10)       | 0.515  | 0.344  | 0.248  | 0.352 | 0.210   | 0.263
MCG (βmax=100)      | 0.481  | 0.309  | 0.211  | 0.315 | 0.396   | 0.351

Table 2 Human evaluation results of all compared models. "Golden" indicates the performance of real comments. For all scores, the higher the better.

Method              | Fluency | Coherence | Meaning | Consistency | Average
Golden              | 8.44    | 8.01      | 8.01    | 7.06        | 7.88
MLE                 | 5.13    | 4.81      | 5.10    | 5.00        | 5.01
RelRNN              | 4.20    | 3.99      | 4.90    | 4.24        | 4.33
MCG (w/o evaluator) | 5.40    | 4.89      | 5.10    | 4.91        | 5.08
MCG (βmax=1)        | 5.21    | 4.90      | 5.06    | 4.41        | 4.90
MCG (βmax=10)       | 4.79    | 4.64      | 4.91    | 4.40        | 4.69
MCG (βmax=100)      | 5.39    | 4.97      | 5.49    | 5.36        | 5.30

Table 3 The Pearson correlation coefficient between the average human evaluation scores and automatic evaluation metrics, including the traditional n-gram based metrics and the proposed metrics.

Metrics | BLEU | BLEU-4 | V-Score | H-Score
Human   | 0.21 | 0.34   | 0.77    | 0.80
1 https://music.163.com/
References

Banerjee S, Lavie A (2005) METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In: Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pp 65-72
Briot JP, Hadjeres G, Pachet FD (2017) Deep learning techniques for music generation: a survey. arXiv preprint arXiv:1709.01620
Dai B, Fidler S, Urtasun R, Lin D (2017) Towards diverse and natural image descriptions via a conditional GAN. In: Proceedings of the IEEE International Conference on Computer Vision, pp 2970-2979
Fedus W, Goodfellow I, Dai AM (2018) MaskGAN: Better text generation via filling in the ______. arXiv preprint arXiv:1801.07736
Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y (2014) Generative adversarial nets. In: Advances in Neural Information Processing Systems, pp 2672-2680
Guo J, Lu S, Cai H, Zhang W, Yu Y, Wang J (2018) Long text generation via adversarial training with leaked information. In: Thirty-Second AAAI Conference on Artificial Intelligence
Harmon S (2017) Narrative-inspired generation of ambient music. In: ICCC, pp 136-142
Huang CZA, Vaswani A, Uszkoreit J, Shazeer N, Simon I, Hawthorne C, Dai AM, Hoffman MD, Dinculescu M, Eck D (2018) Music transformer: Generating music with long-term structure. arXiv preprint arXiv:1809.04281
Jia Y, Zhang Y, Weiss RJ, Wang Q, Shen J, Ren F, Nguyen P, Pang R, Moreno IL, Wu Y, et al. (2018) Transfer learning from speaker verification to multispeaker text-to-speech synthesis. In: Advances in Neural Information Processing Systems, pp 4480-4490
Kalchbrenner N, Elsen E, Simonyan K, Noury S, Casagrande N, Lockhart E, Stimberg F, van den Oord A, Dieleman S, Kavukcuoglu K (2018) Efficient neural audio synthesis. arXiv preprint arXiv:1802.08435
Kim Y (2014) Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882
Kingma DP, Ba J (2014) Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980
Krause J, Johnson J, Krishna R, Fei-Fei L (2017) A hierarchical approach for generating descriptive image paragraphs. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 317-325
Kusner MJ, Hernández-Lobato JM (2016) GANs for sequences of discrete elements with the Gumbel-softmax distribution. arXiv preprint arXiv:1611.04051
Lin CY (2004) ROUGE: A package for automatic evaluation of summaries. In: Text Summarization Branches Out, pp 74-81
Malmi E, Takala P, Toivonen H, Raiko T, Gionis A (2016) DopeLearning: A computational approach to rap lyrics generation. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp 195-204
Mikolov T, Sutskever I, Chen K, Corrado GS, Dean J (2013) Distributed representations of words and phrases and their compositionality. In: Advances in Neural Information Processing Systems, pp 3111-3119
van den Oord A, Dieleman S, Zen H, Simonyan K, Vinyals O, Graves A, Kalchbrenner N, Senior A, Kavukcuoglu K (2016) WaveNet: A generative model for raw audio. arXiv preprint arXiv:1609.03499
Papineni K, Roukos S, Ward T, Zhu WJ (2002) BLEU: A method for automatic evaluation of machine translation. In: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pp 311-318
Pudaruth S, Amourdon S, Anseline J (2014) Automated generation of song lyrics using CFGs. In: 2014 Seventh International Conference on Contemporary Computing (IC3), IEEE, pp 613-616
Santoro A, Faulkner R, Raposo D, Rae J, Chrzanowski M, Weber T, Wierstra D, Vinyals O, Pascanu R, Lillicrap T (2018) Relational recurrent neural networks. In: Advances in Neural Information Processing Systems, pp 7299-7310
Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z (2016) Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 2818-2826
Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser L, Polosukhin I (2017) Attention is all you need. In: Advances in Neural Information Processing Systems, pp 5998-6008
Vedantam R, Lawrence Zitnick C, Parikh D (2015) CIDEr: Consensus-based image description evaluation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 4566-4575
Wang Y, Skerry-Ryan R, Stanton D, Wu Y, Weiss RJ, Jaitly N, Yang Z, Xiao Y, Chen Z, Bengio S, et al. (2017) Tacotron: Towards end-to-end speech synthesis. arXiv preprint arXiv:1703.10135
Nie W, Narodytska N, Patel A (2019) RelGAN: Relational generative adversarial networks for text generation. In: ICLR
Xu K, Ba J, Kiros R, Cho K, Courville A, Salakhudinov R, Zemel R, Bengio Y (2015) Show, attend and tell: Neural image caption generation with visual attention. In: International Conference on Machine Learning, pp 2048-2057
Yang Z, Yuan Y, Wu Y, Cohen WW, Salakhutdinov RR (2016) Review networks for caption generation. In: Advances in Neural Information Processing Systems, pp 2361-2369
Yu L, Zhang W, Wang J, Yu Y (2017) SeqGAN: Sequence generative adversarial nets with policy gradient. In: Thirty-First AAAI Conference on Artificial Intelligence
Zhang Y, Gan Z, Fan K, Chen Z, Henao R, Shen D, Carin L (2017) Adversarial feature matching for text generation. In: Proceedings of the 34th International Conference on Machine Learning, JMLR.org, pp 4006-4015
Zhu H, Liu Q, Yuan NJ, Qin C, Li J, Zhang K, Zhou G, Wei F, Xu Y, Chen E (2018) XiaoIce Band: A melody and arrangement generation framework for pop music. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, ACM, pp 2837-2846
arXiv:1307.4986v1 (https://arxiv.org/pdf/1307.4986v1.pdf)
On the Necessity of Mixed Models: Dynamical Frustrations in the Mind
Diego Gabriel Krivochen diegokrivochen@hotmail.com
Keywords: dynamical frustration; Markovian models; quantum human computer; Turing-computation
In the present work we will present and analyze some basic processes at the local and global level in linguistic derivations that seem to go beyond the limits of Markovian or Turing-like computation and require, in our opinion, a quantum processor. We will first briefly present the working hypothesis and then focus on the empirical domain. At the same time, we will argue that a model appealing to only one kind of computation (be it quantum or not) is necessarily insufficient, and thus both linear and non-linear formal models are to be invoked in order to pursue a fuller understanding of mental computations within a unified framework.
Introduction: A brief history of "quantum mind" proposals
With respect to the scientific developments that led to the different versions of quantum theories of mind, Stapp (2009: 4) claims that "This [quantum] model of the mind/brain system is no isolated theoretical development. It is the rational outcome of a historical process that has occupied most of this century, and that links a series of revolutions in psychology and physics."
Even if the historical antecedents that Stapp mentions go back as far as the 19th century, our brief review will start in more recent times. Already in the '40s it was clear to some that statistical mechanics and linear models could not account for the stability and (chaotic) ordering of natural systems (e.g., meteorology, which was the original field of Lorenz's studies), even within biology (Vitiello, 2001: 69; Schrödinger, 1944). The Cognitive Revolution of the '50s brought along strong support for computational theories of the mind, and the formalism that outmatched the others was, by and large, Alan Turing's: to this day, there are Turing models of the mind (see, for instance, Watumull, 2012). However, the quantum revolution that had taken place in the early decades of the 20th century had influenced part of the field of cognitive studies, and the idea that quantum effects are not just oddities at the Planck scale (ultimately, an idea stemming from the EPR paradox and Einstein's research on relativity) began to grow and develop. In this scenario, cooperation between physicists and brain scientists (cognitivists and neurologists) started around 1960, with the possibility of conceiving the brain as a many-body system: there are subsystems, and their repeated complex interactions create quantum correlations. This, incidentally, implied abandoning materialistic dualism as a philosophical stance: the clear-cut separation between mind and matter was not so clear-cut anymore, despite the reductio ad absurdum arguments Gilbert Ryle had given in 1949 against unification frameworks (see Dennett, 1991 for discussion), partially based on the Cartesian idea that nature is to be divided into two non-related (and non-unifiable) parts: mind and matter¹. Local reductionism and determinism, characteristics of classical physics, were now questioned, particularly after the first observations of hypersensitivity to initial conditions (consider that the first 'chaotic' observations by Lorenz took place around 1963) and further developments in complex systems.

Going a step further from the many-body problem mentioned in the previous paragraph, inserting language (and the mind as a whole) in the natural world, as a physical system just like any other, allows us to dispense with the undesirable consequences of looking at it as a closed system (that is, a system which is insensitive to external factors): let us imagine that we have N (where N is a natural number) strings, using Chomsky's own terminology, and n automata (never mind whether they are alive or not) making use of those strings. If interactions are binary (that is, only two automata are interacting at any given time T), then the "cycle" it would take for a string to re-appear, that is, the total amount of possible states of the system of interactions of n automata "saying" N strings, is given by the expression 2^N; for instance, N = 3 strings yield 2^3 = 8 possible states.

Given that outcomes in a quantum system are binary³, we are always talking about pairs of measurements (Stapp, 2009: 5), which relates to an apparently essential property of phrase markers and constituents in general, at least at the interface level of semantic structure (C-I). Interestingly enough, the predicted interferences between experimental results on measurements are macroscopic phenomena, not Planck-scale effects; and Vitiello reports neurophysiological evidence of long-distance neurological action, which cannot be explained by means of single-neuron models. Memory (information retrieval from Long Term Memory, LTM) seems to be an obvious example, and the evidence Pylyshyn (2007) presents in favor of distributed computation of Prepositional Phrases (PPs) in the temporal and parietal lobes (in localist terms, figure-ground dynamics; see also Talmy, 2000) also seems an interesting path to take. The point made by Vitiello, echoing Freeman (2000), is that, even if it cannot be claimed that all neural connections and brain activity respond to quantum modeling, there are processes that just cannot be modeled in a traditional framework. In recent years, not only studies in human neurophysiology but also AI (in a move that was somehow anticipated by Penrose, 1997) have attempted to generate a quantum theory of the mind (some, more inclined to so-called "consciousness"), maintaining the computer analogy. This, needless to say, required a deep revision of the fundamental assumptions of AI (unfortunately, to the best of our knowledge, there has been no such revision in computational linguistics, which remains strongly statistical and primarily descriptive) when the first advances in quantum computers saw the light, not too long ago. True, quantum mechanics is a statistical theory, but in a whole different sense: prior to observation / measurement, a particle's momentum (for example) is to be defined as a probability, not a certain datum.
Moreover, the particle itself is not a little ball of non-divisible matter, but more likely a complex unit itself, a product of the vibration frequency of 1-D strings at the Planck scale (1×10⁻³⁵ m), if (some version of) string theory is on the right track (see Greene, 1999 for discussion at an introductory level). This complexity in interactions gives rise to systems whose behavior cannot, foreseeably, be fully accounted for by classical (i.e., Newtonian) mechanics. The mind, it is argued by some (including us), is one of those systems. What is more, some mental systems (as we will argue, language among them), in the sense of symbolic structures generated by neurological processes, display macro (i.e., observable) quantum properties of the kind mentioned earlier. This thesis, which is sometimes called the "quantum human computer" hypothesis (QHC), is crucially independent of the narrower thesis that language itself is a chaotic system, which we have also put forth in previous works (Krivochen, 2013) in connection to the QHC. It is essential to point out that the two theses are independent, and it is possible to adhere to one without necessarily adhering to the other. For example, Uriagereka's (2012) CLASH model, based on the notion of geometrical frustration⁴ (see Binder, 2008 for details), is compatible with the second thesis (the chaos thesis), but major changes would have to be performed in the theoretical apparatus if the CLASH system is to be implemented in a quantum mind⁵. For the purposes of the present argumentation, and following the line of Krivochen (2011a, b, 2012a, b, c, 2013), we will simply characterize the "quantum human computer" as follows:
1)
a. It is a computational system, which builds on the assumption that mental processes are derivational
b. It builds on the assumption that derivations create representations that are evaluated by interpretative systems, which interface with the generator (GEN) algorithm
c. It allows any object O of arbitrary complexity to comprise, before interpretation (i.e., transfer to the interpretative systems, whichever they are), n > 1 states at once; n collapses to one of the possible outcomes at the interpretative levels, not before
d. It is blind to the characteristics of the manipulated objects

3 This, in traditional quantum physics, derives from the so-called "wave-particle duality". We will see that this is not always the case, as we will work with elements that present more than two possible outcomes.
The aforementioned assumptions are related (even if in a non-necessary way) to a proposal about the architecture of the cognitive system underlying language production and comprehension, and the mathematics necessary to model it. The architecture we assume is the following:

4 A geometrical frustration presents global and local tendencies which are mutually contrary. Binder (2008: 322) illustrates the situation with a Lorenz attractor, whereas in Uriagereka's model (and our own) global tendencies can be exemplified with semantic information (the CS-LF arrow in figure (2)), while local tendencies arise from a Multiple Spell-Out model and involve the materialization of locally determined chunks of structure (the arrows leading periodically to PF).

5 The adaptations that traditional models would have to undergo if the QHC hypothesis turns out correct are a fascinating matter in themselves. Consider, for example, the following quotation from Stapp (2009: 18): "The fact that, for example, a certain pointer appears to any community of communicating observers to have swung only one way, or only the other way, not both ways at once, is understood in terms of the idea that the universe splits, at the macroscopic level, into various non-communicating branches" (emphasis in the original). It is obvious how the idea of non-communicating branches (i.e., not related by any dominance / sisterhood relation) impacts on phrase structure, particularly regarding the displacement property of human language. See Krivochen (2013) for discussion, but the matter is far from being solved.
2) [Architecture diagram: a conceptual structure CS linked globally to LF, with arrows leading periodically to PF via Multiple Spell-Out]
In our terms, a derivation does not start with a Numeration (a set of elements with numerical subindexes indicating how many times they will be used in a derivation, see Chomsky, 1995), but with a pre-linguistic, purely conceptual structure, along the lines of Fodor (1975) and, more recently, Jackendoff (2002), Culicover & Jackendoff (2005), Uriagereka (2008), and the sense in which D-Structure is understood in Uriagereka's (2012) CLASH model. That structure is syntactic in a wide sense, as concepts are structured (taking "syntactic" not in the narrow sense of "linguistically structured" but in the strict sense of "structured" 6). This conceptual structure, shaped by the speaker's intention to convey a certain propositional meaning through linguistic means, is what, in our proposal, drives Select, the selection of a subset of LEX, in turn a set of linguistic types, to be instantiated as tokens in the "syntax" (actually, not a component but a workspace, in the sense of Baddeley, 2003), driven by the need to minimize entropy as the derivation unfolds. The assumption we make in this respect is the following:
3) Minimal Selection:
Select the minimal amount of types that can instantiate a conceptual structure CS into a linguistic structure LS, losing as little information as possible.

The intuition behind this assumption is clear: we want to linguistically instantiate a CS in the most economical way possible 7, ceteris paribus. Given the fact that the CS includes not only rough propositional content but also added information (what most linguists would put under the "pragmatics" label: inferences and other extra-propositional content which is, nonetheless, built upon the clues syntactic structure provides the semantic component with), the reference set for each potential derivation is unary: there is one and only one candidate which can express CS in an optimal way.

6 Cf. Culicover & Jackendoff (2005: 20 fn. 8): "Algebraic combinatorial systems are commonly said to ''have a syntax''. In this sense, music has a syntax, computer languages have a syntax, phonology has a syntax, and so does Conceptual Structure. However, within linguistics, ''syntax'' is also used to denote the organization of sentences in terms of categories such as NP, VP, and the like. These categories are not present in any of the above combinatorial systems, so they are not ''syntax'' in this narrower sense." In this paper, and in general within our theory, "syntax" is used in the wider sense, for two main reasons: to begin with, there is no compelling evidence that the "syntactic mechanisms" vary from one system to another (except insofar as the units affect the algorithm, in case that actually happens); and also, an adequately wide formalization of syntactic mechanisms could reveal deep facts about the structure of more than a single system. Admittedly, this requires interdisciplinary co-working and terminology unification, which are unfortunately not the norm now.
Assuming the existence of (some form of) a Lexicon for human language, Select, then, builds an array of lexical types from that Lexicon. Then, units are blindly manipulated in the workspace via concatenation:

4) Concatenation defines a chain of coordinates in n-dimensional generative workspaces W of the form {(x, y, z, …, n) ⊂ WX, …, (x, y, z, …, n) ⊂ WY, …, (x, y, z, …, n) ⊂ Wn}.
Simplifying the matter almost excessively for the sake of clarity, take "dimensions" to mean the number of coordinates necessary to define the position of a point. Thus, each set of coordinates depends on the number of dimensions in the relevant generative workspace, such that an element is to be defined by all of its coordinates in W (that is to say, there are no "superfluous" coordinates in a dimensional specification). We assume only one condition for any X and any Y to enter the concatenation relation: they must share what we have called "ontological format"; ontological format refers to the nature of the entities involved. For example, Merge can apply ("ergatively", as nobody / nothing "applies Merge" agentively) to an n number of roots because they are all linguistic instantiations of generic concepts (Krivochen, 2011a: 10; Boeckx, 2010). With ontological format we want to acknowledge the fact that a root and a generic concept cannot merge, for example. It is particularly useful if we want to explain in simple terms why Merge cannot apply cross-modularly: a root and a phoneme do not share ontological format (they have different natures, one conceptual, the other phonological); therefore, the system blocks such an operation from square one.
Given this scenario, let us see how an XP would be formed, say, a DP (assuming the simplest possible structure: [D, √]):

7 In more technical terms, Selection must reduce entropy. If the theory of Merge we have developed in past works is correct, the generative algorithm, driven by interface requirements, should also be "counter-entropic" (see also Uriagereka, 2011). The possibility is currently under research.
5) [Tree: D and √ in Workspace 1 (W1), prior to Concatenate]
Both D and √ having the same ontological format, Concatenate can (and thus must) apply in the following form:
6) Concatenate (D, √) = {(x, y, z) ⊂ W1, (x′, y′, z′) ⊂ W1}
The coordinates of the result of the operation (a DP, or {D}, construction) are defined as the Cartesian product of the (in this case) two sets of coordinates of the elements involved in the merger. In the more familiar tree form, the result would be represented as (7):
7) [Tree: the merged object {D, √} in W1, with D and √ as sisters]
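To fix ideas, the coordinate bookkeeping in (4) and (6) can be sketched as follows (our illustration; the three-dimensional toy coordinates and the function name are assumptions for expository purposes only):

```python
from itertools import product

# Toy coordinates for D and the root in a 3-dimensional workspace W1.
# The specific values are arbitrary; only the mechanism matters.
D_coords = {(1, 0, 2)}        # position of D in W1
root_coords = {(0, 3, 1)}     # position of the root in W1

def concatenate(a_coords, b_coords):
    """Return the coordinates of the merged object as the
    Cartesian product of the two coordinate sets (cf. (6))."""
    return set(product(a_coords, b_coords))

dp = concatenate(D_coords, root_coords)
print(dp)  # {((1, 0, 2), (0, 3, 1))} -> the {D, root} object in W1
```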
A note is in order here, particularly taking into account the discussion in the sections below: the newly formed syntactic object, even if irrelevant for the generative algorithm as such, must be identified as a unit for the purposes of further computations, what is customarily referred to as a "label". In past works (mainly, Krivochen, 2011a) we have argued against the existence of labeling in the syntactic workspace, primarily given its null pertinence to the derivation, the algorithm being both free and blind. This means that, if existent at all, labels are only relevant at the LF interface (since it is very difficult to argue how labels could be of any interest or relevance for PF purposes). Instead of providing a stipulative labeling algorithm, based on alleged UG principles (Chomsky, 2005a; Gallego, 2007), we claim that the label of an object is nothing more than a "summary" of its semantic properties, which, just as categories or Case, is recognized at the interface as the result of a configuration. Gallego (2007: 75) claims that [in a Merge (α, β) situation] "we cannot know whether β is a LI or an XP […] without labels". Our objections to this position are simple: a) at the syntactic workspace, it is not necessary to know it, because the algorithm is blind; and b) the label is the reading of a configuration, not the other way around. Having D and √, should C-I "label" the construction √, there would be a crash, since the root is too semantically underspecified to be used to refer (either to a sortal or an eventive entity). The only way out is to recognize the whole construction as a D, a sortal entity. In this sense, we dispense with labeling algorithms like those summarized in Gallego (2007), including Chomsky's, Boeckx's, and Hornstein's, and propose a theory that is even simpler than the "label-free" alternative of Collins (2002), as we do not need the notion of locus (which ultimately amounts to selection). In any case, the labeling discussion is well outside the study of dependencies in the generative workspace.
We would like, at this point, to make our architecture crystal-clear. We base our theory, like Culicover & Jackendoff (2005) and Uriagereka (2012), among others, on a pre-linguistic, syntactically built conceptual structure, which has to be instantiated via language, considering requirements and limitations from both phonology and semantics. However, complementarily to Uriagereka (2012), we focus on the semantic side of the story, and explicitly state the preeminence of semantics over phonology for conservation (i.e., anti-entropic) purposes. As we will see, most of the problems we find hard to solve from a Turing-computer perspective arise when one goes beyond inferring syntax from phonology (as Kayne, 1994; Moro, 2000, and much subsequent work do). We adhere to Uriagereka's (1999) Multiple Spell-Out model, which implies that access to the phonological interface (or, in our terms, access from the phonological interface to the syntactic workspace) is performed multiple times within a derivation, thus basing the computation on the notion of local cycle, and we extend it also to the semantic interface. The difference with Chomsky's (1998, 2005a) phase-system is that Uriagereka's proposal, and our own, are based on interface requirements (in Uriagereka's case, the impossibility of linearizing determined phrase markers), which, if the interfaces are independent, means that PF phases and LF phases need not coincide (contra Chomsky, 2005a, even though references to the matter in Chomsky's work are too vague to constitute a stance). The derivational dynamics we will assume henceforth (summarizing points and discussion made in previous works, see Krivochen, 2011a, b, 2012a, b, 2013) give rise to a frustration, on which the whole system is built. On a similar line, we will assume a strong optimization thesis, to be (informally) formulated as follows:
9) Every externalized linguistic object E is the optimal resolution of the geometrical frustration involving the global infinitude of syntax and the local (un)availability of phonological exponents in L.
Our goal in this paper will be to give evidence in favor of the thesis that (at least) some processes cannot be Turing-computable, or even modeled by a simple, linear L-grammar. We will focus on two such cases (while mentioning others in the conclusion, for reasons of space): categorization and Case.
Remarks on Categorization
Chomsky's (1970) "Remarks on Nominalization" has the strange merit of being considered the foundational stone for two opposite conceptions of syntactic categories: lexicalism and Distributed Morphology. On the one hand, we have a theory that assigns the Lexicon generative power to different extents, from the GB-influenced L-Syntax of Hale & Keyser (1993) to the highly developed non-transformational model put forth by Ackerman et al. (2011), the so-called "implicative morphology". In any case, the basic thesis of lexicalism is that syntactic mechanisms do not make reference to word-internal processes, nor can they manipulate smaller-than-word constituents, be they morphemes or roots. In one form or another, lexicalism assumes the Y-model, depicted in (10):

10) [Y-model diagram: Lexicon feeds Narrow Syntax, which branches to Phonology and Semantics]
The "syntax" lexicalism often refers to is the so-called "narrow syntax" (Hauser, Chomsky & Fitch, 2002), which builds symbolic representations from lexical items, at that point opaque to external influence. Elements enter a derivation as sets of features (an assumption shared by Minimalism and non-transformational models, like HPSG or LFG), including semantic and phonological features, as well as, in some cases (e.g., Green, 2011) syntactic specifications regarding subcategorization frames (quite like GB lexical entries, but considerably richer). Two tendencies can be distinguished, broadly speaking: for some (see Williams & Di Sciullo, 1987;Lasnik, 1999;Solá, 1996;Green, 2011) (Chomsky, 1999), unvalued features are assigned a value during the course of the derivation and then, according to some proposals (e.g., Kitahara, 1997), erased (but see Epstein & Seely, 2002 for powerful arguments against the notion of erasure). Needless to say, Chomsky's system requires categories to be fixed in the Lexicon, a stipulation that comes concomitant to that determining which features are valued in which category. However, this is, to the best of our knowledge, not a way to solve a problem, but merely to wipe it under the rug. Problematization came from lexical decomposition perspectives, Distributed Morphology (Halle & Marantz, 1993), and Exo-Skeletal Models (Borer, 2005(Borer, , 2009
11) √water
We have used an English word to stand for the root content, but it is worth noting that roots are language-neutral, that is, the set of roots is most likely universal. Now consider the two following contexts:
12) a. John watered the plants
b. John drank a glass of water
We have two options: either we posit that the Lexicon has two fully-fledged (i.e., already categorized and with some fixed features) entries, water_V and water_N, or we assume that there is a root √water that somehow acquires category in a specific context 8. Lexicalism assumes the first option; we assume the second, on empirical and theoretical grounds. One of the strongest arguments in favor of "post-syntactic categorization" is the existence of not only categorial but also argumental alternances. For example:

8 Examples analogous to (12) are easily found in Hale & Keyser (1993), Mateu Fontanals (2002), and related work on lexical decomposition and argumental alternances.
13) a. John broke the glass
b. The glass broke

And so on. In a strong lexicalist model, we would have not only N and V diacritics within the lexicon, but also some notation to differentiate [break_ERG] from [break_CAUS]. That notation would go directly against any Occam-related desideratum, since entities (in this case, lexical entries) would be multiplied beyond necessity (if we can come up with a more economical theory). Before getting fully into the topic, let us make explicit some assumptions we will draw upon during our inquiry:
1) Categories, phases and other units are not primitives of the syntactic theory, but arise as a result of the interaction of a free Merge system with interface conditions: the dynamics of the derivation and the legibility conditions of certain interpretative mental faculties or any other computational module (see Krivochen, 2012; De Belder, 2011; Boeckx, 2010; also work in Distributed Morphology like Marantz, 1997 and Fábregas, 2005, and Exo-Skeletal Models, see Borer, 2005, 2009, among others).
2) There is no distinction between "lexical derivations" and "syntactic derivations", and this goes beyond positing a single generative mechanism: there are just derivations, regardless of the nature of the elements that are manipulated, since the generative operation is blind. This means that there is no pre-syntactic generative lexicon (Cf. Pustejovsky, 1995; Hale & Keyser, 1993) and no constraints on Merge (Cf. Chomsky, 2005a and his "Edge Feature" as a sine qua non condition for Merge to apply; also Pesetsky & Torrego's 2007 vehicle requirement on Merge; Wurmbrand's 2013 Merge Condition, among many others). For the historical basis of this claim, see Halle & Marantz, 1993, and subsequent work in Distributed Morphology.

Our reasoning goes as follows: if a root √ can be externalized as X, Y, …, n, then it must bear the potentiality to have those functions. In other words, if a root can surface as either an N, an A or a V, then it must have the potential to be an N, an A and a V. What is more, prior to a specific derivation, in isolation, the root's status can be described, following a very well-known convention in physics first formalized by Erwin Schrödinger, as the addition of the possible outcomes, configuring a "wave function", instead of locating the root within the cognitive workspace in terms of classical coordinates (see, e.g., Langacker, 2007; Talmy, 2000, 2007). The structure of the lexicon, thus, is to be deeply revisited, insofar as so-called "lexical categories" (or "conceptual categories", in a more Relevance-oriented framework, see Escandell Vidal & Leonetti, 2000 for discussion) can be seen as roots in their "ψ-state" (i.e., comprising all possible outcomes, following Schrödinger, 1935, Section 5). This simplifies the lexicon enormously, as, for instance, [shelf_N] and [shelve_V] are grouped under a single entry, [√shelf]. But how do roots get "categorized", then? We find two possibilities:

a) Via Merge with specific category-defining functional heads, like v, n, a, etc. (Marantz, 1997; Fábregas, 2005; Panagiotidis, 2010).
b) Via interface reading of a local dependency between a root and a functional head not specifically devised for categorization purposes.
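The ψ-state talk can be given a toy rendering. The following sketch (ours, not a formalism endorsed by the works cited; the weights and the head-category mapping are illustrative assumptions) represents a root as a superposition of categorial outcomes which collapses only upon interface reading of a local configuration (cf. the correlations in (17) below):

```python
# A root as a superposition of categorial outcomes (toy model);
# the weights are illustrative and sum to 1.
psi_shelf = {"N": 0.5, "V": 0.5}   # the root may surface as 'shelf' or 'shelve'

# Distributionally specified heads and the category the interface
# reads off from a local relation with them.
COLLAPSE = {"D": "N", "T": "V", "P": "A/Adv"}

def interface_read(root_state, head):
    """Transfer: the psi-state collapses to the single outcome
    licensed by the local, distributionally specified head."""
    category = COLLAPSE[head]
    # Outcomes not licensed by the configuration are discarded.
    return {cat: 1.0 for cat in root_state if cat == category}

print(interface_read(psi_shelf, "D"))  # {'N': 1.0} -> 'shelf'
print(interface_read(psi_shelf, "T"))  # {'V': 1.0} -> 'shelve'
```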
The difference is great in both theoretical and empirical domains: the first approach needs "categorizers", functional heads whose only contribution to LF is to provide category to the roots they have scope over. However, this does not solve the problem; it is simply a stipulation, as sometimes those alleged "categorizers" have no impact on PF (that is, they are not realized as morphemes) and sometimes they do, depending not only on the language (e.g., English is much more inclined to conversion than Spanish) but also on the relevant root, a difference that is left unexplained in the literature about categorization we know of. It is also quite an anti-minimalist answer, since it assumes a functional head per "part of speech" (see Fábregas, 2005: 32 ff.). In the second proposal, we have a very narrow set of semantically relevant functional elements, which in other works we have made explicit as v (comprising causativity), T (comprising time), P (comprising location), D (comprising sortal referentiality) and C (comprising illocutionary force). What is more, if the syntactic component is as underspecified and blind as we have characterized it, then there is no place for "categories" there: they must arise at the LF interface, after transfer. We claim that a category is the result of a local relation between a root and a distributionally specified functional head. But which are the correct correlations? Let us take a quote from Aristotle's Poetics:

"A Noun is a composite significant sound, not marking time […] A Verb is a composite significant sound, marking time, in which, as in the noun, no part is in itself significant. For 'man', or 'white' does not express the idea of 'when'; but 'he walks', or 'he has walked' does connote time, present or past." (Aristotle, Poetics XX, 8-9)

Needless to say, there are more recent references to the matter, but no doubt less clear, and stained with some theoretical framework or other. This fragment presents a fact, which in more contemporary terms could be rephrased as "there is no T node within DPs". This is already something, since if T is absent from DPs, it cannot be T that categorizes a root as N. On the other hand, and in parallel, there is no D within an eventive structure. Summarizing the discussion made in Krivochen (2012: 90, ff.), T is distributionally specified enough to generate an eventive reading, and D is distributionally specified enough to generate a sortal reading. So far, we have derived two types of entities, sortal (N) and eventive (V), but what about properties of those entities (Adj. and Adv.)? In this respect, we follow the localist theory of Talmy (2000) (also adopted in Jackendoff, 1987) and the lexical decomposition perspective explained at length in Mateu Fontanals (2002) and Hale & Keyser (2002), among others. From the combination of these perspectives there follows the conclusion that both Adverbs and Adjectives are abstract locations in unaccusative conceptual structures, therefore prepositional in nature. Let us give an example:

14) Mary is beautiful
[V BE [P Mary [[WITH] √beauty]]]

15) Berlin is far away
[V BE [P Berlin [[AT] [P far away]]]]

The prepositional node, which can adopt two values (central / terminal coincidence), relates two entities in a figure-ground manner (Hale & Keyser, 2002: 218). Properties of entities (be they sortal or eventive) are grounds, syntactically located as complement to the P head (Hale & Keyser, 2002: 47, ff.). P being phonologically defective, it triggers conflation of its sister, which is sometimes spelled out as an affix (e.g., beautiful = with+beauty).
Let us now express what we have discussed above in a more schematic form:
16) A lexical item LI is a structure {X…α…√} ∈ WX, where X is a distributionally specified functional category 9 (Determiner, Tense, Preposition), α is an n number of non-intervenient nodes for category recognition purposes at the semantic interface, and √ is a root.
And the correlations result in the following distributional patterns:
17) a. N = [D…α…√]
b. V = [T…α…√]
c. A / Adv = [P…α…√]
where α is an n number of non-intervenient nodes for Minimality purposes, because they are not distributionally specified enough. Let us see some cases: v is, in our opinion, not specified enough to generate a categorial interpretation at the semantic interface (thus collapsing the root's ψ-state), because it can appear in both sortal and eventive contexts, if the sortal entity is a derived nominal. For example:

18) a. The enemies destroyed the city
b. The enemies' destruction of the city

Let us analyze the derivation step by step.

19) a. We start with a DP [the city], which is merged with a node [√destroy], underspecified as regards category. Since our generator function is blind and free, there is no featural requirement whatsoever to trigger Merge (Cf. Wurmbrand, 2013; Pesetsky & Torrego, 2007, among others); therefore the merger of a root and a DP is not banned in principle.

9 Following Escandell & Leonetti (2000), we assume that functional categories are procedural insofar as they provide the semantic interface with instructions as to how to interpret the relation between entities over which they have scope.
b. So far, we have a sortal entity [the city] and a root generically denoting an event. The label, for C-I purposes, is then VP, as the "projection" has been closed since the next derivational step will introduce a different kind of information 10 (but see Krivochen, 2011a, 2012 for discussion about the possibility of having a different labeling system, dispensing with bar-notation).
c. Next, we introduce another semantically interpretable element, the primitive cause (see Mateu Fontanals, 2005 for discussion). The construction is thus read by C-I as a caused transitive event.
d. The primitive cause requires the introduction of an actant in the construal: an initiator (independently of the presence of an object; consider, for example, unergative verbs). A further structural position is licensed, where a DP is merged and interpreted thematically as the agent/initiator of the event over which the primitive cause has scope. The causative projection 10 is then closed, since there is no more information of the same nature (i.e., eventive / causative) to add to the construal.

e. So far, nothing has been said about category recognition, and this is because, up to this point, there is no certainty about the distribution of the construction. For all we know, it could be either "the enemies destroyed the city" or "the enemies' destruction of the city", since those constructions have both (semantically speaking, and for all that matters) the same underlying construal: a caused transitive event. Neither V nor v is distributionally specified enough to generate a categorial interpretation at the semantic interface, which means that, up to this point, the whole vP is in a ψ-state as far as category is concerned. This is important because it means, should it be true, that the syntactic workspace can host a structure of arbitrary complexity in its ψ-state, comprising all possible outcomes, and for as long as necessary. If transfer is nothing more than the interfaces taking from the workspace the minimal units they can read (and not the syntax sending information to the interfaces, as in Chomsky's 1998, et seq. proposals), then, in principle, there is no limit to the amount of non-Markovian / non-Turing-computable structure that can be kept active. Of course, there are issues of memory, but that is quite another problem, having little to do with computational capacity (consider, for example, that Turing machines are claimed to have unlimited memory, see Uriagereka, 2012: 230-231; yet they are clearly unable to process non-linear dependencies, as we would find in a Lorenz attractor and, perhaps, even in human language, see Krivochen, 2013 for discussion). If there is a geometrical frustration deep inside language design, then we have to add a level to the Chomsky hierarchy, to include non-classical computation, among which we count quantum computation.

To summarize, until a distributionally specified node is inserted in the structure, be it D, T or P, the state of the symbolic object in hand is to be described as the "sum" of all possible outcomes, comprising many possible states at once as potentialities. This, we argue, is only modelable by means of quantum computations.

10 Admittedly, this step requires some look-ahead, which is a problem for real-time labeling under traditional assumptions. For reasons of space, we have not discussed labeling in a system of invasive interfaces, as we do in Krivochen (2011a, 2012), but we refer the reader to those works for details.
[Trees for the steps in (19): √destroy merged with DP "the city"; the VP; the merger of cause/v; the vP with DP "the enemies" as initiator]

Going beyond the word-level, the Case-Theta system also offers a good example of a "many possible outcomes" situation. The case for Case we have made in previous works applies here as well, so we will summarize our arguments and refer the reader to those works for more discussion and examples.
To begin with: what is Case? Does it have any syntactic relevance? Our answers to these questions are somehow one and the same: Case is, just as category, an interface reading of a syntactic configuration.
Just like category, we also need particular procedural nodes that convey the relevant instructions for C-I to read and interpret. That, as we have said, is one cycle. The other, morpho-phonological cycle is where, as many have claimed (within and outside Chomskyan orthodoxy), inter-linguistic variation lies 11. The morphological realization of Case as a morpheme, despite some inter-linguistic regularities (e.g., the <m> is associated to Accusative in Latin, English, and German plural), is an epiphenomenon as far as syntactic-semantic processes are concerned. Which are the relevant processes, then? At this point, we would like to introduce an interesting parallel between the Case/Thematic and categorial systems, which we have explored in past works (mainly, Krivochen, 2011a, 2012a; Krivochen & Luder, 2012): they are both interface readings of configurations of the kind [X…α…Y], where X is a procedural node, α is an n number of non-intervenient nodes, and Y is an object of arbitrary complexity, more specifically an entity, either sortal or eventive. Case, as is obvious, affects only sortal entities, which can, in very broad terms, either affect or be affected. This semantic distinction leads to the binary Case systems, nominative-accusative and ergative-absolutive. Those labels, however, refer to the morpho-phonological cycle, and to notions of markedness (e.g., which is the unmarked Case in L?) which have no place in a semantic approach. Consider now the following scenario, partly depicted above: there are two event-related nodes that take arguments (following Hale & Keyser, 2002; Mateu Fontanals, 2002), namely, v (the causative node requires an initiator, realized categorially by means of a sortal entity) and P (the locative node relates a figure and a ground, both sortal entities). The V node is a transitional node, which conveys Aktionsart-related information (that is, whether the event is dynamic or stative), but takes no arguments. This leaves us with the following structure:

11 Above, we have referred to a global semantic tension and local phonological tensions. Consider, then, semantics as a macro-cycle and phonology as micro-cycles, with opposing tendencies. There, a geometrical frustration arises.
20) [Tree: vP with A1 as initiator and v = [CAUSE]; VP with V = [GO/BE]; PP with A2 as figure, P = [WITH/TO], and A3 as ground]
We have three structural positions available for arguments, all, as we have said, associated with a specific semantic interpretation:

21) Nominative: read off from a {Time, {D}} local relation, and thematically interpreted as Agent / Force
Accusative: read off from a {Cause, {D}} local relation, and interpreted thematically as Theme, the object (Figure) located in / moving towards, etc. a Ground.
Dative: read off from a {P, {D}} local relation, and interpreted thematically as Location, the Ground in Talmy's terms.

In this respect, DeLancey (2001, Lecture 3) says: "(…) suppose we could demonstrate that there are, say, exactly x universal semantic roles which can occur as core arguments in a clause in human language. The most obvious language design would have x case markers, one for each underlying role; every argument would simply be marked for its semantic role, which could then be read directly off the surface morphosyntax (…)" [our highlighting]. While it has already been pointed out that "surface morphosyntax" has little to do with the problem of Case (Spanish, for instance, only marks ACC and DAT Case on pronouns and clitics, but has abstract Case, in the sense of Vergnaud, 1977), the intimate relation between Case- and Theta-positions is a strong point in DeLancey's presentation, and in ours (see also Krivochen & Luder, 2012 for discussion). From this paragraph, we conclude that, should there be at most three argumental positions, there are only three possible Case-Theta positions at most, in case we are dealing with a ditransitive structure. Inter-linguistic variation regarding the availability of Vocabulary Items to be inserted in terminal nodes and materialize Case (in a separationist framework, see Halle & Marantz, 1993 for the first developments of the notion of "late insertion") seems to go against the eliminative proposal of DeLancey, quite minimalist in spirit (way more than, for instance, Pesetsky & Torrego's 2004, where stipulations over feature valuation complicate the scenario beyond both necessity and desirability). Consider the Chomskyan proposal: if Case is an unvalued/uninterpretable feature, and those are valued (and thus made interpretable) via probe-goal relations with functional categories, a system like Sanskrit's would require eight distinct functional categories, one per "surface morphosyntax" expression of Case. The same happens with Latin's "6 Cases", or Ancient Greek's 5. We have argued in past works that there are only three fundamental Cases, structured as spheres, with a prototype-periphery semantic dynamics (Krivochen, 2011a, 2012a; Krivochen & Luder, 2012). In this framework, the three spheres are NOM, ACC and DAT, more accurately dubbed Initiator Case, Theme Case, and Location Case. As the reader may have noticed, we keep the "semantic preeminence" thesis, making reference to the semantic contribution of an element X in a position P to the LF rather than to morpho-phonological characteristics. With respect to the spheres, it is clear that the prototypical NOM occurrence is as an Initiator, structurally, Spec-vP, and there is nothing else you can do with it: NOM is, in all systems, the most distributionally constrained Case.
ACC, on the other hand, may appear as either the object in a transitive structure or the subject in an accusativus cum infinitivo clause, thus overlapping with what we would expect from NOM. The ACC sphere also includes those instantiations of elements that are semantically Themes moving towards a Location but displaying different morphological marks (e.g., Instrumental Case). The DAT sphere includes all locative-like Cases, that is, all Cases in which there is a locative relation established between two entities, be it movement (unde, quo, qua) or possession. Thus, the DAT sphere semantically includes morphological Locative, Genitive, and Ablative (Krivochen, 2012a: 79, ff.). Going back to the diagram in (20), if there is a P involved, then there is locative meaning in the construal, and the complement of that P is the ground in the localist dynamics (Talmy, 2000; Anderson, 1977, among others). That ground corresponds to a Location, either literal (a place) or metaphorical (a property). Therefore, it is quite safe to assume that a local relation with P is the condition for the DAT sphere to be interpreted at the semantic interface in a particular DP. The figure, that is, the Theme that moves towards a Location, varies between the NOM sphere and the ACC sphere depending on whether it is an affected object or not: if we are dealing with a caused construal, then the figure in local relation with v will license ACC; if the construal is uncaused (e.g., unaccusative), the next functional element is T, licensing NOM. The final reflection is quite the same as in the previous section: if a DP can adopt any of the three spheres as a final state, it must bear the potentiality in isolation. Therefore, prior to the merger of v, P, or T, the Case-Theta status of a DP is, in the sense specified above, quantum. Summarizing: the inner complexity of the relevant quantum object (say, a DP) is nothing for the "syntax" to worry about, if by "syntax" we just mean a generative, multipurpose workspace generated ad hoc via (according to D'Espósito, 2007) the activation of the pre-frontal neocortex and other relevant areas of the brain (e.g., temporal and parietal lobes, in the case of localist structures, see Pylyshyn, 2007).
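As a toy rendering of the read-off in (21) (again our own illustration, with simplified structures; it is not claimed that the interface works serially like this), the sphere of a DP can be computed from the first procedural head that takes it in a local relation, the DP remaining in its ψ-state otherwise:

```python
# Case spheres read off from local configurations (cf. (21)).
# The mapping and the toy structures are illustrative assumptions.
SPHERES = {"T": "NOM (Initiator)", "v": "ACC (Theme)", "P": "DAT (Location)"}

def case_of(dp, structure):
    """Return the Case sphere of dp, given (head, argument) pairs:
    the first procedural head locally related to dp decides."""
    for head, arg in structure:
        if arg == dp and head in SPHERES:
            return SPHERES[head]
    return "psi-state (no procedural head merged yet)"

# 'The enemies destroyed the city':
structure = [("T", "the enemies"), ("v", "the city")]
print(case_of("the enemies", structure))  # NOM (Initiator)
print(case_of("the city", structure))     # ACC (Theme)
print(case_of("a book", []))              # psi-state (...)
```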
However, it would be too strong a hypothesis to claim that all mental processes share the quantum nature of language, which is partly due to the fact that there are two kinds of systems involved: generative and interpretative. Generative systems, being free and blind, can maintain and manipulate quantum objects, whereas transfer to interpretative systems collapses those objects to one of the possible outcomes. Not all subsystems in the mind work this way, and not even every linguistic computation is quantum, however. In the next section we will explore this possibility, which will ultimately lead us to a mixed model in which different processes involve different kinds of computations, either Markovian or non-Markovian; linear or quantum.
A Mixed Mind
The preceding discussion touches on an interesting point, namely, that there are "macro" processes for which a quantum approach seems unavoidable. The scale of the modeling is essential for any argumentation regarding quantum computation in the human mind, since otherwise it is exposed to Litt et al.'s (2006: 1-2) criticism regarding the relevance of quantum considerations for mental phenomena:
"We argue, however, that explaining brain function by appeal to quantum mechanics is akin to explaining bird flight by appeal to atomic bonding characteristics. The structures of all bird wings do involve atomic bonding properties that are correlated with the kinds of materials in bird wings: most wing feathers are made of keratin, which has specific bonding properties. Nevertheless, everything we might want to explain about wing function can be stated independently of this atomic structure. Geometry, stiffness, and strength are much more relevant to the explanatory target of flight, even though atomic bonding properties may give rise to specific geometric and tensile properties. Explaining how birds fly simply does not require specifying how atoms bond in feathers."
If any, the contribution we would like to make here and in our past works (Krivochen, 2011a, b, 2012a) is that quantum phenomena can be found beyond the Planck scale, in mental computations 12. With categorization and Case-Theta interpretation we have provided examples that, even though accounted for with current theories (with different degrees of descriptive and explanatory adequacy), serve our purpose insofar as our explanation is, we believe, theoretically simpler and at the same time empirically robust, as it allows for the coinage of neologisms and conversion just as long as the result is C-I interpretable.

12 Quantum effects beyond the Planck scale within physics have been identified, as we have said, since EPR's seminal work.
We have reached a point at which we can say "there are at least some processes whose explanation requires an element to be described as a wave function". However, there is a missing part of the picture: are all processes within the mind quantum? Our provisional answer, pending much research, is no. Beyond Litt et al.'s case against quantum models based on "consciousness" and "mathematical thinking" (which we will not discuss here, at least not directly), we will analyze linguistic dynamics that do not seem to require quantum explanations. This is only natural if we consider a fundamental geometrical frustration at the basis of generation-interpretation dynamics:
global and local tendencies go in opposite directions (Binder, 2008: 322; Uriagereka, 2012). If there are quantum phenomena in language, then there must be Markovian (or other kinds of classically computable) phenomena in the same system, thus configuring the opposing tendency. The claim that "quantum properties are irrelevant to explaining brain functions" (Litt et al., 2006: 2) is, in our opinion, too strong. At this point, it cannot be denied from square one that there might be quantum phenomena in the mind, particularly taking into consideration the evidence proposed by the authors we mentioned in the first part of the present work. What is more, provided the thesis of geometrical frustration is on the right track (a matter still to be solved), there would be a strong architectural argument in favor of both quantum computation and traditional computation in the mind, without the need to dismiss any possibilities of non-linear computation. It is not clear, for instance, how Litt et al. would deal with phenomena like categorization or multiple-candidate filtering in an OT-like architecture without allowing the processor to perform multiple tasks at once, maintaining elements in a ψ-state until transferred.
In this section we will discuss the opposite tendency, exemplified by means of Markovian structures.
Markovian models were claimed to be insufficient to account for all grammatical processes in Chomsky (1957), but this does not mean that parts of the grammar (e.g., specific constructions, if one adopts a Construction Grammar approach) cannot be Markovian. There are apparently two clear cases documented in recent literature (but drawing on old theories, going back to the '40s): iteration and adjunction. The case for iteration is simple: pure repetition (without semantic or syntactic scope involved) is better described by Markovian loops than by phrase structure diagrams. For instance (see Uriagereka, 2008, Chapter 6; Lasnik, 2011: 355, ff.; and Lasnik & Uriagereka, 2012):
22) The old, old, … man/men come/s

However, a Markovian syntax for such instances may not capture the semantic properties of some specific iterative constructions. Take, for example:
23) María es una mujer, mujer (Spanish) 'Mary is a woman, woman'
The meaning of this construction is not merely derived from the iteration: idiomatically, it means something like "Mary is very feminine". The power of Markovian explanations for iteration rests, partly, on whether idiomaticity is to be regarded as a semantic or a syntactic effect. In our opinion, since semantics is syntactically structured, there is no choice but a mixed explanation, which takes into account the syntax-semantics interface (as partially done in Uriagereka, 2008, Chapter 6).
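The Markovian treatment of (22) can be made fully explicit: a finite-state device with a self-loop on "old" generates the unbounded iteration without ever building hierarchy or scope. The sketch below is our own illustration, with an obviously toy vocabulary and transition probabilities:

```python
import random

# A finite-state (Markovian) grammar for 'the old, old, ... man':
# a self-loop on 'old' yields any number of iterations, with no
# hierarchy and no scope relations among the iterated items.
def generate(max_loops=5):
    out = ["the"]
    while random.random() < 0.5 and len(out) < max_loops + 1:
        out.append("old")     # self-loop: iterate freely
    out.append("man")
    return " ".join(out)

for _ in range(3):
    print(generate())  # e.g. 'the old old man', 'the man', ...
```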
Anticipating discussion from Krivochen (in preparation), in turn heavily based on Uriagereka (2005, 2012), Markovian structures also seem to be relevant for Spell-Out purposes. In Uriagereka's (2012: 53) terms, Finite State grammars find their limits in monotonic Merge, which is the application of the generative function in a successive fashion, always involving a terminal node:
24) [Monotonic Merge, step by step: α; {α, β}; {γ, {α, β}}; {φ, {γ, {α, β}}}]
We see that the third step involves the inclusion of a terminal (i.e., non-branching) node γ, which is merged with a non-terminal, {α, β}, and the same happens in the fourth step, where φ is merged with a non-terminal, {γ, {α, β}}. The mechanism represented in (24) exemplifies this kind of application of the generative algorithm, which Uriagereka calls monotonic. Non-monotonic merge involves two non-terminals, as in (25):

25) [Non-monotonic merge: two non-terminals, each assembled by monotonic Merge in its own workspace, merged into a single complex object]

In (25) we see that the second step involves the merger of two non-terminals, giving rise to a complex object. Each non-terminal, in turn, has been assembled by monotonic Merge in a separate workspace, and the unification takes place in a third workspace (in our proposal) or at the interfaces, after Spell-Out (in Uriagereka's). Relevantly, it seems that phonology works with Markovian dependencies (see Idsardi & Raimy, in press), which means that both monotonic and non-monotonic structures (whose mathematical properties will not be discussed here) are to be "Markovized" via Spell-Out to be readable by S-M. This means that Spell-Out is nothing but dynamic Markovization of non-Markovian material (e.g., complex lexical structures like path-of-motion and resultative predicates) or re-Markovization of elements that enter a workspace already in a "finite state grammar" format (e.g., adjuncts, according to Uriagereka, 2005), having been formed via monotonic merge in a separate workspace. This means that, according to the theory so far sketched, there are two kinds of Markovian objects in a linguistic derivation:

a) Those derived by monotonic merge in a single workspace WX
b) Those derived in WY (where X ≠ Y) and non-monotonically merged to Markovian objects derived in WX

Taking into account Idsardi & Raimy (in press), these must undergo a further process of Markovization, Spell-Out. They distinguish three "modules" of linearization, with different characteristics (Idsardi & Raimy, in press: 3):

26) Module | Characteristics
Narrow syntax | hierarchy, no linear order, no phonological content
LINEARIZATION-1 = Immobilization
Morphosyntax | hierarchy, adjacency, no phonological content
LINEARIZATION-2 = Vocabulary Insertion
Morphophonology | no hierarchy, directed graph, phonological content
LINEARIZATION-3 = Serialization
Phonology | no hierarchy, linear order, phonological string

Arguably, the morphophonological module and the phonological module are Markovian in nature, since there is no hierarchy. Between morphosyntax and morphophonology there must exist a dimensional flattening algorithm (in the terms of Krivochen, 2012b), which transforms a hierarchical structure into a flat structure, without imposing extra structure. A phrase structure approach to vocabulary insertion and linearization, even though possible, is undesirable if a simpler solution is available. That is, in the words of Lasnik & Uriagereka (2012), the "inadequacy of powerful solutions to simple structuring". Grammars which are high in the Chomsky Hierarchy are sometimes too complex for simple, Markovian structures; and the theory frequently falls into a mistake diametrically opposite to that pointed out in Chomsky (1957) 13: Σ, F grammars (where Σ is a set of initial strings and F a set of Post-style instruction formulae for rewriting) alone are inadequate for discontinuous dependencies, as in (27) (from Chomsky, 1957: 22):

27) a. If S1, then S2
b. Either S3 or S4
c. The man who said that S5 is arriving today

13 This is an essential point: the Hierarchy should probably be revisited, at least under the interpretation that higher levels presuppose lower ones, since, should that be true, there would be no "additional structure" problem like the one pointed out above. The mere idea of a mixed mind, looking for the simplest formalization for each particular type of case, seems to call for an interrelated study of the different formal languages, but by no means establishes an implicational hierarchy. A valid analogy, to the best of our knowledge, would be that of Euclidean, Hyperbolic, and Elliptical geometries. If we have a triangle whose inner angles sum to 180 degrees, we will probably use Euclidean trigonometry to make calculations, not non-Euclidean trigonometry: not because this makes calculations impossible (we well know it does not) or because there is a hierarchy of geometries, but because it is the simplest option for the problem at hand. Against this point of view, see Gallego (2007), who basically repeats Chomsky's case.
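Returning to the "dimensional flattening" just mentioned: as a toy illustration (ours; the encoding of structures as nested tuples and the left-to-right order are simplifying assumptions, not a claim about the actual linearization algorithm), Spell-Out as dynamic Markovization can be pictured as a function that takes a hierarchical object and returns a flat, adjacency-only string:

```python
# Dimensional flattening (toy Spell-Out): a hierarchical structure,
# encoded as nested tuples, is Markovized into a flat string in which
# only adjacency survives; hierarchy and scope are lost.
def spell_out(node):
    if isinstance(node, tuple):            # non-terminal: recurse
        return [w for child in node for w in spell_out(child)]
    return [node]                          # terminal: a word

# A non-monotonic object: two non-terminals assembled separately
# (cf. (25)) and unified before transfer.
tree = (("the", "enemies"), ("destroyed", ("the", "city")))
print(" ".join(spell_out(tree)))  # 'the enemies destroyed the city'
```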
The problem is described in terms of the "recursion-iteration" opposition in Chomsky's work.
However, since "recursion" is an undefined term even today (see for example the Everett-Pesetsky debate about Piraha, mainly due to the lack of agreement on a criterion to determine the presence of recursion and the use, as synonyms, of 'recursion', 'embedding' and related terms in the critics), let us try to phrase the problem in less problematic terms. We agree with Chomsky in that there are great portions of human languages that cannot be appropriately described by means of finite state grammars, as those exemplified for English in (27) (22), there are several ways in which one could represent the structure involved, we will just compare two: 28) a.
If sisterhood imposes relations of scope (as c-command definitions lead us to assume, either in representational (Reinhart, 1976) or derivational (Epstein et al., 1998) terms), then (28 b) imposes too rich a structure on what is really a flat relation between elements, none of them having scope over the others. A strict phrase structure model (e.g., Chomsky & Miller, 1963) is thus inadequate; we have to go one step down the Chomsky Hierarchy. Notice, incidentally, that (28 a) could be generated with a Σ, F grammar, where Σ = A and F = terminal strings (lexical items), but only by allowing F to be infinite (since there can be infinite instances of "old"), which is a trivial generative procedure, apart from being computationally and biologically implausible. Formally, it would tell us nothing (as a non-trivial procedure must be restrictive enough to determine conditions of well-formedness, in a Standard-Theory-like grammar), and empirically, it would generate too much. A Markovian representation, then, is not only a desirable scenario but, as far as we can see, the only plausible one.
As regards mathematical modeling, it is to be noticed that a step-by-step derivational engine (be it Markovian or not) is modeled using difference equations, which allow us to calculate the state of the system at T_x as a function of the preceding terms T_{x−1}, …, T_{x−n}. The Fibonacci sequence dynamics that Uriagereka (1998, 2012) finds in clause structure, for instance, is an example of this kind of equation. For any term F of the sequence:

29) F_n = F_{n−1} + F_{n−2}
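As a worked instance of (29) (our example), a difference equation determines each state of the system from the preceding terms, which is exactly the kind of information a step-by-step, bottom-up engine manipulates:

```python
# The Fibonacci difference equation F(n) = F(n-1) + F(n-2):
# each state of the system is computed from the preceding terms,
# as in a step-by-step (bottom-up) derivation.
def fib(n, f0=1, f1=1):
    a, b = f0, f1
    for _ in range(n):
        a, b = b, a + b    # advance the recurrence one step
    return a

print([fib(n) for n in range(8)])  # [1, 1, 2, 3, 5, 8, 13, 21]
```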
If Fib is to be generated via an L-grammar of the kind Σ, F, however, it is not clear whether a difference equation could help in giving us the generative procedure used to get to a certain derivational point. This is particularly visible in the development of Phrase Structure Rules of the kind discussed in Chomsky (1957): unless we know that S → NP, VP; given VP it is impossible to know how the system got there. Bottom-up models, on the other hand, could make better use of difference equations in developing generative algorithms which build the tree "from the bottom", independently of how many terms are involved in a concatenation relation.
Provided the notion of frustration we have introduced before actually applies to mental systems, as we believe, there would be an interesting tension arising here: the consideration of step-by-step derivational mechanisms within the mind seems to call for difference equation modeling, but global tendencies, arising in complex systems with continuous time (that is, not chunked as we have done before), seem to call for differential equation modeling. Consider a symbolic object derived via, say, monotonic concatenation. The step-by-step, bottom-up derivation could be modeled using difference equations, but the overall pattern is that of a self-similar fractal: any syntactic object of arbitrary complexity can be subordinated to another, or establish with another a paratactic relation, giving origin to a new object containing two complex units. Thus, if, according to Madrid (2011: 67), "a continuous dynamic system is chaotic if and only if there is a Poincaré section in which a discrete chaotic system can be defined" [our translation], it is highly likely that the global tendency in linguistic computations (narrowing our scope down) is differential, whereas the on-line, local dynamics obey difference equations. The issue is very interesting and potentially revealing, and it is the center of our current investigation.
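To make the contrast concrete, consider the logistic model (a standard example from dynamical systems, chosen by us for illustration and not tied to any specific linguistic claim): the same growth law can be chunked into discrete steps, a difference equation, or treated in continuous time, a differential equation approximated below by small Euler steps:

```python
# Discrete (difference equation): x(t+1) = r * x(t) * (1 - x(t)).
# Its continuous counterpart is the differential equation
# dx/dt = r * x * (1 - x), approximated here by Euler integration.
def logistic_map(x, r=3.9, steps=5):
    for _ in range(steps):
        x = r * x * (1 - x)        # one discrete derivational step
    return x

def logistic_flow(x, r=1.0, t=5.0, dt=0.01):
    for _ in range(int(t / dt)):
        x += dt * r * x * (1 - x)  # continuous-time global tendency
    return x

print(logistic_map(0.2))   # local, step-by-step (chunked) dynamics
print(logistic_flow(0.2))  # global, continuous dynamics
```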
We have briefly reviewed instances of Markovian objects within language, both in phonology and in the so-called "narrow syntax". Their presence was predicted by our model, if the mind actually displays geometrical frustrations in different sub-systems. Just as purely connectionist or purely modularist models do not accurately describe high-level and low-level processing (Carreiras, 1997: Chapter 4), providing arguments for mixed models which include connectionist networks for non-symbolic structures (focused on interactive, multi-layered neural networks) and modular architectures for generative, uni-directional processes (as in Fodor's 1983 model, whose unidirectionality is shared by orthodox Chomskyan syntax by virtue of its syntacticocentrism, as Culicover & Jackendoff point out), the development of a mixed model, including different kinds of structures (Markovian, linear, chaotic, and quantum), seems to be a plausible road to take.
Conclusion
In this paper we have argued in favor of the existence of quantum processes in the mind, exemplifying with (but by no means limiting ourselves to) natural language. In the course of our argumentation it became obvious that trying to subsume all computational processes in the mind under a single model (Markovian, phrase-structural, transformational, quantum) results in failure, and that, just as happens with neural networks, a mixed approach, distributing phenomena between different layers in the Chomsky
Hierarchy, is at the same time more powerful and simpler. What is more, we have seen that the Chomsky Hierarchy (if it is to persist) is to be enriched with non-linear grammars, including chaotic and quantum phenomena. As Stapp (2009) puts it, quantum mechanics allows us to bridge the gap between "mind and matter" without the need to resort to stipulations on either side. We are well aware that there have been recent attempts to unify computational processes and the manipulation of symbolic representations (for example, the "Turing program for linguistic theory" advocated by Watumull, 2012, as well as the 'Flat Structure' proposal by Culicover & Jackendoff, 2005), but we doubt they can accommodate all the phenomena we have briefly presented and discussed here. If anything, the present work is a plea for mixed approaches and multidisciplinary interaction, focusing on language but without forgetting that it is an integral part of the natural world and should not be studied in substantive isolation.
Bibliography
Boeckx, C. (2010) Defeating Lexicocentrism. Ms., ICREA/UAB. lingBuzz/001130.

Borer, H. (2005) In Name Only: Structuring Sense, Vol. I. Oxford: Oxford University Press.

Borer, H. (2009) Roots and categories. Talk presented at the 19th Colloquium on Generative Grammar, University of the Basque Country, April 1-3, 2009.

Carreiras, M. (1997) Descubriendo y procesando el lenguaje. Madrid: Trotta.

Chomsky, N. (1957) Syntactic Structures. The Hague: Mouton.

Chomsky, N. (1970) Remarks on nominalization. In Jacobs, R. & P. Rosenbaum (eds.) Readings in English Transformational Grammar. Waltham, MA: Ginn. 184-221.

Chomsky, N. (1995) The Minimalist Program. Cambridge, MA: MIT Press.

Chomsky, N. (1998) Minimalist Inquiries: The Framework. MIT Occasional Papers in Linguistics 15.

Chomsky, N. (1999) Derivation by Phase. MIT Occasional Papers in Linguistics 18.

Chomsky, N. (2005a) On Phases. Ms., MIT.

Chomsky, N. (2005b) The Biolinguistic Perspective after 50 Years. Quaderni del Dipartimento di Linguistica 14, Firenze; Sito Web dell'Accademia della Crusca, Aprile 2005.

Collins, C. (2002) Eliminating labels. In Epstein, S. D. & T. D. Seely (eds.) Derivation and Explanation in the Minimalist Program. Oxford: Blackwell. 42-64.

Culicover, P. & R. Jackendoff (2005) Simpler Syntax. Oxford: OUP.

De Belder, M. (2011) Roots and Affixes: Eliminating Lexical Categories from Syntax. PhD Dissertation, Utrecht University.

De Belder, M. & J. van Craenenbroeck (2011) How to merge a root. Ms., HUBrussel & Utrecht University.

D'Espósito, M. (2007) From cognitive to neural models of working memory. Phil. Trans. R. Soc. B 362 (1481). 761-772.

DeLancey, S. (2001) Lectures on Functional Syntax. Ms., University of Oregon. Available at: http://www.uoregon.edu/~delancey/sb/functional_syntax.doc

Dennett, D. (1991) Consciousness Explained. Boston: Little, Brown and Company.

Einstein, A., B. Podolsky & N. Rosen (1935) Can quantum-mechanical description of physical reality be considered complete? Physical Review 47 (10). 777-780.

Epstein, S., E. Groat, R. Kawashima & H. Kitahara (1998) A Derivational Approach to Syntactic Relations. Oxford: Oxford University Press.

Epstein, S. & T. D. Seely (eds.) (2002) Derivation and Explanation in the Minimalist Program. Oxford: Blackwell.

Fábregas, A. (2005) La definición de la categoría gramatical en una morfología orientada sintácticamente. PhD Dissertation, UAM.

Freeman, W. J. (2000) Neurodynamics: An Exploration of Mesoscopic Brain Dynamics. London: Springer-Verlag.

Freeman, W. J. & G. Vitiello (2005) Nonlinear brain dynamics as macroscopic manifestation of underlying many-body field dynamics. Ms. arXiv:q-bio/0511037v1.

Fodor, J. (1975) The Language of Thought. Cambridge, MA: Harvard University Press.

Fodor, J. (1983) The Modularity of Mind: An Essay on Faculty Psychology. Cambridge, MA: MIT Press.

Gallego, A. (2007) Connectivity in Markovian dependencies. Proceedings of ConSOLE XIV. 73-98.

Green, G. (2011) Elementary principles of HPSG. In Borsley, R. & K. Börjars (eds.) Non-Transformational Syntax: Formal and Explicit Models of Grammar. London: Blackwell. 9-53.

Greene, B. (1999) The Elegant Universe. New York: W. W. Norton.

Hale, K. & S. J. Keyser (1993) On argument structure and the lexical expression of syntactic relations. In Hale, K. & S. J. Keyser (eds.) The View from Building 20: Essays in Honor of Sylvain Bromberger. Cambridge, MA: MIT Press.

Hale, K. & S. J. Keyser (1997) The Basic Elements of Argument Structure. Ms., MIT. (Adapted as Chapter 1 of Hale & Keyser, 2002.)

Hale, K. & S. J. Keyser (2002) Prolegomenon to a Theory of Argument Structure. Cambridge, MA: MIT Press.

Halle, M. & A. Marantz (1993) Distributed Morphology and the pieces of inflection. In Hale, K. & S. J. Keyser (eds.) The View from Building 20. Cambridge, MA: MIT Press. 111-176.

Hauser, M. D., N. Chomsky & W. T. Fitch (2002) The faculty of language: What is it, who has it, and how did it evolve? Science 298 (5598). 1569-1579.

Idsardi, W. & E. Raimy (in press) Three types of linearization and the temporal aspects of speech. In Biberauer, T. & I. Roberts (eds.) Principles of Linearization. Berlin: Mouton de Gruyter.

Jackendoff, R. (2002) Foundations of Language. Oxford: OUP.
The Antisymmetry of Syntax. R Kayne, MITCambridge, MassKayne, R. (1994) The Antisymmetry of Syntax. Cambridge, Mass.: MIT.
What Kind of Computing Device is the Human Language Faculty?. H ; H Kitahara, The Biolinguistic Enterprise. Di Sciullo, A-M. & C. BoeckxOUPOxford: Blackwell.Kitahara, H. (1997) Elementary Operations and Optimal Derivations. Cambridge Mass.: MIT Press Lasnik, H. (1999) Minimalist Analysis. Oxford: Blackwell. (2011) "What Kind of Computing Device is the Human Language Faculty?". In Di Sciullo, A-M. & C. Boeckx (Eds.) The Biolinguistic Enterprise. Oxford: OUP. 354-65.
Structure. H & J Lasnik, Uriagereka, Philosophy of Linguistics. R. Kempson, T. Fernando, and N. AsherElsevier14Lasnik, H. & J. Uriagereka (2012) "Structure". In R. Kempson, T. Fernando, and N. Asher (eds.) Handbook of Philosophy of Science Volume 14: Philosophy of Linguistics. Elsevier. 33-61.
Is the Brain a Quantum Computer?. A Litt, C Eliasmith, F Kroon, S Weinstein, & P Thagard, Cognitive Science XX. Litt, A., C. Eliasmith, F. Kroon, S. Weinstein & P. Thagard (2006) "Is the Brain a Quantum Computer?" Cognitive Science XX (2006) 1-11.
La mariposa y el tornado: teoría del Caos y cambio climático. C Madrid, Madrid, RBAMadrid, C. (2011) La mariposa y el tornado: teoría del Caos y cambio climático. Madrid, RBA.
No escape from syntax: Don't try morphological analysis in the privacy of your own lexicon. A Marantz, Proceedings of the 21st Annual Penn Linguistics Colloquium: Penn Working Papers. Dimitriadis, Alexis et al.the 21st Annual Penn Linguistics Colloquium: Penn Working Papersin Linguistics 4.2Marantz, A. (1997) "No escape from syntax: Don't try morphological analysis in the privacy of your own lexicon". In: Dimitriadis, Alexis et al. (eds.) Proceedings of the 21st Annual Penn Linguistics Colloquium: Penn Working Papers in Linguistics 4.2, 201-225.
Dynamic Antisymmetry. A Moro, MITCambridge, MassMoro, A. (2000) Dynamic Antisymmetry. Cambridge, Mass.: MIT.
Physics and the mind. R Penrose, The large, the small and the human mind. M. LongairCambridgeCambridge University PressPenrose, R. (1997) "Physics and the mind". In M. Longair (Ed.), The large, the small and the human mind. Cambridge: Cambridge University Press. 93-143.
The syntax of valuation and the interpretability of features. D Pesetsky, E Torrego, Phrasal and Clausal Architecture. S. Karimi et. al.Amsterdam: John BenjaminsPesetsky, D. & Torrego, E. (2007) "The syntax of valuation and the interpretability of features". In Phrasal and Clausal Architecture. Syntactic Derivation and Interpretation, S. Karimi et. al. (eds.). Amsterdam: John Benjamins. 262-294.
Things and places: How the mind connects with the perceptual world. Z W Pylyshyn, Jean Nicod Lectures). MIT PressPylyshyn, Z. W. (2007) Things and places: How the mind connects with the perceptual world (2004 Jean Nicod Lectures). Cambridge, MA: MIT Press.
The Syntactic Domain of Anaphora. PhD Dissertation, MIT. T Reinhart, Reinhart, T. (1976) The Syntactic Domain of Anaphora. PhD Dissertation, MIT.
What is Life?. E Schrödinger, Cambridge, MassSchrödinger, E. (1944) What is Life? Cambridge, Mass.: CUP.
Morphology and word order in Germanic languages. J Sola, Minimal Ideas: Syntactic Studies in the Minimalist Framework. W. Abraham et al.Amsterdam: John BenjaminsSola, J. (1996) "Morphology and word order in Germanic languages". In Minimal Ideas: Syntactic Studies in the Minimalist Framework, W. Abraham et al.(eds.). Amsterdam: John Benjamins. 217-251.
H Stapp, Matter and Quantum Mechanics. SpringerStapp, H. (2009) Mind, Matter and Quantum Mechanics. Springer.
The relation of Grammar to Cognition. L Talmy, The Cognitive Linguistics Reader. Evans, V., B. Bergen & J. ZinkenCambridge, Mass; LondonToward a cognitive semanticsTalmy, L. (2000) Toward a cognitive semantics. Cambridge, Mass.: MIT. (2007) "The relation of Grammar to Cognition". In Evans, V., B. Bergen & J. Zinken (eds.) The Cognitive Linguistics Reader. London: Equinox. 481-544.
Derivations: Exploring the Dynamics of Syntax. J Uriagereka, Sketch of the Grammar in Non-Classical Conditions. Ms. UMD. (2012) Spell-Out and the Minimalist Program. N. Hornstein & S. EpsteinLondon, Routledge; OxfordOUPSyntactic Anchors: On Semantic RestructuringUriagereka, J. (1998) Rhyme and Reason. MIT Press. (1999) Multiple Spell-Out. In N. Hornstein & S. Epstein (eds.), Working Minimalism, Cambdridge (Mass.), MIT Press, 251-282. (2002) Multiple Spell-Out. In Uriagereka, ed. Derivations: Exploring the Dynamics of Syntax. London, Routledge. (2005) A Markovian Syntax for Adjuncts. Ms. UMD. (2008) Syntactic Anchors: On Semantic Restructuring. Cambridge: CUP. (2011) A Sketch of the Grammar in Non-Classical Conditions. Ms. UMD. (2012) Spell-Out and the Minimalist Program. Oxford: OUP.
Letter to Noam Chomsky & Howard Lasnik re. their manuscript "Filters and Control. J-R Vergnaud, Vergnaud, J-R. (1977) Letter to Noam Chomsky & Howard Lasnik re. their manuscript "Filters and Control". Ms. http://ling.auf.net/lingbuzz/000461
My Double Unveiled. G Vitiello, John BenjaminsAmsterdamVitiello, G. (2001) My Double Unveiled. Amsterdam, John Benjamins.
A Turing Program for Linguistic Theory. J Watumull, In Biolinguistics, 6.2. 222-245Watumull, J. (2012) A Turing Program for Linguistic Theory. In Biolinguistics, 6.2. 222-245.
On the definition of word. E A M Williams, Di Sciullo, The MIT PressCambridge, MAWilliams, E. & A. M. Di Sciullo (1987) On the definition of word. Cambridge, MA: The MIT Press.
The Merge Condition: A syntactic approach to selection. S Wurmbrand, To appear in Minimalism and Beyond: Radicalizing the interfaces. P. Kosta, L. Schürcks, S. Franks, and T. Radeva-BorkAmsterdamJohn BenjaminsWurmbrand, S. (2013) "The Merge Condition: A syntactic approach to selection". To appear in Minimalism and Beyond: Radicalizing the interfaces, ed. by P. Kosta, L. Schürcks, S. Franks, and T. Radeva-Bork. Amsterdam: John Benjamins
| [] |
[
"Identifying Metaphoric Antonyms in a Corpus Analysis of Finance Articles",
"Identifying Metaphoric Antonyms in a Corpus Analysis of Finance Articles"
] | [
"gerowa@tcd.ieAaron Gerow \nSchool of Computer Science & Statistics\nSchool of Computer Science & Informatics\nTrinity College Dublin College Green\nDublin 2Ireland\n",
"Mark T Keane mark.keane@ucd.ie \nUniversity College Dublin Belfield\nDublin 4Ireland\n"
] | [
"School of Computer Science & Statistics\nSchool of Computer Science & Informatics\nTrinity College Dublin College Green\nDublin 2Ireland",
"University College Dublin Belfield\nDublin 4Ireland"
] | [] | Using a corpus of 17,000+ financial news reports (involving over 10M words), we perform an analysis of the argument-distributions of the UP and DOWN verbs used to describe movements of indices, stocks and shares. In Study 1 participants identified antonyms of these verbs in a free-response task and a matching task from which the most commonly identified antonyms were compiled. In Study 2, we determined whether the argument-distributions for the verbs in these antonym-pairs were sufficiently similar to predict the most frequently-identified antonym. Cosine similarity correlates moderately with the proportions of antonym-pairs identified by people (r = 0.31). More impressively, 87% of the time the most frequently-identified antonym is either the first-or second-most similar pair in the set of alternatives. The implications of these results for distributional approaches to determining metaphoric knowledge are discussed. | null | [
"https://arxiv.org/pdf/1212.3139v2.pdf"
] | 259,119 | 1212.3139 | 2339cf5f93dd830f7a3443b8a879989233ec7a33 |
Identifying Metaphoric Antonyms in a Corpus Analysis of Finance Articles
Aaron Gerow gerowa@tcd.ie
School of Computer Science & Statistics
School of Computer Science & Informatics
Trinity College Dublin College Green
Dublin 2Ireland
Mark T Keane mark.keane@ucd.ie
University College Dublin Belfield
Dublin 4Ireland
Identifying Metaphoric Antonyms in a Corpus Analysis of Finance Articles
Metaphor; corpus analysis; word meaning; semantics; experimental linguistics; grounding
Using a corpus of 17,000+ financial news reports (involving over 10M words), we perform an analysis of the argument-distributions of the UP and DOWN verbs used to describe movements of indices, stocks and shares. In Study 1 participants identified antonyms of these verbs in a free-response task and a matching task from which the most commonly identified antonyms were compiled. In Study 2, we determined whether the argument-distributions for the verbs in these antonym-pairs were sufficiently similar to predict the most frequently-identified antonym. Cosine similarity correlates moderately with the proportions of antonym-pairs identified by people (r = 0.31). More impressively, 87% of the time the most frequently-identified antonym is either the first-or second-most similar pair in the set of alternatives. The implications of these results for distributional approaches to determining metaphoric knowledge are discussed.
Introduction
In recent years, significant progress has been made in deriving meaning from statistical analyses of distributions of words (Gerow & Keane, 2011a; Landauer & Dumais, 1997; Michel et al., 2010; Turney & Pantel, 2010). This distributional approach to meaning takes the view that words that occur in similar contexts tend to have similar meanings (cf. Wittgenstein, 1953) and that by analysing word usage we get at their meaning. For example, the word co-occurrence statistics derived in Latent Semantic Analysis (LSA) seem to tell us about the structure of the lexicon, as they are good predictors of reaction times in lexical decision tasks (Landauer & Dumais, 1997). More generally, it has been suggested that significant insights into human culture and behaviour can be derived from analysing very large corpora, like the Google Books repository (Michel et al., 2010). In this paper, we apply similar distributional analyses to understand metaphorically-structured knowledge underlying the antonyms between "UP" and "DOWN" verbs from a corpus of financial news reports (see Gerow & Keane, 2011b, for an analysis of metaphor hierarchies in the same data). Lakoff (1992) and Lakoff and Johnson (1980) have argued that our understanding of many concepts, such as emotions and mental states, is grounded in a few ubiquitous metaphors. The spatial metaphors that structure emotional states, HAPPINESS IS UP and SADNESS IS DOWN, are found in almost all languages. Similar spatial metaphors, of the kind we examine here, seem to ground many stock-market reports. Accounts of index, stock-market, and share movements tend to converge around metaphors of rising and falling, attack and retreat, gain and loss. These concepts appear to be grounded by core metaphors, with an antonymic relationship to one another, that could be glossed as GOOD IS UP and BAD IS DOWN. Lakoff and Johnson (1980) have pointed to this UP-DOWN metaphor opposition as underlying accounts of wealth (WEALTH IS UP, as in high class), the rise and fall of numbers (MORE IS UP; LESS IS DOWN) and changes in quantity (CHANGE IN QUANTITY IS WAR, as in retreating profits and defensive trades).
In the present paper, we look at the distributive structure of these verbs' arguments to determine whether there is empirical support for metaphoric opposites. Specifically, we try to determine whether the antonyms identified by participants in a psychological study can be shown to meaningfully correspond to a computational analysis of the argument-distributions in our corpus.
The Corpus
In January, 2010, we carried out automated web searches that selected all articles referring to the three major world stock indices (Dow Jones, FTSE 100, and NIKKEI 225) from three websites: the New York Times (NYT, www.nyt.com), the Financial Times (FT, www.ft.com) and the British Broadcasting Corporation (BBC, www.bbc.co.uk). These searches harvested 17,713 articles containing 10,418,266 words covering a 4-year period: January 1st, 2006 to January 1st, 2010. The by-source breakdown was FT (13,286), NYT (2,425), and BBC (2,002). The by-year breakdown was 2006 (3,869), 2007 (4,704), 2008 (5,044), 2009 (3,960), and 2010 (136). The corpus included editorials, market reports, popular pieces, and technical exposés. These three resources were chosen because they are in English and have a wide-circulation and online availability. The Financial Times made up the majority of the articles; however, the spread was actually much wider as many articles were syndicated from the Associated Press, Reuters, Bloomberg News, and Agence France-Presse. The uniqueness of the articles in the database was ensured by keying them on their first 50 characters.
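The deduplication step can be sketched as follows. This is our own illustration of the keying scheme described above, not the authors' code; the function name and the assumption that articles arrive as plain-text strings are ours.

```python
def dedup_articles(articles):
    """Keep one copy of each article, keyed on its first 50 characters.

    `articles` is an iterable of article strings; this mirrors the
    uniqueness check described in the text (keying on the first 50
    characters) and is an illustrative sketch, not the authors'
    actual pipeline.
    """
    seen = {}
    for text in articles:
        key = text[:50]  # articles sharing this prefix count as duplicates
        if key not in seen:
            seen[key] = text
    return list(seen.values())
```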
Once retrieved, the articles were stripped of HTML, converted to UTF-8, and shallow-parsed to extract phrasal structure using a modified version of the Apple Pie Parser (Sekine, 1997). Each article was stored in a relational database with sentential parses of embedded noun- and verb-phrases. Sketch Engine was used to lemmatise and tag the corpus (Kilgarriff et al., 2004). Sketch Engine is a web-based, corpus-analysis tool that lemmatises and tags customised corpora with part-of-speech tags using the TreeTagger schema (Schmid, 1994). A lemma is a singular, part-of-speech token (e.g., verb or noun) that includes all tenses, declensions, and pluralizations of a given word. For example, the one verb lemma "fall" includes instances such as "fall", "fell" and "falls", whereas the noun lemma "fall" includes "a fall" and "three falls". Sketch Engine provides so-called "sketches" of individual lemmas. For example, the sketch for fall-n (the word "fall" as a noun) is different from the sketch for fall-v ("fall" as a verb). With some lemmas, the differences marked by part-of-speech are large, such as with store-n compared to store-v. These sketches facilitated the statistical analysis of the most common arguments of verbs. For example, one of the most common verbs in the corpus was "fall," which took a range of arguments with different frequencies (e.g., "DJI", "stocks", "unemployment"). Throughout this paper, when we refer to verbs we take this to mean verb lemmas.
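For concreteness, per-verb argument distributions of the kind summarised by these sketches can be built as below; the tuple-based input format is our assumption, standing in for the lemmatised, parsed corpus rather than for Sketch Engine's actual output format.

```python
from collections import Counter, defaultdict

def argument_distributions(verb_arg_pairs):
    """Build per-verb-lemma argument-frequency distributions.

    `verb_arg_pairs` is an iterable of (verb_lemma, argument_lemma)
    tuples, a stand-in for the parsed corpus.  Returns a mapping
    {verb: Counter({argument: count})}, e.g. the distribution of the
    arguments of "fall" shown in Table 1.
    """
    dists = defaultdict(Counter)
    for verb, arg in verb_arg_pairs:
        dists[verb][arg] += 1
    return dists
```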
Metaphoric Antonyms
From a distributional perspective, the arguments of a verb and its antonym (like rise and fall) should have a definite structure that identifies their relationship to one another. That is, the frequency distribution of the arguments taken by rise should have a lot in common with the argument-distribution of its antonym fall (see Table 1). Furthermore, if we look at other less-strongly-paired antonyms, like rise-lower or rise-decrease, then the similarity in their argument distributions should be less. Specifically, we should find that a computational measure of similarity, such as cosine similarity, between the words' argument-distributions should be predictive of people's choice of antonyms. Within a larger body of work on automated semantic tagging and semantic parsing, some work has focused on automating the generation of semantically resolute phrases (Brown et al., 2005). Online lexicons, such as WordNet and LSA, have been used to generate and resolve analogies by modelling synonymy (Turney, 2006; Veale, 2004). Such work approaches semantics, and specifically antonymy, between words and phrases, but avoids conceptual metaphors. Lakoff (1992) offers a cognitive theory of metaphor, one in which linguistic metaphors are related to, but distinct from, the metaphoric concepts they structure. Deignan (2005) offers a bridge between concept and language, by proposing a cline between metonymy (part-whole relationship) and metaphor. Deignan's link from metonymy to metaphor is a good example of a corpus-based approach to metaphor because it preserves the cognitive structures proposed by Lakoff, while making the link between semantics (words) and metaphor (thought) explicit. Here, we explore this link with regard to antonyms.
In this article, we report two studies examining these issues. Study 1 was a study of participants' identification of antonyms in two distinct tasks: a free-generation task (where one is given rise and asked for its opposite) and a match-the-opposite task (where one is asked to match rise to its opposite in a set of words). The word-sets were drawn from the above corpus and consisted of a set of positive, UP verbs (e.g., rise, soar, rally) and more negative, DOWN verbs (e.g., fall, lose, dip; see Table 2). Study 2 examined the argument-distributions of the antonym-pairs chosen by participants in Study 1 to see if they were, in any way, predictive of the choices made. To anticipate our findings, we find that argument distributions correlate moderately with the frequencies of antonym choices made by people. Furthermore, in the majority of cases, the most similar distribution for an antonym pair corresponds to the pair most-frequently chosen by people.

Table 3: The percentages of antonym-pairs identified in the two tasks (T1 and T2) of Study 1 and their cosine similarity scores (Sim). Total % is the mean percent occurrence across both tasks; bold words were only generated in the free-response task (T1).

Antonym pair (prompt-response) | Task 1 % | Task 2 % | Total % | Sim
(The body of Table 3 was not preserved in this extraction.)
Study 1: People's Antonym Choices
In this study, participants were either given the positive, UP verbs or the negative, DOWN verbs and asked to perform two tasks on the set (a free-generation task, always followed by a match-the-opposite task). The measure was the frequency with which a particular pair was identified in either task.
Method
Participants Twelve students at University College Dublin voluntarily took part in the study; five male and seven female. All were native English speakers.
Participants were assigned to one of the two conditions, receiving either all UP verbs or all DOWN verbs as prompts in both tasks of the study.

Materials The set of UP verbs and DOWN verbs shown in Table 2 was used as the materials.

Procedure Participants were given written and verbal instructions indicating that they would be asked to carry out two tasks that involved identifying "the opposites of the presented words". For the free-generation task (Task 1) they were read the list of words, one-by-one, and asked to respond verbally to these prompts. Responses were timed and recorded during the study and later transcribed by the experimenter. After Task 1 the experimenter presented the second task. Note that there were no constraints on the responses for the first part of the study. For the match-the-opposite task (Task 2), participants were given a sheet of paper with two columns of words. The left column was the list of prompts from Task 1, and the right column was a list of potential opposites. Their job was to draw lines from the column of prompt-words on the left-hand side to their "best opposite" on the right-hand side. Note that they were instructed that they could indicate more than one word if they were considered tied for "best opposite". When this task was completed, the sheet was collected and participants were debriefed on the rationale for the study.
Scoring Note that whether participants were given the UP or DOWN verb-sets, they tended to produce the same pairs; that is, one could be given rise and produce fall, yielding the rise-fall antonym-pair, or be given fall and produce rise, generating the same rise-fall antonym-pair. As there were no clear differences in the pairs identified by participants who were presented either all UP verbs or all DOWN verbs, the scoring was performed on the two conditions collapsed together. In scoring the data, we noted the frequency of a particular antonym-pair produced from a particular prompt (e.g., rise or fall) as a proportion of the total number of presentations of that prompt, in either the first or second task.
Results & Discussion
General Characteristics of the Data. In all, participants identified 114 unique antonym pairs to the 30 presented words (combined UP- and DOWN-verbs). On average, a given prompt-word gave rise to almost five alternative antonym pairs (M = 4.8), with a range from 2 (for weak, participants produced weak-strong and weak-stable) to 9 alternative pairs (e.g., elevate-drop, elevate-fall). On average, in the free-generation task participants suggested one antonym (M = 1.37) that was not in the opposing set used in the match-the-opposite task (e.g., when presented with stable several participants suggested unstable as the antonym, but readily chose volatile as its antonym in the matching task). Overall, people vary significantly in the antonyms identified for a prompt word. However, for a group of people, there is usually a clear most-frequently-identified antonym. For instance, on average, 96% of participants chose strong when prompted with weak or weak when prompted with strong. Table 3 shows the overall percentage for the top two most frequently identified antonym-pairs for each prompt word. Note that a conservative estimate of chance across both tasks would be close to 5%. This chance-level computation is simply an observation of all available choices in Task 2 along with those free-generation choices in Task 1 that were not available in Task 2. This means that the 5% chance-level estimate is, if anything, conservative, because in Task 1 the entire English lexicon is available to the participant. Thus, though some percentages are low, they are well above chance.

The Free-Generation Task. A notable aspect of the data is how different the percentages are for identified antonyms in the two tasks. The free-generation task allowed participants to name whatever antonym came to mind, some of which were not included in the set for Task 2. However, if one looks at the most-frequently-identified antonyms, there are only five cases (out of 60) where "another" antonym was identified frequently. This means that we can be confident that the match-the-opposite task was not overly constrained in the choices given to participants.
The Match-the-Opposite Task. In this task, the choice of antonym was restricted to the 15 contrasting words, with participants being given the option to choose more than one. This is a more constrained task in which to identify antonyms and produced a generally clearer pattern of antonym-pair identification.[1] There are clear winners in terms of favoured antonym pairs; notably, increase-decrease (100%), elevate-fall (44%) and alleviate-exacerbate (43%). Note that some of the low percentages occur because one of the words in the pair is used by another very dominant antonym; so, for example, the listings for fall-gain and fall-increase are very low (4%; though below chance) because fall-rise (implicitly listed in rise-fall) has a high percentage (57%).

In itself, this data is interesting but does not answer the posed question of whether these patterns of behaviour are predictable from the argument-distributions of the verbs. In the next study, we turn to this key issue. To reiterate our hypothesis, we expect that an empirical analysis of the distributional similarity between verb-arguments will correlate with the results of the study presented in this section.

[1] By necessity, when a word is generated in Task 1 but not present in Task 2, the percentage has to be 0 in Task 2 (as it was not used as a word prompt).
Study 2: Similarity of Antonym Distributions
Study 1 gives us a set of human data on how people tend to identify antonyms; in this study, we compare these identifications to a corpus analysis of the argument distributions of the same words. Our hypothesis was that by taking a distributive approach to knowledge, we might be able to identify antonyms by analysing the arguments they take. Study 1 provides a way of validating our computational analysis of these words' argument distributions.
Method
Materials All the same words used in Study 1 were used in this analysis. We also included the words generated by the participants in Study 1 that were not in our original material list.

Procedure Taking the 114 antonym pairs in Study 1, we assembled them into a set of word-vectors by the frequency of their arguments given by Sketch Engine (Kilgarriff et al., 2004). Each verb had anywhere from 250 to 2,000 arguments in its vector (if a particular word was found in one vector of a pair, but not in the other, it was given a frequency of zero[2]). We examined a number of similarity measures including Euclidean distance, cosine similarity, and Kullback-Leibler divergence. We also compared methods of cutting and smoothing the tails of the distributions to mitigate the effects of low-frequency arguments. Markedly, the most successful measure was cosine similarity, in which the distribution's tail was not cut or smoothed. This measure was applied to the vectors of all words in each of the 114 antonym pairs and similarity scores were noted. Correlations were computed between this measure and the proportions for different antonym-pairs in Task 1 and Task 2 separately, as well as the combined totals (see Table 3).
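A minimal sketch of the cosine measure as applied here is given below. Representing each word-vector as a dictionary from argument lemmas to frequencies is our choice, not a detail from the paper, but the zero-filling of missing arguments and the absence of tail cutting or smoothing follow the procedure just described.

```python
import math

def cosine_similarity(dist_a, dist_b):
    """Cosine similarity between two argument-frequency vectors.

    `dist_a` and `dist_b` map argument lemmas to raw frequencies.  An
    argument present in one vector but not the other implicitly
    contributes a frequency of zero; the tails are neither cut nor
    smoothed.
    """
    shared = set(dist_a) & set(dist_b)
    dot = sum(dist_a[w] * dist_b[w] for w in shared)  # zero-frequency terms drop out
    norm_a = math.sqrt(sum(v * v for v in dist_a.values()))
    norm_b = math.sqrt(sum(v * v for v in dist_b.values()))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)
```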
Results & Discussion
Overall, the argument-distributions of the words provide a moderately effective means for identifying the most-frequently-chosen antonym pairs.

Correlations to All Antonym-Pairs. The Pearson correlations between the cosine similarity scores and the proportions in each of the tasks and overall reveal a moderate correlation (r = 0.31) for Task-2 x Cosine-Similarity. The other measures reveal low correlations for Task-1 x Cosine-Similarity (r = 0.10) and Total-% x Cosine-Similarity (r = 0.25). It is perhaps not surprising that the best correlation is found in the more constrained task, where people's choice of antonym was more restricted. That such correlated regularities could be found for data from a relatively small sample (n = 12) is, we believe, very encouraging for the veracity of this technique. However, the correlation only gives us a general sense of the correspondence; the more demanding question is whether the most-frequently-identified antonyms specifically emerge from the computational analysis of argument-distributions.
Identifying Most-Frequently-Identified Antonyms. Table 3 shows the top-two most-frequently-identified antonyms for a given prompt word in the UP- and DOWN-verb sets. In the column showing the cosine similarity score (Sim) for an antonym, a score shown in bold indicates that it was the highest similarity score among all the alternative antonym-pairs in the set. So, in 60% of cases the most-frequently identified antonym-pair was also the one with the highest similarity score in its set of antonym-pairs. If we widen this assessment to accept the highest and second-highest scored antonym pairs, then 87% of the pairs that emerge from the corpus analysis were identified as most-frequent antonyms by participants. This is a very good correspondence between the predictions of the computational measure and the results of the human data.
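The top-one and top-two agreement figures reported above can be computed with a sketch like the following; the data structures are illustrative assumptions of ours, not the authors' own code.

```python
def top_k_agreement(sim_scores, human_best, k):
    """Fraction of prompts whose most-frequently-identified antonym is
    among the k highest-scoring candidate pairs.

    `sim_scores` maps each prompt to {candidate: cosine similarity};
    `human_best` maps each prompt to the antonym people identified
    most often in Study 1.
    """
    hits = 0
    for prompt, best in human_best.items():
        ranked = sorted(sim_scores[prompt],
                        key=sim_scores[prompt].get, reverse=True)
        if best in ranked[:k]:
            hits += 1
    return hits / len(human_best)

# Given the study's data, k=1 would yield the 60% figure reported
# above and k=2 the 87% figure.
```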
General Discussion
Metaphors, and their linguistic instantiations, structure not only the way we converse, but the way we think. In this paper we have shown that a statistical analysis of argument-distributions can be used to identify antonymic verb-pairs: pairs that refer to opposing metaphors in our knowledge (cf. Lakoff, 1992).
The strongest antonyms identified by participants in Study 1 are shown to be predictable by looking at statistical regularities of word-usage in a corpus. In itself, this is an interesting result, but it also lends support to an emerging body of work on finding meaning behind word-use statistics (see Turney & Pantel, 2010, for a survey). Specifically, vector space models, a form of which we employed in Study 2 of this paper, have been used in computational research on document summarisation, comparison, information extraction, searching, and indexing. These models have also found cognitive relevance in analogy resolution, semantic priming and comprehension, and word-sense disambiguation. This growing body of work, as well as the current paper, bridges a gap between words and meaning.
In another paper, using the same corpus, we show that metaphoric verbs exhibit a partially-subsumptive hierarchical structure (Gerow & Keane, 2011b). Both papers show that, in this financial domain, there are clear statistical regularities in word usage that can be used as pointers to the underlying structure and organization of metaphors. We believe that this is an important finding. Indeed, both papers bridge a gap, analogous to the word-meaning gap, between linguistic and conceptual metaphors.
Table 1: The percentage of the argument-distributions of rise and fall for their 10 most frequent arguments.

Rise (Arg)   % of Total (Freq)   Fall (Arg)   % of Total (Freq)
index        7.39%               index        6.97%
share        5.67%               share        6.41%
point        4.83%               point        3.75%
percent      2.90%               percent      2.97%
price        2.43%               price        2.83%
stock        2.00%               stock        2.78%
yield        1.90%               yield        1.77%
cent         1.31%               cent         1.34%
profit       0.91%               profit       1.34%
rate         0.90%               rate         1.24%

(Nrise = 39,261; Nfall = 39,230).
Table 2: The UP and DOWN verbs used in the studies.

UP-verbs    occurrences (% corpus*)   DOWN-verbs   occurrences (% corpus*)
rise        39,261 (4.20%)            fall         39,230 (4.20%)
gain        13,134 (1.40%)            lose         12,298 (1.30%)
increase     6,158 (0.67%)            decrease        123 (0.01%)
climb        5,631 (0.60%)            tumble        2,135 (0.23%)
jump         4,960 (0.53%)            slip          3,336 (0.36%)
rally        4,190 (0.45%)            retreat       1,474 (0.20%)
advance      2,385 (0.26%)            slide         2,777 (0.30%)
surge        2,313 (0.25%)            plunge        1,592 (0.17%)
recover      2,165 (0.23%)            worsen          500 (0.05%)
soar         1,649 (0.18%)            plummet         443 (0.05%)
rebound      1,220 (0.13%)            dip           1,322 (0.14%)
alleviate      134 (0.01%)            decline       3,672 (0.39%)
elevate         52 (0.01%)            drop          8,377 (0.90%)
strong         718 (0.07%)            weak          1,222 (0.13%)
ease         2,243 (0.35%)            sink          1,339 (0.14%)
[2] Note that we also used 1 instead of 0, a technique that is sometimes used to control the effects of the tail of the distribution, but it did not produce notably different results to those reported.
Acknowledgements
This work was carried out as part of a self-funded MSc in the Cognitive Science programme at University College Dublin by the first author. Thanks to Trinity College Dublin and Prof. K. Ahmad for support and supervision of the first author's PhD during the preparation of this paper. Thanks to K. Hadfield and four anonymous reviewers for valuable suggestions.
References

Brown, J. C., Frishkoff, G. A., & Eskenazi, M. (2005). Automatic question generation for vocabulary assessment. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing (pp. 819-826).

Deignan, A. (2005). A corpus-linguistic perspective on the relationship between metonymy and metaphor. Style, 39(1), 72-91.

Gerow, A. & Keane, M. T. (2011a). Mining the web for the "voice of the herd" to spot stock market bubbles. To appear in Proceedings of the 22nd International Joint Conference on Artificial Intelligence.

Gerow, A. & Keane, M. T. (2011b). Identifying metaphoric antonyms in a corpus analysis of finance articles. To appear in Proceedings of the 33rd Annual Meeting of the Cognitive Science Society.

Kilgarriff, A., Rychlý, P., Smrž, P., & Tugwell, D. (2004). The Sketch Engine. In Proceedings of EURALEX (pp. 105-116).

Lakoff, G. (1992). The contemporary theory of metaphor. In A. Ortony (Ed.), Metaphor and Thought (2nd ed.). Cambridge: Cambridge University Press.

Lakoff, G. & Johnson, M. (1980). Metaphors we live by. Chicago, IL: University of Chicago Press.

Landauer, T. K. & Dumais, S. T. (1997). A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104, 211-240.

Michel, J.-B. et al. (2010). Quantitative analysis of culture using millions of digitized books. ScienceExpress, 10(1126).

Schmid, H. (1994). TreeTagger - a language independent part-of-speech tagger. (http://www.ims.uni-stuttgart.de/Tools/DecisionTreeTagger.html)

Sekine, S. (1997). The Apple Pie Parser, v5.9. (http://nlp.cs.nyu.edu/app/)

Turney, P. D. (2006). Similarity of semantic relations. Computational Linguistics, 32(3), 379-416.

Turney, P. D. & Pantel, P. (2010). From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37, 141-188.

Veale, T. (2004). WordNet sits the SAT: A knowledge-based approach to lexical analogy. In Proceedings of the 16th European Conference on Artificial Intelligence (pp. 606-612).
| [] |
[
"Quantifiers, Anaphora, and Intensionality",
"Quantifiers, Anaphora, and Intensionality"
] | [
"Mary Dalrymple dalrymple@parc.xerox.com \nAT&T Bell Laboratories\n07974Murray HillNJ\n",
"John Lamping lamping@parc.xerox.com \nAT&T Bell Laboratories\n07974Murray HillNJ\n",
"Fernando Pereira pereira@research.att.com \nAT&T Bell Laboratories\n07974Murray HillNJ\n",
"Vijay Saraswat saraswat@parc.xerox.com \nAT&T Bell Laboratories\n07974Murray HillNJ\n"
] | [
"AT&T Bell Laboratories\n07974Murray HillNJ",
"AT&T Bell Laboratories\n07974Murray HillNJ",
"AT&T Bell Laboratories\n07974Murray HillNJ",
"AT&T Bell Laboratories\n07974Murray HillNJ"
] | [] | The relationship between Lexical-Functional Grammar (LFG) functional structures (fstructures) for sentences and their semantic interpretations can be expressed directly in a fragment of linear logic in a way that correctly explains the constrained interactions between quantifier scope ambiguity, bound anaphora and intensionality.The use of a deductive framework to account for the compositional properties of quantifying expressions in natural language obviates the need for additional mechanisms, such as Cooper storage, to represent the different scopes that a quantifier might take. Instead, the semantic contribution of a quantifier is recorded as a logical formula whose use in a proof will establish the scope of the quantifier. Different proofs will in general lead to different scopes. In each complete proof, the properties of linear logic will ensure that each quantifier is properly scoped.The interactions between quantified NPs and intensional verbs such as 'seek' are also accounted for in this deductive setting. A single specification in linear logic of the argument requirements of intensional verbs is sufficient to derive the correct reading predictions for intensional-verb clauses both with nonquantified and with quantified direct objects. In particular, both de dicto and de re readings are derived for quantified objects. The effects of type-raising or quantifying-in rules in other frameworks here just follow as linear-logic theorems.While our approach resembles current categorial approaches in important ways(Moortgat, 1988;Moortgat, 1992a;Morrill, 1993;Carpenter, 1993), it differs from them in allowing the greater type flexibility of categorial semantics(van Benthem, 1991)while maintaining a precise connection to syntax. As a result, we are able to provide derivations for certain readings of sentences with intensional verbs and complex direct objects that are not derivable in current purely categorial accounts of the syntax-semantics interface. * Xerox PARC, Palo Alto CA 94304; | 10.1023/a:1008224124336 | [
"https://arxiv.org/pdf/cmp-lg/9504029v2.pdf"
] | 7,441,581 | cmp-lg/9504029 | ebd3c9affc58eb362aabe6f2e40dcae168f93d54 |
Quantifiers, Anaphora, and Intensionality
Mary Dalrymple dalrymple@parc.xerox.com
AT&T Bell Laboratories
07974Murray HillNJ
John Lamping lamping@parc.xerox.com
AT&T Bell Laboratories
07974Murray HillNJ
Fernando Pereira pereira@research.att.com
AT&T Bell Laboratories
07974Murray HillNJ
Vijay Saraswat saraswat@parc.xerox.com
AT&T Bell Laboratories
07974Murray HillNJ
Quantifiers, Anaphora, and Intensionality
The relationship between Lexical-Functional Grammar (LFG) functional structures (fstructures) for sentences and their semantic interpretations can be expressed directly in a fragment of linear logic in a way that correctly explains the constrained interactions between quantifier scope ambiguity, bound anaphora and intensionality.The use of a deductive framework to account for the compositional properties of quantifying expressions in natural language obviates the need for additional mechanisms, such as Cooper storage, to represent the different scopes that a quantifier might take. Instead, the semantic contribution of a quantifier is recorded as a logical formula whose use in a proof will establish the scope of the quantifier. Different proofs will in general lead to different scopes. In each complete proof, the properties of linear logic will ensure that each quantifier is properly scoped.The interactions between quantified NPs and intensional verbs such as 'seek' are also accounted for in this deductive setting. A single specification in linear logic of the argument requirements of intensional verbs is sufficient to derive the correct reading predictions for intensional-verb clauses both with nonquantified and with quantified direct objects. In particular, both de dicto and de re readings are derived for quantified objects. The effects of type-raising or quantifying-in rules in other frameworks here just follow as linear-logic theorems.While our approach resembles current categorial approaches in important ways(Moortgat, 1988;Moortgat, 1992a;Morrill, 1993;Carpenter, 1993), it differs from them in allowing the greater type flexibility of categorial semantics(van Benthem, 1991)while maintaining a precise connection to syntax. As a result, we are able to provide derivations for certain readings of sentences with intensional verbs and complex direct objects that are not derivable in current purely categorial accounts of the syntax-semantics interface. * Xerox PARC, Palo Alto CA 94304;
Introduction
This paper describes a part of our ongoing investigation into the use of formal deduction to explicate the relationship between syntactic analyses in Lexical-Functional Grammar (LFG) and semantic interpretations. We use linear logic (Girard, 1987) to represent the connection between two dissimilar linguistic levels: LFG f-structures and their semantic interpretations.
F-structures provide a uniform representation of syntactic information relevant to semantic interpretation that abstracts away from the varying details of phrase structure and linear order in particular languages. As has been noted, however, the flatter, unordered functional structure of LFG does not fit well with traditional semantic compositionality, based on functional abstraction and application, which mandates a rigid order of semantic composition. We are thus led to a more relaxed form of compositionality, in which, as in more traditional approaches, the semantics of each lexical entry in a sentence is used exactly once in interpretation, but without imposing a rigid order of composition. Approaches to semantic interpretation that encode semantic representations in attribute-value structures (Pollard and Sag, 1987; Fenstad et al., 1987; Pollard and Sag, 1993) offer such a relaxation of compositionality, but are unable to properly represent constraints on variable binding and scope (Pereira, 1990).
The present approach, in which linear logic is used to specify the relation between fstructures and their meanings, provides exactly what is required for a calculus of semantic composition for LFG. It can directly represent the constraints on the creation and use of semantic units in sentence interpretation, including those pertaining to variable binding and scope, without forcing a particular hierarchical order of composition, except as required by the properties of particular lexical entries.
The use of formal deduction in semantic interpretation was implicit in deductive systems for categorial syntax (Lambek, 1958), and has been made explicit through applications of the Curry-Howard parallelism between proofs and terms in more recent work on categorial semantics (van Benthem, 1988;van Benthem, 1991), labeled deductive systems (Moortgat, 1992b) and flexible categorial systems (Hendriks, 1993). Accounts of the syntax-semantics interface in the categorial tradition require that syntactic and semantic analyses be formalized in parallel algebraic structures of similar signatures, based on generalized application and abstraction (or residuation) operators and structure-preserving relations between them. Those accounts therefore force the adoption of categorial syntactic analyses, with an undesirably strong dependence on phrase structure and linear order.
We have previously shown that the linear-logic formalization of the syntax-semantics interface for LFG provides simple and general analyses of modification, functional completeness and coherence, and complex predicate formation . In the present paper, the analysis is extended to the interpretation of quantified NPs. After an overview of the approach, we present our analysis of the compositional properties of quantified NPs, and we show that the analysis correctly accounts for scope ambiguity and its interactions with bound anaphora. We also present an analysis of intensional verbs, which take quantified arguments, and show that our approach predicts the full range of acceptable readings without appealing to additional machinery.
LFG and Linear Logic
Syntactic framework LFG assumes two syntactic levels of representation. Constituent structure (c-structure) encodes phrasal dominance and precedence relations, and is represented as a phrase structure tree. Functional structure (f-structure) encodes syntactic predicate-argument structure, and is represented as an attribute-value matrix. The c-structure and f-structure for sentence (1) are given in (2): (1) Bill appointed Hillary.
(2) C-structure and f-structure for (1). (The phrase-structure tree and the attribute-value matrix were rendered as diagrams and are not preserved in this extraction.)

As illustrated, an f-structure consists of a collection of attributes, such as pred, subj, and obj, whose values can, in turn, be other f-structures. The relationship between c-structure trees and the corresponding f-structures is given by a functional projection function φ from c-structure nodes to f-structures. More generally, LFG analyses involve several levels of linguistic representation called projections related by means of projection functions (Kaplan, 1987; Halvorsen and Kaplan, 1988). For instance, phonological, morphological, or discourse structure might be represented by a phonological, morphological, or discourse projection, related to other projections by means of functional specifications.
The functional projection of a c-structure node is the solution of constraints associated with the phrase-structure rules and lexical entries used to derive the node. In each rule or lexical entry constraint, the ↑ metavariable refers to the φ-image of the mother c-structure node, and the ↓ metavariable refers to the φ-image of the nonterminal labeled by the constraint (Kaplan and Bresnan, 1982, page 183). For example, the following annotated phrase-structure rules were used in the analysis of sentence (1):
(3)  S  →  NP              VP
           (↑ subj) = ↓     ↑ = ↓
The annotations on the rule indicate that the f-structure for the S (↑ in the annotation on the NP node) has a subj attribute whose value is the f-structure for the NP daughter (↓ in the annotation on the NP node), and that the S node corresponds to an f-structure which is the same as the f-structure for the VP daughter.
When the phrase-structure rule for S is used in the analysis of a particular sentence, the metavariables ↑ and ↓ are instantiated to particular f-structures placed in correspondence with nodes of the c-structure. We will refer to actual f-structures by giving them names such as f , g, and h. The instantiated phrase structure rule is given in (4), with the φ correspondence between c-structure nodes and f-structures indicated by the directed arcs from phrase-structure nodes to attribute-value matrices:
(4)  S  →  NP              VP
           (f subj) = h     f = g

     f, g: [ subj  h: [ ] ]
     (The arcs from the S, NP, and VP nodes to these f-structures are not preserved in this extraction.)
Lexical entries also use the metavariables ↑ and ↓ to encode information about the f-structures of the preterminal nodes that immediately dominate them. A partial lexical entry for the word 'Bill' is:
(5) Bill NP (↑ pred) = 'Bill'
This entry states that 'Bill' has syntactic category NP. The constraint (↑ pred) = 'Bill' states that the preterminal node immediately dominating the terminal symbol 'Bill' has an f-structure whose value for the attribute pred is 'Bill'. In this paper, we will provide only the most minimal f-structural representations, leaving aside all details of syntactic specification; in this example, for instance, agreement and other syntactic features of 'Bill' have been omitted. For a particular instance of use of the word 'Bill', the following c-structure and f-structure configuration results:
(6)  (h pred) = 'Bill'
     NP → Bill          h: [ pred 'Bill' ]
Other lexical entries similarly specify features of the f-structure of the immediately dominating preterminal node. The following is a list of the phrase structure rules and lexical entries used in the analysis of example (1):[1]
(7)  S   →  NP              VP
            (↑ subj) = ↓     ↑ = ↓

     VP  →  V               NP
            ↑ = ↓            (↑ obj) = ↓

(8)  Bill       NP  (↑ pred) = 'Bill'
     appointed  V   (↑ pred) = 'appoint'
     Hillary    NP  (↑ pred) = 'Hillary'
For a more complete explication of the syntactic assumptions of LFG, see Bresnan (1982), Levin, Rappaport, and Zaenen (1983), and the references cited there.
Lexically-specified semantics A distinguishing feature of our work (and of other work within the LFG framework) is that semantic composition does not take syntactic dominance and precedence relations as the main input. Instead, we follow other work in LFG (Kaplan and Bresnan 1982, Halvorsen 1983, Halvorsen and Kaplan 1988) in assuming that the functional syntactic information encoded by f-structures determines semantic composition. That is, we believe that meaning composition is mainly determined by syntactic relations such as subject-of, object-of, modifier-of, and so on. Those relations are realized by different c-structure forms in different languages, but are represented directly and uniformly in the f-structure.
In LFG, syntactic predicate-argument structure is projected from lexical entries. Therefore, its effect on semantic composition will for the most part -in fact, in all the cases considered in this paper -be determined by lexical entries, not by phrase-structure rules. In particular, the two phrase-structure rules given above for S and VP need not encode semantic information, but only specify how grammatical functions such as subj are expressed in English. In some cases, the constituent structure of a syntactic construction may make a direct semantic contribution, as when properties of the construction as a whole and not just of its lexical elements are responsible for the interpretation of the construction. Such cases include, for instance, relative clauses with no complementizer ('the man Bill met'). We will not discuss construction-specific interpretation rules in this paper.
In the same way as the functional projection function φ associates f-structures to c-structures as described above, we will use a semantic or σ-projection function σ to map f-structures to semantic or σ-structures encoding information about f-structure meaning.
[1] Those familiar with other analyses within the LFG framework will notice that we have not included a list of grammatical functions subcategorized for by the verb 'appoint'; this is because we assume a different treatment of the LFG requirements of completeness and coherence. We return to this point below.
For instance, the following lexical entry for 'Bill' extends (8) with a suitable constraint on semantic structure:
(9)  Bill  NP  (↑ pred) = 'Bill'
               ↑σ ; Bill
The additional constraint ↑ σ ;Bill is what we call the meaning constructor of the entry. The expression ↑ σ stands for the σ projection of the f-structure ↑. The σ projection is an attribute-value matrix like the f-structure. For simple entries such as this, the σ projection has no internal structure; below, we will examine cases in which the σ projection is structured with several different attributes.
As above, for a particular use of 'Bill', the metavariable ↑ will be replaced by a particular f-structure h, with semantic projection h σ :
(10)  (h pred) = 'Bill'
      NP → Bill          h: [ pred 'Bill' ]          hσ: [ ] ; Bill
More generally, the association between the semantic structure h σ and a meaning P is represented by the atomic formula h σ ;P , where ; is an otherwise uninterpreted binary predicate symbol. (In fact, we use not one but a family of relations ; τ indexed by the semantic type of the intended second argument, although for simplicity we will omit the type subscript whenever it is determinable from context.) We can now explain the meaning constructor in (9). If a particular occurrence of 'Bill' in a sentence is associated with f-structure h, the syntactic constraint in the lexical entry Bill will be instantiated as:
(h pred) = 'Bill' and the semantic constraint will be instantiated as:
h σ ;Bill
representing the association between h σ and the constant Bill representing its meaning. We will often informally say that P is h's meaning without referring to the role of the semantic structure h σ in h σ ;P . We will see, however, that f-structures and their semantic projections must be distinguished, because semantic projections can carry more information than just the association to the meaning for the corresponding f-structure.
Logical representation of semantic compositionality We now turn to an examination of the lexical entry for 'appointed'. In this case, the meaning constructor is more complex, as it relates the meanings of the subject and object of a clause to the clause's meaning:
(11)  appointed  V  (↑ pred) = 'appoint'
                    ∀X, Y. (↑ subj)σ ; X ⊗ (↑ obj)σ ; Y −• ↑σ ; appoint(X, Y)
The meaning constructor is the linear-logic formula:
∀X, Y. (↑ subj) σ ;X ⊗ (↑ obj) σ ;Y −• ↑ σ ;appoint (X, Y )
in which the linear-logic connectives of multiplicative conjunction ⊗ and linear implication −• are used to specify how the meaning of a clause headed by the verb is composed from the meanings of the arguments of the verb. For the moment, we can think of the linear connectives as playing the same role as the analogous classical connectives conjunction ∧ and implication →, but we will soon see that the specific properties of the linear connectives are essential to guarantee that lexical entries bring into the interpretation process all and only the information provided by the corresponding words. The meaning constructor for 'appointed' asserts, then, that if the subject (subj) of a clause with main verb 'appointed' means X and its object (obj) means Y , then the whole clause means appoint (X, Y ). 2 The meaning constructor can thus be thought of as a linear definite clause, with the variables X and Y playing the same role as Prolog variables.
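To make the resource-sensitive reading concrete, here is a toy sketch of one deduction step; it is our own illustration (names such as subj_sigma are invented for the example), not part of the formalism itself. Firing the implication consumes each antecedent premise exactly once, as linear logic requires.

```python
def fire_implication(premises, antecedent_sites, conclude):
    """Consume one fact per antecedent site from `premises` (each used
    exactly once, as with linear implication) and add the conclusion.

    `premises` is a list of (site, meaning) pairs; `conclude` builds
    the conclusion from the consumed meanings.  Illustrative only.
    """
    remaining = list(premises)
    meanings = []
    for site in antecedent_sites:
        fact = next(f for f in remaining if f[0] == site)
        remaining.remove(fact)       # linear: the premise is used up
        meanings.append(fact[1])
    return remaining + [conclude(*meanings)]

facts = [("subj_sigma", "Bill"), ("obj_sigma", "Hillary")]
result = fire_implication(
    facts, ["subj_sigma", "obj_sigma"],
    lambda x, y: ("f_sigma", f"appoint({x}, {y})"))
# result == [("f_sigma", "appoint(Bill, Hillary)")]; no premise survives
# to be reused, unlike with classical conjunction and implication.
```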
A particular instance of use of 'appointed' produces the following f-structure and meaning constructor:
(12)  (f pred) = 'appoint'
      V → appointed
      f: [ pred 'appoint'
           subj [ ]
           obj  [ ] ]          fσ: [ ]
      ∀X, Y. (f subj)σ ; X ⊗ (f obj)σ ; Y −• fσ ; appoint(X, Y)

[2] In fact, we believe that the correct treatment of the relation between a verb and its arguments requires the use of mapping principles specifying the relation between the array of semantic arguments required by a verb and their possible syntactic realizations (Bresnan and Kanerva, 1989; Alsina, 1993; Butt, 1993). A verb like 'appoint', for example, might specify that one of its arguments is an agent and the other is a theme. Mapping principles would then specify that agents can be realized as subjects and themes as objects. Here we make the simplifying assumption (valid for English) that the arguments of verbs have already been linked to syntactic functions and that this linking is represented in the lexicon. In the case of complex predicates this assumption produces incorrect results, as shown by Butt (1993) for Urdu. Mapping principles are very naturally incorporated into the framework discussed here; see [citations not preserved in this extraction] for discussion and illustration.
The instantiated meaning constructor asserts that f is the f-structure for a clause with predicate (pred) 'appoint', and:
• if f's subject (f subj) has meaning X
• and (⊗) f's object (f obj) has meaning Y
• then (−•) f has meaning appoint(X, Y).
It is not an accident that the form of the meaning constructor for appointed is analogous to the type (e × e) → t which, in its curried form e → e → t, is the standard type for a transitive verb in a compositional semantics setting (Gamut, 1991). In general, the propositional structure of the meaning constructors of lexical entries will parallel the types assigned to the meanings of the same words in compositional analyses.
As mentioned above, in most cases phrase-structure rules make no semantic contributions of their own. Thus, all the semantic information for a sentence like 'Bill appointed Hillary' is provided by the lexical entries for 'Bill', 'appointed', and 'Hillary':
(13)  Bill       NP  (↑ pred) = 'Bill'
                     ↑ σ ; Bill
      appointed  V   (↑ pred) = 'appoint'
                     ∀X, Y. (↑ subj) σ ;X ⊗ (↑ obj) σ ;Y −• ↑ σ ;appoint(X, Y)
      Hillary    NP  (↑ pred) = 'Hillary'
                     ↑ σ ; Hillary
Assembly of meanings via deduction We now have the ingredients for building semantic interpretations by deductive means. To recapitulate the development so far, lexical entries provide semantic constructors, which are linear-logic formulas specifying how the meanings of f-structures are built from the meanings of their substructures. Thus, linear logic serves as a glue language to assemble meanings. Certain terms in the glue language represent (open) formulas of an appropriate meaning language, which for the present purposes will be a version of Montague's intensional logic (Montague, 1974). Other terms in the glue language represent semantic projections. The glue-language formula f ;t, with f a term representing a semantic projection and t a term representing a meaning-language formula, expresses the association between the semantic projection denoted by f and the meaning fragment denoted by t.
The fragment of linear logic we use as glue language will be described incrementally as we discuss examples, and is summarized in Appendix A. The semantic contribution of each lexical entry is a linear-logic formula, its meaning constructor, that can be understood as "instructions" for combining the meanings of the lexical entry's syntactic arguments to obtain the meaning of the f-structure headed by the entry. In the case of the verb 'appointed' above, the meaning constructor is a glue language formula consisting of instructions on how to assemble the meaning of a sentence with main verb 'appointed', given the meanings of its subject and object.
We will now show how meanings are assembled by linear-logic deduction. The full set of proof rules relevant to this paper is given in Appendix B. For readability, however, we will present derivations informally in the main body of the paper. As a first example, consider the lexical entries in (13) and let the constants f , g and h name the following f-structures:
(14)  f: [ pred 'appoint'
           subj g: [ pred 'Bill' ]
           obj  h: [ pred 'Hillary' ] ]
Instantiating the lexical entries for 'Bill', 'Hillary', and 'appointed' appropriately, we obtain the following meaning constructors, abbreviated as bill, hillary, and appointed:

bill:       g σ ; Bill
hillary:    h σ ; Hillary
appointed:  ∀X, Y. g σ ;X ⊗ h σ ;Y −• f σ ;appoint(X, Y)
These formulas show how the generic semantic contributions in the lexical entries are instantiated to reflect their participation in this particular f-structure. Since the entry 'Bill' gives rise to f-structure g, the meaning constructor for 'Bill' provides a meaning for g σ . Similarly, the meaning constructor for 'Hillary' provides a meaning for h σ . The verb 'appointed' requires two pieces of information, the meanings of its subject and object, in no particular order, to produce a meaning for the clause. As instantiated, the f-structures corresponding to the subject and object of the verb are g and h, respectively, and f is the f-structure for the entire clause. Thus, the instantiated entry for 'appointed' shows how to combine a meaning for g σ (its subject) and h σ (its object) to generate a meaning for f σ (the entire clause).
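As a concrete illustration of this instantiation step, the instantiated constructors can be represented as data in a short sketch (ours, not part of the formalism; the names Atom and Impl and the string encoding of meaning terms are illustrative assumptions):

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass(frozen=True)
    class Atom:
        """An atomic glue formula  proj ; meaning,  e.g. g_sigma ; Bill."""
        proj: str     # a semantic projection, e.g. the sigma-projection of g
        meaning: str  # a meaning-language term, kept as a string for simplicity

    @dataclass(frozen=True)
    class Impl:
        """A linear implication: consume every antecedent once, produce the consequent."""
        antecedents: Tuple[Atom, ...]  # the tensor conjunction to the left of -o
        consequent: Atom

    # Instantiated constructors for 'Bill appointed Hillary' (X, Y as variable names):
    bill      = Atom('g_sigma', 'Bill')
    hillary   = Atom('h_sigma', 'Hillary')
    appointed = Impl((Atom('g_sigma', 'X'), Atom('h_sigma', 'Y')),
                     Atom('f_sigma', 'appoint(X, Y)'))

The Impl value mirrors the linear definite clause reading of the constructor: its antecedents are the resources to be consumed and its consequent the single resource produced.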
In the following, assume that the formula bill-appointed is defined thus:
bill-appointed: ∀Y. h σ ;Y −• f σ ;appoint (Bill, Y )
Then the following derivation is possible in linear logic (⊢ stands for the linear-logic entailment relation):
(15)  bill ⊗ hillary ⊗ appointed           (Premises.)
    ⊢ bill-appointed ⊗ hillary             X → Bill
    ⊢ f σ ;appoint(Bill, Hillary)          Y → Hillary
Each formula is annotated with the variable substitutions (universal instantiations) required to derive it from the preceding one by the modus ponens rule
A ⊗ (A −• B) ⊢ B.
Of course, another derivation is also possible. Assume that the formula appointed-hillary is defined as:
appointed-hillary: ∀X. g σ ;X −• f σ ;appoint (X, Hillary)
Then we have the following derivation:
(16)  bill ⊗ hillary ⊗ appointed           (Premises.)
    ⊢ bill ⊗ appointed-hillary             Y → Hillary
    ⊢ f σ ;appoint(Bill, Hillary)          X → Bill
In summary, each word in a sentence contributes a linear-logic formula, its meaning constructor, relating the semantic projections of specific f-structures in the LFG analysis to representations of their meanings. From these glue language formulas, the interpretation process attempts to deduce an atomic formula relating the semantic projection of the whole sentence to a representation of the sentence's meaning. Alternative derivations may yield different such conclusions, corresponding to ambiguities of semantic interpretation.
Linear logic As we have just outlined, we use deduction in linear logic to assign meanings to sentences, starting from information about their functional structure and about the semantic contributions of their words. Traditional compositional approaches depend on a strict separation between functors and arguments, typically derived from a binary-branching phrase-structure tree. In contrast, our linear-logic-based approach allows the premises carrying semantic information to commute while keeping their connection to the f-structure, and is thus more compatible with the flat and relatively free form organization of functional structure.
An important motivation for using linear logic is that it allows us to directly capture the intuition that lexical items and phrases each contribute exactly once to the meaning of a sentence. As noted by Klein and Sag (1985, page 172):
Translation rules in Montague semantics have the property that the translation of each component of a complex expression occurs exactly once in the translation of the whole. . . . That is to say, we do not want the set S [of semantic representations of a phrase] to contain all meaningful expressions of IL which can be built up from the elements of S, but only those which use each element exactly once.
In our terms, the semantic contributions of the constituents of a sentence are not contextindependent assertions that may be used or not in the derivation of the meaning of the sentence depending on the course of the derivation. Instead, the semantic contributions are occurrences of information which are generated and used exactly once. For example, the formula g σ ;Bill can be thought of as providing one occurrence of the meaning Bill associated to the semantic projection g σ . That meaning must be consumed exactly once (for example, by appointed in (15)) in the derivation of a meaning of the entire utterance.
It is this "resource-sensitivity" of natural language semantics-an expression is used exactly once in a semantic derivation-that linear logic can model. The basic insight underlying linear logic is that logical formulas are resources that are produced and consumed in the deduction process. This gives rise to a resource-sensitive notion of implication, the linear implication −• : the formula A −• B can be thought of as an action that can consume (one copy of) A to produce (one copy of) B. Thus, the formula A⊗(A −• B) linearly entails B. It does not entail A ⊗ B (because the deduction consumes A), and it does not entail (A −• B)⊗B (because the linear implication is also consumed in doing the deduction). This resource-sensitivity not only disallows arbitrary duplication of formulas, but also disallows arbitrary deletion of formulas. Thus the linear multiplicative conjunction ⊗ is sensitive to the multiplicity of formulas: A ⊗ A is not equivalent to A (the former has two copies of the formula A). For example, the formula A ⊗ A ⊗ (A −• B) linearly entails A ⊗ B (there is still one A left over) but does not entail B (there must still be one A present). In this way, linear logic checks that a formula is used once and only once in a deduction, enforcing the requirement that each component of an utterance contributes exactly once to the assembly of the utterance's meaning.
A direct consequence of the above properties of linear logic is that the constraints of functional completeness and coherence hold without further stipulation. In the present setting, the feature structure f corresponding to the utterance is associated with the (⊗) conjunction φ of all the formulas associated with the lexical items in the utterance. The conjunction is said to be complete and coherent iff
T h ⊢ φ −• f σ ;t (for some term t),
where T h is the background theory of general linguistic principles. Each t is to be thought of as a valid meaning for the sentence. This guarantees that the entries are used exactly once in building up the denotation of the utterance: no syntactic or semantic requirements may be left unfulfilled, and no meaning may remain unused.
Our glue language needs to be only a fragment of higher-order linear logic, the tensor fragment, that is closed under conjunction, universal quantification, and implication. This fragment arises from transferring to linear logic the ideas underlying the concurrent constraint programming scheme of Saraswat (1989).
Relationship with Categorial Syntax and Semantics As suggested above, there are close connections between our approach and various systems of categorial syntax and semantics. The Lambek calculus (Lambek, 1958), introduced as a logic of syntactic combination, turns out to be a fragment of noncommutative multiplicative linear logic. If permutation is added to Lambek's system, its left- and right-implication connectives (\ and /) collapse into a single implication connective with behavior identical to −•. This undirected version of the Lambek calculus was developed by van Benthem (1988) to account for the semantic combination possibilities of phrase meanings.
Those systems and related ones (Moortgat, 1988; Hepple, 1990; Morrill, 1990) were developed as calculi of syntactic/semantic types, with propositional formulas representing syntactic categories or semantic types. Given the types for the lexical items in a sentence as assumptions, the sentence is syntactically well-formed in the Lambek calculus if the type of the sentence can be derived from the assumptions arranged as an ordered list. Furthermore, the Curry-Howard isomorphism between proofs and terms (Howard, 1980) allows the extraction of a term representing the meaning of the sentence from the proof that the sentence is well-formed (van Benthem, 1986). However, the Lambek calculus and its variants carry with them a particular view of the syntax-semantics interface which is not obviously compatible with the flatter f-structures of LFG. In Section 5, we will examine more closely the differences between those approaches and ours.
On the other hand, categorial semantics in the undirected Lambek calculus and other related commutative calculi provides an analysis of the possibilities of meaning combination independently of the syntactic realizations of those meanings, but does not offer a mechanism for relating semantic combination possibilities to the corresponding syntactic combination possibilities.
Our system follows categorial semantics in using the "propositional skeleton" of glue formulas to encode the types of phrase meanings and thus their composition potential. In addition, however, first-order quantification over semantic projections maintains the connection between those types and the corresponding syntactic objects, while quantification over semantic terms is used to build the meanings of those syntactic objects. This tripartite organization reflects the three linked systems of representation that participate in semantic interpretation: syntactic structure, semantic types and semantic interpretations themselves. In this way, we can take advantage of the principled description of potential meaning combinations arising from categorial semantics without losing track of the constraints imposed by syntax on the possible combinations of those meanings.
Quantification
Our treatment of quantification, and in particular of quantifier scope ambiguities and of the interactions between scope and bound anaphora, follows the analysis of Pereira (1990; 1991), but offers in addition a formal account of the syntax-semantics interface, which was treated only informally in that earlier work.
Quantifier meanings
The basic idea for the analysis can be seen as a logical counterpart at the glue level of the standard type assignment for generalized quantifiers (Barwise and Cooper, 1981). The generalized quantifier meaning of a natural language determiner has the following type:
(17) (e → t) → (e → t) → t
that is, the type of functions from two properties, the quantifier's restriction and scope, to propositions. At the semantic glue level, we can understand that type as follows. For any determiner, if for arbitrary x we can construct a meaning R(x) for the quantifier's restriction, and again for arbitrary x we can construct a meaning S(x) for the quantifier's scope, where R and S are suitable properties (functions from entities to propositions), then we can construct the meaning Q(R, S) for the whole sentence containing the determiner, where Q is the meaning of the determiner.
Assume for the moment that we have determined the following semantic structures: restr for the restriction (a common noun phrase), restr-arg for its implicit argument, scope for the scope of quantification, and scope-arg for the grammatical function filled by the quantified NP. Then the foregoing analysis can be represented in linear logic by the following schematic formula:
(18)  ∀R, S. (∀x. restr-arg ;x −• restr ;R(x))
            ⊗ (∀x. scope-arg ;x −• scope ;S(x))
            −• scope ;Q(R, S)

Given the equivalence between A ⊗ B −• C and A −• (B −• C)
, the propositional part of (18) parallels the generalized quantifier type (17).
In addition to providing a semantic type assignment for determiners, (18) uses glue language quantification to express how the meanings of the restriction and scope of quantification are determined and combined into the meaning of the quantified clause. The subformula

∀x. restr-arg ;x −• restr ;R(x)
specifies that restr has meaning R(x) if for arbitrary x restr-arg has meaning x, that is, it gives the dependency of the meaning of a common noun phrase on its implicit argument. Property R is the representation of that dependency as a function in the meaning language. Similarly, the subformula
∀x. scope-arg ;x −• scope ;S(x)
specifies the dependency of the meaning S(x) of a semantic structure scope on the meaning x of one of its arguments scope-arg. If both dependencies hold, then R and S are an appropriate restriction and scope for the determiner meaning Q. Computationally, the nested universal quantifiers substitute unique new constants (eigenvariables) for the quantified variable x, and the nested implications try to prove their consequents with their antecedents added to the current set of assumptions. For the restriction (the case of the scope is similar), this will in particular involve solving an equation of the form R(x) = t, where restr ;t has been derived. The equation must be solved modulo α-, β- and η-conversion, and any solution R must not contain occurrences of x, since R's scope is wider than x's. Higher-order unification (Huet, 1975) is a procedure suitable for solving such equations. (While higher-order unification is in general undecidable, the unification problems involved here are of one of the forms F(x) = t or p(X) = t, where t is a closed term, F and X essentially existential variables and x and p essentially universal variables; these cases fall within the Lλ fragment of Miller (1990), which is a decidable extension of first-order unification.)
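For the decidable pattern case just described, the equation-solving step can be sketched as follows (our illustration; the encoding of meaning terms as nested tuples is an assumption). Solving R(x) = t amounts to abstracting the eigenvariable x out of t, so that the solution contains no occurrence of x, as the side condition requires:

    def solve_pattern(t, x):
        """Solve R(x) = t for R: return lam z. t[x -> z], assuming 'z' fresh.
        By construction the result contains no occurrence of x."""
        def abstract(term):
            if term == x:
                return 'z'
            if isinstance(term, tuple):          # e.g. ('voter', x)
                return tuple(abstract(s) for s in term)
            return term
        return ('lam', 'z', abstract(t))

    # From restr ; voter(x0) we extract R with R(x0) = voter(x0):
    print(solve_pattern(('voter', 'x0'), 'x0'))
    # -> ('lam', 'z', ('voter', 'z'))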
Quantifier restrictions
We have seen that since the meaning of the restriction of a quantifier is a property (type e → t), its meaning constructor has the form of an implication, just like a verb. In (18), the first line of the determiner's semantic constructor
(∀x. restr-arg ;x −• restr ;R(x))
requires a meaning x for restr-arg to produce the meaning R(x) for restr, defining the restriction R of the quantifier. We need thus to identify the semantic projections restr-arg and restr .
The f-structure of a quantified NP has the general form:
(19)  f: [ spec q   pred n ]
where q is the determiner f-structure and n the noun f-structure. None of the f-structures f, q or n is a natural syntactic correlate of the argument or result of the quantifier restriction. This contrasts with the treatment of verbs, whose semantic contributions and argument dependencies are directly associated with appropriate syntactic units of the clauses they head. Therefore, we take the semantic projection f σ of the quantified NP to be structured with two attributes (f σ var) and (f σ restr):

(20)  f σ : [ var [ ]   restr [ ] ]
The value of var will play the role of restr-arg, supplying an entity-type variable, and the value of restr will play the role of restr in the meaning constructor of the determiner. For a preliminary version of the lexical entry for 'every', we replace the relevant portions of our canonical determiner entry appropriately:
(21) Preliminary lexical entry for 'every':
every  Det  (↑ spec) = 'every'
            ∀R, S. (∀x. (↑ σ var) ;x −• (↑ σ restr) ;R(x))
                  ⊗ (∀x. scope-arg ;x −• scope ;S(x))
                  −• scope ;every(R, S)
The restriction property R should of course be derived from the semantic contribution of the nominal part of the noun phrase. Therefore, semantic constructors for nouns must connect appropriately to the var and restr components of the noun phrase's semantic projection, as we shall now see.
Noun meanings
We will use the following phrase structure rule for simple noun phrases:
(22)  NP  −→  Det      N
              ↑ = ↓    ↑ = ↓
This rule states that the determiner Det and noun N contribute equally to the f-structure for the NP. Lexical specifications ensure that the noun contributes the pred attribute and its value, and the determiner contributes the spec attribute and its value.
The c-structure, f-structure, and semantic structure for 'every voter', together with the functional relations between them, are:

(23)  [NP every voter]      f: [ spec 'every'   pred 'voter' ]      f σ : [ var [ ]   restr [ ] ]

In rule (22), the meaning constructors of the noun 'voter' and the determiner 'every' make reference to the same semantic structure, and in particular to the same semantic projections var and restr. The noun will supply appropriate values for the var and restr attributes of the NP, and these will be consumed by the determiner's meaning constructor. Thus, the semantic constructor for a noun will have the general form
∀x. (↑ σ var) ;x −• (↑ σ restr) ;P(x)

where P is the meaning of the noun. In particular, the lexical entry for 'voter' is:
(24)  voter  N  (↑ pred) = 'voter'
                ∀X. (↑ σ var) ;X −• (↑ σ restr) ;voter(X)
Given this entry and the one for 'every' in (21), we obtain the following instantiated semantic constructors for (23):
every:  ∀R, S. (∀x. (f σ var) ;x −• (f σ restr) ;R(x))
              ⊗ (∀x. scope-arg ;x −• scope ;S(x))
              −• scope ;every(R, S)
voter:  ∀X. (f σ var) ;X −• (f σ restr) ;voter(X)
Applying the variable substitutions X → x, R → voter and modus ponens to those two premises, we obtain the semantic constructor for 'every voter':
(25) every-voter: ∀S. (∀x. scope-arg ;x −• scope ;S(x)) −• scope ;every(voter , S)
In keeping with the parallel noted earlier between our semantic constructors and compositional types, the propositional part of this formula corresponds to the standard type for NP meanings, (e → t) → t.
Quantified NP meanings
To complete our analysis of the semantic contribution of determiners, we need to characterize how a quantified NP contributes to the semantics of a sentence in which it appears, by specifying the semantic projections scope-arg and scope in quantified NP semantic constructors like (25).
Individual-type contribution First, we require the meaning of the scope to depend on the meaning of (the position filled by) the quantifier itself. Thus, scope-arg is the semantic projection for the quantified NP itself:

(26)  ∀S. (∀x. f σ ;x −• scope ;S(x)) −• scope ;every(voter, S)

Informally, the constructor for 'every voter' can be read as follows: if by giving the arbitrary meaning x of type e to f, the f-structure for 'every voter', we can derive the meaning S(x) of type t for the scope of quantification scope, then S can be the property that the quantifier requires as its scope, yielding the meaning every(voter, S) for scope. The quantified NP can thus be seen as providing two contributions to an interpretation: locally, a referential import x, which must be discharged when the scope of quantification is established; and globally, a quantificational import of type (e → t) → t, which is applied to the meaning of the scope of quantification to obtain a quantified proposition. Notice also that the assignment of a meaning to scope appears on both sides of the implication, and that in fact the meaning is not the same in the two instances. Linear logic allows for the consumption of the preliminary meaning in the antecedent of the implication, producing the final meaning for scope in the conclusion.
Scope of quantification To complete our account of quantified NP interpretation, we need to explain how to select the possible scopes of quantification, for which we used the place-holder scope in (26).
As is well known, the scope of a quantifier is not syntactically fixed. While syntactic effects may play a significant role in scope preferences, most claims of scope islands (e.g., May's (1985)) are defeasible given appropriate choices of lexical items and context. Therefore, we will take as possible quantifier scopes all semantic projections for which a meaning of proposition type can be derived. But even this liberal notion of scope is subject to indirect constraints from syntax, such as those that we will see arise from the interaction of coreference relations and quantification.
Previous work on scope determination in LFG (Halvorsen and Kaplan, 1988) defined possible scopes at the f-structure level, using inside-out functional uncertainty to nondeterministically choose a scope f-structure for quantified noun phrases. That approach requires the scope of a quantified NP to be an f-structure which contains the NP f-structure. In contrast, our approach depends only on the logical form of semantic constructors to yield just the appropriate scope choices. Within the constraints imposed by that logical form, the actual scope can be freely chosen. Logically, that means that the semantic constructor for an NP should quantify universally over scopes, as follows:

(27)  every-voter: ∀H, S. (∀x. f σ ;x −• H ;S(x)) −• H ;every(voter, S)

The foregoing argument leads to the following general semantic constructor for a determiner with meaning Q:
(28)  ∀H, R, S.
      (∀x. (↑ σ var) ;x         if, by assuming an arbitrary meaning x for (↑ σ var),
        −• (↑ σ restr) ;R(x))   a meaning R(x) for (↑ σ restr) can be derived,
      ⊗ (∀x. ↑ σ ;x             and if, by assuming an arbitrary meaning x for ↑,
        −• H ; t S(x))          a meaning S(x) for some scope H can be derived,
      −• H ; t Q(R, S)          then we can derive a possible complete meaning for H

where H ranges over semantic structures associated with meanings of type t. Note that the var and restr components of the semantic projection for a quantified NP in our analysis play a similar role to the // category constructor in PTQ (Montague, 1974), that of distinguishing syntactic configurations with identical semantic types but different contributions to the interpretation. The two PTQ syntactic categories t/e for intransitive verb phrases and t//e for common noun phrases correspond to the single semantic type e → t; similarly, the two conjuncts in the antecedent of (28) correspond to the same semantic type, encoded with a linear implication, but to two different syntactic contexts, one relating the predication of an NP to its implicit argument and one relating a clause to an embedded argument.
Simple example of quantification
Before we look at quantifier scope ambiguity and interactions between scope and bound anaphora, we demonstrate the basic operation of our proposed meaning constructor for quantified NPs with a singly quantified, unambiguous sentence:
(29) Bill convinced every voter.
To carry out the analysis, we need a lexical entry for 'convinced':
(30)  convinced  V  (↑ pred) = 'convince'
                    ∀X, Y. (↑ subj) σ ;X ⊗ (↑ obj) σ ;Y −• ↑ σ ;convince(X, Y)
The f-structure for (29) is:
(31)  f: [ pred 'convince'
           subj g: [ pred 'Bill' ]
           obj  h: [ spec 'every'   pred 'voter' ] ]
The premises for the derivation are appropriately instantiated meaning constructors for 'Bill' and 'convinced' together with the instantiated meaning constructor derived earlier for the quantified NP 'every voter':

bill:        g σ ; Bill
convinced:   ∀X, Y. g σ ;X ⊗ h σ ;Y −• f σ ;convince(X, Y)
every-voter: ∀H, S. (∀x. h σ ;x −• H ; t S(x)) −• H ; t every(voter, S)
Giving the name bill-convinced to the formula

bill-convinced: ∀Y. h σ ;Y −• f σ ;convince(Bill, Y)
we have the derivation:
bill ⊗ convinced ⊗ every-voter                 (Premises.)
⊢ bill-convinced ⊗ every-voter                 X → Bill
⊢ f σ ;every(voter, λz.convince(Bill, z))      H → f σ , Y → x, S → λz.convince(Bill, z)
No derivation of a different formula f σ ; t P is possible. The formula bill-convinced represents the semantics of the scope of the determiner 'every'. The derivable formula
∀Y. h σ ; e Y −• h σ ; e Y
could at first sight be considered another possible, but erroneous, scope. However, the type subscripting of the ; relation used in the determiner lexical entry requires the scope to represent a dependency of a proposition on an individual, while this formula represents the dependency of an individual on an individual (itself). Therefore, it does not provide a valid scope for the quantifier.
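The type check that rules out the spurious scope can be sketched as follows (ours; the type tags are an illustrative encoding). Candidate scopes are dependencies tagged with argument and result types, and the subscripted ; in the determiner entry only accepts dependencies from individuals (e) to propositions (t):

    # Candidate scope dependencies derived above, tagged with their types:
    candidates = {
        'lam z. convince(Bill, z)': ('e', 't'),  # from bill-convinced
        'lam Y. Y':                 ('e', 'e'),  # from the trivial derivation
    }

    def valid_scope(dep_type):
        """The determiner requires its scope to map individuals to propositions."""
        return dep_type == ('e', 't')

    for term, ty in candidates.items():
        print(term, '->', 'accepted' if valid_scope(ty) else 'rejected')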
Quantifier scope ambiguities
When a sentence contains more than one quantifier, scope ambiguities are of course possible. In our system, those ambiguities will appear as alternative successful derivations. We will take as our example this sentence:
(32) Every candidate appointed a manager.
We need the following additional lexical entries:
(33)  a  Det  (↑ spec) = 'a'
              ∀H, R, S. (∀x. (↑ σ var) ;x −• (↑ σ restr) ;R(x))
                       ⊗ (∀x. ↑ σ ;x −• H ;S(x))
                       −• H ;a(R, S)

(34)  candidate  N  (↑ pred) = 'candidate'
                    ∀X. (↑ σ var) ;X −• (↑ σ restr) ;candidate(X)

(35)  manager  N  (↑ pred) = 'manager'
                  ∀X. (↑ σ var) ;X −• (↑ σ restr) ;manager(X)
The f-structure for sentence (32) is:
(36)  f: [ pred 'appoint'
           subj g: [ spec 'every'   pred 'candidate' ]
           obj  h: [ spec 'a'       pred 'manager' ] ]
We can derive meaning constructors for 'every candidate' and 'a manager' in the way shown in Section 3.4. Further derivations proceed from those contributions together with the contribution of 'appointed':
every-candidate: ∀G, R. (∀x. g σ ;x −• G ;R(x)) −• G ;every(candidate, R)
a-manager:       ∀H, S. (∀y. h σ ;y −• H ;S(y)) −• H ;a(manager, S)
appointed:       ∀X, Y. g σ ;X ⊗ h σ ;Y −• f σ ;appoint(X, Y)
So far, we have not made any commitment about the scopes of the quantifiers; the scope and scope meaning variables in every-candidate and a-manager have not been instantiated. Scope ambiguities are manifested in two different ways in our system: through the choice of different semantic structures G and H, corresponding to different scopes for the quantified NPs, or through different relative orders of application for quantifiers that scope at the same point. For this example, the second case is relevant, and we must now make a choice to proceed. The two possible choices correspond to two equivalent rewritings of appointed:
appointed 1 : ∀X. g σ ;X −• (∀Y. h σ ;Y −• f σ ;appoint (X, Y )) appointed 2 : ∀Y. h σ ;Y −• (∀X. g σ ;X −• f σ ;appoint (X, Y ))
These two equivalent forms correspond to the two possible ways of "currying" a two-argument function f : α × β → γ as one-argument functions:
λu.λv.f (u, v) : α → (β → γ) λv.λu.f (u, v) : β → (α → γ)
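The two currying orders can be checked directly in a small sketch (ours):

    def appoint(u, v):
        return ('appoint', u, v)

    # The two ways of currying a two-argument function f : alpha x beta -> gamma:
    appointed_1 = lambda u: lambda v: appoint(u, v)   # subject argument first
    appointed_2 = lambda v: lambda u: appoint(u, v)   # object argument first

    assert appointed_1('x')('y') == appointed_2('y')('x') == ('appoint', 'x', 'y')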
We select 'a manager' to take narrower scope by using the variable instantiations
H → f σ , Y → y, S → λv.appoint (X, v)
and transitivity of implication to combine appointed 1 with a-manager into:
appointed-a-manager: ∀X. g σ ;X −• f σ ; t a(manager , λv.appoint (X, v))
We thus have the derivation

every-candidate ⊗ appointed 1 ⊗ a-manager
⊢ every-candidate ⊗ appointed-a-manager
⊢ f σ ; t every(candidate, λu.a(manager, λv.appoint(u, v)))

of the ∀∃ reading of (32), where the last step uses the substitutions
G → f σ , X → x, R → λu.a(manager , λv.appoint (u, v))
Alternatively, we could have chosen 'every candidate' to take narrow scope, by combining appointed 2 with every-candidate to produce:
every-candidate-appointed: ∀Y. h σ ;Y −• f σ ; t every(candidate , λu.appoint (u, Y ))
This gives the derivation

every-candidate ⊗ appointed 2 ⊗ a-manager
⊢ every-candidate-appointed ⊗ a-manager
⊢ f σ ; t a(manager, λv.every(candidate, λu.appoint(u, v)))
for the ∃∀ reading. These are the only two possible outcomes of the derivation of a meaning for (32), as required.
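The two derivation orders can be mimicked by encoding the quantifier meanings as term builders and nesting them either way, as in this sketch of ours (the string representation of meaning terms is an illustrative assumption):

    def every(restr, var, scope):
        return f"every({restr}, lam {var}. {scope})"

    def a(restr, var, scope):
        return f"a({restr}, lam {var}. {scope})"

    appoint = lambda u, v: f"appoint({u}, {v})"

    # 'a manager' takes narrow scope (the forall-exists reading):
    print(every('candidate', 'u', a('manager', 'v', appoint('u', 'v'))))
    # every(candidate, lam u. a(manager, lam v. appoint(u, v)))

    # 'every candidate' takes narrow scope (the exists-forall reading):
    print(a('manager', 'v', every('candidate', 'u', appoint('u', 'v'))))
    # a(manager, lam v. every(candidate, lam u. appoint(u, v)))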
Constraints on quantifier scoping
Sentence (37) contains two quantifiers and therefore might be expected to show a two-way ambiguity analogous to the one described in the previous section:
(37) Every candidate appointed an admirer of his.
However, no such ambiguity is found if the pronoun 'his' is taken to corefer with the subject 'every candidate'. In this case, only one reading is available, in which 'an admirer of his' takes narrow scope. Intuitively, this NP may not take wider scope than the quantifier 'every candidate', on which its restriction depends.
As we will soon see, the lack of a wide scope 'a' reading follows automatically from our formulation of the meaning constructors for quantifiers and anaphors without further stipulation. In Pereira's earlier work on deductive interpretation (Pereira, 1990; 1991), the same result was achieved through constraints on the relative scopes of glue-level universal quantifiers representing the dependencies between meanings of clauses and the meanings of their arguments. Here, although universal quantifiers are used to support the extraction of properties representing the meanings of the restriction and scope (the variables R and S in the semantic constructors for determiners), the blocking of the unwanted reading follows from the propositional structure of the glue formulas, specifically the nested linear implications. This is more satisfactory, since it does not reduce the problem of proper quantifier scoping in the object language to the same problem in the metalanguage.
The lexical entry for 'admirer' is:
(38)  admirer  N  (↑ pred) = 'admirer'
                  ∀X, Y. (↑ σ var) ;X ⊗ (↑ obl OF ) σ ;Y −• (↑ σ restr) ;admirer(X, Y)
Here, 'admirer' is a relational noun taking as its oblique argument a phrase with prepositional marker 'of', as indicated in the f-structure by the attribute obl OF . The meaning constructor for a relational noun has, as expected, the same propositional form as the binary relation type e × e → t: one argument is the admirer, and the other is the admiree. We assume that the semantic projection for the antecedent of the pronoun 'his' has been determined by some separate mechanism and recorded as the ant attribute of the pronoun's semantic projection. The meaning constructor of the pronoun is, then, a formula that consumes the meaning of its antecedent and then reintroduces that meaning, simultaneously assigning it to its own semantic projection:
(39)  his  N  (↑ pred) = 'pro'
              ∀X. (↑ σ ant) ;X −• (↑ σ ant) ;X ⊗ ↑ σ ;X
In other words, the semantic contribution of a pronoun copies the meaning X of its antecedent as the meaning of the pronoun itself. Since the left-hand side of the linear implication "consumes" the antecedent meaning, it must be reinstated in the consequent of the implication. The f-structure for example (37) is:
(40)  f: [ pred 'appointed'
           subj g: [ spec 'every'   pred 'candidate' ]
           obj  h: [ spec 'a'   pred 'admirer'   obl OF i: [ pred 'pro' ] ] ]

with (i σ ant) = g σ .
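Before walking through the derivation, it may help to see the pronoun constructor (39) as a resource transformer. The following sketch (ours; projections and meanings are plain strings) consumes the antecedent's meaning and reissues it for both the antecedent and the pronoun:

    def pronoun_step(resources, antecedent, pronoun):
        """Apply the pronoun constructor: consume (antecedent ; X) and produce
        both (antecedent ; X) and (pronoun ; X).  The antecedent meaning is
        copied to the pronoun without violating linearity, because the
        consumed resource is reinstated."""
        new = dict(resources)
        x = new.pop(antecedent)   # consume g_sigma ; X ...
        new[antecedent] = x       # ... reinstate it ...
        new[pronoun] = x          # ... and give the same meaning to i_sigma
        return new

    print(pronoun_step({'g_sigma': 'X'}, 'g_sigma', 'i_sigma'))
    # {'g_sigma': 'X', 'i_sigma': 'X'}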
We will begin by illustrating the derivation of the meaning of 'an admirer of his', starting from the following premises:
a:        ∀H, R, S. (∀x. (h σ var) ;x −• (h σ restr) ;R(x))
                   ⊗ (∀x. h σ ;x −• H ;S(x))
                   −• H ;a(R, S)
admirer:  ∀Z, X. (h σ var) ;Z ⊗ i σ ;X −• (h σ restr) ;admirer(Z, X)
his:      ∀X. g σ ;X −• g σ ;X ⊗ i σ ;X
First, we rewrite admirer into the equivalent form
∀X. i σ ;X −• (∀Z. (h σ var);Z −• (h σ restr) ;admirer (Z, X))
We can use this formula to rewrite the second conjunct in the consequent of his, yielding

admirer-of-his: ∀X. g σ ;X −• g σ ;X ⊗ (∀Z. (h σ var) ;Z −• (h σ restr) ;admirer(Z, X))
In turn, the second conjunct in the consequent of admirer-of-his matches the first conjunct in the antecedent of a given appropriate variable substitutions, allowing us to derive an-admirer-of-his:
∀X. g σ ;X −• g σ ;X ⊗ (∀H, S. (∀x. h σ ;x −• H ;S(x)) −• H ;a(λz.admirer (z, X), S))
At this point the other formulas available are:
every-candidate: ∀H, S. (∀x. g σ ;x −• H ;S(x)) −• H ;every(candidate , S) appointed: ∀Z, Y. g σ ;Z ⊗ h σ ;Y −• f σ ;appoint (Z, Y )
We have thus the meanings of the two quantified NPs. The antecedent implication of every-candidate has an atomic conclusion and hence cannot be satisfied by an-admirer-of-his, which has a conjunctive conclusion. Therefore, the only possible move is to combine appointed and an-admirer-of-his. We do this by first putting appointed in the equivalent form
∀Z. g σ ;Z −• (∀Y. h σ ;Y −• f σ ;appoint (Z, Y ))
After substituting X for Z, this can be used to rewrite the first conjunct in the consequent of an-admirer-of-his to derive
∀X. g σ ;X −• (∀Y. h σ ;Y −• f σ ;appoint(X, Y)) ⊗ (∀H, S. (∀x. h σ ;x −• H ;S(x)) −• H ;a(λz.admirer(z, X), S))
Applying the substitutions
Y → x, H → f σ , S → λz.appoint (X, z)
and modus ponens with the two conjuncts in the consequent as premises, we obtain

∀X. g σ ;X −• f σ ; t a(λz.admirer(z, X), λz.appoint(X, z))
Finally, this formula can be combined with every-candidate to give the meaning of the whole sentence:

f σ ; t every(candidate, λw.a(λz.admirer(z, w), λz.appoint(w, z)))
In fact, this is the only derivable conclusion, showing that our analysis blocks those putative scopings in which variables occur outside the scope of their binders.
Adequacy
We will now argue that our analysis is sound in that all variables occur in the scope of their binders, and complete in that all possible sound readings can be generated. More precisely, soundness requires that all occurrences of a meaning-level variable x representing the argument positions filled by a quantified NP or anaphors bound to the NP are within the scope of the quantifier meaning of the NP binding x. As argued by Pereira (1990), treatments of quantification based on storage or quantifier raising either fail to guarantee soundness or enforce it by stipulation. In contrast, deductive frameworks based on a suitable type logic for meanings, such as those arising from categorial semantics, achieve soundness as a by-product of the soundness of their underlying type logics.
In the present setting, meaning terms are explicitly constructed rather than read out from well-typing proofs using the Curry-Howard connection between proofs and terms, but the particular form of our glue-logic formulas follows that of typing rules closely and thus guarantees soundness, as we will now explain.
Recall first that quantifiers can only be introduced into meaning terms by quantified NPs with semantic contributions of the form
(41) ∀H, S. (∀x. f ;x −• H ;S(x)) −• H ;Q(S)
where f is the semantic projection of the NP and Q is the meaning of the NP. Since S outscopes x, any instance of S in a valid derivation will be a meaning term of the form λz.T , with x not free in T . The free occurrences of z in T will be precisely the positions quantified over by Q. We thus need to show that f and all semantic projections coreferential with f have interpretations corresponding to occurrences of z in T . But f itself is given interpretation x in (41), while coreferential projections must by lexical entry (39) also have interpretation x. Since S(x) = T [z → x] with x not free in T , any free occurrence of x in S(x) must arise from substituting x for z in T . That is, the interpretation of f and those of any other projections which corefer with f are quantified over by Q as required.
As seen in the above argument, the dependency of anaphors on their antecedents is encoded by the propositional structure and quantification over semantic projections of the semantic contributions of anaphors. That encoding alone is sufficient to generate all and only the possible derivations, but quantification over meaning terms is needed to extract the appropriate meaning terms from the derivations. The scope of glue language variables ranging over meaning terms guarantees that all variables in meaning terms are properly bound.
Turning now to completeness, we need to consider the correlations between the deductive patterns and the functional structure. With one exception, the glue-logic formulas from which deduction starts respect the functional structure of meanings in that implications that conclude the meaning of a phrase depend on the meanings of all immediate subphrases which can have meanings, or depend on the phrase itself, but on nothing else. The exception is anaphors, whose meanings depend on that of their antecedents. Thus, the meaning of a phrase will, transitively, depend on the meanings of all its subphrases that can have meanings and on the meanings of the antecedents of its anaphoric pronouns. Now we can consider the possible scopings of a quantified NP in terms of phrase structure. The linearity of the implication in the antecedent of the NP's constructor requires the meaning of the scope to depend on the meaning of the noun phrase and that nothing else depend on that meaning. But the above argument shows that this will be true exactly of every containing phrase, unless there is a bound anaphor not contained in the containing phrase that has the NP as its antecedent. So all the containing phrases that also contain all coreferring anaphors are, indeed, candidates for scope of the quantified NP.
It is worth noting that the quantificational structure of semantic constructors is enough on its own to ensure soundness of the resulting meaning terms. In particular, the nested implication form of quantified NP constructors could be replaced by the flatter
∃x.g σ ;x ⊗ (∀H, S. H ;S(x) −• H ;Q(S))
A quantifier lexical entry would then look like:
∃x. (↑ σ var) ;x ⊗ ∀H, R. ((↑ σ restr) ;R(x) −• (↑ σ ;x ⊗ ∀S. (H ;S(x)) −• H ;Q(x, R(x), S(x))))
This formulation just asserts that there is a generic entity, x, which stands for the meaning of the quantified phrase, and also serves as the argument of the restriction. The derivations of the restriction and scope are then expected to consume this information. By avoiding nested implications, this formulation may be computationally more desirable.
However, the logical structure of this formulation is not as restrictive as that of (28), as it can allow additional derivations where information intended for the restriction can be used by the scope. This cannot happen in our analyses, however, since all the dependencies in semantic constructors respect syntactic dependencies expressed in the f-structure. As long as that principle is observed, the formulation above is equivalent to (28). Despite this, we prefer to stay closer to categorial semantics and thus capture explicitly quantifier and anaphoric dependencies in the propositional structure. We will therefore continue with the formulation (28).
Intensional Verbs
Following Montague (1974), we will give an intensional verb like seek a meaning that takes as direct object an NP meaning intension. Montague's method for assembling meanings by function application forces the meanings of all expressions of a given syntactic category to be raised to their lowest common semantic type. In particular, every transitive verb meaning, whether intensional or not, must take a quantified NP meaning intension as direct-object argument. In contrast, our approach allows the semantic contributions of verbs to be of as low a type as possible. Nonetheless, the uniformity of the translation process is preserved because any required type changes are derivable within the glue language, along lines similar to type change in the undirected Lambek calculus (van Benthem, 1988).
We will not represent intensional types explicitly at the glue level, in contrast to categorial treatments of intensionality such as Morrill's (1990). Instead, semantic constructors will correspond to the appropriate extensional types. The Montagovian intension and extension operators ˆ and ˇ will appear only at term level, to the right of ; in our derivation formulas. Thus, while the meaning of seek has type e → (s → ((e → t) → t)) → t, the corresponding semantic constructor in (43) parallels the type e → ((e → t) → t) → t.
Our implicit treatment of intensional types imposes certain constraints on the use of functional abstraction and application in meaning terms, since β-reduction is only valid for intensional terms if the argument is intensionally closed, that is, if the free occurrences of the bound variable do not occur in intensional contexts (Gamut, 1991, p. 131). As we will see, that constraint is verified by all the semantic terms in our semantic constructors. Thus, in carrying out proofs we will be justified in solving for free variables in meaning terms modulo the ˇˆ-elimination schema ˇ(ˆP) = P and α-, β- and η-conversion.
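A minimal sketch (ours) of the ˇˆ-elimination step, with ˆ and ˇ encoded as the tuple tags 'int' and 'ext':

    def simplify(term):
        """Rewrite extension-of-intension redexes ('ext', ('int', P)) -> P,
        applied recursively to sub-terms."""
        if isinstance(term, tuple):
            term = tuple(simplify(s) for s in term)
            if len(term) == 2 and term[0] == 'ext' \
                    and isinstance(term[1], tuple) and term[1][0] == 'int':
                return term[1][1]
        return term

    assert simplify(('ext', ('int', 'unicorn'))) == 'unicorn'
    assert simplify(('seek', 'Bill', ('ext', ('int', 'P')))) == ('seek', 'Bill', 'P')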
Generalized quantifier meanings in Montague grammar are given type (s → e → t) → (s → e → t) → t, that is, they are relations between properties. While we maintain the propositional form of glue-level formulas corresponding to the extensional generalized quantifier meanings discussed earlier, the semantic terms in determiner semantic constructors must be adapted to match the new intensionalized generalized quantifier type:
∀H, R, S. (∀x. (↑ σ var) ;x −• (↑ σ restr) ;(ˇR)(x))
         ⊗ (∀x. ↑ σ ;x −• H ;(ˇS)(x))
         −• H ;a(R, S)
Therefore, the meaning of a sentence such as (37) will now be written:

every(ˆcandidate, ˆλw.a(ˆλz.admirer(z, w), ˆλz.appoint(w, z)))
The type-changing potential of the linear-logic formulation allows us to give an intensional verb a single semantic constructor, and yet have the expected de re/de dicto ambiguities follow without further stipulation. For example, we will see that for sentence (42)

(42)  Bill seeks a unicorn.
we can derive the two readings:
de dicto reading:  seek(Bill, ˆλQ.a(ˆunicorn, Q))
de re reading:     a(ˆunicorn, λu.seek(Bill, ˆλQ.(ˇQ)(u)))
Given the foregoing analysis, the lexical entry for seek is:
(43)  seek  (↑ pred) = 'seek'
            ∀Z, Y. (↑ subj) σ ;Z
                  ⊗ (∀s, p. (∀X. (↑ obj) σ ;X −• s ;(ˇp)(X)) −• s ;Y(p))
                  −• ↑ σ ;seek(Z, ˆY)
which can be paraphrased as follows:
∀Z, Y. (↑ subj) σ ;Z ⊗                    The verb seek requires a meaning Z for its subject and

(∀s, p. (∀X. (↑ obj) σ ;X
       −• s ;(ˇp)(X)) −• s ;Y(p))  (*)    a meaning ˆY for its object, where Y is an NP meaning
                                          applied to the meaning p of an arbitrarily-chosen
                                          'scope' s,

−• ↑ σ ;seek(Z, ˆY)                       to produce the clause meaning seek(Z, ˆY).
Rather than looking for an entity-type meaning for its object, the requirement expressed by the subformula labeled (*) describes semantic constructors of quantified NPs. Such a constructor takes as input the constructor for a scope, which by itself maps an arbitrary meaning X to the meaning p(X) for an arbitrary scope s. From that input, the quantified NP constructor will produce a final quantified meaning M for s. That meaning is required to satisfy the equation M = Y(p), and thus ˆY is the property of properties (predicate intensions) that seek requires as second argument. Note that the argument p of Y in the equation will be an intension given the new semantic constructors for determiners. Therefore, β-conversion with the abstraction Y as functor is allowed. The f-structure for (42) is:
(44)  f: [ pred 'seek'
           subj g: [ pred 'Bill' ]
           obj  h: [ spec 'a'   pred 'unicorn' ] ]
The semantic constructors associated with this f-structure are then:
seeks:     ∀Z, Y. g σ ;Z ⊗ (∀s, p. (∀X. h σ ;X −• s ;(ˇp)(X)) −• s ;Y(ˆp)) −• f σ ;seek(Z, ˆY)
Bill:      g σ ; Bill
a-unicorn: ∀H, S. (∀x. h σ ;x −• H ;(ˇS)(x)) −• H ;a(ˆunicorn, S)
These are the premises for the deduction of the meaning of sentence (42). From the premises Bill and seeks and the instantiation Z → Bill we can conclude by modus ponens:
Bill-seeks: ∀Y. (∀s, p. (∀X. h σ ;X −• s ;(ˇp)(X)) −• s ;Y (p)) −• f σ ;seek (Bill ,ˆY )
Different derivations starting from the premises Bill-seeks and a-unicorn will yield the alternative readings of Bill seeks a unicorn, as we shall now see.
De Dicto Reading
The formula a-unicorn is exactly what is required by the antecedent of Bill-seeks provided that the following substitutions are performed:
H → s,  S → p,  X → x,  Y → λP.a(ˆunicorn, P)
We can thus conclude the desired de dicto reading:

f σ ;seek(Bill, ˆλP.a(ˆunicorn, P))
To show how the premises also support a de re reading, we consider first the simpler case of nonquantified direct objects.

Figure 1: Proof that Al can function as a quantifier

h σ ;Al ⊢ h σ ;Al        s ;(ˇP)(Al) ⊢ s ;(ˇP)(Al)
h σ ;Al, h σ ;Al −• s ;(ˇP)(Al) ⊢ s ;(ˇP)(Al)
h σ ;Al, (∀x. h σ ;x −• s ;(ˇP)(x)) ⊢ s ;(ˇP)(Al)
h σ ;Al ⊢ (∀x. h σ ;x −• s ;(ˇP)(x)) −• s ;(ˇP)(Al)
h σ ;Al ⊢ ∀P. (∀x. h σ ;x −• s ;(ˇP)(x)) −• s ;(ˇP)(Al)
Nonquantified Objects
The meaning constructor for seek also allows for nonquantified objects as arguments, without needing a special type-raising rule. Consider the f-structure for the sentence Bill seeks Al:
(45)  f: [ pred 'seek'
           subj g: [ pred 'Bill' ]
           obj  h: [ pred 'Al' ] ]
The lexical entry for Al is analogous to the one for Bill. We begin with the premises Bill-seeks and Al:
Bill-seeks: ∀Y. (∀s, p. (∀X. h σ ;X −• s ;(ˇp)(X)) −• s ;Y(p)) −• f σ ;seek(Bill, ˆY)
Al:         h σ ; Al
For the derivation to proceed, Al must supply the NP meaning constructor that Bill-seeks requires. This is possible because Al can map a proof Π of the meaning for s from the meaning for h into a meaning for s, simply by supplying h σ ;Al to Π. Formally, from Al we can prove (Figure 1):
(46) ∀P. (∀x. h σ ;x −• s ;(ˇP )(x)) −• s ;(ˇP )(Al )
This corresponds to the Montagovian type-raising of a proper name meaning to an NP meaning, and also to the undirected Lambek calculus derivation of the sequent e ⇒ (e → t) → t. Formula (46) with the substitutions
P → p, Y → λP.(ˇP )(Al )
can then be used to satisfy the antecedent of Bill-seeks to yield the desired result:

f σ ;seek(Bill, ˆλP.(ˇP)(Al))

It is worth contrasting the foregoing derivation with treatments of the same issue in a λ-calculus setting. The function λx.λP.(ˇP)(x) raises a term like Al to the quantified NP form λP.(ˇP)(Al), so it is easy to modify Al to make it suitable for seek. Because a λ-term must specify exactly how functions and arguments combine, the conversion must be explicitly applied somewhere, either in a meaning postulate or in an alternate definition for seek. Thus, it is impossible to write a function term that is indifferent with respect to whether its argument is Al or λP.(ˇP)(Al).
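For comparison, the Montagovian type-raising just discussed can be written out directly (a sketch of ours, with meanings as strings):

    # Montagovian type-raising, lam x. lam P. P(x) : e -> (e -> t) -> t
    lift = lambda x: lambda P: P(x)

    al_raised = lift('Al')

    # Applied to a scope property, the raised meaning simply feeds Al to it:
    scope = lambda z: f"seek(Bill, {z})"
    assert al_raised(scope) == "seek(Bill, Al)"

Note that the lifting function must be applied explicitly, which is precisely the rigidity being contrasted here.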
In our deductive framework, on the other hand, the exact way in which different propositions can interact is not prescribed, although it is constrained by their logical structure. Thus h σ ;Al can function as any logical consequence of itself, in particular as:
∀S, P. (∀x. h σ ;x −• S ; (ˇP )(x)) −• S ;(ˇP )(Al )
This flexibility, which is also found in syntactic-semantic analyses based on the Lambek calculus and its variants (Moortgat, 1988; Moortgat, 1992b; van Benthem, 1991), seems to align well with some of the type flexibility in natural language.
Type Raising and Quantifying In
The derivation in Figure 1 can be generalized as shown in Figure 2 to prove the general type-raising theorem:

(47)  ∀I, Z. I ;Z −• (∀S, P. (∀x. I ;x −• S ;(ˇP)(x)) −• S ;(ˇP)(Z))

Figure 2: General Type-Raising Theorem

I ;Z ⊢ I ;Z        S ;(ˇP)(Z) ⊢ S ;(ˇP)(Z)
I ;Z, I ;Z −• S ;(ˇP)(Z) ⊢ S ;(ˇP)(Z)
I ;Z, (∀x. I ;x −• S ;(ˇP)(x)) ⊢ S ;(ˇP)(Z)
I ;Z ⊢ (∀x. I ;x −• S ;(ˇP)(x)) −• S ;(ˇP)(Z)
I ;Z ⊢ ∀S, P. (∀x. I ;x −• S ;(ˇP)(x)) −• S ;(ˇP)(Z)
⊢ I ;Z −• ∀S, P. (∀x. I ;x −• S ;(ˇP)(x)) −• S ;(ˇP)(Z)
⊢ ∀I, Z. I ;Z −• ∀S, P. (∀x. I ;x −• S ;(ˇP)(x)) −• S ;(ˇP)(Z)

This theorem can be used to raise meanings of e type to (e → t) → t type, or, dually, to quantify into verb argument positions. For example, with the variable instantiations
I → h σ ,  X → x,  P → p,  S → s,  Y → λR.(ˇR)(Z)
we can use transitivity of implication to combine (47) with Bill-seeks to derive:
Bill-seeks ′ : ∀Z. h σ ;Z −• f σ ;seek (Bill ,ˆλR.(ˇR)(Z))
This formula can then be combined with arguments of type e to produce a meaning for f σ . For instance, it will take the non-type-raised h σ ;Al to yield the same result f σ ;seek (Bill ,ˆλR.(ˇR)(Al )) as the combination of Bill-seeks with the type-raised version of Al. In fact, Bill-seeks ′ corresponds to type e → t, and can thus be used as the scope of a quantifier, which would then quantify into the intensional direct object argument of seek. As we will presently see, that is exactly what is needed to derive de re readings.
De Re Reading
We have just seen how theorem (47) provides a general mechanism for quantifying into intensional argument positions. In particular, it allowed the derivation of Bill-seeks′ from Bill-seeks. Now, given the premises

Bill-seeks′: ∀Z. h σ ;Z −• f σ ;seek(Bill, ˆλR.(ˇR)(Z))
a-unicorn:   ∀H, S. (∀x. h σ ;x −• H ;(ˇS)(x)) −• H ;a(ˆunicorn, S)

and the variable substitutions

Z → x,  H → f σ ,  S → ˆλz.seek(Bill, ˆλR.(ˇR)(z))

we can apply modus ponens to derive the de re reading of Bill seeks a unicorn:

f σ ;a(ˆunicorn, ˆλz.seek(Bill, ˆλR.(ˇR)(z)))
Comparison with Categorial Syntactic Approaches
In recent work, multidimensional and labeled deductive systems (Moortgat, 1992b; Morrill, 1993) have been proposed as refinements of the Lambek systems that are able to represent synchronized derivations involving multiple levels of representation: for instance, a level of head-dependent representations and a level of syntactic functor-argument representations. However, these systems do not yet seem able to represent the connection between a flat syntactic representation in terms of grammatical functions, such as the f-structure of LFG, and a function-argument semantic representation. The problem in those systems is that they cannot express at the type level the link between particular syntactic structures (f-structures in our case) and particular contributions to meaning. The extraction of meanings from derivations following the Curry-Howard isomorphism that is standard in categorial systems demands that the order of syntactic combination coincide with the order of semantic combination so that functor-argument relations at the syntactic and semantic level are properly aligned.
Nevertheless, there are strong similarities between the analysis of quantification that we present and analyses of the same phenomena discussed by Morrill (1993) and Carpenter (1993). Following Moortgat (1992a), they add to an appropriate version of the Lambek calculus (Lambek, 1958) the scope connective ⇑, subject to the following proof rules:
Γ, v : A, Γ′ ⇒ u : B        ∆, t(λv.u) : B, ∆′ ⇒ C
----------------------------------------------------  [QL]
∆, Γ, t : A ⇑ B, Γ′, ∆′ ⇒ C

Γ ⇒ u : A
--------------------  [QR]
Γ ⇒ λv.v(u) : A ⇑ B
In terms of the scope connective, a quantified NP is given the category N ⇑ S, which semantically corresponds to the type (e → t) → t and agrees with the propositional structure of our linear formulas for quantified NPs. A phrase of category N ⇑ S is an infix functor that binds a variable of type e, the type of individual NPs N, within a scope of type t, the type of sentences S. An intensional verb like 'seek' has, then, category (N\S)/(N ⇑ S), with corresponding type ((e → t) → t) → e → t. (These category and type assignments are an oversimplification, since intensional verbs like seek require a direct object of type s → ((e → t) → t), but for the present discussion the simpler category and type are sufficient; Morrill (1993) provides a full treatment.) Thus the intensional verb will take as direct object a quantified NP, as required. A problem arises, however, with sentences such as:

(48)  Bill seeks a conversation with every unicorn.
This sentence has five possible interpretations:
(49)  a. seek(Bill, ˆλP.every(ˆunicorn, ˆλu.a(ˆλz.conv-with(z, u), P)))
      b. seek(Bill, ˆλP.a(ˆλz.every(ˆunicorn, ˆλu.conv-with(z, u)), P))
      c. every(ˆunicorn, ˆλu.seek(Bill, ˆλP.a(ˆλz.conv-with(z, u), P)))
      d. every(ˆunicorn, ˆλu.a(ˆλz.conv-with(z, u), ˆλz.seek(Bill, ˆλP.(ˇP)(z))))
      e. a(ˆλz.every(ˆunicorn, ˆλu.conv-with(z, u)), ˆλz.seek(Bill, ˆλP.(ˇP)(z)))

Both our approach and the categorial analysis using the scope connective have no problem in deriving interpretations (49b), (49c), (49d) and (49e). In those cases, the scope of 'every unicorn' is interpreted as an appropriate term of type e → t. However, the situation is different for interpretation (49a), in which both the conversations and the unicorn are de dicto, but the conversations sought may be different for different unicorns sought. As we will show below, this interpretation can be easily derived within our framework. However, a similar derivation does not appear possible in terms of the categorial scoping connective. The difficulty for the categorial account is that the category N ⇑ S represents a phrase that plays the role of a category N phrase where it appears, but takes an S (dependent on the N) as its scope. In the derivation of (49a), however, the scope of 'every unicorn' is 'a conversation with', which is not of category S. Semantically, 'a conversation with' is represented by:
(50) λP.ˆλu.a(ˆλz.conv-with(z, u), P ) : (s → e → t) → (s → e → t)
The undirected Lambek calculus (van Benthem, 1991) allows us to compose (50) with the interpretation of 'every unicorn':
(51) λQ.every(ˆunicorn, Q) : (s → e → t) → t to yield:
(52)  λP.every(ˆunicorn, ˆλu.a(ˆλz.conv-with(z, u), P)) : (s → e → t) → t

As we will see below, our linear logic formulation also allows that derivation step.
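Ignoring the intension operators, that composition step is plain function composition, as in this sketch of ours (string terms are an illustrative assumption):

    # (50): 'a conversation with', from scope properties to properties
    fifty = lambda P: f"lam u. a(lam z. conv-with(z, u), {P})"

    # (51): 'every unicorn', from properties to propositions
    fifty_one = lambda Q: f"every(unicorn, {Q})"

    # Composing them yields (52):
    fifty_two = lambda P: fifty_one(fifty(P))

    print(fifty_two('P'))
    # every(unicorn, lam u. a(lam z. conv-with(z, u), P))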
In contrast, as Moortgat (1992a) points out, the categorial rule [QR] is not powerful enough to raise N ⇑ S to take as scope any functor whose result is a S. In particular, the sequent
(53) N ⇑ S ⇒ N ⇑ (N ⇑ S)
is not derivable, whereas the corresponding "semantic" sequent (up to permutation)

(54)  q : (e → t) → t ⇒ λR.λP.q(λx.R(P)(x)) : ((e → t) → (e → t)) → (e → t) → t

is derivable in the undirected Lambek calculus. Sequent (54) will in particular raise (51) to a function that, applied to (50), produces (52), as required. Furthermore, the solution proposed by Morrill (1993) to make the scope calculus complete is to restrict the intended interpretation of ⇑ so that (53) is not valid. Thus, contra Carpenter (1993), Morrill's logically more satisfying account of ⇑ is not a step towards making reading (49a) available.
We now give the derivation of the interpretation (49a) in our framework, starting from the f-structure for (48). The two formulas Bill-seeks and every-unicorn can be derived as described before. Using the substitutions

H → s,  T → p,  Y → λR.every(ˆunicorn, ˆλu.a(ˆλz.conv-with(z, u), R))

and modus ponens, we then combine (57) with Bill-seeks to obtain the desired final result:

f σ ;seek(Bill, ˆλR.every(ˆunicorn, ˆλu.a(ˆλz.conv-with(z, u), R)))
Thus, we see that our more flexible connection between syntax and semantics permits the full range of type flexibility provided by categorial semantics without losing the rigorous connection to syntax. In contrast, current categorial accounts of the syntax-semantics interface do not appear to offer the needed flexibility when syntactic and semantic composition are more indirectly connected, as in the present case. Recently, Oehrle (1993) independently proposed a multidimensional categorial system with types indexed so as to keep track of the syntax-semantics connections that we represent with ;. Using proof net techniques due to Moortgat (1992b) and Roorda (1991), he maps categorial formulas to first-order clauses similar to our meaning constructors, except that the formulas arising from determiners lack the embedded implication. Oehrle's system models quantifier scope ambiguities in a way similar to ours, but it is not clear that it can account correctly for the interactions with anaphora, given the lack of implication embedding in the clausal representation used.
Conclusion
Our approach exploits the f-structure of LFG for syntactic information needed to guide semantic composition, and also exploits the resource-sensitive properties of linear logic to express the semantic composition requirements of natural language. The use of linear logic as the glue language in a deductive semantic framework allows a natural treatment of quantification which automatically gives the right results for quantified NPs, their scopes and bound anaphora, and allows for a clean and natural treatment of intensional verbs and their arguments.
Indeed, the same basic facts are also accounted for in other recent treatments of compositionality, in particular categorial analyses with discontinuous constituency connectives (Moortgat, 1992a). These results suggest the advantages of a generalized form of compositionality in which the meaning constructors of phrases are represented by logical formulas rather than by functional abstractions as in traditional compositionality. The fixed application order and fixed type requirements of lambda terms are just too restrictive when it comes to encoding the freer order of information presentation in natural language.
In this observation, our treatment is closely related to systems of syntactic and semantic type assignment based on the Lambek calculus and its variants. However, we differ from those categorial approaches in providing an explicit link between functional structures and semantic derivations that does not depend on linear order and constituency in syntax to keep track of predicate-argument relations. Thus we avoid the need to force syntax and semantics into an uncomfortably tight categorial embrace.
We would particularly like to thank Johan van Benthem, Bob Carpenter, Jan van Eijck, Kris Halvorsen, Angie Hinrichs, David Israel, Ron Kaplan, Chris Manning, John Maxwell, Michael Moortgat, John Nerbonne, Stanley Peters, Henriette de Swart and an anonymous reviewer for the Second CSLI Workshop on Logic, Language, and Computation for comments and discussion. They are not responsible for any remaining errors, and we doubt that they will endorse all our analyses and conclusions, but we are sure that the end result is much improved for their help.
B Proof rules for intensional higher-order linear logic
Identity:  F ⊢ F

Cut:  from Γ1 ⊢ F and Γ2, F ⊢ D, infer Γ1, Γ2 ⊢ D

Exchange Left:  from Γ1, F, G, Γ2 ⊢ D, infer Γ1, G, F, Γ2 ⊢ D

λ Left:  from Γ, F′ ⊢ D and F →λ F′, infer Γ, F ⊢ D

λ Right:  from Γ ⊢ D and D →λ D′, infer Γ ⊢ D′

⊗ Left:  from Γ, F, G ⊢ D, infer Γ, (F ⊗ G) ⊢ D

⊗ Right:  from Γ1 ⊢ F and Γ2 ⊢ G, infer Γ1, Γ2 ⊢ (F ⊗ G)

−• Left:  from Γ1 ⊢ F and Γ2, G ⊢ D, infer Γ1, Γ2, (F −• G) ⊢ D

−• Right:  from Γ, F ⊢ G, infer Γ ⊢ (F −• G)

Π Left:  from Γ, P t ⊢ D, infer Γ, ΠP ⊢ D

Π Right:  from Γ ⊢ P y, infer Γ ⊢ ΠP
The Π Right rule only applies if y is not free in Γ, Σ, or any nonlogical theory axioms. We write M →λ N to indicate that N can be obtained from M by one or more applications of α- or β-reduction, or by application of the rule

ˇ(ˆQ) → Q

to a sub-term of M.
(27) every-voter: ∀H, S. (∀x. f σ ; x −• H ; S(x)) −• H ; every(voter, S)
The determiner meaning constructor, glossed clause by clause:

(∀x. (↑σ var) ; x −• (↑σ restr) ; R(x))
    if, by assuming an arbitrary meaning x for (↑σ var), a meaning R(x) for (↑σ restr) can be derived,
−• (∀x. ↑σ ; x −• H ;t S(x))
    and if, by assuming an arbitrary meaning x for ↑, a meaning S(x) for some scope H can be derived,
−• H ;t Q(R, S)
    then we can derive a possible complete meaning Q(R, S) for H.
f σ ; seek(Bill, ˆλP.(ˇP)(Al))

It is worth contrasting the foregoing derivation with treatments of the same issue in a λ-calculus setting. The function λx.λP.(ˇP)(x) raises a term like Al to the quantified NP form λP.(ˇP)(Al).

Figure 2: General Type-Raising Theorem. The derivation proceeds as follows:

I ; Z ⊢ I ; Z        S ; (ˇP)(Z) ⊢ S ; (ˇP)(Z)
I ; Z, I ; Z −• S ; (ˇP)(Z) ⊢ S ; (ˇP)(Z)
I ; Z, (∀x. I ; x −• S ; (ˇP)(x)) ⊢ S ; (ˇP)(Z)
I ; Z ⊢ (∀x. I ; x −• S ; (ˇP)(x)) −• S ; (ˇP)(Z)
I ; Z ⊢ ∀S, P. (∀x. I ; x −• S ; (ˇP)(x)) −• S ; (ˇP)(Z)
⊢ I ; Z −• ∀S, P. (∀x. I ; x −• S ; (ˇP)(x)) −• S ; (ˇP)(Z)
⊢ ∀I, Z. I ; Z −• ∀S, P. (∀x. I ; x −• S ; (ˇP)(x)) −• S ; (ˇP)(Z)
(47) ∀I, Z. I ; Z −• (∀S, P. (∀x. I ; x −• S ; (ˇP)(x)) −• S ; (ˇP)(Z))
Bill-seeks′: ∀Z. h σ ; Z −• f σ ; seek(Bill, ˆλR.(ˇR)(Z))

a-unicorn: ∀H, S. (∀x. h σ ; x −• H ; (ˇS)(x)) −• H ; a(ˆunicorn, S)

and the variable substitutions
Bill-seeks: ∀Y. (∀s, p. (∀X. h σ ; X −• s ; (ˇp)(X)) −• s ; Y(p)) −• f σ ; seek(Bill, ˆY)

every-unicorn: ∀G, S. (∀x. i σ ; x −• G ; (ˇS)(x)) −• G ; every(ˆunicorn, S)

The remaining lexical premises for (55) are:

a: ∀H, R, T. ((∀x. (h σ var) ; x −• (h σ restr) ; (ˇR)(x)) ⊗ (∀x. h σ ; x −• H ; (ˇT)(x))) −• H ; a(R, T)

conv-with: ∀Z, X. (h σ var) ; Z ⊗ i σ ; X −• (h σ restr) ; conv-with(Z, X)

From these premises we immediately derive

∀X, H, T. i σ ; X ⊗ (∀x. h σ ; x −• H ; (ˇT)(x)) −• H ; a(ˆλz.conv-with(z, X), T)

which can be rewritten as:

(56) ∀H, T. (∀x. h σ ; x −• H ; (ˇT)(x)) −• ∀X. (i σ ; X −• H ; a(ˆλz.conv-with(z, X), T))

If we apply the substitutions X → x, G → H, S → ˆλu.a(ˆλz.conv-with(z, u), T), formula (56) can be combined with every-unicorn to yield the required quantifier-type formula:

(57) ∀H, T. (∀x. h σ ; x −• H ; (ˇT)(x)) −• H ; every(ˆunicorn, ˆλu.a(ˆλz.conv-with(z, u), T))
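Schematically, the deduction just described chains three combination steps (our recap of the text above, writing X ⊗ Y ⊢ Z for "premises X and Y together yield Z"):

conv-with ⊗ a ⊢ (56)
(56) ⊗ every-unicorn ⊢ (57)
(57) ⊗ Bill-seeks ⊢ f σ ; seek(Bill, ˆλR.every(ˆunicorn, ˆλu.a(ˆλz.conv-with(z, u), R)))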
(55)  f:  [ pred 'seek'
            subj  g: [ pred 'Bill' ]
            obj   h: [ spec 'a'
                       pred 'conversation'
                       obl WITH  i: [ spec 'every'
                                      pred 'unicorn' ] ] ]
The reader familiar with Montague may be surprised by the apparently purely extensional form of the meaning terms in the examples that follow, in contrast with Montague's use of intensional expressions even in purely extensional cases to allow for uniform translation rules. The reasons for this divergence are explained in Section 4.
'An f-structure is locally complete if and only if it contains all the governable grammatical functions that its predicate governs. An f-structure is complete if and only if all its subsidiary f-structures are locally complete. An f-structure is locally coherent if and only if all the governable grammatical functions that it contains are governed by a local predicate. An f-structure is coherent if and only if all its subsidiary f-structures are locally coherent.' (Kaplan and Bresnan, 1982, pages 211-212) To illustrate:

(a) *John devoured. [incomplete]
(b) *John arrived Bill the sink. [incoherent]

Saraswat and Lincoln (1992) provide an explicit formulation for the higher-order version of the linear concurrent constraint programming scheme. Scedrov (1993) gives a tutorial introduction to linear logic itself; Saraswat (1993) supplies further background on computational aspects of linear logic relevant to the implementation of the present proposal.
We use lower-case letters for essentially universal variables, that is, variables that stand for new local constants in a proof. We use capital letters for essentially existential variables, that is, Prolog-like variables that become instantiated to particular terms in a proof. In other words, essentially existential variables stand for specific but as yet unspecified terms, while essentially universal variables stand for arbitrary constants, that is, constants that could be replaced by any term while still maintaining the validity of the derivation. In the linear-logic fragment we use here, essentially existential variables arise from universal quantification with outermost scope, while essentially universal variables arise from universal quantification whose scope is a conjunct in the antecedent of an outermost implication.
Of course, the derivation would be more complicated if the NP included adjective phrases or other noun modifiers; for the sake of brevity, we will not discuss the contribution of noun modifiers in this paper. Intuitively, the function of modifiers is to consume the meaning of the phrase they modify and produce a new, modified meaning of the same semantic shape, which can play the same semantic role as the unmodified phrase can play. A general discussion of modification in this framework is provided elsewhere.
To allow for apparent scope ambiguities, we adopt a scoping analysis of indefinites, as proposed, for example, by Neale (1990).
The determination of appropriate values for ant requires a more detailed analysis of other linguistic constraints on anaphora resolution, which would need further projections to give information about, for example, discourse relations and salience. Dalrymple (1993) discusses in detail LFG analyses of anaphoric binding.

These category and type assignments are an oversimplification, since intensional verbs like seek require a direct object of type s → ((e → t) → t), but for the present discussion the simpler category and type are sufficient. Morrill (1993) provides a full treatment.
Acknowledgments

Portions of this work were originally presented at the Second CSLI Workshop on Logic, Language, and Computation, Stanford University, and published as Dalrymple et al. (1994a); and at the Conference on Information-Oriented Approaches to Logic, Language and Computation, held at Saint Mary's College, Moraga, California, and published as Dalrymple et al. (1994b). We are grateful to the audiences at these venues for helpful comments.

A Syntax of the Meaning and Glue Languages

The meaning language is based on Montague's intensional higher-order logic. Terms are typed in the usual way; logical connectives such as every and a are represented by constants of appropriate type. The "cap" operator is polymorphic, of type α → (s → α); similarly, the "cup" operator is of type (s → α) → α. For readability, we will often "uncurry" M N1 · · · Nm as M(N1, . . . , Nm). Note that we allow variables in the glue language to range over meaning terms.

The glue language refers to three kinds of terms: meaning terms, f-structures, and semantic or σ-structures. f- and σ-structures are feature structures in correspondence (through projections) with constituent structure. Conceptually, feature structures are just functions which, when applied to attributes (a set of constants), return constants or other feature structures. In the following we let A range over some pre-specified set of attributes.

Glue-language formulas are built up using linear connectives from atomic formulas of the form S ;τ M, whose intended interpretation is that the meaning associated with σ-structure S is denoted by term M of type τ. We omit the type subscript τ when it can be determined from context. We usually write ΠλX. G as ∀X. G, and similarly for ΠλH. G.
References

Alsina, Alex. 1993. Predicate Composition: A Theory of Syntactic Function Alternations. Ph.D. thesis, Stanford University.

Barwise, Jon and Robin Cooper. 1981. Generalized quantifiers and natural language. Linguistics and Philosophy, 4:159-219.

Bresnan, Joan, editor. 1982. The Mental Representation of Grammatical Relations. The MIT Press, Cambridge, MA.

Bresnan, Joan and Jonni M. Kanerva. 1989. Locative inversion in Chicheŵa: A case study of factorization in grammar. Linguistic Inquiry, 20(1):1-50. Also in E. Wehrli and T. Stowell, eds., Syntax and Semantics 26: Syntax and the Lexicon. New York: Academic Press.

Butt, Miriam. 1993. The Structure of Complex Predicates. Ph.D. thesis, Stanford University.

Carpenter, Bob. 1993. Quantification and scoping: a deductive account. Submitted for publication.

Dalrymple, Mary. 1993. The Syntax of Anaphoric Binding. Number 36 in CSLI Lecture Notes. Center for the Study of Language and Information.

Dalrymple, Mary, Angie Hinrichs, John Lamping, and Vijay Saraswat. 1993. The resource logic of complex predicate interpretation. In Proceedings of the 1993 Republic of China Computational Linguistics Conference (ROCLING), Hsitou National Park, Taiwan, September. Computational Linguistics Society of R.O.C.

Dalrymple, Mary, John Lamping, Fernando C. N. Pereira, and Vijay Saraswat. 1994a. A deductive account of quantification in LFG. In Makoto Kanazawa, Christopher J. Piñón, and Henriette de Swart, editors, Quantifiers, Deduction, and Context. Center for the Study of Language and Information, Stanford, California.

Dalrymple, Mary, John Lamping, Fernando C. N. Pereira, and Vijay Saraswat. 1994b. Intensional verbs without type-raising or lexical ambiguity. In Conference on Information-Oriented Approaches to Logic, Language and Computation, Moraga, California. Saint Mary's College.

Dalrymple, Mary, John Lamping, and Vijay Saraswat. 1993. LFG semantics via constraints. In Proceedings of the Sixth Meeting of the European ACL, University of Utrecht, April. European Chapter of the Association for Computational Linguistics.

Fenstad, Jens Erik, Per-Kristian Halvorsen, Tore Langholm, and Johan van Benthem. 1987. Situations, Language and Logic. D. Reidel, Dordrecht.

Gamut, L. T. F. 1991. Logic, Language, and Meaning, volume 2: Intensional Logic and Logical Grammar. The University of Chicago Press, Chicago.

Girard, J.-Y. 1987. Linear logic. Theoretical Computer Science, 45:1-102.

Halvorsen, Per-Kristian. 1988. Situation Semantics and semantic interpretation in constraint-based grammars. In Proceedings of the International Conference on Fifth Generation Computer Systems, FGCS-88, pages 471-478, Tokyo, Japan, November. Also published as CSLI Technical Report CSLI-TR-101, Stanford University, 1987.

Halvorsen, Per-Kristian and Ronald M. Kaplan. 1988. Projections and semantic description in Lexical-Functional Grammar. In Proceedings of the International Conference on Fifth Generation Computer Systems, pages 1116-1122, Tokyo, Japan. Institute for New Generation Systems.

Hendriks, Herman. 1993. Studied Flexibility: Categories and Types in Syntax and Semantics. ILLC dissertation series 1993-5, University of Amsterdam, Amsterdam, Holland.

Hepple, Mark. 1990. The Grammar and Processing of Order and Dependency: a Categorial Approach. Ph.D. thesis, University of Edinburgh.

Howard, W. A. 1980. The formulae-as-types notion of construction. In J. P. Seldin and J. R. Hindley, editors, To H. B. Curry: Essays on Combinatory Logic, Lambda Calculus and Formalism. Academic Press, London, England, pages 479-490.

Huet, Gérard. 1975. A unification algorithm for typed λ-calculus. Theoretical Computer Science, 1:27-57.

Kaplan, Ronald M. 1987. Three seductions of computational psycholinguistics. In Peter Whitelock, Harold Somers, Paul Bennett, Rod Johnson, and Mary McGee Wood, editors, Linguistic Theory and Computer Applications. Academic Press, London, pages 149-188.

Kaplan, Ronald M. and Joan Bresnan. 1982. Lexical-Functional Grammar: A formal system for grammatical representation. In Joan Bresnan, editor, The Mental Representation of Grammatical Relations. The MIT Press, Cambridge, MA, pages 173-281.

Klein, Ewan and Ivan A. Sag. 1985. Type-driven translation. Linguistics and Philosophy, 8:163-201.

Lambek, Joachim. 1958. The mathematics of sentence structure. American Mathematical Monthly, 65:154-170.

Levin, Lori S., Malka Rappaport, and Annie Zaenen, editors. 1983. Papers in Lexical-Functional Grammar. Indiana University Linguistics Club, Bloomington, IN.

May, Robert. 1985. Logical Form: its Structure and Derivation, volume 12 of Linguistic Inquiry Monographs. MIT Press, Cambridge, Massachusetts.

Miller, Dale A. 1990. A logic programming language with lambda abstraction, function variables and simple unification. In Peter Schroeder-Heister, editor, Extensions of Logic Programming, Lecture Notes in Artificial Intelligence. Springer-Verlag.

Montague, Richard. 1974. The proper treatment of quantification in ordinary English. In Richmond H. Thomason, editor, Formal Philosophy. Yale University Press, New Haven, Connecticut.

Moortgat, Michael. 1988. Categorial Investigations: Logical and Linguistic Aspects of the Lambek Calculus. Ph.D. thesis, University of Amsterdam, Amsterdam, The Netherlands, October.

Moortgat, Michael. 1992a. Generalized quantifiers and discontinuous type constructors. In W. Sijtsma and H. van Horck, editors, Discontinuous Constituency. Mouton de Gruyter, Berlin, Germany. To appear.

Moortgat, Michael. 1992b. Labelled deductive systems for categorial theorem proving. In P. Dekker and M. Stokhof, editors, Proceedings of the Eighth Amsterdam Colloquium, pages 403-423, Amsterdam. Institute for Logic, Language and Computation.

Morrill, Glyn. 1990. Intensionality and boundedness. Linguistics and Philosophy, 13(6):699-726.

Morrill, Glyn V. 1993. Type Logical Grammar: Categorial Logic of Signs. Studies in Linguistics and Philosophy. Kluwer Academic Publishers, Dordrecht, Holland. To appear.

Neale, Stephen. 1990. Descriptions. The MIT Press, Cambridge, MA.

Oehrle, Richard T. 1993. String-based categorial type systems. Workshop "Structure of Linguistic Inference: Categorial and Unification-Based Approaches," European Summer School in Logic, Language and Information, Lisbon, Portugal.

Pereira, Fernando C. N. 1990. Categorial semantics and scoping. Computational Linguistics, 16(1):1-10.

Pereira, Fernando C. N. 1991. Semantic interpretation as higher-order deduction. In Jan van Eijck, editor, Logics in AI: European Workshop JELIA'90, pages 78-96, Amsterdam, Holland. Springer-Verlag.

Pollard, Carl and Ivan A. Sag. 1987. Information-Based Syntax and Semantics, Volume I. Number 13 in CSLI Lecture Notes. Center for the Study of Language and Information, Stanford University.

Pollard, Carl and Ivan A. Sag. 1993. Head-Driven Phrase Structure Grammar. The University of Chicago Press, Chicago.

Roorda, Dirk. 1991. Resource Logics: Proof-theoretical Investigations. Ph.D. thesis, University of Amsterdam.

Saraswat, Vijay A. 1989. Concurrent Constraint Programming Languages. Ph.D. thesis, Carnegie-Mellon University. Reprinted by MIT Press, Doctoral Dissertation Award and Logic Programming Series, 1993.

Saraswat, Vijay A. 1993. A brief introduction to linear concurrent constraint programming. Technical report, Xerox Palo Alto Research Center, April.

Saraswat, Vijay A. and Patrick Lincoln. 1992. Higher-order, linear concurrent constraint programming. Technical report, Xerox Palo Alto Research Center, August.

Scedrov, Andre. 1993. A brief guide to linear logic. In G. Rozenberg and A. Salomaa, editors, Current Trends in Theoretical Computer Science, pages 377-394. World Scientific Publishing Co.

van Benthem, Johan. 1986. Categorial grammar and lambda calculus. In D. Skordev, editor, Mathematical Logic and its Application. Plenum Press, New York, New York, pages 39-60.

van Benthem, Johan. 1988. The Lambek calculus. In Richard T. Oehrle, Emmon Bach, and Deirdre Wheeler, editors, Categorial Grammars and Natural Language Structures. D. Reidel, Dordrecht, pages 35-68.

van Benthem, Johan. 1991. Language in Action: Categories, Lambdas and Dynamic Logic. North-Holland, Amsterdam.
| [] |
[
"The Effects of Data Size and Frequency Range on Distributional Semantic Models",
"The Effects of Data Size and Frequency Range on Distributional Semantic Models"
] | [
"Magnus Sahlgren Gavagai \nUniversity of Pisa\nSlussplan 9, via Santa Maria 36Box 1263111 30, 164 29, 56126Stockholm, Kista, PisaSweden, Italy\n",
"Sics \nUniversity of Pisa\nSlussplan 9, via Santa Maria 36Box 1263111 30, 164 29, 56126Stockholm, Kista, PisaSweden, Italy\n",
"Alessandro Lenci alessandro.lenci@unipi.it \nUniversity of Pisa\nSlussplan 9, via Santa Maria 36Box 1263111 30, 164 29, 56126Stockholm, Kista, PisaSweden, Italy\n"
] | [
"University of Pisa\nSlussplan 9, via Santa Maria 36Box 1263111 30, 164 29, 56126Stockholm, Kista, PisaSweden, Italy",
"University of Pisa\nSlussplan 9, via Santa Maria 36Box 1263111 30, 164 29, 56126Stockholm, Kista, PisaSweden, Italy",
"University of Pisa\nSlussplan 9, via Santa Maria 36Box 1263111 30, 164 29, 56126Stockholm, Kista, PisaSweden, Italy"
] | [
"Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing"
] | This paper investigates the effects of data size and frequency range on distributional semantic models. We compare the performance of a number of representative models for several test settings over data of varying sizes, and over test items of various frequency. Our results show that neural network-based models underperform when the data is small, and that the most reliable model over data of varying sizes and frequency ranges is the inverted factorized model. | 10.18653/v1/d16-1099 | [
"https://www.aclweb.org/anthology/D16-1099.pdf"
] | 12,277,102 | 1609.08293 | 4b63af049b14cb57e0e82a6d214e73036de72a85 |
The Effects of Data Size and Frequency Range on Distributional Semantic Models
Association for Computational Linguistics, November 1-5, 2016
Magnus Sahlgren
Gavagai, Slussplan 9, 111 30 Stockholm, Sweden
SICS, Box 1263, 164 29 Kista, Sweden
Alessandro Lenci alessandro.lenci@unipi.it
University of Pisa, via Santa Maria 36, 56126 Pisa, Italy
The Effects of Data Size and Frequency Range on Distributional Semantic Models
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Austin, Texas, November 1-5, 2016. Association for Computational Linguistics.
This paper investigates the effects of data size and frequency range on distributional semantic models. We compare the performance of a number of representative models for several test settings over data of varying sizes, and over test items of various frequency. Our results show that neural network-based models underperform when the data is small, and that the most reliable model over data of varying sizes and frequency ranges is the inverted factorized model.
Introduction
Distributional Semantic Models (DSMs) have become a staple in natural language processing. The various parameters of DSMs - e.g. size of context windows, weighting schemes, dimensionality reduction techniques, and similarity measures - have been thoroughly studied (Weeds et al., 2004; Sahlgren, 2006; Riordan and Jones, 2011; Bullinaria and Levy, 2012; Levy et al., 2015), and are now well understood. The impact of various processing models - matrix-based models, neural networks, and hashing methods - has also enjoyed considerable attention lately, with at times conflicting conclusions (Baroni et al., 2014; Levy et al., 2015; Schnabel et al., 2015; Österlund et al., 2015; Sahlgren et al., 2016). The consensus interpretation of such experiments seems to be that the choice of processing model is less important than the parameterization of the models, since the various processing models all result in more or less equivalent DSMs (provided that the parameterization is comparable).
One of the least researched aspects of DSMs is the effect on the various models of data size and frequency range of the target items. The only previous work in this direction that we are aware of is Asr et al. (2016), who report that on small data (the CHILDES corpus), simple matrix-based models outperform neural network-based ones. Unfortunately, Asr et al. do not include any experiments using the same models applied to bigger data, making it difficult to compare their results with previous studies, since implementational details and parameterization will be different.
There is thus still a need for a consistent and fair comparison of the performance of various DSMs when applied to data of varying sizes. In this paper, we seek an answer to the question: which DSM should we opt for if we only have access to limited amounts of data? We are also interested in the related question: which DSM should we opt for if our target items are infrequent? The latter question is particularly crucial, since one of the major assets of DSMs is their applicability to create semantic representations for ever-expanding vocabularies from text feeds, in which new words may continuously appear in the low-frequency ranges.
In the next section, we introduce the contending DSMs and the general experiment setup, before turning to the experiments and our interpretation of the results. We conclude with some general advice.
Distributional Semantic Models
One could classify DSMs in many different ways, for example by the type of context used or by the method used to build distributional vectors. Since our main goal here is to gain an understanding of the effect of data size and frequency range on the various models, we focus primarily on the differences in processing models, hence the following typology of DSMs.
Explicit matrix models
We here include what could be referred to as explicit models, in which each vector dimension corresponds to a specific context (Levy and Goldberg, 2014). The baseline model is a simple co-occurrence matrix F (in the following referred to as CO for Co-Occurrence). We also include the model that results from applying Positive Pointwise Mutual Information (PPMI) to the co-occurrence matrix. PPMI is defined as simply discarding any negative values of the PMI, computed as:
PMI(a, b) = log( (f_ab × T) / (f_a × f_b) )    (1)
where f ab is the co-occurrence count of word a and word b, f a and f b are the individual frequencies of the words, and T is the number of tokens in the data. 1
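For concreteness, Eq. (1) with the negative values discarded can be computed in a few lines of numpy; the sketch below is our own illustration (not the code used in the experiments) and assumes a dense co-occurrence matrix F with F[a, b] = f_ab:

```python
import numpy as np

def ppmi(F):
    """Positive PMI weighting of a dense co-occurrence matrix F."""
    T = F.sum()                          # total co-occurrence mass (T in Eq. 1)
    fa = F.sum(axis=1, keepdims=True)    # marginal counts f_a (rows)
    fb = F.sum(axis=0, keepdims=True)    # marginal counts f_b (columns)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(F * T / (fa * fb))
    pmi[~np.isfinite(pmi)] = 0.0         # cells with zero counts -> undefined PMI -> 0
    return np.maximum(pmi, 0.0)          # discard negative PMI values
```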
Factorized matrix models
This type of model applies an additional factorization of the weighted co-occurrence counts. We here include two variants of applying Singular Value Decomposition (SVD) to the PPMI-weighted co-occurrence matrix; one version that discards all but the first couple of hundred latent dimensions (TSVD for truncated SVD), and one version that instead removes the first couple of hundred latent dimensions (ISVD for inverted SVD). SVD is defined in the standard way:
F = UΣV^T    (2)
where U holds the eigenvectors of F, Σ holds the eigenvalues, and V ∈ U(w) is a unitary matrix mapping the original basis of F into its eigenbasis. Since V is redundant due to invariance under unitary transformations, we can represent the factorization of F in its most compact form F ≡ UΣ.

1 We also experimented with smoothed PPMI, which raises the context counts to the power of α and normalizes them (Levy et al., 2015), thereby countering the tendency of mutual information to favor infrequent events: f(b) = #(b)^α / Σ_b #(b)^α, but it did not lead to any consistent improvements compared to PPMI.
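The difference between the two factorized variants amounts to which latent dimensions are kept. Below is a minimal sketch of both, assuming M is the PPMI-weighted matrix and using the dimensionalities reported later in the paper (200 for TSVD; dimensions 200-2,999 for ISVD); for matrices as large as the ones used here, a truncated solver such as scipy.sparse.linalg.svds would be preferable to a full decomposition:

```python
import numpy as np

def svd_embeddings(M, k=200, r=200, n=3000):
    """Return (TSVD, ISVD) embeddings from M = U diag(s) V^T."""
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    tsvd = U[:, :k] * s[:k]       # truncated SVD: keep the first k latent dimensions
    isvd = U[:, r:n] * s[r:n]     # inverted SVD: drop the first r, keep dimensions r..n-1
    return tsvd, isvd
```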
Hashing models
A different approach to reduce the dimensionality of DSMs is to use a hashing method such as Random Indexing (RI) (Kanerva et al., 2000), which accumulates distributional vectors d(a) in an online fashion:
d(a) ← d(a) + Σ_{j=−c, j≠0}^{c} w(x_{i+j}) π_j r(x_{i+j})    (3)
where c is the extension of the context window, w(b) is a weight that quantifies the importance of context term b, 2 r_d(b) is a sparse random index vector that acts as a fingerprint of context term b, and π_j is a permutation that rotates the random index vectors one step to the left or right, depending on the position of the context items within the context window, thus enabling the model to take word order into account (Sahlgren et al., 2008).
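A schematic version of the update in Eq. (3), written by us for illustration: d maps words to their (dense, pre-initialized) distributional vectors, index maps words to their random index vectors, and the permutation π_j is realized as a one-step rotation whose direction depends on which side of the target the context word occurs:

```python
import numpy as np

def ri_update(d, index, weight, tokens, i, c=2):
    """Online RI update for the word at position i in `tokens` (Eq. 3)."""
    a = tokens[i]
    for j in range(-c, c + 1):
        if j == 0 or not 0 <= i + j < len(tokens):
            continue                      # skip the target itself and corpus boundaries
        b = tokens[i + j]
        shift = 1 if j > 0 else -1        # pi_j: rotate one step right or left
        d[a] += weight(b) * np.roll(index[b], shift)
```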
Neural network models
There are many variations of DSMs that use neural networks as the processing model, ranging from simple recurrent networks (Elman, 1990) to more complex deep architectures (Collobert and Weston, 2008). The incomparably most popular neural network model is the one implemented in the word2vec library, which uses the softmax for predicting b given a (Mikolov et al., 2013):
p(b|a) = exp(b · a) / Σ_{b′∈C} exp(b′ · a)    (4)
where C is the set of context words, and b and a are the vector representations for the context and target words, respectively. We include two versions of this general model; Continuous Bag of Words (CBOW) that predicts a word based on the context, and Skip-Gram Negative Sampling (SGNS) that predicts the context based on the current word.
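Eq. (4) itself is straightforward to state in code; the toy function below (our illustration, not word2vec's actual negative-sampling training) assumes a matrix B of context ("output") vectors and a single target vector a:

```python
import numpy as np

def p_context_given_word(a, B):
    """Softmax of Eq. (4): probabilities over the context set C."""
    scores = B @ a                 # dot products b . a for every b in C
    scores -= scores.max()         # shift for numerical stability
    e = np.exp(scores)
    return e / e.sum()
```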
Experiment setup

Since our main focus in this paper is the performance of the above-mentioned DSMs on data of varying sizes, we use one big corpus as starting point, and split the data into bins of varying sizes. We opt for the ukWaC corpus (Ferraresi et al., 2008), which comprises some 1.6 billion words after tokenization and lemmatization. We produce subcorpora by taking the first 1 million, 10 million, 100 million, and 1 billion words.
Since the co-occurrence matrix built from the 1 billion-word ukWaC sample is very big (more than 4,000,000 × 4,000,000), we prune the co-occurrence matrix to 50,000 dimensions before the factorization step by simply removing infrequent context items. 3 As comparison, we use 200 dimensions for TSVD, 2,800 (3,000 − 200) dimensions for ISVD, 2,000 dimensions for RI, and 200 dimensions for CBOW and SGNS. These dimensionalities have been reported to perform well for the respective models (Landauer and Dumais, 1997; Sahlgren et al., 2008; Mikolov et al., 2013; Österlund et al., 2015). All DSMs use the same parameters as far as possible, with a narrow context window of ±2 words, which has been shown to produce good results in semantic tasks (Sahlgren, 2006; Bullinaria and Levy, 2012).
We use five standard benchmark tests in these experiments; two multiple-choice vocabulary tests (the TOEFL synonyms and the ESL synonyms), and three similarity/relatedness rating benchmarks (SimLex-999 (SL) (Hill et al., 2015), MEN (Bruni et al., 2014), and Stanford Rare Words (RW) (Luong et al., 2013)). The vocabulary tests measure the synonym relation, while the similarity rating tests measure a broader notion of semantic similarity (SL and RW) or relatedness (MEN). 4 The results for the vocabulary tests are given in accuracy (i.e., percentage of correct answers), while the results for the similarity tests are given in Spearman rank correlation.
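The two evaluation protocols can be summarized as follows (a sketch under the assumption that vec returns a word's distributional vector; this is not the evaluation script actually used):

```python
import numpy as np
from scipy.stats import spearmanr

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def choice_accuracy(items, vec):
    """Multiple-choice accuracy (TOEFL, ESL); items are (target, candidates, gold)."""
    hits = [max(cands, key=lambda c: cos(vec(t), vec(c))) == gold
            for t, cands, gold in items]
    return float(np.mean(hits))

def rating_correlation(pairs, human_ratings, vec):
    """Spearman rank correlation (SL, MEN, RW) between model and human scores."""
    model_scores = [cos(vec(a), vec(b)) for a, b in pairs]
    rho, _ = spearmanr(model_scores, human_ratings)
    return rho
```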
Comparison by data size

Table 1 summarizes the results over the different test settings. The most notable aspect of these results is that the neural network models do not produce competitive results for the smaller data, which corroborates the results by Asr et al. (2016). The best results for the smallest data are produced by the factorized models, with both TSVD and ISVD producing top scores in different test settings. It should be noted, however, that even the top scores for the smallest data set are substandard; only two models (PPMI and TSVD) manage to beat the random baseline of 25% for the TOEFL test, and none of the models manage to beat the random baseline for the ESL test.
The ISVD model produces consistently good results; it yields the best overall results for the 10 million and 100 million-word data, and is competitive with SGNS on the 1 billion-word data. Figure 1 shows the average results and their standard deviations over all test settings. 5 It is obvious that there are no huge differences between the various models, with the exception of the baseline CO model, which consistently underperforms. The TSVD and RI models have comparable performance across the different data sizes, which is systematically lower than that of the PPMI model. The ISVD model is the most consistently good model, with the neural network-based models steadily improving as the data becomes bigger.
Looking at the different datasets, SL and RW are the hardest ones for all the models. In the case of SL, this confirms the results in (Hill et al., 2015), and might be due to the general bias of DSMs towards semantic relatedness, rather than genuine semantic similarity, as represented in SL. The substandard performance on RW might instead be due to the low frequency of the target items. It is interesting to note that these are benchmark tests in which neural models perform the worst even when trained on the largest data.
Comparison by frequency range
In order to investigate how each model handles different frequency ranges, we split the test items into three different classes that contain about a third of the frequency mass of the test items each. This split was produced by collecting all test items into a common vocabulary, and then sorting this vocabulary by its frequency in the ukWaC 1 billion-word corpus. We split the vocabulary into 3 equally large parts; the HIGH range with frequencies ranging from 3,515,086 ("do") to 16,830 ("organism"), the MEDIUM range with frequencies ranging between 16,795 ("desirable") and 729 ("prickly"), and the LOW range with frequencies ranging from 728 ("boardwalk") down to hapax legomena. We then split each individual test into these three ranges, depending on the frequencies of the test items. Test pairs were included in a given frequency class if and only if both the target and its relatum occur in the frequency range for that class. For test items whose constituent words belong to different frequency ranges, which is the most common case, we use a separate MIXED class. The resulting four classes contain 1,387 items for the HIGH range, 656 items for the MEDIUM range, 350 items for the LOW range, and 3,458 items for the MIXED range. 6

Table 2 shows the average results over the different frequency ranges for the various DSMs trained on the 1 billion-word ukWaC data. We also include the highest and lowest individual test scores (signified by ↑ and ↓), in order to get an idea about the consistency of the results. As can be seen in the table, the most consistent model is ISVD, which produces the best results in both the MEDIUM and MIXED frequency ranges. The neural network models SGNS and CBOW produce the best results in the HIGH and LOW range, respectively, with CBOW clearly outperforming SGNS in the latter case. The major difference between these models is that CBOW predicts a word based on the context, while SGNS predicts the context based on the current word. Clearly, the former approach is more beneficial for low-frequent items.
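The class assignment described above can be reconstructed roughly as follows (our sketch, not the original script; the thresholds are the range boundaries quoted in the text):

```python
def freq_class(word, freq, high=16830, medium=729):
    """Assign a word to HIGH/MEDIUM/LOW based on its ukWaC frequency."""
    f = freq.get(word, 0)
    return "HIGH" if f >= high else "MEDIUM" if f >= medium else "LOW"

def split_pairs(pairs, freq):
    """Pairs whose words fall in different ranges go to the MIXED class."""
    classes = {"HIGH": [], "MEDIUM": [], "LOW": [], "MIXED": []}
    for a, b in pairs:
        ca, cb = freq_class(a, freq), freq_class(b, freq)
        classes[ca if ca == cb else "MIXED"].append((a, b))
    return classes
```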
The PPMI, TSVD and RI models perform similarly across the frequency ranges, with RI producing somewhat lower results in the MEDIUM range, and TSVD producing somewhat lower results in the LOW range. The CO model underperforms in all frequency ranges. Worth noting is the fact that all models that are based on an explicit matrix (i.e. CO, PPMI, TSVD and ISVD) produce better results in the MEDIUM range than in the HIGH range.
The arguably most interesting results are in the LOW range. Unsurprisingly, there is a general and significant drop in performance for low-frequency items, but with interesting differences among the various models. As already mentioned, the CBOW model produces the best results, closely followed by PPMI and RI. It is noteworthy that the low-dimensional embeddings of the CBOW model only give a modest improvement over the high-dimensional explicit vectors of PPMI. The worst results are produced by the ISVD model, which scores even lower than the baseline CO model. This might be explained by the fact that ISVD removes the latent dimensions with largest variance, which are arguably the most important dimensions for very low-frequent items. Increasing the number of latent dimensions with high variance in the ISVD model improves the results in the LOW range (16.59 when removing only the top 100 dimensions).
Conclusion
Our experiments confirm the results of Asr et al. (2016), who show that neural network-based models are suboptimal to use for smaller amounts of data. On the other hand, our results also show that none of the standard DSMs work well in situations with small data. It might be an interesting novel research direction to investigate how to design DSMs that are applicable to small-data scenarios.
Our results demonstrate that the inverted factorized model (ISVD) produces the most robust results over data of varying sizes, and across several different test settings. We interpret this finding as further corroborating the results of Bullinaria and Levy (2012) and Österlund et al. (2015), with the conclusion that the inverted factorized model is a robust competitive alternative to the widely used SGNS and CBOW neural network-based models.
We have also investigated the performance of the various models on test items in different frequency ranges, and our results in these experiments demonstrate that all tested models perform optimally in the medium-to-high frequency ranges. Interestingly, all models based on explicit count matrices (CO, PPMI, TSVD and ISVD) produce somewhat better results for items of medium frequency than for items of high frequency. The neural network-based models and ISVD, on the other hand, produce the best results for high-frequent items.
None of the tested models perform optimally for low-frequent items. The best results for low-frequent test items in our experiments were produced by the CBOW model, the PPMI model and the RI model, all of which use weighted context items without any explicit factorization. By contrast, the ISVD model underperforms significantly for the low-frequent items, which we suggest is an effect of removing latent dimensions with high variance.
This interpretation suggests that it might be interesting to investigate hybrid models that use different processing models -or at least different parameterizations -for different frequency ranges, and for different data sizes. We leave this as a suggestion for future research.
Figure 1: Average results and standard deviation over all tests.
Table 1: Results for DSMs trained on data of varying sizes.
DSM     TOEFL   ESL     SL      MEN     RW

1 million words
CO      17.50   20.00   −1.64   10.72   −3.96
PPMI    26.25   18.00    8.28   21.49   −2.57
TSVD    27.50   20.00    4.43   22.15   −1.56
ISVD    22.50   14.00   14.33   19.74    5.31
RI      20.00   16.00    5.65   17.94    1.92
SGNS    15.00    8.00    3.64   12.34    1.46
CBOW    15.00   10.00   −0.16   11.59    1.39

10 million words
CO      40.00   22.00    4.77   15.20    0.95
PPMI    52.50   38.00   26.44   39.83    4.00
TSVD    38.75   30.00   19.27   34.33    5.53
ISVD    45.00   44.00   30.19   44.21    9.88
RI      47.50   24.00   20.44   34.56    3.32
SGNS    43.75   42.00   28.30   26.59    2.38
CBOW    40.00   30.00   22.22   28.33    3.04

100 million words
CO      45.00   30.00   10.00   19.36    3.12
PPMI    66.25   54.00   33.75   46.74   15.05
TSVD    46.25   34.00   25.11   42.49   13.00
ISVD    66.25   66.00   40.98   54.55   21.27
RI      55.00   48.00   32.31   45.71   10.15
SGNS    65.00   58.00   40.75   52.83   11.73
CBOW    61.25   46.00   36.15   48.30   15.62

1 billion words
CO      55.00   40.00   11.85   21.83    6.82
PPMI    71.25   54.00   35.69   52.95   24.29
TSVD    56.25   46.00   31.36   52.05   13.35
ISVD    71.25   66.00   44.77   60.11   28.46
RI      61.25   50.00   35.35   50.51   18.58
SGNS    76.25   66.00   41.94   67.03   24.50
CBOW    75.00   56.00   38.31   59.84   22.80
Table 2: Average results for DSMs over four different frequency ranges for the items in the TOEFL, ESL, SL, MEN, and RW tests. All DSMs are trained on the 1 billion-word data.
2 We use w(b) = e^{−λ·f(b)/V}, where f(b) is the frequency of context item b, V is the total number of unique context items seen thus far (i.e. the current size of the growing vocabulary), and λ is a constant that we set to 60 (Sahlgren et al., 2016).
3 Such drastic reduction has a negative effect on the performance of the factorized methods for the 1 billion-word data, but unfortunately is necessary for computational reasons.

4 It is likely that the results on the similarity tests could be improved by using a wider context window, but such improvement would probably be consistent across all models, and is thus outside the scope of this paper.
5 Although rank correlation is not directly comparable with accuracy, they are both bounded between zero and one, which means we can take the average to get an idea about overall performance.
6 233 test terms did not occur in the 1 billion-word corpus.
Acknowledgements

This research was supported by the Swedish Research Council under contract 2014-28199.
References

Fatemeh Asr, Jon Willits, and Michael Jones. 2016. Comparing predictive and co-occurrence based models of lexical semantics trained on child-directed speech. In Proceedings of CogSci.

Marco Baroni, Georgiana Dinu, and Germán Kruszewski. 2014. Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of ACL, pages 238-247.

Elia Bruni, Nam Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. Journal of Artificial Intelligence Research, 49(1):1-47, January.

John Bullinaria and Joseph P. Levy. 2012. Extracting semantic representations from word co-occurrence statistics: stop-lists, stemming, and SVD. Behavior Research Methods, 44:890-907.

Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of ICML, pages 160-167.

Jeffrey L. Elman. 1990. Finding structure in time. Cognitive Science, 14:179-211.

Adriano Ferraresi, Eros Zanchetta, Marco Baroni, and Silvia Bernardini. 2008. Introducing and evaluating ukWaC, a very large web-derived corpus of English. In Proceedings of WAC-4, pages 47-54.

Felix Hill, Roi Reichart, and Anna Korhonen. 2015. SimLex-999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics, 41(4):665-695.

Pentti Kanerva, Jan Kristofersson, and Anders Holst. 2000. Random indexing of text samples for latent semantic analysis. In Proceedings of CogSci, page 1036.

Thomas K. Landauer and Susan T. Dumais. 1997. A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104(2):211-240.

Omer Levy and Yoav Goldberg. 2014. Linguistic regularities in sparse and explicit word representations. In Proceedings of CoNLL, pages 171-180.

Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Transactions of the Association for Computational Linguistics, 3:211-225.

Minh-Thang Luong, Richard Socher, and Christopher D. Manning. 2013. Better word representations with recursive neural networks for morphology. In Proceedings of CoNLL, pages 104-113.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS, pages 3111-3119.

Arvid Österlund, David Ödling, and Magnus Sahlgren. 2015. Factorization of latent variables in distributional semantic models. In Proceedings of EMNLP, pages 227-231.

Brian Riordan and Michael N. Jones. 2011. Redundancy in perceptual and linguistic experience: Comparing feature-based and distributional models of semantic representation. Topics in Cognitive Science, 3(2):303-345.

Magnus Sahlgren, Anders Holst, and Pentti Kanerva. 2008. Permutations as a means to encode order in word space. In Proceedings of CogSci, pages 1300-1305.

Magnus Sahlgren, Amaru Cuba Gyllensten, Fredrik Espinoza, Ola Hamfors, Anders Holst, Jussi Karlgren, Fredrik Olsson, Per Persson, and Akshay Viswanathan. 2016. The Gavagai Living Lexicon. In Proceedings of LREC.

Magnus Sahlgren. 2006. The Word-Space Model. Ph.D. thesis, Stockholm University.

Tobias Schnabel, Igor Labutov, David Mimno, and Thorsten Joachims. 2015. Evaluation methods for unsupervised word embeddings. In Proceedings of EMNLP, pages 298-307.

Julie Weeds, David Weir, and Diana McCarthy. 2004. Characterising measures of lexical distributional similarity. In Proceedings of COLING, pages 1015-1021.
| [] |
[
"A Comparative Study on Speaker-attributed Automatic Speech Recognition in Multi-party Meetings",
"A Comparative Study on Speaker-attributed Automatic Speech Recognition in Multi-party Meetings"
] | [
"Fan Yu \nSchool of Computer Science\nAudio, Speech and Language Processing Group (ASLP@NPU)\nNorthwestern Polytechnical University\nXi'anChina\n",
"Zhihao Du ",
"Shiliang Zhang ",
"Yuxiao Lin \nCollege of Computer Science and Technology\nZhejiang University\nHangzhouChina\n",
"Lei Xie lxie@nwpu.edu.cn \nSchool of Computer Science\nAudio, Speech and Language Processing Group (ASLP@NPU)\nNorthwestern Polytechnical University\nXi'anChina\n"
] | [
"School of Computer Science\nAudio, Speech and Language Processing Group (ASLP@NPU)\nNorthwestern Polytechnical University\nXi'anChina",
"College of Computer Science and Technology\nZhejiang University\nHangzhouChina",
"School of Computer Science\nAudio, Speech and Language Processing Group (ASLP@NPU)\nNorthwestern Polytechnical University\nXi'anChina"
] | [] | In this paper, we conduct a comparative study on speakerattributed automatic speech recognition (SA-ASR) in the multiparty meeting scenario, a topic with increasing attention in meeting rich transcription. Specifically, three approaches are evaluated in this study. The first approach, FD-SOT, consists of a frame-level diarization model to identify speakers and a multitalker ASR to recognize utterances. The speaker-attributed transcriptions are obtained by aligning the diarization results and the recognized hypotheses. However, due to the modular independence, such an alignment strategy may suffer from erroneous timestamps which severely hinder the model performance. Therefore, we propose the second approach, WD-SOT, to address alignment errors by introducing a word-level diarization model, which can get rid of such timestamp alignment dependency. To further mitigate the alignment issues, we propose the third approach, TS-ASR, which trains a target-speaker separation module and an ASR module jointly. By comparing various strategies for each SA-ASR approach, experimental results on a real meeting scenario corpus, AliMeeting, reveal that the WD-SOT approach achieves 10.7% relative reduction on averaged speaker-dependent character error rate (SD-CER), compared with the FD-SOT approach. In addition, the TS-ASR approach also outperforms the FD-SOT approach and brings 16.5% relative average SD-CER reduction. | 10.21437/interspeech.2022-11210 | [
"https://arxiv.org/pdf/2203.16834v3.pdf"
] | 247,839,164 | 2203.16834 | 29ea9c4243563dc1a04b82ead8fc8ea0c35dd207 |
A Comparative Study on Speaker-attributed Automatic Speech Recognition in Multi-party Meetings
Fan Yu
School of Computer Science
Audio, Speech and Language Processing Group (ASLP@NPU)
Northwestern Polytechnical University
Xi'an, China
Zhihao Du
Shiliang Zhang
Yuxiao Lin
College of Computer Science and Technology
Zhejiang University
Hangzhou, China
Lei Xie lxie@nwpu.edu.cn
School of Computer Science
Audio, Speech and Language Processing Group (ASLP@NPU)
Northwestern Polytechnical University
Xi'an, China
A Comparative Study on Speaker-attributed Automatic Speech Recognition in Multi-party Meetings
Index Terms: rich transcription, speaker-attributed, multi-speaker ASR, AliMeeting
In this paper, we conduct a comparative study on speaker-attributed automatic speech recognition (SA-ASR) in the multi-party meeting scenario, a topic with increasing attention in meeting rich transcription. Specifically, three approaches are evaluated in this study. The first approach, FD-SOT, consists of a frame-level diarization model to identify speakers and a multi-talker ASR to recognize utterances. The speaker-attributed transcriptions are obtained by aligning the diarization results and the recognized hypotheses. However, due to the modular independence, such an alignment strategy may suffer from erroneous timestamps which severely hinder the model performance. Therefore, we propose the second approach, WD-SOT, to address alignment errors by introducing a word-level diarization model, which can get rid of such timestamp alignment dependency. To further mitigate the alignment issues, we propose the third approach, TS-ASR, which trains a target-speaker separation module and an ASR module jointly. By comparing various strategies for each SA-ASR approach, experimental results on a real meeting scenario corpus, AliMeeting, reveal that the WD-SOT approach achieves 10.7% relative reduction on averaged speaker-dependent character error rate (SD-CER), compared with the FD-SOT approach. In addition, the TS-ASR approach also outperforms the FD-SOT approach and brings 16.5% relative average SD-CER reduction.
Introduction
Speaker-attributed automatic speech recognition (SA-ASR) is the major purpose of rich transcription in real-world multi-party meetings [1, 2]. In general, SA-ASR aims at answering the question "who spoke what" [3-6]. Compared with multi-speaker ASR [7-9], SA-ASR not only focuses on transcribing multi-speaker speech that may contain overlapped segments from different speakers, but also assigns a speaker label to each recognized word. Therefore, an SA-ASR system must carefully consider all involved modules, such as speaker diarization [10-12] to count and identify speakers, speech separation [13-15] to handle overlapping speech, and ASR [16-18] to recognize speech content from the separated signals.
The accuracy of SA-ASR is affected by both transcript prediction and speaker assignment. Recently, a lot of effort has been made on designing end-to-end systems that directly output multi-speaker transcriptions [7, 9, 19-21]. Speech separation jointly trained with multiple ASR decoders under the permutation invariant training (PIT) scheme is the typical approach [7, 19-21]. However, the maximum number of speakers that the model can handle is constrained by the number of decoders in the model. Besides, duplicated hypotheses can be generated in different outputs, since the outputs are independent of one another in PIT. To mitigate these issues, the serialized output training (SOT) strategy [9] was proposed for multi-talker ASR, which introduces a special symbol to represent speaker change with only one output layer. In this way, SOT-based models have no constraint on the maximum number of speakers and naturally avoid duplicated hypotheses. In the recent M2MeT challenge, which focuses on multi-speaker ASR in the meeting scenario [22, 23], SOT has been applied with remarkable performance. Therefore, in this paper, we combine the multi-speaker hypotheses of an SOT-based ASR model and the frame-level results of a speaker diarization model (e.g., TS-VAD [24, 25]) to obtain speaker-attributed transcriptions by simply aligning the timestamps, as our first approach, namely frame-level diarization with SOT (FD-SOT).
*Lei Xie is the corresponding author.
However, due to the modular independence, such an alignment strategy may suffer from erroneous timestamps, which severely hinder speaker assignment performance. Considering that the lack of correlation between the ASR and speaker diarization modules may cause potential alignment errors, recently updated revisions of SOT introduced a speaker inventory and a speaker encoder to produce a speaker label for each ASR token [26-28]. Although they significantly reduce the speaker-attributed word error rate, these approaches require costly changes to ASR model structures and training pipelines, which is not practical in real production systems. In order to retain the original ASR network structure designed for a single speaker while addressing the alignment errors, we propose the second approach, named word-level diarization with SOT (WD-SOT), which utilizes the recognized transcriptions from an SOT-based ASR to drive the diarization system, getting rid of the timestamp alignment dependency. Meanwhile, we adopt self-attention to capture more contextual information, which can further improve diarization performance.
Both FD-SOT and WD-SOT depend on the output of the SOT model, and errors of SOT will seriously affect the performance of the overall framework. Therefore, we turn to another solution for SA-ASR that uses a separation model to handle overlapped speech while getting rid of the dependence on the multi-talker ASR output [7, 19, 29-32]. As speech separation models are usually trained with a signal-level criterion, a joint training strategy was proposed to mitigate the mismatch between the separation front-end and the back-end ASR, leading to better recognition performance [7, 19, 31, 32]. However, most prior works adopted blind source separation (BSS), which cannot determine the specific speaker of the recognized transcriptions. In order to fit the SA-ASR task, we use target speech separation (TSS), adopting speaker diarization and speaker extraction models to obtain the speaker embeddings required by the TSS module. In this paper, we name this the joint target-speaker separation and ASR (TS-ASR) approach.
Although most of the works described above show promising results in multi-speaker ASR, they are mostly evaluated on simulated multi-speaker data. Problems including an unknown number of speakers, variable overlap rates, and accurate speaker identity are still considered unsolved, especially for real-world applications. SA-ASR for multi-speaker speech remains a very difficult problem, especially when speech utterances from multiple speakers significantly overlap in monaural recordings. To the best of our knowledge, we are the first to evaluate SA-ASR approaches on a real meeting corpus, AliMeeting, to provide reasonable SA-ASR results and promote research in meeting rich transcription.
Evaluation Data
In this study, we use the AliMeeting corpus [22, 23] to evaluate various SA-ASR systems. Collected in real meetings, the AliMeeting corpus contains 104.75 hours of data (hours are calculated on a single channel of audio) for training (Train), 4 hours for evaluation (Eval) and 10 hours for test (Test). Each set contains several meeting sessions, and each session consists of a 15- to 30-minute discussion among 2 to 4 participants. To highlight speaker overlap, sessions with 4 participants account for 59%, 50% and 57% of the sessions in Train, Eval and Test, respectively. The Train and Eval sets contain the 8-channel audio recorded from the microphone array (Ali-far) as well as the near-field audio (Ali-near) from each participant's headset microphone, while the Test set only contains the far-field audio. Ali-far-bf is produced by applying a CDDMA beamformer [33, 34]. In this paper, model training and evaluation are all based on single-channel audio, namely Ali-near and Ali-far-bf. The prefixes Train-, Eval- and Test- are used to indicate the different sets, e.g., Train-Ali-far-bf denotes the single-channel data output by the beamformer, which takes the AliMeeting 8-channel array Train data as input. We use the official scripts provided by the M2MeT challenge (https://github.com/yufan-aslp/AliMeeting) to prepare the sentence segmentation timestamps. Meanwhile, in order to improve the performance of the speech separation module used in this paper, we simulate 50 hours of mixed training data, named Train-Ali-simu, from Train-Ali-near.
SA-ASR
SOT
The SOT method has an excellent ability to model the dependencies among outputs for different speakers and no longer has a limitation on the maximum number of speakers. To recognize multiple utterances, SOT serializes the multiple references into a single token sequence with a special token <sc>, which is used to concatenate the transcriptions of the different utterances. In order to avoid the complex calculation of PIT over all possible concatenation patterns, SOT sorts the reference labels by their start times, i.e., "first-in, first-out" (FIFO). Experiments show that the FIFO method achieves a better CER than calculating all permutations [9]. Meanwhile, considering the high overlap ratio and frequent speaker turns in the AliMeeting corpus, we experimentally investigate the speaker-based and utterance-based FIFO training schemes [35] in Section 4. Remarkably, we employ a Transformer [16] model with a Conformer encoder [17], which includes 12 encoder layers and 6 decoder layers. The common parameters of the encoder and decoder layers are $d_{head} = 4$, $d_{attn} = 256$ and $d_{ff} = 2048$ for the number of attention heads, the dimension of the attention module, and the dimension of the feed-forward layer, respectively.
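For concreteness, the following is a minimal sketch of utterance-based FIFO target construction. The Utterance fields and the literal "<sc>" string are illustrative assumptions, not the exact training recipe used here.

```python
# Minimal sketch of utterance-based FIFO serialization for SOT targets.
# The Utterance fields and the literal "<sc>" token are assumptions.
from dataclasses import dataclass

@dataclass
class Utterance:
    start: float  # utterance start time in seconds
    text: str     # reference transcription

def serialize_sot(utterances, sc_token="<sc>"):
    """Sort references by start time (first-in, first-out) and join them
    with the speaker-change token into a single target sequence."""
    ordered = sorted(utterances, key=lambda u: u.start)
    return f" {sc_token} ".join(u.text for u in ordered)

refs = [Utterance(1.2, "how are you"), Utterance(0.4, "good morning")]
print(serialize_sot(refs))  # -> "good morning <sc> how are you"
```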
FD-SOT
As the top three teams all employed TS-VAD to find the overlap between speakers in the M2MeT challenge speaker diarization track [23], we re-implemented it and achieved DERs of 4.20% and 5.42% on the AliMeeting Eval and Test sets, respectively. To further obtain the speaker-attributed transcriptions, we combine the results of TS-VAD and SOT by aligning the timestamps. This approach is named frame-level diarization with SOT (FD-SOT). The detailed process of FD-SOT is as follows (a minimal sketch of this matching heuristic is given after the list):
1) We estimate the number of utterances, say $\hat{N}$, from the diarization output of TS-VAD, using the oracle sentence segmentation.
2) Let $N$ be the number of utterances in the SOT output. If $\hat{N}$ is equal to $N$, no further effort is required.
3) If $\hat{N}$ is larger than $N$, we select the $N$ utterances with the longest duration in the TS-VAD diarization output, and discard the other utterances.
4) If $\hat{N}$ is smaller than $N$, we select the $\hat{N}$ utterances with the longest text length in the SOT output, and discard the other utterances.
5) Finally, we match the utterances between TS-VAD and SOT in chronological order.
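The sketch below implements this heuristic under assumed data structures (tuples for diarization segments, strings for hypotheses); it is illustrative rather than the exact production code.

```python
# Hedged sketch of the FD-SOT alignment heuristic above; the segment and
# hypothesis structures are illustrative assumptions.
def align_fd_sot(diar_segments, sot_hyps):
    """diar_segments: list of (speaker_id, start, duration) from TS-VAD.
    sot_hyps: chronologically ordered hypotheses split at the <sc> token.
    Returns (speaker_id, hypothesis) pairs matched in chronological order."""
    n_hat, n = len(diar_segments), len(sot_hyps)
    if n_hat > n:
        # keep the N diarization utterances with the longest duration
        diar_segments = sorted(diar_segments, key=lambda s: s[2],
                               reverse=True)[:n]
    elif n_hat < n:
        # keep the N_hat hypotheses with the longest text, preserving order
        keep = sorted(sorted(range(n), key=lambda i: len(sot_hyps[i]),
                             reverse=True)[:n_hat])
        sot_hyps = [sot_hyps[i] for i in keep]
    diar_segments = sorted(diar_segments, key=lambda s: s[1])  # by start time
    return [(spk, hyp)
            for (spk, _, _), hyp in zip(diar_segments, sot_hyps)]
```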
WD-SOT
In the proposed word-level diarization (WD) method, as shown in Fig. 1, we first use three individual encoders to encode the multi-talker hypotheses, the speech features and the speaker embeddings. Given the encoded hypotheses $H = \{h_l \mid l = 1, \ldots, L\}$ and features $X = \{x_t \mid t = 1, \ldots, T\}$, a multi-head attention is used to produce the aggregated feature representation $r_l$ for each token:
$$r_l = \sum_{t=1}^{T} a_{l,t} W_v x_t \tag{1}$$
which is thought to include both acoustic and semantic information. The weight $a_{l,t}$ is calculated by
$$\alpha_{l,t} = \mathrm{dot}(W_q h_l, W_k x_t), \qquad a_{l,t} = \frac{\exp(\alpha_{l,t})}{\sum_{i=1}^{T} \exp(\alpha_{l,i})}. \tag{2}$$
$W_q$, $W_k$ and $W_v$ are trainable parameters in the attention layer. Next, the context-independent (CI) score $S^{CI}_{l,n}$ is derived as the dot product between the encoded speaker embeddings $v_n$ and the aggregated representations $r_l$:

$$S^{CI}_{l,n} = \mathrm{dot}(r_l, v_n). \tag{3}$$
While CI scores only consider the representations of the current speaker, the contextual information of different speakers is also useful to distinguish the activated speaker from the others. Therefore, we further design a context-dependent (CD) score $S^{CD}_{l,n}$, defined as follows:
$$S^{CD}_{l,n} = f(r_l, v_n; R, \Theta) \tag{4}$$
where $f$ is a context-aware function, e.g., a self-attention based network (SAN) [16], and $\Theta$ are the learnable parameters of $f$. $R = \{r_l \mid l = 1, \ldots, L\}$ contains all aggregated representations in an utterance. Finally, the CI and CD scores are concatenated and fed to a post-processing network to predict the corresponding speaker for each character.
In this study, a four-layer self-attention based encoder is employed to encode the recognized text with 8 attention heads and 256 hidden units in each layer.
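For concreteness, the following is a simplified PyTorch sketch of the scoring pipeline in Eqs. (1)-(4). The module sizes, the use of a TransformerEncoder as the context-aware function $f$, and the linear fusion of CI/CD scores are assumptions for illustration, not the exact architecture.

```python
# Simplified sketch of the WD scorer (Eqs. 1-4); sizes are assumptions.
import torch
import torch.nn as nn

class WordLevelDiarizer(nn.Module):
    def __init__(self, d_model=256, n_ctx_layers=2, n_heads=8):
        super().__init__()
        self.W_q = nn.Linear(d_model, d_model, bias=False)
        self.W_k = nn.Linear(d_model, d_model, bias=False)
        self.W_v = nn.Linear(d_model, d_model, bias=False)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.ctx = nn.TransformerEncoder(layer, n_ctx_layers)  # f(.) in Eq. 4
        self.post = nn.Linear(2, 1)  # fuse the concatenated CI and CD scores

    def forward(self, h, x, v):
        # h: (L, d) encoded hypothesis tokens, x: (T, d) acoustic features,
        # v: (N, d) encoded speaker embeddings
        alpha = self.W_q(h) @ self.W_k(x).t()        # (L, T), Eq. 2
        a = torch.softmax(alpha, dim=-1)
        r = a @ self.W_v(x)                          # (L, d), Eq. 1
        s_ci = r @ v.t()                             # (L, N), Eq. 3
        r_ctx = self.ctx(r.unsqueeze(0)).squeeze(0)  # context over all of R
        s_cd = r_ctx @ v.t()                         # (L, N), Eq. 4
        scores = self.post(torch.stack([s_ci, s_cd], dim=-1)).squeeze(-1)
        return scores  # per-token speaker logits; argmax gives the speaker
```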
TS-ASR
The target-speaker separation module generates a target-speaker representation from the multi-speaker signal, conditioned on an enrollment embedding, for the downstream ASR. Under the premise that optimizing the front-end and back-end separately leads to sub-optimal performance, joint modeling makes the whole system match the final evaluation metric. Meanwhile, we adopt the TS-VAD model described in Section 3.2 and a d-vector extraction network to obtain the speaker embeddings required by the target-speaker separation module.
For the structure of the target-speaker separation model, we compare the performance of a Conformer [17] and a convolutional recurrent network (CRN) [36]. The Conformer separation network consists of 6 encoder layers with 4 attention heads, 256 attention dimensions and a 2048-dimensional feed-forward network. The CRN consists of 2 bidirectional long short-term memory (BLSTM) layers with 256 hidden dimensions and a convolutional encoder-decoder (CED) with skip connections between corresponding encoder and decoder layers. We employ a Res2Net-based d-vector extraction network trained on the VoxCeleb corpus [37, 38] as the speaker embedding model. Note that the d-vector is combined with both front-end models using feature-wise linear modulation (FiLM) [39], which shows better performance on multi-speaker and noisy datasets (a minimal FiLM sketch is given below). In order to keep the TS-ASR model parameters consistent with the ASR models of the previous approaches, we adopt an ASR model with 6 encoder layers.
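The sketch below illustrates FiLM-style speaker conditioning; the layer sizes and the placement of the modulation are assumptions for illustration, not the exact front-end used here.

```python
# Minimal sketch of FiLM-style speaker conditioning [39] for the
# target-speaker separation front-end; layer sizes are assumptions.
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Modulate hidden features with a speaker d-vector: y = gamma*x + beta."""
    def __init__(self, feat_dim=256, spk_dim=256):
        super().__init__()
        self.to_gamma = nn.Linear(spk_dim, feat_dim)
        self.to_beta = nn.Linear(spk_dim, feat_dim)

    def forward(self, x, d_vector):
        # x: (batch, time, feat_dim), d_vector: (batch, spk_dim)
        gamma = self.to_gamma(d_vector).unsqueeze(1)  # broadcast over time
        beta = self.to_beta(d_vector).unsqueeze(1)
        return gamma * x + beta

# Usage: condition a separator block on the target speaker's embedding.
film = FiLM()
feats = torch.randn(4, 100, 256)  # hidden features from the front-end
dvec = torch.randn(4, 256)        # target-speaker d-vector
out = film(feats, dvec)           # same shape, speaker-modulated
```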
Experiments
Training details
In our work, the 80-dimensional log Mel-filterbank feature (Fbank) is used as the input feature, with a window size of 25 ms and a shift of 10 ms. We use 4950 Chinese characters extracted from the training transcriptions as the modeling units. SOT models are trained directly on Train-Ali-far-bf, Train-Ali-near and Train-Ali-simu. For the jointly trained front-end and back-end models, we first pre-train the two modules on Train-Ali-simu and Train-Ali-near, respectively. Then we fine-tune on the Train-Ali-far-bf data, using the ASR loss function to update the parameters of the whole model for joint training. We train the models for 100 epochs with the Adam optimizer.
Evaluation metric
We use two evaluation metrics in our experiments, referred to as speaker-independent (SI-) and speaker-dependent (SD-) character error rate (CER) [40]. The SI-CER is designed to measure the performance of the multi-speaker ASR task (e.g., track 2 of the M2MeT challenge [22, 23]) and ignores the speaker labels. The SD-CER, slightly different from cp-WER [28], is calculated by comparing the ASR hypothesis with the reference transcription of the corresponding speaker, instead of searching over all possible speaker permutations. The SD-CER is more rigorous, as it is computed globally over the entire meeting and thus requires determining the exact global speaker ID for each utterance (a minimal sketch is given below).
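As a rough illustration of SD-CER, hypotheses are scored against the reference of the same global speaker ID with no permutation search; the official scoring script may differ in details such as text normalization. The editdistance package is an assumed third-party dependency.

```python
# Hedged sketch of speaker-dependent CER over a whole meeting.
import editdistance  # pip install editdistance

def sd_cer(refs_by_spk, hyps_by_spk):
    """refs_by_spk / hyps_by_spk: dict mapping global speaker ID -> the
    concatenated character sequence for that speaker across the meeting."""
    errors, total = 0, 0
    for spk, ref in refs_by_spk.items():
        hyp = hyps_by_spk.get(spk, "")  # missing speaker -> all deletions
        errors += editdistance.eval(ref, hyp)
        total += len(ref)
    return errors / total

refs = {"spk1": "今天天气很好", "spk2": "我们开会吧"}
hyps = {"spk1": "今天天气好", "spk2": "我们开会"}
print(f"SD-CER = {sd_cer(refs, hyps):.2%}")
```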
Comparison of different SA-ASR approaches
As shown in Table 1, we evaluate our SA-ASR approaches on the AliMeeting Eval and Test sets. We first compare the speaker-based and utterance-based FIFO training schemes for the SOT models described in Section 3.1. Consistent with the conclusion of [35], we find that utterance-based FIFO training significantly outperforms speaker-based FIFO training, with over 1% absolute SI/SD-CER reduction for all SOT-based approaches, due to the high overlap ratio and frequent speaker turns of AliMeeting. Based on this conclusion, we employ the utterance-based FIFO scheme in the remaining experiments.
For the SOT-based models, we regard the SI-CER result of SOT as the topline of the SD-CER results, i.e., assuming that each token is matched to the correct speaker. We can see that our proposed WD-SOT approach outperforms the FD-SOT approach, leading to 12.2% (41.0% → 36.0%) and 9.6% (41.2% → 37.1%) relative SD-CER reduction on the Eval and Test sets, respectively. Compared with the SOT-based SA-ASR models, our proposed TS-ASR models achieve the lowest averaged SD-CER. Specifically, the TS-ASR (CRN) approach achieves SD-CERs of 32.5% and 35.1% on the Eval and Test sets, respectively.
Comparison of various strategies for WD-SOT
We further compare the effect of various strategies for the WD-SOT approach; the results are shown in Table 2. The first-row result of WD-SOT is trained using the ground-truth transcripts of Train-Ali-far-bf only. To increase the robustness of the model, we add the hypothetical transcriptions to the training set, which dramatically decreases the average SD-CER from 39.1% to 37.9%. From the comparison of contextual information, we conclude that when WD-SOT captures more contextual information through the self-attention layer, the performance improves significantly, achieving a 2.9% (37.9% → 36.8%) relative average SD-CER reduction. Our WD-SOT approach is based on the results of SOT, so the SOT performance seriously affects the performance of the whole framework. Considering the position offset of the separator <sc>, we investigate the impact of separator prediction accuracy on model performance. After replacing the predicted separators with oracle ones, the overall average SD-CER improves from 36.8% to 36.3%, especially on the Test set with a 2.5% relative SD-CER reduction.
Impact of minimum time of diarization utterances for TS-ASR approach
In the TS-ASR approach, we need to determine the speakers within an oracle sentence segment, which can rely on one of two sources of speaker information: the oracle speaker labels or the speaker diarization results. Surprisingly, we obtain better recognition performance by using the speaker diarization results (0s) for both TS-ASR approaches, compared with the oracle speaker labels, leading to 7.0%/8.2% and 2.5%/3.4% relative SD-CER reduction on the Eval and Test sets for the Conformer/CRN TS-ASR approaches, as shown in Table 3. By analyzing the decoding results, we find that some interfering speech is recognized when the target speaker's speech duration is short, resulting in a large number of insertion errors. Compared with the insertion errors caused by the oracle speaker labels covering all speaker speech, the deletion errors caused by the speaker diarization results ignoring short speaker speech are fewer. Based on this finding, we further investigate the impact of the minimum time of diarization utterances for the TS-ASR approaches. From the table, we can see that both TS-ASR models achieve the best results with a minimum time of 0.5 s, which brings an absolute SD-CER reduction ranging from 0.5% to 0.8% on the Eval and Test sets compared with the TS-ASR approaches that do not delete short speaker speech.
Conclusion
In this study, three SA-ASR approaches are evaluated on the AliMeeting corpus, a challenging meeting dataset with multi-talker conversations. Compared with the baseline approach, FD-SOT, the proposed WD-SOT approach addresses the alignment errors by introducing a word-level diarization model and results in a 10.7% relative average SD-CER reduction. To further get rid of the dependence on the multi-talker ASR output, the proposed TS-ASR approach trains a target-speaker separation module and an ASR module jointly, which leads to a 16.5% relative average SD-CER reduction compared with FD-SOT. Moreover, ignoring short diarization utterances brings a 0.8% absolute SD-CER reduction for the TS-ASR approach. In the future, we will investigate how to incorporate single-speaker ASR trained on large-scale data into our proposed approaches for real-world applications.
Acknowledgement
This work was supported by Alibaba Group through the Alibaba Research Intern Program and by the Key R&D Projects of the Ministry of Science and Technology (2020YFC0832500).
Figure 1: Semantic diagram of the proposed word-level diarization method.
Figure 2: Semantic diagram of the compared approaches: Approach 1 (baseline), frame-level diarization with SOT (FD-SOT); Approach 2, word-level diarization with SOT (WD-SOT); and Approach 3, joint target-speaker separation and ASR (TS-ASR), where the target speaker is selected by the diarization results.
Table 1: SA-ASR results for various modular approaches on Eval and Test sets (%).

Approach            Metric   Eval   Test   Average
SOT [22, 23]        SI-CER   29.7   30.9   30.6
FD-SOT              SD-CER   41.0   41.2   41.2
WD-SOT              SD-CER   36.0   37.1   36.8
TS-ASR (Conformer)  SD-CER   34.8   34.7   34.7
TS-ASR (CRN)        SD-CER   32.5   35.1   34.4
Table 2: Comparison of the WD-SOT results of various strategies on Eval and Test sets (SD-CER %).

Approach                        Eval   Test   Average
WD-SOT                          40.6   38.5   39.1
 + Hypothetical transcriptions  37.8   38.0   37.9
 + Contextual information       36.0   37.1   36.8
 + Oracle separator <sc>        36.4   36.2   36.3
Table 3: TS-ASR results with different minimum times of diarization utterances on Eval and Test sets (SD-CER %). The columns 0s-0.7s use speaker diarization results with the given minimum utterance time.

Approach   Set   Oracle   0s     0.3s   0.5s   0.7s
Conformer  Eval  37.4     34.8   34.4   34.3   34.3
CRN        Eval  35.4     32.5   32.0   31.9   31.9
Conformer  Test  35.6     34.7   34.3   34.2   34.3
CRN        Test  36.3     35.1   34.5   34.3   34.4
Effect of joint training for TS-ASR approach

The comparison between the Conformer and CRN TS-ASR approaches with different optimization strategies is shown in Table 4. Here, the front-end modules are pre-trained on the Train-Ali-simu set and the back-end modules on the Train-Ali-near data, respectively. The difference between the separated and joint training strategies is whether we use the ASR loss function to update the front-end module when fine-tuning the whole model on the Train-Ali-far-bf data. According to Table 4, joint optimization leads to 26.8% (47.4% → 34.7%) and 23.9% (45.1% → 34.3%) relative average SD-CER reduction for the Conformer and CRN TS-ASR approaches, respectively. We conclude that joint optimization makes the front-end module more suitable and less distorting for the back-end ASR.
Table 4: Comparison of separated vs. joint optimization for TS-ASR on Eval and Test sets (SD-CER %).

Approach   Optimization strategy  Eval   Test   Average
Conformer  Separated              46.0   48.0   47.4
Conformer  Joint                  34.8   34.7   34.7
CRN        Separated              43.3   45.8   45.1
CRN        Joint                  32.5   35.1   34.3
The rich transcription 2006 spring meeting recognition evaluation. J G Fiscus, J Ajot, M Michel, J S Garofolo, Proc. MLMI. MLMISpringerJ. G. Fiscus, J. Ajot, M. Michel, and J. S. Garofolo, "The rich tran- scription 2006 spring meeting recognition evaluation," in Proc. MLMI. Springer, 2006, pp. 309-322.
The rich transcription 2007 meeting recognition evaluation. J G Fiscus, J Ajot, J S Garofolo, Proc. MTPH. MTPHSpringerJ. G. Fiscus, J. Ajot, and J. S. Garofolo, "The rich transcription 2007 meeting recognition evaluation," in Proc. MTPH. Springer, 2007, pp. 373-389.
The fifth 'CHiME' speech separation and recognition challenge: Dataset, task and baselines. J Barker, S Watanabe, E Vincent, J Trmal, Proc. INTERSPEECH. ISCA. INTERSPEECH. ISCAJ. Barker, S. Watanabe, E. Vincent, and J. Trmal, "The fifth 'CHiME' speech separation and recognition challenge: Dataset, task and baselines," in Proc. INTERSPEECH. ISCA, 2018, pp. 1561-1565.
CHiME-6 Challenge: Tackling Multispeaker Speech Recognition for Unsegmented Recordings. S Watanabe, M Mandel, J Barker, Proc. CHiME. CHiMES. Watanabe, M. Mandel, J. Barker et al., "CHiME-6 Chal- lenge: Tackling Multispeaker Speech Recognition for Unseg- mented Recordings," in Proc. CHiME 2020, 2020, pp. 1-7.
The third DI-HARD diarization challenge. N Ryant, P Singh, V Krishnamohan, R Varma, K Church, C Cieri, J Du, S Ganapathy, M Liberman, arXiv:2012.01477arXiv preprintN. Ryant, P. Singh, V. Krishnamohan, R. Varma, K. Church, C. Cieri, J. Du, S. Ganapathy, and M. Liberman, "The third DI- HARD diarization challenge," arXiv preprint arXiv:2012.01477, 2020.
The AMI meeting corpus. I Mccowan, J Carletta, W Kraaij, S Ashby, S Bourban, M Flynn, Proc. ICMT. ICMTCiteseer88100I. McCowan, J. Carletta, W. Kraaij, S. Ashby, S. Bourban, M. Flynn et al., "The AMI meeting corpus," in Proc. ICMT, vol. 88. Citeseer, 2005, p. 100.
Recognizing multi-talker speech with permutation invariant training. D Yu, X Chang, Y Qian, Proc. INTERSPEECH. ISCA. INTERSPEECH. ISCAD. Yu, X. Chang, and Y. Qian, "Recognizing multi-talker speech with permutation invariant training," in Proc. INTERSPEECH. ISCA, 2017, pp. 2456-2460.
Progressive joint modeling in unsupervised single-channel overlapped speech recognition. Z Chen, J Droppo, J Li, W Xiong, Proc. TASLP. TASLP26Z. Chen, J. Droppo, J. Li, and W. Xiong, "Progressive joint mod- eling in unsupervised single-channel overlapped speech recogni- tion," Proc. TASLP, vol. 26, no. 1, pp. 184-196, 2017.
Serialized output training for end-to-end overlapped speech recognition. N Kanda, Y Gaur, X Wang, Z Meng, T Yoshioka, Proc. INTERSPEECH. ISCA, 2020. INTERSPEECH. ISCA, 2020N. Kanda, Y. Gaur, X. Wang, Z. Meng, and T. Yoshioka, "Seri- alized output training for end-to-end overlapped speech recogni- tion," in Proc. INTERSPEECH. ISCA, 2020, pp. 2797-2801.
A review of speaker diarization: Recent advances with deep learning. T J Park, N Kanda, D Dimitriadis, K J Han, S Watanabe, S Narayanan, Proc. CSL. CSL72101317T. J. Park, N. Kanda, D. Dimitriadis, K. J. Han, S. Watanabe, and S. Narayanan, "A review of speaker diarization: Recent advances with deep learning," Proc. CSL, vol. 72, p. 101317, 2022.
End-to-end neural speaker diarization with selfattention. Y Fujita, N Kanda, S Horiguchi, Y Xue, K Nagamatsu, S Watanabe, Proc. ASRU. IEEE. ASRU. IEEEY. Fujita, N. Kanda, S. Horiguchi, Y. Xue, K. Nagamatsu, and S. Watanabe, "End-to-end neural speaker diarization with self- attention," in Proc. ASRU. IEEE, 2019, pp. 296-303.
End-to-end speaker diarization for an unknown number of speakers with encoder-decoder based attractors. S Horiguchi, Y Fujita, S Watanabe, Y Xue, K Nagamatsu, Proc. IN-TERSPEECH. ISCA, 2020. IN-TERSPEECH. ISCA, 2020S. Horiguchi, Y. Fujita, S. Watanabe, Y. Xue, and K. Naga- matsu, "End-to-end speaker diarization for an unknown number of speakers with encoder-decoder based attractors," in Proc. IN- TERSPEECH. ISCA, 2020, pp. 269-273.
Permutation invariant training of deep models for speaker-independent multi-talker speech separation. D Yu, M Kolbaek, Z.-H Tan, J Jensen, Proc. ICASSP. IEEE. ICASSP. IEEED. Yu, M. Kolbaek, Z.-H. Tan, and J. Jensen, "Permutation invari- ant training of deep models for speaker-independent multi-talker speech separation," in Proc. ICASSP. IEEE, 2017, pp. 241-245.
Deep clustering: Discriminative embeddings for segmentation and separation. J R Hershey, Z Chen, J Le Roux, S Watanabe, Proc. ICASSP. IEEE. ICASSP. IEEEJ. R. Hershey, Z. Chen, J. Le Roux, and S. Watanabe, "Deep clus- tering: Discriminative embeddings for segmentation and separa- tion," in Proc. ICASSP. IEEE, 2016, pp. 31-35.
Deep attractor network for single-microphone speaker separation. Z Chen, Y Luo, N Mesgarani, Proc. ICASSP. IEEE. ICASSP. IEEEZ. Chen, Y. Luo, and N. Mesgarani, "Deep attractor network for single-microphone speaker separation," in Proc. ICASSP. IEEE, 2017, pp. 246-250.
Attention is all you need. A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, A N Gomez, Ł Kaiser, I Polosukhin, Proc. NeurIPS. NeurIPSA. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is all you need," in Proc. NeurIPS, 2017, pp. 5998-6008.
Conformer: Convolution-augmented transformer for speech recognition. A Gulati, J Qin, C.-C Chiu, N Parmar, Proc. INTERSPEECH. ISCA, 2020. INTERSPEECH. ISCA, 2020A. Gulati, J. Qin, C.-C. Chiu, N. Parmar et al., "Conformer: Convolution-augmented transformer for speech recognition," in Proc. INTERSPEECH. ISCA, 2020, pp. 5036-5040.
Recent advances in end-to-end automatic speech recognition. J Li, arXiv:2111.01690arXiv preprintJ. Li, "Recent advances in end-to-end automatic speech recogni- tion," arXiv preprint arXiv:2111.01690, 2021.
A purely end-to-end system for multi-speaker speech recognition. H Seki, T Hori, S Watanabe, J L Roux, J R Hershey, Proc. ACL. ACL. ACL. ACLH. Seki, T. Hori, S. Watanabe, J. L. Roux, and J. R. Hershey, "A purely end-to-end system for multi-speaker speech recognition," in Proc. ACL. ACL, 2018, pp. 2620-2630.
Continuous speech separation: Dataset and analysis. Z Chen, T Yoshioka, L Lu, T Zhou, Z Meng, Y Luo, J Wu, X Xiao, J Li, Proc. ICASSP. IEEE, 2020. ICASSP. IEEE, 2020Z. Chen, T. Yoshioka, L. Lu, T. Zhou, Z. Meng, Y. Luo, J. Wu, X. Xiao, and J. Li, "Continuous speech separation: Dataset and analysis," in Proc. ICASSP. IEEE, 2020, pp. 7284-7288.
MIMO-Speech: End-to-end multi-channel multi-speaker speech recognition. X Chang, W Zhang, Y Qian, J Le Roux, S Watanabe, Proc. ASRU. IEEE. ASRU. IEEEX. Chang, W. Zhang, Y. Qian, J. Le Roux, and S. Watanabe, "MIMO-Speech: End-to-end multi-channel multi-speaker speech recognition," in Proc. ASRU. IEEE, 2019, pp. 237-244.
M2MeT: The ICASSP 2022 multi-channel multi-party meeting transcription challenge. F Yu, S Zhang, Y Fu, L Xie, S Zheng, Z Du, Proc. ICASSP. IEEE. ICASSP. IEEEF. Yu, S. Zhang, Y. Fu, L. Xie, S. Zheng, Z. Du et al., "M2MeT: The ICASSP 2022 multi-channel multi-party meeting transcrip- tion challenge," in Proc. ICASSP. IEEE, 2022.
Summary on the ICASSP 2022 multi-channel multi-party meeting transcription grand challenge. F Yu, S Zhang, P Guo, Y Fu, Z Du, S Zheng, L Xie, Proc. ICASSP. IEEE. ICASSP. IEEEF. Yu, S. Zhang, P. Guo, Y. Fu, Z. Du, S. Zheng, L. Xie et al., "Summary on the ICASSP 2022 multi-channel multi-party meet- ing transcription grand challenge," in Proc. ICASSP. IEEE, 2022.
Target-speaker voice activity detection: a novel approach for multi-speaker diarization in a dinner party scenario. I Medennikov, M Korenevsky, T Prisyach, Y Khokhlov, M Korenevskaya, Proc. INTERSPEECH. ISCA, 2020. INTERSPEECH. ISCA, 2020I. Medennikov, M. Korenevsky, T. Prisyach, Y. Khokhlov, M. Ko- renevskaya et al., "Target-speaker voice activity detection: a novel approach for multi-speaker diarization in a dinner party scenario," in Proc. INTERSPEECH. ISCA, 2020, pp. 274-278.
Target-speaker voice activity detection with improved ivector estimation for unknown number of speaker. M He, D Raj, Z Huang, J Du, Z Chen, S Watanabe, arXiv:2108.03342arXiv preprintM. He, D. Raj, Z. Huang, J. Du, Z. Chen, and S. Watan- abe, "Target-speaker voice activity detection with improved i- vector estimation for unknown number of speaker," arXiv preprint arXiv:2108.03342, 2021.
Joint speaker counting, speech recognition, and speaker identification for overlapped speech of any number of speakers. N Kanda, Y Gaur, X Wang, Z Meng, Z Chen, T Zhou, T Yoshioka, Proc. INTERSPEECH. ISCA, 2020. INTERSPEECH. ISCA, 2020N. Kanda, Y. Gaur, X. Wang, Z. Meng, Z. Chen, T. Zhou, and T. Yoshioka, "Joint speaker counting, speech recognition, and speaker identification for overlapped speech of any number of speakers," in Proc. INTERSPEECH. ISCA, 2020, pp. 36-40.
End-to-end speaker-attributed ASR with transformer. N Kanda, G Ye, Y Gaur, X Wang, Z Meng, Z Chen, T Yoshioka, Proc. INTERSPEECH. ISCA, 2021. INTERSPEECH. ISCA, 2021N. Kanda, G. Ye, Y. Gaur, X. Wang, Z. Meng, Z. Chen, and T. Yoshioka, "End-to-end speaker-attributed ASR with trans- former," in Proc. INTERSPEECH. ISCA, 2021, pp. 4413-4417.
A comparative study of modular and joint approaches for speaker-attributed asr on monaural long-form audio. N Kanda, X Xiao, J Wu, T Zhou, Y Gaur, X Wang, Z Meng, Z Chen, T Yoshioka, arXiv:2107.02852arXiv preprintN. Kanda, X. Xiao, J. Wu, T. Zhou, Y. Gaur, X. Wang, Z. Meng, Z. Chen, and T. Yoshioka, "A comparative study of modular and joint approaches for speaker-attributed asr on monaural long-form audio," arXiv preprint arXiv:2107.02852, 2021.
Conv-TasNet: Surpassing ideal time-frequency magnitude masking for speech separation. Y Luo, N Mesgarani, Proc. TASLP. TASLP IEEE 27 Y. Luo and N. Mesgarani, "Conv-TasNet: Surpassing ideal time-frequency magnitude masking for speech separation," in Proc. TASLP, vol. 27, no. 8. IEEE, 2019, pp. 1256-1266.
Voicefilter: Targeted voice separation by speakerconditioned spectrogram masking. Q Wang, H Muckenhirn, K Wilson, P Sridhar, Z Wu, J R Hershey, Proc. INTERSPEECH. ISCA. INTERSPEECH. ISCAQ. Wang, H. Muckenhirn, K. Wilson, P. Sridhar, Z. Wu, J. R. Hershey et al., "Voicefilter: Targeted voice separation by speaker- conditioned spectrogram masking," in Proc. INTERSPEECH. ISCA, 2019, pp. 2728-2732.
Continuous speech separation: Dataset and analysis. Z Chen, T Yoshioka, L Lu, T Zhou, Z Meng, Y Luo, J Wu, X Xiao, J Li, Proc. ICASSP. IEEE, 2020. ICASSP. IEEE, 2020Z. Chen, T. Yoshioka, L. Lu, T. Zhou, Z. Meng, Y. Luo, J. Wu, X. Xiao, and J. Li, "Continuous speech separation: Dataset and analysis," in Proc. ICASSP. IEEE, 2020, pp. 7284-7288.
An end-to-end architecture of online multi-channel speech separation. J Wu, Z Chen, J Li, T Yoshioka, Z Tan, Proc. INTERSPEECH. ISCA, 2020. INTERSPEECH. ISCA, 2020J. Wu, Z. Chen, J. Li, T. Yoshioka, Z. Tan et al., "An end-to-end architecture of online multi-channel speech separation," in Proc. INTERSPEECH. ISCA, 2020, pp. 81-85.
Differential beamforming for uniform circular array with directional microphones. W Huang, J Feng, Proc. INTERSPEECH. ISCA, 2020. W. Huang and J. Feng, "Differential beamforming for uniform circular array with directional microphones," in Proc. INTERSPEECH. ISCA, 2020, pp. 71-75.
A real-time speaker diarization system based on spatial spectrum. S Zheng, W Huang, X Wang, H Suo, J Feng, Z Yan, Proc. ICASSP. IEEE, 2021. ICASSP. IEEE, 2021S. Zheng, W. Huang, X. Wang, H. Suo, J. Feng, and Z. Yan, "A real-time speaker diarization system based on spatial spectrum," in Proc. ICASSP. IEEE, 2021, pp. 7208-7212.
Investigation of end-to-end speaker-attributed asr for continuous multi-talker recordings. N Kanda, X Chang, Y Gaur, X Wang, Z Meng, Z Chen, Proc. SLT. IEEE, 2021. SLT. IEEE, 2021N. Kanda, X. Chang, Y. Gaur, X. Wang, Z. Meng, Z. Chen et al., "Investigation of end-to-end speaker-attributed asr for continuous multi-talker recordings," in Proc. SLT. IEEE, 2021, pp. 809-816.
A convolutional recurrent neural network for real-time speech enhancement. K Tan, D Wang, Proc. INTERSPEECH. ISCA. INTERSPEECH. ISCAK. Tan and D. Wang, "A convolutional recurrent neural network for real-time speech enhancement," in Proc. INTERSPEECH. ISCA, 2018, pp. 3229-3233.
Voxceleb: A largescale speaker identification dataset. A Nagrani, J S Chung, A Zisserman, Proc. INTERSPEECH. ISCA. INTERSPEECH. ISCAA. Nagrani, J. S. Chung, and A. Zisserman, "Voxceleb: A large- scale speaker identification dataset," in Proc. INTERSPEECH. ISCA, 2017, pp. 2616-2620.
Voxceleb2: Deep speaker recognition. J S Chung, A Nagrani, A Zisserman, Proc. INTERSPEECH. ISCA. INTERSPEECH. ISCAJ. S. Chung, A. Nagrani, and A. Zisserman, "Voxceleb2: Deep speaker recognition," in Proc. INTERSPEECH. ISCA, 2018, pp. 1086-1090.
Film: Visual reasoning with a general conditioning layer. E Perez, F Strub, H De, V Vries, A Dumoulin, Courville, Proceedings AAAI. AAAI32E. Perez, F. Strub, H. De Vries, V. Dumoulin, and A. Courville, "Film: Visual reasoning with a general conditioning layer," in Proceedings AAAI, vol. 32, no. 1, 2018.
AISHELL-4: An open source dataset for speech enhancement, separation, recognition and speaker diarization in conference scenario. Y Fu, L Cheng, S Lv, Y Jv, Y Kong, Z Chen, Y Hu, Proc. INTERSPEECH. ISCA, 2021. INTERSPEECH. ISCA, 2021Y. Fu, L. Cheng, S. Lv, Y. Jv, Y. Kong, Z. Chen, Y. Hu et al., "AISHELL-4: An open source dataset for speech enhancement, separation, recognition and speaker diarization in conference sce- nario," in Proc. INTERSPEECH. ISCA, 2021, pp. 3665-3669.
| [
"https://github.com/yufan-aslp/AliMeeting"
] |
[
"Visually Analyzing Contextualized Embeddings",
"Visually Analyzing Contextualized Embeddings"
] | [
"Matthew Berger matthew.berger@vanderbilt.edu \nVanderbilt University\n\n"
] | [
"Vanderbilt University\n"
] | [] | A B C DFigure 1: Our method allows the exploration of contextualized embeddings produced by language models. Our design shows (A) co-occurrences of phrases via their assigned clusters, (B) per-cluster span lengths and (C) how much context a given cluster captures. One may also inspect example sentences in detail (D), here highlighting terms that describe building structures.ABSTRACTIn this paper we introduce a method for visually analyzing contextualized embeddings produced by deep neural network-based language models. Our approach is inspired by linguistic probes for natural language processing, where tasks are designed to probe language models for linguistic structure, such as parts-of-speech and named entities. These approaches are largely confirmatory, however, only enabling a user to test for information known a priori. In this work, we eschew supervised probing tasks, and advocate for unsupervised probes, coupled with visual exploration techniques, to assess what is learned by language models. Specifically, we cluster contextualized embeddings produced from a large text corpus, and introduce a visualization design based on this clustering and textual structurecluster co-occurrences, cluster spans, and cluster-word membership * | 10.1109/vis47514.2020.00062 | [
"https://arxiv.org/pdf/2009.02554v1.pdf"
] | 221,517,046 | 2009.02554 | 6bca3a95584763856bd352efc31e2d5de5e0ecad |
Visually Analyzing Contextualized Embeddings
Matthew Berger matthew.berger@vanderbilt.edu
Vanderbilt University
Visually Analyzing Contextualized Embeddings
Figure 1: Our method allows the exploration of contextualized embeddings produced by language models. Our design shows (A) co-occurrences of phrases via their assigned clusters, (B) per-cluster span lengths and (C) how much context a given cluster captures. One may also inspect example sentences in detail (D), here highlighting terms that describe building structures.

ABSTRACT

In this paper we introduce a method for visually analyzing contextualized embeddings produced by deep neural network-based language models. Our approach is inspired by linguistic probes for natural language processing, where tasks are designed to probe language models for linguistic structure, such as parts-of-speech and named entities. These approaches are largely confirmatory, however, only enabling a user to test for information known a priori. In this work, we eschew supervised probing tasks, and advocate for unsupervised probes, coupled with visual exploration techniques, to assess what is learned by language models. Specifically, we cluster contextualized embeddings produced from a large text corpus, and introduce a visualization design based on this clustering and textual structure: cluster co-occurrences, cluster spans, and cluster-word membership | 10.1109/vis47514.2020.00062 | [
INTRODUCTION
Recent advances in natural language processing (NLP) have led to the development of language models that perform remarkably well across a wide range of language understanding tasks [8,24], e.g. named entity recognition, entailment, paraphrase verification [37]. These models typically take the form of deep neural networks that are pre-trained on a large corpus of unannotated text, and subsequently fine-tuned for specific language understanding tasks. An intriguing property of these models is that, due to the combination of the pre-training objective and model capacity, they encode a variety of linguistic structure, despite never being explicitly trained to learn such structure [6,18,27]. However, comprehending the full space of what is learned is elusive, and remains an open problem.
Approaches for interpreting pre-trained language models have relied on the design of supervised probes -human-annotated datasets that capture known semantic or syntactic properties, e.g. parts-ofspeech, chunking, dependency syntax [2,18]. Representations extracted from language models are trained to solve problems posed by these probes to assess how well the model captures linguistic structure. Although supervised probes have helped shed light on language models, they inherit several limitations. First, they are confirmatory, only telling us whether or not a language model has learned a known linguistic property. Secondly, models trained to solve probes face issues regarding complexity, e.g. an overly-complex model that performs well may poorly reflect the probe task [13].
In this work we propose an interactive approach to understanding deep, pre-trained, language models. Our work is inspired by existing probing methods, but instead approaches language model interpretability in an unsupervised manner: rather than build probe-specific classifiers, we aim to let the data distribution speak for itself. Specifically, we focus on contextualized embeddings of words: vector representations that encode the context of a particular word with respect to its originating sentence. Given a large text corpus, in analogy to supervised probes, we cluster the embeddings. Given the clustering, the key goal of our visualization is to help a user understand (1) the functionality of clusters, and (2) the relationships between clusters. As shown in Fig. 1, our visualization is designed to highlight patterns of linguistic properties: (A) co-occurrences in clusters, (B) formation of phrases via contiguous cluster spans, (C) just how contextual a given word is, as well as (D) details-on-demand for showing individual sentences and their words' cluster assignments. Combined, these views are designed to help the user identify specific linguistic properties through a set of supported interactions.
To evaluate our method we gathered feedback from users to assess what information they could gain by using our system. Through the feedback, we find that different types of linguistic structures, e.g. parts-of-speech, noun phrases, named entities, can be identified through our visualization design.
RELATED WORK
Our work is most related to interpretability approaches within both NLP and visual analytics for understanding language models.
Neural network-based language models date back to Bengio et al. [3], and have gained recent attention with more sophisticated network architectures and language modeling objectives [8,24]. These models have demonstrated significant performance gains in a wide variety of language understanding tasks [25,37], despite the seemingly irrelevant tasks used for pre-training, e.g. masked word prediction and next sentence prediction [8]. This has motivated the design of supervised probes [2,7] as a way to test what linguistic knowledge language models encode in their learned representations [14,16,18,33]. Yet these methods face several limitations. As supervised models are usually trained from these representations to assess the accuracy of a probing task, overparameterized models might poorly reflect the linguistic knowledge encoded by the language model [13]. Further, it is delicate to design a probing dataset that ensures task relevance in what is learned [9,26]. Our approach is inspired by probing methods, but is focused on unsupervised methods for interpreting pre-trained language models, complemented by interactive visualization techniques.
Significant work has been developed within the visual analytics community for interpreting deep NLP models, please see Hohman et al. [15] for a broader survey on deep learning and visual analytics, and Spinner et al. [30] for model interpretability within visualization. Visualization methods have been developed to understand contextindependent word embeddings, through assessing analogies [19], customizing embedding projections [21] and comparing embeddings [5,12]. Closely related to our method are approaches that visually analyze recurrent neural networks, namely LSTMVis [32] and RNNVis [22]. RNNVis similarly clusters hidden representations of RNNs, but focuses on specific tasks, e.g. sentiment analysis, whereas we consider task-independent pre-training objectives. Other works have considered the interpretation and editing of sequenceto-sequence models [31], models designed for natural language inference [20], and interactively performing abstractive summarization [10]. Further methods have visually analyzed self-attention in language models [23,35], whereas we consider contextualized embeddings in Transformer models [34]. Recent work such as Checklist [29] and TX-Ray [28] permit the customization of supervised and unsupervised probes, respectively. In contrast to Rethmeier et al., which focuses on interpreting individual neurons, we consider the embedding space as a whole.
OBJECTIVES AND TASKS
Before discussing the tasks that we aim to support, we first discuss the language model, and the extracted representations, used in this work. Our goal is to understand the representations learned by different layers of the Transformer model [34], pre-trained on large amounts of raw textual data using the BERT objectives of masked word, and next sentence, prediction [8]. Specifically, we use the cased 12-layer BERT model of Devlin et al. [8], where for a fixed layer, given a sentence composed of $m$ words $(w_1, w_2, \ldots, w_m)$, passing this sequence through the model provides us with a $d = 768$ dimensional vector for each word, denoted $x(w_j) \in \mathbb{R}^d$ for the $j$'th word in the sentence. We denote $x(w_j)$ as the contextualized embedding for word $w_j$. Note that the same word's contextualized embeddings from two different sentences will likely be different, due to sentence context, e.g. "handle" can be treated as a noun or a verb.
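For concreteness, a minimal sketch of extracting such per-word embeddings with the HuggingFace transformers library is shown below; the layer index is illustrative, the last-subword pooling follows [18], and the authors' exact extraction pipeline may differ.

```python
# Hedged sketch: per-word contextualized embeddings from a fixed BERT layer.
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased", output_hidden_states=True)

def embed_words(words, layer=9):
    enc = tok(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).hidden_states[layer][0]  # (num_subwords, 768)
    # take the LAST subword of each word as its embedding (as in [18]);
    # special tokens have word id None and are skipped
    embs = {}
    for pos, wid in enumerate(enc.word_ids()):
        if wid is not None:
            embs[wid] = hidden[pos]  # later subwords overwrite earlier ones
    return [embs[i] for i in range(len(words))]

vecs = embed_words("The handle broke off".split())
print(len(vecs), vecs[0].shape)  # 4 torch.Size([768])
```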
We would like to gain insight into the linguistic properties learned by contextualized embeddings. However, to circumvent the issues inherent in supervised probes, and empower the user in exploration, we approach this in an unsupervised manner. Specifically, given a sentence drawn from a large input corpus, we first obtain the contextualized embedding for each word in the sentence. For a word broken into subwords, the last subword's embedding is taken as the original word's embedding [18]. Next, we cluster the contextualized embeddings over all sentences, using k-means. For robustness, we adapt the initialization scheme of Arthur et al. [1] by limiting seed vectors to unique words, and perform k-means over different initializations, taking the result with the lowest sum of squared distances to the assigned cluster centers. Empirically, we find this scheme produces stable clusters, in part due to the large number of vectors provided by each of our tested corpora [36], e.g. ranging from 75K to 250K vectors. We set the number of clusters, $k$, to 50 in all experiments.
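The sketch below illustrates this clustering step; it uses random seeding restricted to a unique-word pool as a simplification of the adapted k-means++ scheme, and assumes the pool contains at least $k$ unique words.

```python
# Sketch: k-means over all contextualized embeddings with seeds restricted
# to unique words, keeping the best of several runs. A simplification of
# the adapted k-means++ seeding described above.
import numpy as np
from sklearn.cluster import KMeans

def cluster_embeddings(vectors, words, k=50, n_runs=5, seed=0):
    """vectors: (n, d) array; words: length-n list of originating words."""
    rng = np.random.default_rng(seed)
    # one representative vector per unique word forms the seed pool
    first_idx = {w: i for i, w in reversed(list(enumerate(words)))}
    pool = np.array(sorted(first_idx.values()))
    best = None
    for _ in range(n_runs):
        seeds = vectors[rng.choice(pool, size=k, replace=False)]
        km = KMeans(n_clusters=k, init=seeds, n_init=1).fit(vectors)
        if best is None or km.inertia_ < best.inertia_:
            best = km  # lowest sum of squared distances to cluster centers
    return best.labels_, best.cluster_centers_
```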
For a given sentence $i$ and the word at position $j$ in the sentence, we obtain a cluster label $l(w^i_j) \in [1, k]$. The resulting clustering can be viewed as a proxy for a set of supervised probes, e.g. one cluster could reflect the verb part-of-speech, while another cluster could represent location-based named entities. However, unlike supervised probes, we do not know, a priori, the meaning of the clusters. Hence, the main purpose of our visualization design is to help the user understand (1) what a cluster represents, and (2) the relationships between clusters. The tasks supported in our design address these objectives, and serve to abstract typical approaches taken in supervised probes:
(T1) Assess how much context a given cluster contains. Certain words (e.g. punctuation) are less reliant on context than other words (e.g. "place") that may have multiple senses. This task intends to abstract multiple probes such as parts-of-speech [18], semantic role labeling [4], and word tense [7].
(T2) Determine a cluster's ability to form meaningful phrases. This task abstracts segmentation probes such as syntactic chunking and named entity extraction [18], as well as constituency parsing [17]. (T3) Analyze relationships between clusters. This task abstracts relationships between clusters, e.g. relation extraction [18], syntactic dependencies [6], and coreference resolution [33].
VISUALIZATION DESIGN
In this section we discuss our visualization design, which addresses our tasks; please see Fig. 2 for an overview of the encodings employed in our design.
Cluster-Word Membership
This view addresses (T1) in showing the amount of context reflected in a given cluster. Specifically, for a given word $w$, denote $c(w, l)$ as the number of times this word appears in the corpus with cluster $l \in [1, k]$. Then, for such a cluster, we compute the percentage with which that word appears in the cluster:
$$p(w, l) = \frac{c(w, l)}{\sum_{j=1}^{k} c(w, j)}. \tag{1}$$
Thus, for cluster $l$ we have an assigned percentage $p$ for all words $w$ in our corpus. We encode this as a distribution (Fig. 2(A)), where the x-axis encodes the percentage, and an area mark's height encodes how many words contain that percentage. We perform kernel density estimation to arrive at a smoothed distribution. Percentages of $p = 0$ are filtered out, as they tend to dominate, and are implicitly encoded via nonzero counts across the rest of the clusters. This view enables us to determine differences between clusters in terms of word senses. For instance, two clusters may both reflect past tense, yet be distinguished by part-of-speech, where one cluster represents adjectives, and the other represents verbs. Our design would consequently depict overlap between these clusters (T1). In general, distributions that are concentrated at a value of 1 indicate only one meaning, independent of context, whereas a more even distribution across percentages indicates the dependence on context for the meaning of individual words.
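A short sketch of this statistic and its smoothed distribution is given below; the grid resolution and the use of scipy's gaussian_kde are assumptions, and the KDE assumes at least two distinct percentage values.

```python
# Sketch of the cluster-word membership statistic of Eq. (1) plus KDE.
from collections import Counter
import numpy as np
from scipy.stats import gaussian_kde

def membership_distribution(words, labels, cluster, grid=None):
    """words/labels: parallel lists over all corpus tokens."""
    counts = Counter(zip(words, labels))  # c(w, l)
    totals = Counter(words)               # sum_j c(w, j)
    p = [counts[(w, cluster)] / totals[w]
         for w in totals if counts[(w, cluster)] > 0]  # filter p = 0
    grid = np.linspace(0, 1, 101) if grid is None else grid
    return grid, gaussian_kde(p)(grid)    # smoothed density over [0, 1]
```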
Cluster Spans
This view addresses (T2) in showing the ability of a cluster to represent contiguous text spans. Specifically, for a given cluster, for each sentence in our corpus we group words that (a) form a contiguous span and (b) all belong to this particular cluster. We then count how many times, for a given span length, these cluster-specific spans occur over the entire corpus. We visually encode this as a heatmap (Fig. 2(B)) where each square represents a particular span length, beginning at a span of 1 (individual word), and increasing from left-to-right. We use a sequential, luminance-decreasing color map to encode count, e.g. how many times a cluster-specific span occurs in the corpus. Aligned columns of the heatmap permit a rapid comparison of span length frequencies between clusters, while a given row depicts a cluster's distribution of span frequencies. As shown in Fig. 1(B) for the last layer of the Transformer [34] model, this design enables the user to quickly assess whether certain clusters result in long spans compared to other clusters, indicative of certain types of linguistic features, e.g. named entities or a part of a constituency parse tree. This grouping of words into contiguous, cluster-specific spans is carried over to other elements of the design, namely cluster co-occurrences, as well as the detailed sentence view. Herein we refer to these grouped words as phrases for full generality.
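The span counting behind this heatmap can be sketched as follows, with itertools.groupby extracting maximal same-cluster runs; the data structures are illustrative assumptions.

```python
# Sketch: count contiguous same-cluster spans per sentence.
from collections import Counter
from itertools import groupby

def span_counts(sentence_labels):
    """sentence_labels: list of per-sentence lists of cluster labels.
    Returns a Counter mapping (cluster, span_length) -> frequency."""
    counts = Counter()
    for labels in sentence_labels:
        for cluster, run in groupby(labels):
            counts[(cluster, len(list(run)))] += 1
    return counts

print(span_counts([[3, 3, 7, 7, 7, 1]]))
# Counter({(3, 2): 1, (7, 3): 1, (1, 1): 1})
```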
Pairwise Cluster Co-occurrences
This view addresses (T3) in depicting relationships between clusters. More specifically, for a given phrase corresponding to a cluster, we count how many times it co-occurs with a different cluster's phrase within a given sentence. We measure co-occurrences over different spacings of phrases, e.g. phrases belonging to two different clusters might be right next to each other, but other times they might be separated by several phrases. We show these relationships in a small-multiples view of area marks: rows correspond to clusters in the first position (e.g. the left portion of the co-occurrence), while columns correspond to clusters in the second position (e.g. the right portion). The height of the area mark encodes the number of co-occurrences, while the x-axis within each cell encodes the amount of spacing between phrases, increasing from left-to-right (Fig. 2(C)). Area marks allow the user to quickly identify patterns with respect to cluster pairings. A large spike within the area mark indicates a frequent co-occurrence between clusters at a given amount of spacing, distinguished from other spacings between these clusters. This potentially indicates a salient relationship between clusters (T3), e.g. co-reference resolution for diagonal cells (identical clusters) or dependency relations between distinct parts-of-speech spaced a fixed amount apart. Note that in Fig. 1, there are zero counts for co-occurrences that are directly next to each other in cells on the diagonal, due to the grouping of words into phrases. Further, to visually align the different views, we associate a unique glyph with each cluster, distributed as horizontal and vertical spans within the co-occurrence view. In particular, the vertical strip of glyphs is in alignment with the rows of the span heatmap and cluster-word membership views, for quick identification of clusters amongst all views. This glyph design was chosen to handle a potentially large number of clusters. Other visual channels, e.g. color, can lead to discriminability issues, particularly for complex spatial arrangements [11]. This is characteristic of our sentence view, discussed next.
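The counting behind this view can be sketched as below, reusing the phrase definition above; the maximum spacing is an illustrative assumption. By construction, adjacent same-cluster co-occurrences never occur, since phrases are maximal runs.

```python
# Sketch: pairwise cluster co-occurrence counts at varying phrase spacings.
from collections import Counter
from itertools import groupby

def cooccurrence_counts(sentence_labels, max_spacing=5):
    """Returns a Counter mapping (left_cluster, right_cluster, spacing) ->
    frequency, where spacing counts intervening phrases (0 = adjacent)."""
    counts = Counter()
    for labels in sentence_labels:
        phrases = [c for c, _ in groupby(labels)]  # cluster of each phrase
        for i, left in enumerate(phrases):
            for j in range(i + 1, min(i + 1 + max_spacing, len(phrases))):
                counts[(left, phrases[j], j - i - 1)] += 1
    return counts
```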
Interactions and Detailed Sentence Inspection
We allow for user interactions to (a) understand relationships between the different views, and (b) provide for detailed inspection of sentences. Specifically, the user can brush the cluster-word membership view to select words within the particular percentage range. The remaining cluster-word distributions are updated for the brushed Figure 4: We show a use of our system for exploring multiple word senses, allowing the user to discover nouns that are largely context-independent (left), in contrast to context-dependent words shared by a different cluster that capture adjectives (right).
set of words with a superimposed purple area mark, in order to show more detailed relationships between clusters, shown in Fig. 1(C). Furthermore, the co-occurrence view is also updated, where we show a purple area mark for co-occurrences that contain the brushed words. We limit this filtering only to the first item (left position) of a co-occurrence. This linked update allows the user to inspect co-occurrences that have varying levels of context, depending on the user's selection. We similarly allow the user to brush the cluster span view, limiting phrases to the particular span lengths brushed by the user. We further update the co-occurrence view to this filtered set of (left positional) phrases, but limit the selection to only the particular cluster, in contrast with the word-cluster membership selection, which impacts all clusters. We also allow the user to select both an individual cell and a phrase spacing within the co-occurrence view, as indicated by the dark green arrow in Fig. 1(A), and the corresponding highlighted cluster glyphs. If a user has previously performed a brush via the aforementioned interactions, then this selection is limited to the brush query: this is shown in Fig. 1(A) by the arrow positioned on the purple area mark. For a given selection, we populate a more detailed sentence view in Fig. 1(D), where we show sentences that contain the particular pair of clusters, and spacing between clusters. The cluster-specific glyphs are carried over to this view, as well as the depiction of phrases via brackets that highlight cluster-specific contiguous spans. In Fig. 1(D), we see that the user's selection resulted in, predominantly, adjectives in the left cluster, yet these are words that can have different senses (e.g. "master" can be an adjective or noun), which arise from the user's brush of words that belong to different clusters, and are thus more context-dependent. Likewise, Fig. 3 shows an example of filtering phrases to within a certain length.
We allow the user to control various aspects of the design. They may select any of the layers within the Transformer model to load in the main view, providing a quick comparison of how contextual particular layers are, including the first layer, which is largely dependent on word embeddings and thus mostly free of context. The user can also control how many clusters to show in the visualization to reduce visual complexity, where we prioritize clusters based on the number of unique words that each cluster contains. Further, for the sentence view the user can opt to exclude glyphs of clusters not selected in the co-occurrence view, reducing clutter.
RESULTS
To demonstrate our interface, we first show a use case of our system. Our interface supports the loading of an arbitrary set of sentences, but for evaluation purposes, we limit this to sentences from a book, namely "Scottish Cathedrals and Abbeys." 1 Our use case is based on this corpus, showing results for the 9th layer of the Transformer model; please see Fig. 4. On the left side, the user first selects words that have high membership with the square cluster (1), thus limiting our view to context-independent words. The selection prompts an update to the co-occurrence view via the purple area marks representing those words, where upon clicking a pair of clusters (2) we see that the square cluster for this selection reflects nouns (3). On the right, the user next selects a range of word-cluster memberships from the same (square) cluster (1), prompting linked highlighting across clusters, thus reflective of context-dependent words/phrases. We can observe a spike in co-occurrences for this selection with respect to a pair of clusters (2), indicative of words that belong to different clusters that are right next to one another. Upon closer inspection (3), we find that this represents adjective-noun pairs, where the words classified as adjectives may also be treated as nouns, demonstrating their reliance on context.
In addition, we have gathered feedback from users, in order to assess what features participants could find by interacting with the visualization. More specifically, we conducted experiments with three graduate students, all in Computer Science, who all have some amount of experience using visual interfaces. We did not constrain them in their interactions, instead promoting free-form exploration, asking them: (1) What insights did you find by using the interface? (2) Did you find the interface easy to use? All in all, participants found different aspects of language through the interface: one participant was able to quickly identify parts-of-speech (adjectives, nouns), while another participant found named entities in the form of dates, as well as more semantic groupings, e.g. different aspects of religion such as church, chapel, etc., and building structures such as monument, exterior, etc. Another participant was able to discover patterns with respect to different layers of the Transformer model, namely how the cluster-word membership becomes less unique in later layers, as well as more diverse span lengths. Participants, however, did find the design to be rather complex. One participant mentioned that it took some time to understand it, but afterwards, they were able to navigate amongst the views. Another participant, however, found the complexity to be too overwhelming at times, which inhibited their discovery.
CONCLUSION
We introduced a method for visually analyzing contextualized embeddings produced from deep, pre-trained language models. Our visualization design takes inspiration from, and abstracts, the class of supervised probes traditionally used to interpret language models, in order to enable a more general analysis of contextualized embeddings. We find preliminary user feedback to be encouraging; however, in the future we plan on obtaining feedback from domain experts within NLP as well as linguistics to assess the design's effectiveness. Furthermore, we plan on extending our work to enable a more comparative analysis of contextualized embeddings, particularly across layers, in order to understand what linguistic properties are learned amongst different representations.
Figure 2: Overview of our design showing (A) relative amount of context encoded by a cluster's set of words, (B) frequency over different span lengths for cluster-specific words forming contiguous spans, and (C) cluster co-occurrence frequency regarding word/phrase spacing.
Figure 3: Here we show how the user can discover multi-word phrases of the concept of time through our interface. Brushing spans of length greater than 1, and selecting in the co-occurrence view, we obtain detailed inspections in the sentence view that enable this discovery.
1 Texts from books are acquired from Project Gutenberg (https://www.gutenberg.org/). Though the main results shown are based on only one book, Fig. 3 shows our interface for "Moby Dick".
| [] |
[
"Learning to Determine the Quality of News Headlines",
"Learning to Determine the Quality of News Headlines"
] | [
"Amin Omidvar omidvar@eecs.yorku.ca \nDepartment of Electrical Engineering and Computer Science\nYork University\nCanada\n",
"Hossein Pourmodheji pmodheji@eecs.yorku.ca \nDepartment of Electrical Engineering and Computer Science\nYork University\nCanada\n",
"Aijun An \nDepartment of Electrical Engineering and Computer Science\nYork University\nCanada\n\nThe Globe and Mail\nCanada\n",
"Gordon Edall "
] | [
"Department of Electrical Engineering and Computer Science\nYork University\nCanada",
"Department of Electrical Engineering and Computer Science\nYork University\nCanada",
"Department of Electrical Engineering and Computer Science\nYork University\nCanada",
"The Globe and Mail\nCanada"
] | [] | Today, most news readers read the online version of news articles rather than traditional paper-based newspapers. Also, news media publishers rely heavily on the income generated from subscriptions and website visits made by news readers. Thus, online user engagement is a very important issue for online newspapers. Much effort has been spent on writing interesting headlines to catch the attention of online users. On the other hand, headlines should not be misleading (e.g., clickbaits); otherwise readers would be disappointed when reading the content. In this paper, we propose four indicators to determine the quality of published news headlines based on their click count and dwell time, which are obtained by website log analysis. Then, we use soft target distribution of the calculated quality indicators to train our proposed deep learning model which can predict the quality of unpublished news headlines. The proposed model not only processes the latent features of both headline and body of the article to predict its headline quality but also considers the semantic relation between headline and body as well. To evaluate our model, we use a real dataset from a major Canadian newspaper. Results show our proposed model outperforms other state-of-theart NLP models. | 10.5220/0009367504010409 | [
"https://arxiv.org/pdf/1911.11139v2.pdf"
] | 208,291,191 | 1911.11139 | 8428b51aca0b0ff359d1749009c4438bf4c73813 |
Learning to Determine the Quality of News Headlines
Amin Omidvar omidvar@eecs.yorku.ca
Department of Electrical Engineering and Computer Science
York University
Canada
Hossein Pourmodheji pmodheji@eecs.yorku.ca
Department of Electrical Engineering and Computer Science
York University
Canada
Aijun An
Department of Electrical Engineering and Computer Science
York University
Canada
The Globe and Mail
Canada
Gordon Edall
Learning to Determine the Quality of News Headlines
Headline Quality, Deep Learning, NLP
Today, most news readers read the online version of news articles rather than traditional paper-based newspapers. Also, news media publishers rely heavily on the income generated from subscriptions and website visits made by news readers. Thus, online user engagement is a very important issue for online newspapers. Much effort has been spent on writing interesting headlines to catch the attention of online users. On the other hand, headlines should not be misleading (e.g., clickbaits); otherwise readers would be disappointed when reading the content. In this paper, we propose four indicators to determine the quality of published news headlines based on their click count and dwell time, which are obtained by website log analysis. Then, we use the soft target distribution of the calculated quality indicators to train our proposed deep learning model, which can predict the quality of unpublished news headlines. The proposed model not only processes the latent features of both the headline and the body of the article to predict its headline quality but also considers the semantic relation between headline and body. To evaluate our model, we use a real dataset from a major Canadian newspaper. Results show our proposed model outperforms other state-of-the-art NLP models.
INTRODUCTION
People's attitudes toward reading newspaper articles are changing: people are now more willing to read online news articles than paper-based ones. In the past, people bought a newspaper, saw almost all the pages while scanning headlines, and read through the articles that seemed interesting (Kuiken, Schuth, Spitters, & Marx, 2017). The role of headlines was to help readers gain a clear understanding of the topics of the article.
But today, online news publishers are changing the role of headlines such that headlines have become the most important means of gaining readers' attention. One important reason is that online news media publishers rely on the income generated from the subscriptions and clicks made by their readers (Reis et al., 2015). Furthermore, publishers need to attract more readers than their competitors if they want to succeed in this competitive industry. These are the most important reasons why some online news media come up with likable headlines to lure readers into clicking. Such headlines may increase the number of clicks but at the same time will disappoint readers, since they exaggerate the content of the news articles (Omidvar, Jiang, & An, 2018).
Therefore, having a tool that can predict the quality of news headlines before publication would help authors choose headlines that not only increase readers' attention but also satisfy their expectations. However, there are some challenges in predicting the quality of headlines. First, there is no labelled dataset specifying the quality of headlines. Thus, given a set of articles and users' browsing history on the articles, how to determine the quality of headlines is an open issue. Second, given labelled data, how to build a model that can accurately predict the quality of headlines with respect to the metrics by which the data is labelled.
The main contributions of this research are as follows:
First, we propose a novel headline quality detection approach for published headlines using the dwell time and click count of the articles, and we provide four headline quality indicators. By using this approach, we can label news article datasets of any size automatically, which is not possible by employing human annotators. Using human annotators to label data is costly, requires much time and effort, and may result in inconsistent labels/evaluation due to subjectivity. To the best of our knowledge, none of the previous related research has taken a similar approach to headline quality detection.
Second, we develop a deep-network-based predictive model that incorporates some advanced features of DNNs to predict the quality of unpublished headlines, using the previous approach as ground truth. The proposed model addresses the proposed headline quality indicators by considering the similarity between the headline and its article as well as their latent features.
The rest of this paper is organized as follows. In section 2, the most relevant works regarding headline quality in the fields of Computer Science and Psychology are studied. In section 3, we propose four quality indicators to represent the quality of headlines. Also, we label our dataset using a novel way to calculate the proposed quality indicators for published news articles. Next, we propose our novel deep learning architecture in section 4 to predict the headline quality of unpublished news articles. We use the calculated headline quality from section 3 as ground truth to train our model. Then, in section 5, our proposed model is compared with baseline models. Finally, this study is wrapped up with a conclusion in section 6.
RELATED WORKS
Many studies in different areas such as computer science, psychology, anthropology, and communication have been conducted on the popularity and accuracy of the news headlines over the past few years. In this section, the most relevant works in the domain of computer science and psychology are briefly described.
Researchers manually examined 151 news articles from four online sections of El Pais, a Spanish newspaper, in order to find features that are important for catching readers' attention. They also analysed how important linguistic techniques, such as vocabulary and words, direct appeal to the reader, informal language, and simple structures, are for gaining the attention of readers (Palau-Sampio, 2016).
In another study, 2 million Facebook posts from over 150 U.S.-based media organizations were examined to detect clickbait headlines. The authors found that clickbaits are more prevalent in entertainment categories (Rony, Hassan, & Yousuf, 2017). In order to determine the organic reach (i.e., the number of visitors without paid distribution) of tweets, social sharing patterns were analysed in (Chakraborty, Sarkar, Mrigen, & Ganguly, 2017).
They showed how differences in customer demographics, follower graph structure, and type of text content can influence the tweets' quality.
Ecker et al. (Ecker, Lewandowsky, Chang, & Pillai, 2014) studied how misinformation in news headlines could affect news readers. They found that headlines play an important role in shaping readers' attitudes toward the content of news. In (Reis et al., 2015), features were extracted from the content of 69907 news articles in order to find approaches that can help to attract clicks. They discovered that the sentiment of the headline is strongly correlated with the popularity of the news article.
Some distinctive characteristics of accurate and clickbait headlines in terms of words, entities, sentence patterns, paragraph structures, etc. are discovered in (Chakraborty, Paranjape, Kakarla, & Ganguly, 2016). In the end, they proposed an interesting set of 14 features to recognize how accurate headlines are. In another work, a linguistically-infused network was proposed to distinguish clickbaits from accurate headlines using the passages of both article and headline along with the article's images (Glenski, Ayton, Arendt, & Volkova, 2017). To do that, they employed Long Short-Term Memory (LSTM) and Convolutional Neural Network architectures to process text and image data, respectively.
One interesting study measured the click-value of individual words in headlines. The authors then proposed a headline click-based topic model (HCTM) based on latent Dirichlet allocation (LDA) to identify words that can bring more clicks to headlines (J. H. Kim, Mantrach, Jaimes, & Oh, 2016). In another related work (Szymanski, Orellana-Rodriguez, & Keane, 2017), a software tool was developed to help authors compose effective headlines for their articles. The software uses state-of-the-art NLP techniques to recommend keywords to authors for inclusion in articles' headlines in order to make headlines more interesting. They calculated local and global popularity measures for each keyword and used a supervised regression model to predict how likely headlines are to be widely shared on social media.
Deep Neural Networks have become a widely used technique that has produced very promising results on the news headline popularity task in recent years (Bielski & Trzcinski, 2018; Stokowiec, Trzciński, Wołk, Marasek, & Rokita, 2017; Voronov, Shen, & Mondal, 2019). Most NLP approaches employ deep learning models and do not usually need heavy feature engineering and data cleaning. In contrast, most of the traditional methods rely on graph data of the interactions between users and contents.
For detecting clickbait headlines, much research has been conducted so far (Fu, Liang, Zhou, & Zheng, 2017; Venneti & Alam, 2018; Wei & Wan, 2017; Zhou, 2017). In (Martin Potthast, Tim Gollub, Matthias Hagen, 2017), a clickbait challenge competition was launched, along with supervised and unsupervised datasets containing over 80000 and 20000 samples, respectively. Each sample contains news content such as headline, article, media, keywords, etc. For the supervised dataset, there are five scores from five different judges on a scale of 0 to 1. A leading model in the clickbait challenge competition (Omidvar et al., 2018), named albacore in the published result list on the competition's website 1, employed a bidirectional GRU along with fully connected NN layers to determine how clickbait each headline is. The authors showed that the headline posted on Twitter (i.e., the postText field) is the most important feature of each sample for predicting the judges' score, possibly because the human evaluators only used the posted headline to label each sample. The leading approach not only achieved the first rank in terms of Mean Squared Error (MSE) but is also the fastest among all the other proposed models.
To the best of our knowledge, none of the previous studies analysed the quality of headlines by considering both their popularity and truthfulness (i.e., being non-clickbait). The reason is that almost all of the previous research, especially that on clickbait detection, treated the problem as a binary classification task. Also, most of them depend on human evaluators to label the dataset. In our proposed data labelling approach, we determine the quality of headlines based on four quality indicators, considering both their popularity and validity. Also, we come up with a novel approach to calculate the four quality indicators automatically by using the users' activity log dataset. Then, our trained deep learning model determines not only how popular headlines are, but also how honest and accurate they are.
LABELING DATA
In this section, a novel approach is introduced to calculate the quality of published headlines based on users' interactions with articles. This approach is used for labeling our dataset.
1 https://www.clickbait-challenge.org/#results
Data
Our data is provided by The Globe and Mail which is a major Canadian newspaper. It contains a news corpus dataset (containing articles and their metadata) and a log dataset (containing interactions of readers with the news website). Every time a reader opens an article, writes a comment or takes any other trackable action, it is detected on the website, and then is stored as a record in a log data warehouse. Generally, every record contains 246 captured attributes such as event ID, user, time, date, browser, IP address, etc.
The log data can give useful insights into readers' behaviours. However, there are noise and inconsistencies in the clickstream data which should be cleaned before calculating any measures, applying any models, or extracting any patterns. For example, users may leave articles open in the browser for a long time while doing other activities, such as browsing other websites in another tab. In this case, some news articles will get high fake dwell times from some readers.
There are approximately 2 billion records of users' actions in the log dataset. We use the log dataset to find how many times each article has been read and how much time users spent reading it. We call these two measures click count and dwell time, respectively.
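To make the computation of these two measures concrete, the following sketch derives them from a cleaned log table with pandas; the column names (article_id, user_id, seconds_spent) are hypothetical stand-ins for the relevant attributes among the 246 captured ones, and the dwell time here is the per-visit average defined by formula 1 in the next subsection.

```python
# A minimal sketch (not the production pipeline) of deriving click count and
# dwell time per article from a cleaned clickstream table.
import pandas as pd

log = pd.DataFrame({
    "article_id":    [1, 1, 2, 2, 2],
    "user_id":       ["u1", "u2", "u1", "u2", "u3"],
    "seconds_spent": [120.0, 80.0, 30.0, 45.0, 15.0],
})

per_article = log.groupby("article_id").agg(
    click_count=("user_id", "size"),      # C_a: number of reads of the article
    total_time=("seconds_spent", "sum"),  # sum over users of T_{a,u}
)
# Dwell time D_a: average time spent on the article during a user visit.
per_article["dwell_time"] = per_article["total_time"] / per_article["click_count"]
print(per_article)
```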
Quality Indicators
Due to the high cost of labelling supervised training data using human annotators, large datasets are not available for most NLP tasks (Cer et al., 2018).
In this section, we calculate the quality of published articles using the articles' click count and dwell time measures. By using the proposed approach, we can label a database of any size automatically and use those labels as ground truth to train deep learning models. The dwell time for article a is computed using formula 1:

$D_a = \frac{1}{C_a} \sum_{u} T_{a,u}$ (1)

where $C_a$ is the number of times article a was read and $T_{a,u}$ is the total amount of time that user u has spent reading article a. Thus, the dwell time of article a (i.e., $D_a$) is the average amount of time spent on the article during a user visit. The values of read count and dwell time are normalized to a scale of zero to one. By considering these two measures of headline quality, we can define four quality indicators, which are shown by the four corners of the rectangle in Figure 1. We did not normalize articles' dwell time by article length since the correlation and mutual information between articles' reading time and articles' length were 0.2 and 0.06, respectively, which indicates a very low dependency between these two variables in our dataset.
▪ Indicator 1: High dwell time but low read count. Articles close to this indicator were interesting for users because of their high dwell time, but their headlines were not interesting enough to motivate users to click on the articles. However, those users who read these articles spent a significant amount of time reading them.
▪ Indicator 2: High dwell time and high read count. Articles close to indicator 2 had interesting headlines since they had been opened by many users, and the articles were interesting as well because of their high dwell time.
▪ Indicator 3: Low dwell time but high read count. Articles close to this indicator have a high read count but low dwell time. These headlines were interesting for users, but their articles were not. We call this type of headline a misleading headline since the article does not meet the expectations of the readers. As we can see in Figure 1, very few articles reside in this group.
▪ Indicator 4: Low dwell time and read count. Headlines of these articles were not successful in attracting users, and those who read them did not spend much time reading them.
The probability that article a belongs to each quality indicator i (i.e., $P_{a,i}$) is calculated using formula 2, where $\|\cdot\|_2$ is the L2 norm, $q_a$ is the normalized (read count, dwell time) point of article a, and $I_i$ is the corner of Figure 1 corresponding to indicator i. The softmax function is used to convert the calculated similarities into probabilities:

$P_{a,i} = \frac{\exp(-\|q_a - I_i\|_2)}{\sum_{j=1}^{4} \exp(-\|q_a - I_j\|_2)}$ (2)
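A small sketch of this labeling step is shown below: each article's normalized (read count, dwell time) point is compared against the four indicator corners of Figure 1, and a softmax over the negative L2 distances yields the probabilities. The corner coordinates and the negative-distance similarity are illustrative choices consistent with the description above.

```python
# Sketch of formula 2: soft labels over the four quality indicators.
import numpy as np

INDICATORS = np.array([[0.0, 1.0],   # I1: low read count, high dwell time
                       [1.0, 1.0],   # I2: high read count, high dwell time
                       [1.0, 0.0],   # I3: high read count, low dwell time
                       [0.0, 0.0]])  # I4: low read count, low dwell time

def quality_distribution(read_count_norm: float, dwell_norm: float) -> np.ndarray:
    point = np.array([read_count_norm, dwell_norm])
    sims = -np.linalg.norm(point - INDICATORS, axis=1)  # negative L2 distance
    exp = np.exp(sims - sims.max())                     # numerically stable softmax
    return exp / exp.sum()                              # P_{a,1..4}, sums to 1

print(quality_distribution(0.9, 0.8))  # mass concentrated on indicator 2
```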
PREDICTING HEADLINE QUALITY
In this section, we propose a novel model to predict the quality of unpublished news headlines. To the best of our knowledge, we are the first to consider latent features of headlines, bodies, and the semantic relation between them to find the quality of news headlines.
Problem Definition
We consider the task of headline quality prediction as a multiclass classification problem. We assume our input contains a dataset $D = \{(h_n, b_n)\}_{n=1}^{N}$ of N news articles, where each news article contains a header and an article body, denoted by h and b, respectively. An approach for learning the quality of a headline is to define a conditional probability $P(I_j \mid h, b, \theta)$ for each quality indicator $I_j$ with respect to the header text $h = \{w_1, w_2, \ldots, w_k\}$ and the article text $b = \{w_1, w_2, \ldots, w_m\}$, parameterized by a model with parameters $\theta$. We then estimate our prediction for each news article in our database as:

$\hat{I} = \arg\max_{j \in \{1,2,3,4\}} P(I_j \mid h, b, \theta)$ (3)
Proposed Model
In this section, we propose a deep learning model to predict the quality of headlines before publication. The proposed model is implemented in Python, and the code will be made available on the authors' GitHub account after publication. The architecture of the proposed model is illustrated in Figure 2.
Embedding Layer
This layer, which is available in the Keras library 2, converts the one-hot encoding of each word in headlines and articles to dense word embedding vectors. The embedding vectors are initialized using GloVe embedding vectors (Pennington, Socher, & Manning, 2014). We find that 100-dimensional embedding vectors lead to the best result. Also, we use a dropout layer with a rate of 0.2 on top of the embedding layer.
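A minimal sketch of this layer in tf.keras is given below; the vocabulary, sequence length, and GloVe lookup are placeholders for illustration.

```python
# Sketch of the GloVe-initialized embedding layer followed by dropout (rate 0.2).
import numpy as np
from tensorflow.keras.layers import Input, Embedding, Dropout

EMB_DIM, MAX_LEN = 100, 50
word_index = {"news": 1, "headline": 2}        # hypothetical vocabulary
glove = {}                                     # word -> 100-d GloVe vector

emb_matrix = np.random.normal(size=(len(word_index) + 1, EMB_DIM))
for word, idx in word_index.items():
    if word in glove:
        emb_matrix[idx] = glove[word]          # copy pre-trained vectors

tokens = Input(shape=(MAX_LEN,), dtype="int32")
embedded = Embedding(input_dim=emb_matrix.shape[0], output_dim=EMB_DIM,
                     weights=[emb_matrix], trainable=True)(tokens)
embedded = Dropout(0.2)(embedded)              # dropout on the embedding output
```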
Similarity Matrix Layer
Because one of the main characteristics of high-quality headlines is that a headline should be related to the body of its article, the main goal of this layer is to find out how related each headline is to the article's body. Embedding vectors of the words of both the headline and the first paragraph of the article are the inputs to this layer. We use the first paragraph of the article since the first paragraph is used extensively for the news summarization task due to its importance in representing the whole news article (Lopyrev, 2015). In Figure 2, each cell $c_{i,j}$ represents the similarity between words $h_i$ and $b_j$ from the headline and its article, respectively, which is calculated using the cosine similarity between their embedding vectors in formula 4:

$c_{i,j} = \frac{\vec{h}_i \cdot \vec{b}_j}{\|\vec{h}_i\|\,\|\vec{b}_j\|}$ (4)
Using the cosine similarity function enables our model to capture the semantic relation between the embedding vectors of each word pair $h_i$ and $b_j$ from the headline and article, respectively. Also, the 2-d similarity matrix allows us to use a 2-d CNN, which has shown great performance for text classification by abstracting visual patterns from text data (Pang et al., 2016). In fact, matching the headline and article is viewed as an image recognition problem, and a 2-d CNN is used to solve it.
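The following sketch computes such a similarity matrix with NumPy for an embedded headline and an embedded first paragraph; the word counts are arbitrary.

```python
# Sketch of formula 4: cosine similarity between every headline word h_i and
# every first-paragraph word b_j, producing the 2-d "image" for the CNN.
import numpy as np

def similarity_matrix(H: np.ndarray, B: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    Hn = H / (np.linalg.norm(H, axis=1, keepdims=True) + eps)  # unit rows
    Bn = B / (np.linalg.norm(B, axis=1, keepdims=True) + eps)
    return Hn @ Bn.T                                           # shape (k, m)

H = np.random.randn(8, 100)    # 8 headline word embeddings
B = np.random.randn(40, 100)   # 40 first-paragraph word embeddings
C = similarity_matrix(H, B)    # c[i, j] in [-1, 1]
```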
Convolution and Max-Pooling Layers
Three Convolutional Network layers, each of which contains 2-d CNN and 2-d Max-Pooling layers, are used on top of the similarity matrix layer. The whole similarity matrix is scanned by the first 2-d CNN layer to generate the first feature map. Different levels of matching patterns are extracted from the similarity matrix in each Convolutional Network layer based on formula 5:

$F^{(l+1,k)}_{x,y} = f\Big( \sum_{i=0}^{s-1} \sum_{j=0}^{s-1} K^{(l+1,k)}_{i,j} \cdot F^{(l)}_{x+i,\, y+j} + b^{(l+1)} \Big)$ (5)

Here $F^{(l+1)}$ is the computed feature map at level l+1, $K^{(l+1,k)}$ is the k-th square kernel at level l+1, which scans the whole feature map $F^{(l)}$ from the previous layer, s is the size of the kernel, $b^{(l+1)}$ are the bias parameters at level l+1, and ReLU (Dahl, Sainath, & Hinton, 2013) is chosen as the activation function f. We then obtain feature maps by applying the dynamic pooling method (Socher, Huang, Pennington, Ng, & Manning, 2011). We use (5×5), (3×3), and (3×3) for the kernel sizes, 8, 16, and 32 for the number of filters, and (2×2) for the pool size in each Convolutional Network layer, respectively. The result of the final 2-d Max-Pooling layer is flattened into a 1-d vector and passed through a dropout layer with a rate of 0.2. In the end, the size of the output vector is reduced to 100 using a fully-connected layer.
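A sketch of this stack in tf.keras is shown below; the similarity-matrix size and the padding mode are assumptions, and plain max pooling stands in for dynamic pooling (which additionally handles variable-sized inputs).

```python
# Sketch of the three Convolutional Network layers over the similarity matrix,
# with the kernel sizes, filter counts and pool size given in the text.
from tensorflow.keras.layers import (Input, Conv2D, MaxPooling2D, Flatten,
                                     Dropout, Dense)

sim = Input(shape=(8, 40, 1))                        # similarity matrix + channel
x = Conv2D(8, (5, 5), activation="relu", padding="same")(sim)
x = MaxPooling2D((2, 2))(x)
x = Conv2D(16, (3, 3), activation="relu", padding="same")(x)
x = MaxPooling2D((2, 2))(x)
x = Conv2D(32, (3, 3), activation="relu", padding="same")(x)
x = MaxPooling2D((2, 2))(x)
x = Flatten()(x)                                     # 1-d vector
x = Dropout(0.2)(x)
sim_features = Dense(100, activation="relu")(x)      # 100-d similarity features
```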
BERT
Google's Bidirectional Encoder Representations from Transformers (BERT) (Devlin, Chang, Lee, & Toutanova, 2018) is employed to transform variable-length inputs, i.e., headlines and articles, into fixed-length vectors for the purpose of finding the latent features of both headlines and articles. BERT's goal is to produce a language model using the Transformer model. Details regarding how the Google Transformer works are provided in (Vaswani et al., 2017).
BERT is pre-trained on a huge dataset to learn general knowledge that can be used and combined with the knowledge acquired on a small dataset. We use the publicly available pre-trained BERT model (i.e., BERT-Base, Uncased) 3, published by Google. After encoding each headline into a fixed-length vector using BERT, a multi-layer perceptron is used to project each encoded headline into a 100-d vector. The same procedure is performed for the articles as well.
3 https://github.com/google-research/bert#pre-trained-models
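One way to obtain such fixed-length vectors, sketched below with the Hugging Face transformers library purely for illustration (the paper does not name its BERT tooling), is to mean-pool the final hidden states of BERT-Base uncased and project them with a linear layer.

```python
# Sketch: encode headlines with pre-trained BERT and project to 100-d vectors.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
project = torch.nn.Linear(768, 100)     # MLP projection to a 100-d vector

headlines = ["New bridge opens downtown", "Markets rally after rate cut"]
batch = tokenizer(headlines, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = bert(**batch).last_hidden_state        # (batch, seq_len, 768)
mask = batch["attention_mask"].unsqueeze(-1)        # ignore padding tokens
pooled = (hidden * mask).sum(1) / mask.sum(1)       # mean pooling per headline
headline_vecs = project(pooled)                     # (batch, 100)
```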
Topic Modelling
We use Non-Negative Matrix Factorization (NNMF) (J. Kim, He, & Park, 2014) and Latent Dirichlet Allocation (LDA) (Hoffman, Blei, & Bach, 2010) from the Scikit-learn library to find topics in both headlines and articles. Since headlines are significantly shorter than articles, we use separate topic models for headlines and articles. Even though both NNMF and LDA can be used for topic modelling, their approaches are totally different: the former is based on linear algebra while the latter relies on probabilistic graphical modelling. We find that NNMF extracts more meaningful topics than LDA on our news dataset. We create a matrix A in which each article is represented as a row and the columns are the TF-IDF values of the article's words. TF-IDF is an acronym for term frequency-inverse document frequency, a statistical measure of how important a word is to an article in a group of articles. The term frequency (TF) part calculates how frequently a word appears in an article divided by the total number of words in that article. The inverse document frequency (IDF) part weighs down frequent words while scaling up rare words in the entire corpus.
Then we use NNMF to factorize matrix A into two matrices, W and H, which are the document-to-topic matrix and the topic-to-word matrix, respectively. When these two matrices are multiplied, the result approximates the matrix A with the lowest error (formula 6):

$A_{n \times v} = W_{n \times t} \times H_{t \times v}$ (6)

In formula 6, n is the number of articles, v is the size of the vocabulary, and t is the number of topics ($t \ll v$), which we set to 50. As shown in Figure 2, we use the topics (i.e., the rows of matrix W) as input features to the Feedforward Neural Network (FFNN) part of our model.
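A minimal sketch of this factorization with scikit-learn follows; the toy corpus and the reduced topic count are for illustration only.

```python
# Sketch of formula 6: TF-IDF matrix A factorized with NMF into a
# document-to-topic matrix W and a topic-to-word matrix H.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

articles = ["church chapel monument exterior religion",
            "markets stocks rally economy trading"]     # toy corpus
A = TfidfVectorizer(stop_words="english").fit_transform(articles)

nmf = NMF(n_components=2, random_state=0)               # t = 50 in the paper
W = nmf.fit_transform(A)                                # document-to-topic (n x t)
H = nmf.components_                                     # topic-to-word   (t x v)
topic_features = W                                      # rows fed to the FFNN
```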
FFNN
As we can see in Figure 2, FFNN layers are used in different parts of our proposed model. The rectifier is used as the activation function of all layers except the last one. The activation function of the last layer is softmax, which calculates the probability of the input example belonging to each quality indicator. We find that using a batch normalization layer before the activation in every layer helps to reduce the loss of our model, since batch normalization normalizes the input to the activation function so that the data are centred in the linear part of the activation function.
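A sketch of one such FFNN block in tf.keras is shown below; the input and layer widths are assumptions, while the Dense, then BatchNormalization, then ReLU ordering and the 4-way softmax output follow the description above.

```python
# Sketch of the FFNN blocks with batch normalization placed before activation.
from tensorflow.keras.layers import Input, Dense, BatchNormalization, Activation
from tensorflow.keras.models import Model

def dense_bn_relu(x, units):
    x = Dense(units)(x)
    x = BatchNormalization()(x)   # normalize before the non-linearity
    return Activation("relu")(x)

features = Input(shape=(400,))    # concatenated latent features (assumed size)
h = dense_bn_relu(features, 200)
h = dense_bn_relu(h, 50)
probs = Dense(4, activation="softmax")(h)   # P over the 4 quality indicators

model = Model(features, probs)
model.compile(optimizer="adam", loss="mse")  # MSE against soft labels
```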
RESULTS
Baselines
For evaluation, we have compared our proposed model with the following baseline models.
EMB + 1-d CNN + FFNN
This embedding layer is similar to the embedding layer of the proposed model, converting the one-hot representation of words to dense 100-d vectors. A dropout layer with a rate of 0.2 is used on top of the embedding layer. Also, we use GloVe embedding vectors to initialize the word embedding vectors (Pennington et al., 2014). The next layer is a 1-d CNN, which works well for identifying patterns within single-spatial-dimension data such as text, time series, and signals. Many recent NLP models employ 1-d CNNs for text classification tasks (Yin, Kann, Yu, & Schütze, 2017). The architecture is comprised of two layers of convolution on top of the embedding layer. The last layer is a single-layer FFNN using softmax as its activation function.
Doc2Vec + FFNN
Doc2Vec (https://radimrehurek.com/gensim/models/doc2vec.html) is an implementation of the Paragraph Vector model, which was proposed in (Le & Mikolov, 2014). It is an unsupervised learning algorithm that can learn fixed-length vector representations for pieces of text of different lengths, such as paragraphs and documents. The goal is to learn the paragraph vectors by predicting the surrounding words in contexts sampled from the paragraph. It consists of two different models: the Paragraph Vector Distributed Memory model (PV-DM) and the Paragraph Vector Distributed Bag of Words model (PV-DBOW), which ignores word ordering. The former has much higher accuracy than the latter, but the combination of them yields the best result.
We convert headlines and bodies into two separate 100-d embedded vectors. These vectors are fed into an FFNN, which comprises two hidden layers of sizes 200 and 50, respectively. ReLU is used as the activation function of all FFNN layers except the last layer, which employs the softmax function.
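A sketch of producing these document vectors with gensim's Doc2Vec follows; the corpus and the hyperparameters other than the 100-d vector size are placeholders (combining PV-DM and PV-DBOW would mean training one model of each and concatenating the vectors).

```python
# Sketch of the Doc2Vec inputs for this baseline: 100-d document vectors.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

bodies = ["the market rallied today after the rate cut",
          "a new chapel opened near the old monument"]
tagged = [TaggedDocument(words=b.split(), tags=[i]) for i, b in enumerate(bodies)]

d2v = Doc2Vec(tagged, vector_size=100, dm=1, epochs=20, min_count=1)  # PV-DM
body_vec = d2v.infer_vector("rates were cut again".split())  # 100-d FFNN input
```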
EMB + BGRU + FFNN
This is a Bidirectional Gated Recurrent Unit on top of the Embedding layer. A GRU employs two gates to track the input sequence without using separate memory cells: the reset gate $r_t$ and the update gate $z_t$, respectively.

$r_t = \sigma(W_r x_t + U_r h_{t-1} + b_r)$ (7)
$z_t = \sigma(W_z x_t + U_z h_{t-1} + b_z)$ (8)
$\tilde{h}_t = \tanh(W_h x_t + r_t * (U_h h_{t-1}) + b_h)$ (9)
$h_t = (1 - z_t) * h_{t-1} + z_t * \tilde{h}_t$ (10)

In formulas 7 and 8, $W_r$, $U_r$, $b_r$, $W_z$, $U_z$, and $b_z$ are the parameters of the GRU that are trained during the training phase. Then, the candidate and new states are calculated at time t based on formulas 9 and 10, respectively.
In formulas 9 and 10, * denotes an element-wise multiplication between the reset gate and the past state, so it determines which part of the previous state should be forgotten. The update gate in formula 10 determines which information from the past should be kept and which should be updated. The forward pass reads the post text from $w_1$ to $w_T$ and the backward pass reads it from $w_T$ to $w_1$.

$\vec{h}_t = \overrightarrow{GRU}(x_t, \vec{h}_{t-1})$ (11)
$\overleftarrow{h}_t = \overleftarrow{GRU}(x_t, \overleftarrow{h}_{t+1})$ (12)
$h_t = [\vec{h}_t, \overleftarrow{h}_t]$ (13)
The input to the FFNN layer is the concatenation of the final outputs of the forward and backward passes.
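A compact tf.keras sketch of this baseline is given below; the vocabulary size, sequence length, and GRU width are assumptions.

```python
# Sketch of the EMB + BGRU + FFNN baseline (formulas 7-13): a bidirectional GRU
# whose final forward and backward states are concatenated and classified.
from tensorflow.keras.layers import Input, Embedding, Bidirectional, GRU, Dense
from tensorflow.keras.models import Model

tokens = Input(shape=(50,), dtype="int32")
x = Embedding(input_dim=20000, output_dim=100)(tokens)
x = Bidirectional(GRU(64), merge_mode="concat")(x)   # [h_fwd, h_bwd], 128-d
probs = Dense(4, activation="softmax")(x)

baseline = Model(tokens, probs)
baseline.compile(optimizer="adam", loss="mse")       # trained on soft labels
```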
EMB + BLSTM + FFNN
This is a Bidirectional Long Short-Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) on top of the Embedding layer. The Embedding and FFNN layers are similar to those of the previous baseline. The only difference is using LSTM instead of GRU.
Evaluation Metrics
Mean Absolute Error (MAE) and Relative Absolute Error (RAE) are used to compare the result of the proposed model with the results of the baseline models on the test dataset. As we can see in formula 14, RAE is relative to a simple predictor, which is just the average of the ground truth. The ground truth, the predicted values, and the average of the ground truth are denoted by $y$, $\hat{y}$, and $\bar{y}$, respectively.
$RAE = \frac{\sum_{n=1}^{N} \sum_{j=1}^{4} |y_{n,j} - \hat{y}_{n,j}|}{\sum_{n=1}^{N} \sum_{j=1}^{4} |y_{n,j} - \bar{y}_j|}$ (14)
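A small sketch of both metrics over the four indicator probabilities follows; the RAE values reported in Table 1 appear to correspond to this ratio expressed as a percentage.

```python
# Sketch of MAE and RAE (formula 14); the baseline predictor in the RAE
# denominator is the per-indicator mean of the ground truth.
import numpy as np

def mae(y: np.ndarray, y_hat: np.ndarray) -> float:
    return float(np.abs(y - y_hat).mean())

def rae(y: np.ndarray, y_hat: np.ndarray) -> float:
    y_bar = y.mean(axis=0, keepdims=True)            # average ground truth
    return float(np.abs(y - y_hat).sum() / np.abs(y - y_bar).sum())

y     = np.array([[0.7, 0.2, 0.05, 0.05], [0.1, 0.1, 0.1, 0.7]])
y_hat = np.array([[0.6, 0.3, 0.05, 0.05], [0.2, 0.1, 0.1, 0.6]])
print(mae(y, y_hat), 100 * rae(y, y_hat))            # RAE as a percentage
```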
Experimental Results
We train our proposed model with the same configuration two times: once using hard labels (i.e., assigning a label of 1 and three 0s to the quality indicators for each sample) and the other time using soft labels, which were calculated by formula 2. We use the categorical cross-entropy loss function for the former and the MSE loss function for the latter. We find that our proposed model is trained more efficiently using soft targets than using hard targets, as was shown in (Hinton, Vinyals, & Dean, 2015). The reason could be that soft targets provide more information per training example in comparison with hard targets, and much less variance in the gradient between training examples. For instance, for machine learning tasks such as MNIST (LeCun, Bottou, Bengio, & Haffner, 1998), one case of the image 1 may be given probabilities 10^-6 and 10^-9 of being a 7 and a 9, respectively, while for another case of the image 1 it may be the other way round. So, we decided to train our proposed model and all the baseline models using only soft targets.
The results of the proposed model and the baseline models on the test data are shown in Table 1. The loss function of the proposed model and all the baseline models is based on the MSE between the quality indicators predicted by the models and the ground truth calculated in section 3.2 (soft labels). We use the Adam optimizer for the proposed model and all our baseline models (Kingma & Ba, 2015). Also, we split our dataset into train, validation, and test sets using 70, 10, and 20 percent of the data, respectively.
Our proposed model achieves the best results, having the lowest RAE among all the models. Surprisingly, TF-IDF performs better than the other baseline models. This may be due to the fact that the number of articles in our dataset is not large (28751), so complex baseline models may overfit the training dataset.
Also, we are interested in finding the importance of the latent features regarding the semantic relation between headlines and articles. So, we removed the embedding, similarity matrix, and 2-d CNN layers from the proposed model. After making these changes, RAE increased by 6 percent in comparison with the original proposed model. This shows that measuring the similarity between an article's headline and body is beneficial for headline quality prediction.
CONCLUSION
In this research, we proposed a method for calculating the quality of published news headlines with regard to the four proposed quality indicators. Moreover, we proposed a novel model to predict the quality of headlines before their publication, using the latent features of headlines and articles, and their similarities. The experiment was conducted on a real dataset obtained from a major Canadian newspaper.
The results showed the proposed model outperformed all the baselines in terms of Mean Absolute Error (MAE) and Relative Absolute Error (RAE) measures. As headlines play an important role in catching the attention of readers, the proposed method is of great practical value for online news media.
Figure 1: Representing News Headlines' quality with respect to the four quality indicators.
Figure 2: The proposed model for predicting news headlines' quality according to the four quality indicators.
Table 1: Comparison between the proposed model and baseline models.

Models                                     | MAE   | RAE
EMB + 1-D CNN + FFNN                       | 0.044 | 105.08
Doc2Vec + FFNN                             | 0.043 | 101.61
EMB + BLSTM + FFNN                         | 0.041 | 97.92
EMB + BGRU + FFNN                          | 0.039 | 94.38
TF-IDF + FFNN                              | 0.038 | 89.28
Proposed Model without Similarity Matrix   | 0.036 | 86.1
Proposed Model                             | 0.034 | 80.56
ACKNOWLEDGEMENTS
Bielski, A., & Trzcinski, T. (2018). Understanding Multimodal Popularity Prediction of Social Media Videos With Self-Attention. IEEE Access, 6, 74277-74287. https://doi.org/10.1109/ACCESS.2018.2884831
Cer, D., Yang, Y., Kong, S., Hua, N., Limtiaco, N., John, R. S., et al. (2018). Universal sentence encoder. arXiv preprint arXiv:1803.11175.
Chakraborty, A., Paranjape, B., Kakarla, S., & Ganguly, N. (2016). Stop Clickbait: Detecting and preventing clickbaits in online news media. In 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM) (pp. 9-16). IEEE. https://doi.org/10.1109/ASONAM.2016.7752207
Chakraborty, A., Sarkar, R., Mrigen, A., & Ganguly, N. (2017). Tabloids in the Era of Social Media? Understanding the Production and Consumption of Clickbaits in Twitter. Retrieved from http://arxiv.org/abs/1709.02957
Dahl, G. E., Sainath, T. N., & Hinton, G. E. (2013). Improving deep neural networks for LVCSR using rectified linear units and dropout. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings (pp. 8609-8613). https://doi.org/10.1109/ICASSP.2013.6639346
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. Retrieved from http://arxiv.org/abs/1810.04805
Ecker, U. K. H., Lewandowsky, S., Chang, E. P., & Pillai, R. (2014). The effects of subtle misinformation in news headlines. Journal of Experimental Psychology: Applied. https://doi.org/10.1037/xap0000028
Fu, J., Liang, L., Zhou, X., & Zheng, J. (2017). A Convolutional Neural Network for Clickbait Detection. In 2017 4th International Conference on Information Science and Control Engineering (ICISCE) (pp. 6-10).
Glenski, M., Ayton, E., Arendt, D., & Volkova, S. (2017). Fishing for Clickbaits in Social Images and Texts with Linguistically-Infused Neural Network Models. In Clickbait Challenge 2017. Retrieved from http://arxiv.org/abs/1710.06390
Hinton, G., Vinyals, O., & Dean, J. (2015). Distilling the Knowledge in a Neural Network, 1-9. https://doi.org/10.1063/1.4931082
Hochreiter, S., & Schmidhuber, J. (1997). Long Short-Term Memory. Neural Computation, 9(8), 1735-1780. https://doi.org/10.1162/neco.1997.9.8.1735
Hoffman, M. D., Blei, D. M., & Bach, F. (2010). Online learning for Latent Dirichlet Allocation. In Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems 2010, NIPS 2010.
Kim, J. H., Mantrach, A., Jaimes, A., & Oh, A. (2016). How to Compete Online for News Audience, 1645-1654. https://doi.org/10.1145/2939672.2939873
Kim, J., He, Y., & Park, H. (2014). Algorithms for nonnegative matrix and tensor factorizations: A unified view based on block coordinate descent framework. Journal of Global Optimization, 58(2), 285-319. https://doi.org/10.1007/s10898-013-0035-4
Kingma, D. P., & Ba, J. L. (2015). Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings.
Kuiken, J., Schuth, A., Spitters, M., & Marx, M. (2017). Effective Headlines of Newspaper Articles in a Digital Environment. Digital Journalism, 5(10), 1300-1314. https://doi.org/10.1080/21670811.2017.1279978
Le, Q., & Mikolov, T. (2014). Distributed representations of sentences and documents. In Proceedings of the 31st International Conference on Machine Learning - Volume 32 (pp. II-1188). JMLR.org. Retrieved from https://dl.acm.org/citation.cfm?id=3045025
LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE. https://doi.org/10.1109/5.726791
Lopyrev, K. (2015). Generating News Headlines with Recurrent Neural Networks, 1-9.
Potthast, M., Gollub, T., Hagen, M., & Stein, B. (2017). The Clickbait Challenge 2017: Towards a Regression Model for Clickbait Strength. In Proceedings of the Clickbait Challenge.
Omidvar, A., Jiang, H., & An, A. (2018). Using Neural Network for Identifying Clickbaits in Online News Media. In Annual International Symposium on Information Management and Big Data (pp. 220-232).
Palau-Sampio, D. (2016). Reference press metamorphosis in the digital context: clickbait and tabloid strategies in elpais.com. Communication & Society, 29(2).
Pang, L., Lan, Y., Guo, J., Xu, J., Wan, S., & Cheng, X. (2016). Text Matching as Image Recognition, 2793-2799. https://doi.org/10.1007/s001700170197
Pennington, J., Socher, R., & Manning, C. D. (2014). GloVe: Global vectors for word representation. In EMNLP 2014 - 2014 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference. https://doi.org/10.3115/v1/d14-1162
Reis, J., Benevenuto, F., de Melo, P. O. S. V., Prates, R., Kwak, H., & An, J. (2015). Breaking the News: First Impressions Matter on Online News, 357-366. Retrieved from http://arxiv.org/abs/1503.07921
Rony, M. M. U., Hassan, N., & Yousuf, M. (2017). Diving deep into clickbaits: Who use them to what extents in which topics with what effects? In Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2017 (pp. 232-239).
Socher, R., Huang, E. H., Pennington, J., Ng, A. Y., & Manning, C. D. (2011). Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in Neural Information Processing Systems 24: 25th Annual Conference on Neural Information Processing Systems 2011, NIPS 2011.
Stokowiec, W., Trzciński, T., Wołk, K., Marasek, K., & Rokita, P. (2017). Shallow reading with deep learning: Predicting popularity of online content using only its title. In International Symposium on Methodologies for Intelligent Systems (pp. 136-145).
Helping News Editors Write Better Headlines: A Recommender to Improve the Keyword Contents & Shareability of News Headlines. T Szymanski, C Orellana-Rodriguez, M T Keane, Szymanski, T., Orellana-Rodriguez, C., & Keane, M. T. (2017). Helping News Editors Write Better Headlines: A Recommender to Improve the Keyword Contents & Shareability of News Headlines. Retrieved from http://arxiv.org/abs/1705.09656
. A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, A N Gomez, I Polosukhin, Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., … Polosukhin, I. (2017).
Attention is all you need. Advances in neural information processing systems. Attention is all you need. In Advances in neural information processing systems (pp. 5998-6008).
How Curiosity can be modeled for a Clickbait Detector. L Venneti, A Alam, Venneti, L., & Alam, A. (2018). How Curiosity can be modeled for a Clickbait Detector. Retrieved from http://arxiv.org/abs/1806.04212
Forecasting popularity of news article by title analyzing with BN-LSTM network. A Voronov, Y Shen, P K Mondal, 10.1145/3335656.3335679ACM International Conference Proceeding Series. Association for Computing MachineryVoronov, A., Shen, Y., & Mondal, P. K. (2019). Forecasting popularity of news article by title analyzing with BN-LSTM network. In ACM International Conference Proceeding Series (pp. 19-27). Association for Computing Machinery. https://doi.org/10.1145/3335656.3335679
Learning to identify ambiguous and misleading news headlines. W Wei, X Wan, IJCAI International Joint Conference on Artificial Intelligence. Wei, W., & Wan, X. (2017). Learning to identify ambiguous and misleading news headlines. IJCAI International Joint Conference on Artificial Intelligence, 4172-4178.
Comparative Study of CNN and RNN for Natural Language Processing. W Yin, K Kann, M Yu, H Schütze, Yin, W., Kann, K., Yu, M., & Schütze, H. (2017). Comparative Study of CNN and RNN for Natural Language Processing. Retrieved from http://arxiv.org/abs/1702.01923
Clickbait Detection in Tweets Using Selfattentive Network. Y Zhou, Clickbait Challenge. Zhou, Y. (2017). Clickbait Detection in Tweets Using Self- attentive Network. In Clickbait Challenge 2017. Retrieved from http://arxiv.org/abs/1710.05364
| [
"https://github.com/google-research/bert\\#pre-trained"
] |
[
"IMPROVING END-OF-TURN DETECTION IN SPOKEN DIALOGUES BY DETECTING SPEAKER INTENTIONS AS A SECONDARY TASK",
"IMPROVING END-OF-TURN DETECTION IN SPOKEN DIALOGUES BY DETECTING SPEAKER INTENTIONS AS A SECONDARY TASK"
] | [
"Zakaria Aldeneh aldeneh@umich.edu \nUniversity of Michigan at Ann Arbor\n\n",
"Dimitrios Dimitriadis \nMicrosoft\n",
"Emily Mower Provost emilykmp@umich.edu \nUniversity of Michigan at Ann Arbor\n\n"
] | [
"University of Michigan at Ann Arbor\n",
"Microsoft",
"University of Michigan at Ann Arbor\n"
] | [] | This work focuses on the use of acoustic cues for modeling turn-taking in dyadic spoken dialogues. Previous work has shown that speaker intentions (e.g., asking a question, uttering a backchannel, etc.) can influence turn-taking behavior and are good predictors of turn-transitions in spoken dialogues. However, speaker intentions are not readily available for use by automated systems at run-time, making it difficult to use this information to anticipate a turn-transition. To this end, we propose a multi-task neural approach for predicting turn-transitions and speaker intentions simultaneously. Our results show that adding the auxiliary task of speaker intention prediction improves the performance of turn-transition prediction in spoken dialogues, without relying on additional input features during run-time. | 10.1109/icassp.2018.8461997 | [
"https://arxiv.org/pdf/1805.06511v1.pdf"
] | 21,693,189 | 1805.06511 | 7768be012e2ada2ee6219c529cf242002559f994 |
IMPROVING END-OF-TURN DETECTION IN SPOKEN DIALOGUES BY DETECTING SPEAKER INTENTIONS AS A SECONDARY TASK
9 May 2018
Zakaria Aldeneh aldeneh@umich.edu
University of Michigan at Ann Arbor
Dimitrios Dimitriadis
Microsoft
Emily Mower Provost emilykmp@umich.edu
University of Michigan at Ann Arbor
IMPROVING END-OF-TURN DETECTION IN SPOKEN DIALOGUES BY DETECTING SPEAKER INTENTIONS AS A SECONDARY TASK
9 May 2018. Index Terms: multi-task learning, recurrent neural networks, LSTM, turn-taking, spoken dialogues, speaker intentions
This work focuses on the use of acoustic cues for modeling turn-taking in dyadic spoken dialogues. Previous work has shown that speaker intentions (e.g., asking a question, uttering a backchannel, etc.) can influence turn-taking behavior and are good predictors of turn-transitions in spoken dialogues. However, speaker intentions are not readily available for use by automated systems at run-time, making it difficult to use this information to anticipate a turn-transition. To this end, we propose a multi-task neural approach for predicting turn-transitions and speaker intentions simultaneously. Our results show that adding the auxiliary task of speaker intention prediction improves the performance of turn-transition prediction in spoken dialogues, without relying on additional input features during run-time.
INTRODUCTION
Dialogue agents must be able to engage in human-like conversations in order to make interactions with spoken dialogue systems more natural and less rigid. Turn-management is an essential component of conversations as it allows participants in a dialogue to exchange control of the floor. Studies have shown that conversation partners rely on both syntactic and prosodic cues to anticipate turn-transitions [1,2,3]. Syntactic cues include keywords and semantics of an uttered sentence. Prosodic cues include the final intonation of a clause, pitch level, and speaking rate. In this work, we assess the efficacy of using acoustic cues for anticipating turn-switches in dyadic spoken dialogues. Given a single utterance, our goal is to use acoustic cues to predict if there will be a switch in speakers for the upcoming utterance or not.
Modern spoken dialogue systems generally rely on simple thresholding approaches for modeling turn-taking [4,5,6]. However, turn-management is a complex phenomenon, in which participants in a conversation rely on multiple cues to anticipate turn changes or end-of-turns. We anticipate that interactions between humans and machines can be improved if dialogue systems can accurately anticipate turn-switches in spoken conversations.
* Work completed while at IBM T. J. Watson Research Center.
Turn-taking in conversations can take many forms. The two basic turn-taking functions are hold and switch. Given an utterance in a conversation, a hold indicates that the next utterance will be uttered by the same speaker while a switch indicates that the next utterance will be uttered by the other speaker in the conversation. Turn-switches can be further divided into smooth and overlapping switches [3]. Smooth switches occur when there is silence between two consecutive utterances from two speakers. Overlapping switches occur when a speaker starts uttering a sentence before the other speaker finishes uttering his/her sentence.
Previous works built models that used both acoustic and syntactic information to anticipate turn-changes to help make turn-management more natural in spoken dialogue systems [3,5]. Gravano and Hirschberg showed that rising contours of intonation correlate with turn-transitions while flat intonations correlate with turn-holds [3]. They also showed that certain keywords (e.g., "you know. . . ") and textual completion have good correlations with turn-management functions. In addition to the usefulness of syntactic and acoustic cues for modeling turn-taking, previous work showed that speaker intentions (e.g., ask a question, utter a backchannel, etc.) can be good predictors of turn-transitions in dialogues [7,8]. For example, a switch in speaker turns is more likely to occur after encountering a question than it is to occur after encountering a statement. Although speaker intentions (sometimes referred to as dialogue acts) are useful for predicting turn-transitions [7,8], they require human annotations and are not readily available during run-time.
We propose the use of a multi-task Long Short-Term Memory (LSTM) network that takes in a sequence of acoustic frames from a given utterance and predicts turn-transitions and speaker intentions simultaneously. During training time, the network is optimized with a joint loss function using ground-truth labels for turns and intentions. During test time, the network makes two predictions, one of which can be discarded or used by other modules in a spoken dialogue system. The advantage is that this allows the model to use representations that encode information about speaker intentions for anticipating turn changes. Our experiments demonstrate that adding the detection of speaker intentions as a secondary task improves the performance of anticipating turn-transitions.
RELATED WORK
The problem of modeling turn-taking in conversations has been extensively studied in the literature. In this section, we give an overview of related works that focused on speech or textual interactions (i.e., no visual cues). Our work complements previous work by showing that a model that uses acoustic cues for predicting turn-switches benefits from adding speaker intentions prediction as an auxiliary task using the multi-task learning framework.
One line of work looked at the use of acoustic and lexical features for modeling turn-taking behavior [4,5,9,10]. Liu et al. [5], Masumura et al. [9], and Ishimoto et al. [10] looked at the problem in Japanese conversations while Maier et al. [4] looked at the problem in German conversations. Masumura et al. proposed using stacked time-asynchronous sequential networks for detecting end-of-turns given sequences of asynchronous features (e.g., MFCCs and words) [9]. Ishimoto et al. investigated the dependency between syntactic and prosodic features and showed that combining the two features is useful for predicting end-of-turns [10]. Liu et al. built a Recurrent Neural Network (RNN) to classify a given utterance into four classes that relate to turn-taking behavior using joint acoustic and lexical embeddings [5]. Finally, Maier et al. built an LSTM with a threshold-based decoding and studied the trade-off between latency and cut-in rate for end-of-turn detection in simulated real-time dialogues [4]. The conclusion reached by this line of work was that end-of-turn detection models benefit from augmenting classifiers that use acoustic information with lexical information.
Another line of work focused solely on the acoustic modality, pointing out that using lexical features would (1) require access to a speech recognition pipeline and (2) bias the classifiers due to varying prompt types [11]. Arsikere et al. compared the effectiveness of acoustic features (e.g., pitch trends, spectral constancy, etc.) for predicting end-ofturns in two datasets that differed in prompt type (one is slow and deliberate, the other is fast and spontaneous) [11]. They found that the same acoustic cues were useful for detecting end-of-turns for both prompt types.
A final line of work used dialogue act information when modeling turn-taking behavior [7,8,12]. Guntakandla and Nielsen built a turn-taking model that relied on transcribed segments, intention labels, speaker information, and change in speaker information to predict turn-transitions in dialogues [8]. Meshorer and Heeman used current and past speaker intention labels along with two new features, relative turn length and relative floor control, summarizing past speaker behavior for predicting turn-switches in dialogues [7]. Finally, Heeman and Lunsford showed that turn-taking behavior not only depends on previous and upcoming speech act types, but also depends on the nature of a dialogue; suggesting that turn-taking events should be split into several groups depending on speech act types and the context of the dialogue [12].
The works of Meshorer and Heeman [7], Guntakandla and Nielsen [8], and Heeman and Lunsford [12] suggested that speaker intentions can be useful for predicting turn-transitions. However, speaker intentions are not readily obtainable from utterances and require manual human annotations. We are interested in studying how we can augment acoustic systems with speaker intention information, available during training time, to improve the performance of turn-transition prediction.
PROBLEM SETUP
We follow the work of Meshorer and Heeman [7] and represent a conversation between two speakers as a sequence of utterances, taking the following form:
$u_1, u_2, \ldots, u_N$
where each $u_i$ is an utterance in the conversation. The sequence of utterances is sorted in terms of start talk time. Let spkr(·) be a function that returns the speaker of a given utterance. Given $u_i$, the goal is to predict whether the following statement is true or false:
$\text{spkr}(u_i) = \text{spkr}(u_{i+1})$
If the statement is true, then a turn-switch will take place and the other speaker will speak next. If the statement is false, then the current speaker will continue speaking.
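As a minimal illustration of this labeling scheme (the utterance records below are invented for the example and are not from the corpus), the following sketch derives hold/switch labels from an ordered utterance list:

```python
# Minimal sketch: derive switch/hold labels from an ordered utterance list.
# Each utterance is a (speaker_id, text) pair, sorted by start talk time.
conversation = [("A", "how are you"), ("A", "i mean today"),
                ("B", "pretty good"), ("A", "glad to hear it")]

def turn_labels(utterances):
    """Label utterance i as 'switch' if the next utterance has a different
    speaker, else 'hold' (the last utterance gets no label)."""
    labels = []
    for (spkr, _), (next_spkr, _) in zip(utterances, utterances[1:]):
        labels.append("switch" if spkr != next_spkr else "hold")
    return labels

print(turn_labels(conversation))  # ['hold', 'switch', 'switch']
```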
Each utterance in the sequence represents a complete sentence, containing both acoustic and lexical cues, and varies in duration. We assume that we know the end-points of each utterance, as in [7,8,11]. Utterance end-points can be readily obtained from modern voice-activity detection algorithms or from an end-of-utterance detection system (e.g., [6]). We leave the problem of combining our multi-task model with end-of-utterance detection for future work and focus on the problem of predicting turn-switches from acoustic cues for a given utterance.
DATASET AND FEATURES
Dataset
We use the Switchboard corpus [13] to model turn-taking behavior in spoken dialogues. The corpus consists of dyadic telephony conversations between participants who were asked to discuss various topics. We use the annotations provided by the Switchboard Dialog Act Corpus 1 (SwDA), since we are interested in utilizing speaker intentions (i.e., dialogue act types). The goal of the SwDA corpus was to extend the original Switchboard corpus with dialogue act types that summarize turn information in the conversations.
However, the SwDA corpus does not map dialogue acts to timing information in the original media files of the Switchboard corpus. It only maps dialogue acts to lexical and turn information. We augment the SwDA corpus with the NXT Switchboard corpus [14] to get utterance timing information from the original media files. The aim of the NXT Switchboard corpus was to combine major annotations that were performed on the Switchboard corpus and make them accessible within one framework.
1 https://github.com/cgpotts/swda
Preparation. We first add binary turn labels (switch/hold) to each utterance in the dataset. We focus on 7 major dialogue acts, which we obtain by grouping different SwDA classes as shown in Table 1. The dialogue act groups that we use are a subset of those used in [7]. We filtered out utterances in the dataset that do not have corresponding audio segments (i.e., no timing information). We obtain the final utterances by trimming the audio of the appropriate speaker channel in the original media files in accordance with the timings provided in the NXT Switchboard corpus.
Analysis. The final dataset that we use contains a total of 86,687 utterances. Table 2 shows a summary of the content of the dataset in terms of turn labels and speaker intentions. As a first pass to understand the relationship between dialogue acts and turns, we run a Chi-square test of independence and find that there is a relationship between dialogue acts and turns, p < 0.001 (i.e., they are not independent). Note that this finding is suggestive rather than conclusive, mainly because our utterances are not independent (they can come from the same speaker). Nevertheless, this finding supports findings in the literature [3,12], which suggested that speaker intentions influence turn-taking behavior.
Features
We use the OpenSMILE toolkit [15] to extract the following features and their first (left) derivatives using a 25 ms Hamming window with a shift-rate of 10 ms: intensity, loudness, MFCC, RMS energy, zero-crossing-rate, and smoothed pitch. As a result, a given signal is represented as a sequence of 42-dimensional feature vectors. The choice of these features was inspired by their success in previous studies on modeling turn-taking using acoustic cues [4,11].
METHOD
We use a unidirectional LSTM network to model the sequence of acoustic features and make turn predictions. LSTMs are able to capture past signal behavior and have shown success in many audio processing applications, such as speech recognition and computational paralinguistics [6,16]. In addition to their ability to capture past signal behavior, LSTMs are able to capture information relating to timing and differentials (e.g., a rising slope), both of which are useful for modeling turn-taking [3]. Predicting turns can be formulated as a binary classification task where the goal is, given an utterance, to predict whether there will be a turn-switch or a turn-hold. We augment this task given the following problem setup: given an utterance, simultaneously predict turn-transitions and speaker intentions. The model is trained to minimize a joint loss function that takes the following form:
$L_{tot} = \lambda_1 L_{turn} + \lambda_2 L_{intent}$

where $L_{turn}$ is the loss function for turn predictions, $L_{intent}$ is the loss function for speaker intention predictions, $L_{tot}$ is the overall loss function, and $\lambda_1$ and $\lambda_2$ are weights assigned to control the influence of each loss function. In this work we set $\lambda_1$ to 1.0 and $\lambda_2$ to 0.5.
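A minimal PyTorch sketch of this joint objective follows; the shared LSTM encoder, layer sizes and the dummy batch are illustrative, and the class weighting of the negative log-likelihood used in the paper is omitted:

```python
import torch
import torch.nn as nn

class MultiTaskTurnModel(nn.Module):
    """Shared acoustic encoder with two heads: turn (binary) and intention (7-way)."""
    def __init__(self, feat_dim=42, hidden=64, n_intents=7):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.turn_head = nn.Linear(hidden, 2)       # switch vs. hold
        self.intent_head = nn.Linear(hidden, n_intents)

    def forward(self, x):                           # x: (batch, frames, feat_dim)
        _, (h, _) = self.lstm(x)                    # h: (1, batch, hidden)
        h = h.squeeze(0)
        return self.turn_head(h), self.intent_head(h)

model = MultiTaskTurnModel()
nll = nn.CrossEntropyLoss()                         # negative log-likelihood over logits
lam1, lam2 = 1.0, 0.5
x = torch.randn(8, 100, 42)                         # a dummy batch of feature sequences
turn_y = torch.randint(0, 2, (8,))
intent_y = torch.randint(0, 7, (8,))
turn_logits, intent_logits = model(x)
loss = lam1 * nll(turn_logits, turn_y) + lam2 * nll(intent_logits, intent_y)
loss.backward()
```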
Baselines. We re-implement the "full model" from [7] and compare its performance to the proposed approach. The full model uses a Random Forest classifier with the features described in Section 2. We also compare our proposed multi-task approach to a single-task LSTM that is trained to minimize $L_{turn}$ alone. We note that the single-task LSTM approach is similar to the one used in [4].

EXPERIMENTS
Setup
We evaluate performance using 5-fold cross-validation. We split on conversations, as opposed to utterances, to ensure that individual speakers do not appear in both the training and testing folds. For each testing fold, we randomly take out 33% of the training conversations and use them for validation and early stopping. For each conversation, we perform speaker-specific z-normalization on the features. We implement our models using the PyTorch library. 2 We optimize the weighted negative log-likelihood loss function and use the RMSProp optimizer to train our models. We use an initial learning rate of 0.001. At the end of each epoch, we compute the macro-F1 score on the validation set and reduce the learning rate by a factor of 2 if there was no improvement from the last epoch. We run for a maximum of 100 epochs and stop training if there was no improvement in validation F1 score for 5 consecutive epochs. We take a snapshot of the model after each epoch and select the one that gave the highest validation performance.
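The validation-driven schedule described above can be sketched as follows; train_one_epoch and validate_f1 are assumed, caller-supplied helpers, not functions from the paper's code:

```python
import copy

def train_with_schedule(model, optimizer, train_one_epoch, validate_f1,
                        max_epochs=100, patience_limit=5, init_lr=0.001):
    """LR halving, early stopping and best-model snapshotting driven by
    validation macro-F1, as described above."""
    lr, prev_f1, best_f1, best_state, stale = init_lr, -1.0, -1.0, None, 0
    for epoch in range(max_epochs):
        train_one_epoch(model, optimizer)
        f1 = validate_f1(model)                    # macro-F1 on the validation fold
        if f1 <= prev_f1:                          # no gain over the last epoch
            lr /= 2                                # -> halve the learning rate
            for group in optimizer.param_groups:
                group["lr"] = lr
        if f1 > best_f1:                           # snapshot the best model so far
            best_f1, best_state, stale = f1, copy.deepcopy(model.state_dict()), 0
        else:
            stale += 1
        if stale >= patience_limit:                # 5 epochs without a new best
            break
        prev_f1 = f1
    model.load_state_dict(best_state)
    return best_f1
```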
For each fold, we perform a grid search and pick the hyper-parameters that maximize validation performance. The main hyper-parameters of the model are: number of layers {1, 2} and layer width {32, 64, 128}. Once we have identified the optimal hyper-parameters for each fold, we train 3 models with different random seeds and report their ensemble performance to minimize variance due to random initialization. We report the average performance across the five folds.

Results and Discussion

Table 3 shows the results obtained from our experiments. The table shows that a single-task LSTM, which uses the input features described in Section 4.2, outperforms the full model in all evaluation metrics. We attribute these improvements to better feature representations and the better sequential modeling abilities of LSTMs. The table shows that a multi-task LSTM, which is trained using a joint loss function, provides consistent improvements over a single-task LSTM (significant improvements under a paired t-test, p < 0.05, in terms of recall and AUC). This suggests that a turn prediction model can benefit from representations extracted for detecting speaker intentions. Next, we study how well our model is able to identify turn-switches when the switches are smooth and when they are overlapping. Our model identifies turn-switches with a recall of 68.5% when the switches are overlapping and identifies turn-switches with a recall of 68.1% when the switches are smooth. Table 4 shows the performance of predicting turn-switches and turn-holds for each intention class, as well as the accuracy of detecting that intention class. The results show that the model is better able to predict turn-switches when presented with a backchannel or a question, and is better able to predict turn-holds when presented with a statement, opinion, or an answer. This suggests that the performance of the model depends on the context and nature of a dialogue, and that it is easier to anticipate turn-switches or turn-holds for some intentions and not for others. Table 4 also shows the performance of identifying speaker intentions by the auxiliary task. The table shows that it is easier to identify backchannels, questions, or turn-exit signals (abandon) than it is to identify agreement signals and statements. The auxiliary task obtains an unweighted average recall (UAR) of 45.6% on a 7-way classification task (where chance UAR is 14.3%).
CONCLUSION
In this work we showed that a model that uses acoustic features for modeling turn-taking in spoken dialogues could benefit from adding speaker intention detection as an auxiliary task. We also explored how the performance of our turn-taking model varies depending on speaker intentions. For future work, we plan to augment acoustic features with lexical or phonetic information. We also plan to investigate combining turn-taking with end-of-utterance detection. Finally, we plan to add our model to a live spoken dialogue system.
Table 1: Mapping dialogue act classes to intention classes.

SwDA classes      Intention class
sd, h, bf         statement
sv, ad, sv@       opinion
aa                agree
%, %-             abandon
b, bh             backchannel
qy, qo, qh        question
no, ny, ng, arp   answer
Table 2: Total number of utterances (and column percentages) that are followed by holds and switches for each speaker intention class.

Intention     Holds          Switches
statement     26,332 (52.2)  12,722 (35.1)
opinion       8,066 (16.0)   5,227 (14.4)
agree         3,997 (7.9)    1,417 (3.9)
abandon       3,887 (7.7)    3,203 (8.8)
backchannel   6,225 (12.3)   10,678 (29.5)
question      752 (1.5)      2,369 (6.5)
answer        1,197 (2.4)    615 (1.7)
total         50,456         36,231
Table 3: Performance comparison of different methods. Results shown are macro-averages across turn-switches and turn-holds. * indicates p < 0.05 under a paired t-test with LSTM.

Method          Rec.   Prec.  F1     AUC
Random          50.0   49.6   41.7   45.3
Full model [7]  55.8   56.7   55.4   57.8
LSTM            65.9   65.6   65.5   71.9
MT-LSTM         66.4*  66.0   65.8   72.6*
Table 4: Detecting turn-switches and turn-holds for each speaker intention class, as well as the per-class accuracy for detecting intentions by the auxiliary task.

              Turn-Transitions           Intentions
Intention     F1 (switch)  F1 (hold)     per-class Acc.
statement     51.4         73.2          39.6
opinion       54.5         70.3          43.1
agree         49.7         66.5          30.4
abandon       67.2         68.3          56.8
backchannel   79.1         51.6          49.6
question      72.2         47.6          50.3
answer        61.1         71.7          43.9
2 https://github.com/pytorch/pytorch
Acknowledgment. This work was supported by IBM under the Sapphire project.
REFERENCES

[1] Stephen C. Levinson, "Turn-taking in human communication-origins and implications for language processing," Trends in Cognitive Sciences, vol. 20, no. 1, pp. 6-14, 2016.
[2] Simon Garrod and Martin J. Pickering, "The use of content and timing to predict turn transitions," Frontiers in Psychology, vol. 6, p. 751, 2015.
[3] Agustín Gravano and Julia Hirschberg, "Turn-taking cues in task-oriented dialogue," Computer Speech & Language, vol. 25, no. 3, pp. 601-634, 2011.
[4] Angelika Maier, Julian Hough, and David Schlangen, "Towards deep end-of-turn prediction for situated spoken dialogue systems," in Interspeech, 2017.
[5] Chaoran Liu, Carlos Ishi, and Hiroshi Ishiguro, "Turn-taking estimation model based on joint embedding of lexical and prosodic contents," in Interspeech, 2017, pp. 1686-1690.
[6] Carolina Parada, Gabor Simko, Matt Shannon, and Shuo-yiin Chang, "Improved end-of-query detection for streaming speech recognition," 2017, pp. 2900-2904.
[7] Tomer Meshorer and Peter A. Heeman, "Using past speaker behavior to better predict turn transitions," in Interspeech, 2016, pp. 2900-2904.
[8] Nishitha Guntakandla and Rodney Nielsen, "Modelling turn-taking in human conversations," in AAAI Spring Symposium on Turn-Taking and Coordination in Human-Machine Interaction, Stanford, CA, 2015.
[9] Ryo Masumura, Taichi Asami, Hirokazu Masataki, Ryo Ishii, and Ryuichiro Higashinaka, "Online end-of-turn detection from speech based on stacked time-asynchronous sequential networks," in Interspeech, 2017, pp. 1661-1665.
[10] Yuichi Ishimoto, Takehiro Teraoka, and Mika Enomoto, "End-of-utterance prediction by prosodic features and phrase-dependency structure in spontaneous Japanese speech," in Interspeech, 2017, pp. 1681-1685.
[11] Harish Arsikere, Elizabeth Shriberg, and Umut Ozertem, "Enhanced end-of-turn detection for speech to a personal assistant," in AAAI Spring Symposium on Turn-taking and Coordination in Human-Machine Interaction, 2015.
[12] Peter A. Heeman and Rebecca Lunsford, "Turn-taking offsets and dialogue context," in Interspeech, 2017, pp. 1671-1675.
[13] John J. Godfrey, Edward C. Holliman, and Jane McDaniel, "Switchboard: Telephone speech corpus for research and development," in ICASSP-92, IEEE, 1992, vol. 1, pp. 517-520.
[14] Sasha Calhoun, Jean Carletta, Jason M. Brenier, Neil Mayo, Dan Jurafsky, Mark Steedman, and David Beaver, "The NXT-format Switchboard corpus: a rich resource for investigating the syntax, semantics, pragmatics and prosody of dialogue," Language Resources and Evaluation, vol. 44, no. 4, pp. 387-419, 2010.
[15] Florian Eyben, Martin Wöllmer, and Björn Schuller, "openSMILE: the Munich versatile and fast open-source audio feature extractor," in Proceedings of the 18th ACM International Conference on Multimedia, 2010, pp. 1459-1462.
[16] Raymond Brueckner, Maximilian Schmitt, Maja Pantic, and Björn Schuller, "Spotting social signals in conversational speech over IP: A deep learning perspective," in Interspeech, 2017, pp. 2371-2375.
| [
"https://github.com/cgpotts/swda",
"https://github.com/pytorch/pytorch"
] |
[
"Multi-level Representations for Fine-Grained Typing of Knowledge Base Entities",
"Multi-level Representations for Fine-Grained Typing of Knowledge Base Entities"
] | [
"Yadollah Yaghoobzadeh yadollah@cis.lmu.de \nCenter for Information and Language Processing LMU\nMunichGermany\n",
"Hinrich Schütze \nCenter for Information and Language Processing LMU\nMunichGermany\n"
] | [
"Center for Information and Language Processing LMU\nMunichGermany",
"Center for Information and Language Processing LMU\nMunichGermany"
] | [
"the Association for Computational Linguistics"
] | Entities are essential elements of natural language. In this paper, we present methods for learning multi-level representations of entities on three complementary levels: character (character patterns in entity names extracted, e.g., by neural networks), word (embeddings of words in entity names) and entity (entity embeddings). We investigate state-of-the-art learning methods on each level and find large differences, e.g., for deep learning models, traditional ngram features and the subword model of fasttext (Bojanowski et al., 2016) on the character level; for word2vec (Mikolov et al., 2013) on the word level; and for the order-aware model wang2vec (Ling et al., 2015a) on the entity level. We confirm experimentally that each level of representation contributes complementary information and a joint representation of all three levels improves the existing embedding based baseline for fine-grained entity typing by a large margin. Additionally, we show that adding information from entity descriptions further improves multi-level representations of entities. | 10.18653/v1/e17-1055 | [
"https://www.aclweb.org/anthology/E17-1055.pdf"
] | 18,556,836 | 1701.02025 | bdeb6ff1a9607468af50609ccde1f55ce64b0ad4 |
Multi-level Representations for Fine-Grained Typing of Knowledge Base Entities
Association for Computational Linguistics. Copyright Association for Computational Linguistics. April 3-7, 2017.
Yadollah Yaghoobzadeh yadollah@cis.lmu.de
Center for Information and Language Processing LMU
Munich, Germany
Hinrich Schütze
Center for Information and Language Processing LMU
Munich, Germany
Multi-level Representations for Fine-Grained Typing of Knowledge Base Entities
the Association for Computational Linguistics
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, Volume 1, Valencia, Spain, April 3-7, 2017.
Entities are essential elements of natural language. In this paper, we present methods for learning multi-level representations of entities on three complementary levels: character (character patterns in entity names extracted, e.g., by neural networks), word (embeddings of words in entity names) and entity (entity embeddings). We investigate state-of-the-art learning methods on each level and find large differences, e.g., for deep learning models, traditional ngram features and the subword model of fasttext (Bojanowski et al., 2016) on the character level; for word2vec (Mikolov et al., 2013) on the word level; and for the order-aware model wang2vec (Ling et al., 2015a) on the entity level. We confirm experimentally that each level of representation contributes complementary information and a joint representation of all three levels improves the existing embedding based baseline for fine-grained entity typing by a large margin. Additionally, we show that adding information from entity descriptions further improves multi-level representations of entities.
Introduction
Knowledge about entities is essential for understanding human language. This knowledge can be attributional (e.g., canFly, isEdible), type-based (e.g., isFood, isPolitician, isDisease) or relational (e.g, marriedTo, bornIn). Knowledge bases (KBs) are designed to store this information in a structured way, so that it can be queried easily. Examples of such KBs are Freebase (Bollacker et al., 2008), Wikipedia, Google knowledge graph and YAGO (Suchanek et al., 2007). For automatic updating and completing the entity knowledge, text resources such as news, user forums, textbooks or any other data in the form of text are important sources. Therefore, information extraction methods have been introduced to extract knowledge about entities from text. In this paper, we focus on the extraction of entity types, i.e., assigning types to -or typing -entities. Type information can help extraction of relations by applying constraints on relation arguments.
We address a problem setting in which the following are given: a KB with a set of entities E, a set of types T and a membership function m : E × T → {0, 1} such that m(e, t) = 1 iff entity e has type t; and a large corpus C in which mentions of E are annotated. In this setting, we address the task of fine-grained entity typing: we want to learn a probability function S(e, t) for a pair of entity e and type t and based on S(e, t) infer whether m(e, t) = 1 holds, i.e., whether entity e is a member of type t.
We address this problem by learning a multi-level representation for an entity that contains the information necessary for typing it. One important source is the contexts in which the entity is used. We can take the standard method of learning embeddings for words and extend it to learning embeddings for entities. This requires the use of an entity linker and can be implemented by replacing all occurrences of the entity by a unique token. We refer to entity embeddings as entity-level representations. Previously, entity embeddings have been learned mostly using bag-of-word models like word2vec (e.g., by Wang et al. (2014) and Yaghoobzadeh and Schütze (2015)). We show below that order information is critical for high-quality entity embeddings.
Entity-level representations are often uninformative for rare entities, so that using only entity embeddings is likely to produce poor results. In this paper, we use entity names as a source of information that is complementary to entity embeddings. We define an entity name as a noun phrase that is used to refer to an entity. We learn character and word level representations of entity names.
For the character-level representation, we adopt different character-level neural network architectures. Our intuition is that there is sub/cross word information, e.g., orthographic patterns, that is helpful to get better entity representations, especially for rare entities. A simple example is that a three-token sequence containing an initial like "P." surrounded by two capitalized words ("Rolph P. Kugl") is likely to refer to a person. We compute the word-level representation as the sum of the embeddings of the words that make up the entity name. The sum of the embeddings accumulates evidence for a type/property over all constituents, e.g., a name containing "stadium", "lake" or "cemetery" is likely to refer to a location. In this paper, we compute our word level representation with two types of word embeddings: (i) using only contextual information of words in the corpus, e.g., by word2vec (Mikolov et al., 2013) and (ii) using subword as well as contextual information of words, e.g., by Facebook's recently released fasttext (Bojanowski et al., 2016).
In this paper, we integrate character-level and word-level with entity-level representations to improve the results of previous work on fine-grained typing of KB entities. We also show how descriptions of entities in a KB can be a complementary source of information to our multi-level representation to improve the results of entity typing, especially for rare entities.
Our main contributions in this paper are:
• We propose new methods for learning entity representations on three levels: characterlevel, word-level and entity-level.
• We show that these levels are complementary and a joint model that uses all three levels improves the state of the art on the task of finegrained entity typing by a large margin.
• We experimentally show that an order dependent embedding is more informative than its bag-of-word counterpart for entity representation.
We release our dataset and source codes: cistern.cis.lmu.de/figment2/.
Related Work
Entity representation. Two main sources of information used for learning entity representation are: (i) links and descriptions in KB, (ii) name and contexts in corpora. We focus on name and contexts in corpora, but we also include (Wikipedia) descriptions. We represent entities on three levels: entity, word and character. Our entity-level representation is similar to work on relation extraction (Wang et al., 2014;, entity linking (Yamada et al., 2016;Fang et al., 2016), and entity typing (Yaghoobzadeh and Schütze, 2015). Our word-level representation with distributional word embeddings is similarly used to represent entities for entity linking and relation extraction (Socher et al., 2013;Wang et al., 2014). Novel entity representation methods we introduce in this paper are representation based on fasttext (Bojanowski et al., 2016) subword embeddings, several character-level representations, "order-aware" entity-level embeddings and the combination of several different representations into one multi-level representation.
Character-subword level neural networks. Character-level convolutional neural networks (CNNs) are applied by dos Santos and Zadrozny (2014) to part of speech (POS) tagging, by dos Santos and Guimarães (2015), Ma and Hovy (2016), and Chiu and Nichols (2016) to named entity recognition (NER), by and to sentiment analysis and text categorization, and by Kim et al. (2016) to language modeling (LM). Characterlevel LSTM is applied by Ling et al. (2015b) to LM and POS tagging, by Lample et al. (2016) to NER, by Ballesteros et al. (2015) to parsing morphologically rich languages, and by Cao and Rei (2016) to learning word embeddings. Bojanowski et al. (2016) learn word embeddings by representing words with the average of their character ngrams (subwords) embeddings. Similarly, Chen et al. (2015) extends word2vec for Chinese with joint modeling with characters.
Fine-grained entity typing. Our task is to infer fine-grained types of KB entities. KB completion is an application of this task. Yaghoobzadeh and Schütze (2015)'s FIGMENT system addresses this task with only contextual information; they do not use character-level and word-level features of entity names. Neelakantan and Chang (2015) and Xie et al. (2016) also address a similar task, but they rely on entity descriptions in KBs, which in many settings are not available. The problem of fine-grained mention typing (FGMT) (Yosef et al., 2012; Ling and Weld, 2012; Yogatama et al., 2015; Del Corro et al., 2015; Shimaoka et al., 2016; Ren et al., 2016) is related to our task. FGMT classifies single mentions of named entities to their context dependent types whereas we attempt to identify all types of a KB entity from the aggregation of all its mentions. FGMT can still be evaluated in our task by aggregating the mention level decisions, but as we will show in our experiments for one system, i.e., FIGER (Ling and Weld, 2012), our entity embedding based models are better at entity typing.

Figure 1: Schematic diagram of our architecture for entity classification. "Entity Representation" ($\vec{v}(e)$) is the (one-level or multi-level) vector representation of the entity. The size of the output layer is |T|.
3 Fine-grained entity typing

Given (i) a KB with a set of entities E, (ii) a set of types T, and (iii) a large corpus C in which mentions of E are linked, we address the task of fine-grained entity typing (Yaghoobzadeh and Schütze, 2015): predict whether entity e is a member of type t or not. To do so, we use a set of training examples to learn P(t|e): the probability that entity e has type t. These probabilities can be used to assign new types to entities covered in the KB as well as to type unknown entities.
We learn P (t|e) with a general architecture; see Figure 1. The output layer has size |T |. Unit t of this layer outputs the probability for type t. "Entity Representation" ( v(e)) is the vector representation of entity e -we will describe in detail in the rest of this section what forms v(e) takes. We model P (t|e) as a multi-label classification, and train a multilayer perceptron (MLP) with one hidden layer:
$\big[P(t_1|e), \ldots, P(t_{|T|}|e)\big]^\top = \sigma\big(W^{out} f(W^{in} \vec{v}(e))\big) \quad (1)$

where $W^{in} \in \mathbb{R}^{h \times d}$ is the weight matrix from $\vec{v}(e) \in \mathbb{R}^{d}$ to the hidden layer of size h. f is the rectifier function. $W^{out} \in \mathbb{R}^{|T| \times h}$ is the weight matrix from the hidden layer to the output layer of size |T|. σ is the sigmoid function. Our objective is binary cross entropy summed over types:

$\sum_t -\big(m_t \log p_t + (1 - m_t) \log (1 - p_t)\big)$

where $m_t$ is the truth and $p_t$ the prediction.
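A minimal PyTorch sketch of Eq. (1) and this objective follows; layer sizes are illustrative, and 102 is the number of FIGER types used in our experiments:

```python
import torch
import torch.nn as nn

class EntityTyper(nn.Module):
    """MLP of Eq. (1): one rectifier hidden layer, |T| sigmoid outputs."""
    def __init__(self, d=200, h=128, n_types=102):
        super().__init__()
        self.w_in = nn.Linear(d, h)                 # W_in
        self.w_out = nn.Linear(h, n_types)          # W_out

    def forward(self, v_e):                         # v_e: (batch, d) entity vectors
        return torch.sigmoid(self.w_out(torch.relu(self.w_in(v_e))))

typer = EntityTyper()
loss_fn = nn.BCELoss(reduction="sum")               # binary cross entropy summed over types (and batch)
v_e = torch.randn(16, 200)                          # dummy entity representations
m = torch.randint(0, 2, (16, 102)).float()          # gold membership m(e, t)
loss = loss_fn(typer(v_e), m)
```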
The key difficulty when trying to compute P(t|e) is learning a good representation for entity e. We make use of the contexts and the name of e to represent its feature vector on the three levels of entity, word and character.
Entity-level representation
Distributional representations or embeddings are commonly used for words. The underlying hypothesis is that words with similar meanings tend to occur in similar contexts (Harris, 1954) and therefore cooccur with similar context words. We can extend the distributional hypothesis to entities (cf. Wang et al. (2014), Yaghoobzadeh and Schütze (2015)): entities with similar meanings tend to have similar contexts. Thus, we can learn a d dimensional embedding v(e) of entity e from a corpus in which all mentions of the entity have been replaced by a special identifier. We refer to these entity vectors as the entity level representation (ELR).
In previous work, order information of context words (relative position of words in the contexts) was generally ignored and objectives similar to the SkipGram (henceforth: SKIP) model were used to learn v(e). However, the bag-of-word context is difficult to distinguish for pairs of types like (restaurant,food) and (author,book). This suggests that using order aware embedding models is important for entities. Therefore, we apply Ling et al. (2015a)'s extended version of SKIP, Structured SKIP (SSKIP). It incorporates the order of context words into the objective. We compare it with SKIP embeddings in our experiments.
Word-level representation
Words inside entity names are important sources of information for typing entities. We define the word-level representation (WLR) as the average of the embeddings of the words that the entity name contains: $\vec{v}(e) = \frac{1}{n}\sum_{i=1}^{n} \vec{v}(w_i)$, where $\vec{v}(w_i)$ is the embedding of the $i$-th word of an entity name of length n. We opt for simple averaging since entity names often consist of a small number of words with clear semantics. Thus, averaging is a promising way of combining the information that each word contributes.
The word embedding itself can be learned from models with different granularity levels. Embedding models that consider words as atomic units in the corpus, e.g., SKIP and SSKIP, are word-level.
On the other hand, embedding models that represent words with their character ngrams, e.g., fasttext (Bojanowski et al., 2016), are subword-level. Based on this, we consider and evaluate word-level WLR (WWLR) and subword-level WLR (SWLR) in this paper.
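A sketch of the word-level representation, assuming a pretrained, dict-like embedding lookup emb; skipping out-of-vocabulary words here is a simplification (subword models such as fasttext would still produce a vector for them):

```python
import numpy as np

def wlr(entity_name, emb, dim=200):
    """Average the embeddings of the words in an entity name.
    `emb` is a dict-like word -> vector lookup."""
    vecs = [emb[w] for w in entity_name.split() if w in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

# e.g., wlr("Aunt Mary's Nectarine Compote", emb) accumulates FOOD-like
# evidence from "nectarine" and "compote".
```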
Character-level representation
For computing the character level representation (CLR), we design models that try to type an entity based on the sequence of characters of its name. Our hypothesis is that names of entities of a specific type often have similar character patterns. Entities of type ETHNICITY often end in "ish" and "ian", e.g., "Spanish" and "Russian". Entities of type MEDICINE often end in "en": "Lipofen", "acetaminophen". Also, some types tend to have specific cross-word shapes in their entities, e.g., PERSON names usually consist of two words, or MUSIC names are usually long, containing several words.
The first layer of the character-level models is a lookup table that maps each character to an embedding of size d c . These embeddings capture similarities between characters, e.g., similarity in type of phoneme encoded (consonant/vowel) or similarity in case (lower/upper). The output of the lookup layer for an entity name is a matrix C ∈ R l×dc where l is the maximum length of a name and all names are padded to length l. This length l includes special start/end characters that bracket the entity name.
We experiment with four architectures to produce character-level representations in this paper: FORWARD (direct forwarding of character embeddings), CNNs, LSTMs and BiLSTMs. The output of each architecture then takes the place of the entity representation v(e) in Figure 1.
FORWARD simply concatenates all rows of matrix C; thus, v(e) ∈ R dc * l .
The CNN uses k filters of different window widths w to narrowly convolve C. For each filter $H \in \mathbb{R}^{d_c \times w}$, the result of the convolution of H over matrix C is the feature map $f \in \mathbb{R}^{l-w+1}$:

$f[i] = \text{rectifier}(C_{[:,i:i+w-1]} \odot H + b)$

where rectifier is the activation function, b is the bias, $C_{[:,i:i+w-1]}$ are the columns i to i+w-1 of C, $1 \le w \le 10$ are the window widths we consider and $\odot$ is the sum of element-wise multiplication. Max pooling then gives us one feature for each filter. The concatenation of all these features is our representation: $\vec{v}(e) \in \mathbb{R}^{k}$. An example CNN architecture is shown in Figure 2.

Figure 2: An example CNN architecture. There are three filters of width 2 and four filters of width 4.
The input to the LSTM is the character sequence in matrix C, i.e., $x_1, \ldots, x_l \in \mathbb{R}^{d_c}$. It generates the state sequence $h_1, \ldots, h_{l+1}$, and the output is the last state $\vec{v}(e) \in \mathbb{R}^{d_h}$. 2 The BiLSTM consists of two LSTMs, one going forward, one going backward. The first state of the backward LSTM is initialized as $h_{l+1}$, the last state of the forward LSTM. The BiLSTM entity representation is the concatenation of the last states of the forward and backward LSTMs, i.e., $\vec{v}(e) \in \mathbb{R}^{2 d_h}$.
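A sketch of the character-level CNN described above; the embedding size, number of filters and the chosen widths are illustrative:

```python
import torch
import torch.nn as nn

class CharCNN(nn.Module):
    """Character embeddings, narrow convolutions of several widths, max pooling."""
    def __init__(self, n_chars=100, d_c=16, widths=(2, 3, 4, 5), n_filters=50):
        super().__init__()
        self.emb = nn.Embedding(n_chars, d_c)       # character lookup table
        self.convs = nn.ModuleList(
            nn.Conv1d(d_c, n_filters, w) for w in widths)

    def forward(self, char_ids):                    # (batch, l) padded char indices
        c = self.emb(char_ids).transpose(1, 2)      # (batch, d_c, l)
        # one max-pooled feature per filter, concatenated -> v(e)
        return torch.cat([conv(c).relu().max(dim=2).values
                          for conv in self.convs], dim=1)

v_e = CharCNN()(torch.randint(0, 100, (4, 30)))     # shape (4, 200)
```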
Multi-level representations
Our different levels of representations can give complementary information about entities.
Figure 3: Multi-level representation (character-level, word-level and entity-level representations combined into one entity representation).

WLR and CLR. Both WLR models, SWLR and WWLR, do not have access to the cross-word character ngrams of entity names while CLR models do. Also, CLR is task specific, trained on the entity typing dataset, while WLR is generic. On the other hand, WWLR and SWLR models have access to information that CLR ignores: the tokenization of entity names into words and embeddings of these words. It is clear that words are particularly important character sequences since they often correspond to linguistic units with clearly identifiable semantics - which is not true for most character sequences. For many entities, the words they contain are a better basis for typing than the character sequence. For example, even if "nectarine" and "compote" did not occur in any names in the training corpus, we can still learn good word embeddings from their non-entity occurrences. This then allows us to correctly type the entity "Aunt Mary's Nectarine Compote" as FOOD based on the sum of the word embeddings.
WLR/CLR and ELR. Representations from entity names, i.e., WLR and CLR, by themselves are limited because many classes of names can be used for different types of entities; e.g., person names do not contain hints as to whether they are referring to a politician or athlete. In contrast, the ELR embedding is based on an entity's contexts, which are often informative for each entity and can distinguish politicians from athletes. On the other hand, not all entities have sufficiently many informative contexts in the corpus. For these entities, their name can be a complementary source of information and character/word level representations can increase typing accuracy.
Thus, we introduce joint models that use combinations of the three levels. Each multi-level model concatenates several levels. We train the constituent embeddings as follows. WLR and ELR are computed as described above and are not changed during training. CLR -produced by one of the character-level networks described above -is initialized randomly and then tuned during training. Thus, it can focus on complementary information related to the task that is not already present in other levels. The schematic diagram of our multi-level representation is shown in Figure 3.
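A sketch of such a joint model: the precomputed WLR and ELR vectors are passed in as fixed inputs and concatenated with the output of the trainable character-level network (the CharCNN from the previous sketch is assumed, with a 200-dimensional output):

```python
import torch
import torch.nn as nn

class MultiLevelTyper(nn.Module):
    """Concatenate CLR (tuned), WLR and ELR (fixed) and classify types."""
    def __init__(self, char_cnn, clr_dim=200, wlr_dim=200, elr_dim=200,
                 h=128, n_types=102):
        super().__init__()
        self.char_cnn = char_cnn                    # trainable character model
        self.mlp = nn.Sequential(
            nn.Linear(clr_dim + wlr_dim + elr_dim, h), nn.ReLU(),
            nn.Linear(h, n_types), nn.Sigmoid())

    def forward(self, char_ids, wlr, elr):
        clr = self.char_cnn(char_ids)               # tuned during training
        # wlr/elr enter as fixed (precomputed) inputs
        return self.mlp(torch.cat([clr, wlr, elr], dim=1))
```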
Experimental setup and results
Setup
Entity datasets and corpus.
We address the task of fine-grained entity typing and use Yaghoobzadeh and Schütze (2015)'s FIGMENT dataset 3 for evaluation. The FIGMENT corpus is part of a version of ClueWeb in which Freebase entities are annotated using FACC1 (URL, 2016b;Gabrilovich et al., 2013). The FIGMENT entity datasets contain 200,000 Freebase entities that were mapped to 102 FIGER types (Ling and Weld, 2012). We use the same train (50%), dev (20%) and test (30%) partitions as Yaghoobzadeh and Schütze (2015) and extract the names from mentions of dataset entities in the corpus. We take the most frequent name for dev and test entities and three most frequent names for train (each one tagged with entity types).
Adding parent types to refine entity dataset. FIGMENT ignores that FIGER is a proper hierarchy of types; e.g., while HOSPITAL is a subtype of BUILDING according to FIGER, there are entities in FIGMENT that are hospitals, but not buildings. 4 Therefore, we modified the FIGMENT dataset by adding for each assigned type (e.g., HOSPITAL) its parents (e.g., BUILDING). This makes FIGMENT more consistent and eliminates spurious false negatives (BUILDING in the example).
We now describe our baselines: (i) BOW & NSL: hand-crafted features, (ii) FIGMENT (Yaghoobzadeh and Schütze, 2015) and (iii) adapted version of FIGER (Ling and Weld, 2012).
We implement the following two feature sets from the literature as a hand-crafted baseline for our character and word level models. (i) BOW: individual words of entity name (both as-is and lowercased); (ii) NSL (ngram-shape-length): shape and length of the entity name (cf. Ling and Weld (2012)), character n-grams, 1 ≤ n ≤ n max , n max = 5 (we also tried n max = 7, but results were worse on dev) and normalized character n-grams: lowercased, digits replaced by "7", punctuation replaced by ".". These features are represented as a sparse binary vector v(e) that is input to the architecture in Figure 1.
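A sketch of the BOW/NSL feature extraction; the feature-string prefixes and the exact length feature are assumptions, since the paper only specifies the feature types:

```python
import re

def bow_nsl_features(name, n_max=5):
    """Sparse binary features for an entity name: BOW words (as-is and
    lowercased) plus NSL character n-grams, shape and length."""
    feats = set()
    for w in name.split():                          # BOW: individual words
        feats.add(f"w:{w}")
        feats.add(f"wl:{w.lower()}")
    # normalized form: lowercased, digits -> "7", punctuation -> "."
    norm = re.sub(r"[0-9]", "7", re.sub(r"[^\w\s]", ".", name.lower()))
    for s, tag in ((name, "ng"), (norm, "nng")):    # raw + normalized n-grams
        for n in range(1, n_max + 1):
            feats.update(f"{tag}:{s[i:i + n]}" for i in range(len(s) - n + 1))
    shape = re.sub(r"[A-Z]+", "A",
                   re.sub(r"[a-z]+", "a", re.sub(r"[0-9]+", "7", name)))
    feats.add(f"shape:{shape}")                     # e.g., "Rolph P. Kugl" -> "Aa A. Aa"
    feats.add(f"len:{len(name)}")                   # length of the name
    return feats
```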
FIGMENT is the model for entity typing presented by Yaghoobzadeh and Schütze (2015). The authors only use entity-level representations for entities trained by SkipGram, so the FIG-MENT baseline corresponds to the entity-level result shown as ELR(SKIP) in the tables.
The third baseline is using an existing mention-level entity typing system, FIGER (Ling and Weld, 2012). FIGER uses a wide variety of features on different levels (including parsing-based features) from contexts of entity mentions as well as the mentions themselves and returns a score for each mention-type instance in the corpus. We provide the ClueWeb/FACC1 segmentation of entities, so FIGER does not need to recognize entities. 5 We use the trained model provided by the authors and normalize FIGER scores using softmax to make them comparable for aggregation. We experimented with different aggregation functions (including maximum and k-largest-scores for a type), but we use the average of scores since it gave us the best result on dev. We call this baseline AGG-FIGER.
Distributional embeddings. For WWLR and ELR, we use SkipGram model in word2vec and SSkip model in wang2vec (Ling et al., 2015a) to learn embeddings for words, entities and types. To obtain embeddings for all three in the same space, we process ClueWeb/FACC1 as follows. For each sentence s, we add three copies: s itself, a copy of s in which each entity is replaced with its Freebase identifier (MID) and a copy in which each entity (not test entities though) is replaced with an ID indicating its notable type. The resulting corpus contains around 4 billion tokens and 1.5 billion types.
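The corpus preparation can be sketched as follows; the span format and the type-ID prefix are assumptions, and test entities, which are left unreplaced in the type copy, are not handled here:

```python
def corpus_copies(tokens, mentions):
    """Build the three training copies of a sentence described above.
    `tokens` is the token list; `mentions` is a list of
    (start, end, freebase_mid, notable_type) spans (an assumed format)."""
    def with_entities_replaced(use_mid):
        out, i = [], 0
        for start, end, mid, ntype in sorted(mentions):
            out.extend(tokens[i:start])
            out.append(mid if use_mid else "TYPE/" + ntype)  # one ID per entity
            i = end
        return out + tokens[i:]
    # raw sentence, MID copy, notable-type copy
    return [tokens, with_entities_replaced(True), with_entities_replaced(False)]
```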
We run SKIP and SSKIP with the same setup (200 dimensions, 10 negative samples, window size 5, word frequency threshold of 100) 6 on this corpus to learn embeddings for words, entities and FIGER types. Having entities and types in the same vector space, we can add another feature vector v(e) ∈ R^{|T|} (referred to as TC below): for each entity, we compute the cosine similarity of its entity vector with all type vectors.
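Since entities and types live in one embedding space, TC is simply a vector of cosine similarities. A minimal sketch (array names are ours):

import numpy as np

def type_cosine_features(entity_vec, type_matrix):
    # entity_vec: (d,) embedding of the entity (its MID token).
    # type_matrix: (|T|, d) embeddings of all FIGER types.
    # Returns v(e) in R^{|T|}: cosine similarity of the entity to every type.
    e = entity_vec / (np.linalg.norm(entity_vec) + 1e-8)
    t = type_matrix / (np.linalg.norm(type_matrix, axis=1, keepdims=True) + 1e-8)
    return t @ e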
For SWLR, we use fasttext 7 to learn word embeddings.

Our hyperparameter values are given in Table 1. The values are optimized on dev. We use AdaGrad and minibatch training. For each experiment, we select the best model on dev.

5 Mention typing is separated from recognition in the FIGER model, so it can use our segmentation of entities.
We use these evaluation measures: (i) accuracy: an entity is correct if all its types and no incorrect types are assigned to it; (ii) micro average F1: F1 of all type-entity assignment decisions; (iii) entity macro average F1: F1 of types assigned to an entity, averaged over entities; (iv) type macro average F1: F1 of entities assigned to a type, averaged over types.
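For reference, a small sketch of the four measures over binary gold/prediction matrices (our own illustration):

import numpy as np

def evaluate(gold, pred):
    # gold, pred: binary matrices of shape (num_entities, num_types).
    def f1(g, p):
        tp = float(np.logical_and(g, p).sum())
        prec = tp / max(p.sum(), 1)
        rec = tp / max(g.sum(), 1)
        return 2 * prec * rec / max(prec + rec, 1e-8)

    accuracy = np.mean([(g == p).all() for g, p in zip(gold, pred)])
    micro_f1 = f1(gold, pred)                    # all assignment decisions pooled
    entity_macro_f1 = np.mean([f1(g, p) for g, p in zip(gold, pred)])
    type_macro_f1 = np.mean([f1(gold[:, t], pred[:, t])
                             for t in range(gold.shape[1])])
    return accuracy, micro_f1, entity_macro_f1, type_macro_f1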
The assignment decision is based on thresholding the probability function P(t|e). For each model and type, we select the threshold that maximizes F1 of entities assigned to the type on dev.

Results

Table 2 gives results on the test entities for all (about 60,000 entities), head (frequency > 100; about 12,200) and tail (frequency < 5; about 10,000) entities. MFT (line 1) is the most frequent type baseline that ranks types according to their frequency in the train entities. Each level of representation is separated with dashed lines, and, unless noted otherwise, the best of each level is joined in the multi-level representations. 8

Character-level models are on lines 2-6. The order of systems is: CNN > NSL > BiLSTM > LSTM > FORWARD. The results show that complex neural networks are more effective than simple forwarding. BiLSTM works better than LSTM, confirming other related work. CNNs probably work better than LSTMs because there are few complex non-local dependencies in the sequence, but many important local features. CNNs with max pooling can more straightforwardly capture local and position-independent features. CNN also beats the NSL baseline; a possible reason is that CNN, an automatic method of feature learning, is more robust than the hand-engineered, feature-based NSL. We show more detailed results in Section 4.3.

8 For the accuracy measure: in the following ordered lists of sets, A < B means that all members (row numbers in Table 2) of A are significantly worse than all members of B: {1} < {2} < {3, ..., 11} < {12, 13} < {14, 15, 16} < {17, ..., 23}. Test of equal proportions, α < 0.05. See Table 6 in the appendix for more details.
Word-level models are on lines 7-10. BOW performs worse than WWLR because it cannot deal well with sparseness. SSKIP uses word order information in WWLR and performs better than SKIP. SWLR uses subword information and performs better than WWLR, especially for tail entities. Integrating subword information improves the quality of embeddings for rare words and mitigates the problem of unknown words.
Joint word-character level models are on lines 11-13. WWLR+CLR(CNN) and SWLR+CLR(CNN) beat their component models. This confirms our underlying assumption in designing the complementary multi-level models. BOW's problem with rare words does not allow its joint model with NSL to work better than NSL. WWLR+CLR(CNN) works better than BOW+CLR(NSL) by 10% micro F1, again due to the limits of BOW compared to WWLR. Interestingly, WWLR+CLR works better than SWLR+CLR; this suggests that WWLR is indeed richer than SWLR once CLR mitigates its problem with rare/unknown words.

Entity-level models are on lines 14-15, and they are better than all previous models on lines 1-13. This shows the power of entity-level embeddings. In Figure 4, a t-SNE (Van der Maaten and Hinton, 2008) visualization of ELR(SKIP) embeddings using different colors for entity types shows that entities of the same type are clustered together. SSKIP works marginally better than SKIP for ELR, especially for tail entities, confirming our hypothesis that order information is important for a good distributional entity representation. This also confirms the results of Yaghoobzadeh and Schütze (2016), who likewise get better entity typing results with SSKIP than with SKIP; they propose to use entity typing as an extrinsic evaluation for embedding models.
Joint entity, word, and character level models are on lines 16-23. The AGG-FIGER baseline works better than the systems on lines 1-13, but worse than the ELRs. This is probably due to the fact that AGG-FIGER is optimized for mention typing and is trained using the distant supervision assumption. Parallel to our work, Yaghoobzadeh et al. (2017) optimize a mention typing model for our entity typing task by introducing multi-instance learning algorithms, resulting in performance comparable to ELR(SKIP). We will investigate their method in future work.
Joining CLR with ELR (line 17) results in large improvements, especially for tail entities (5% micro F1). This demonstrates that for rare entities, contextual information is often not sufficient for an informative representation; hence name features are important. This is also true for the joint models of WWLR/SWLR and ELR (lines 18-19). Joining WWLR works better than CLR, and SWLR is slightly better than WWLR. Joint models of WWLR/SWLR with ELR+CLR give further improvements, and SWLR is again slightly better than WWLR. ELR+WWLR+CLR and ELR+SWLR+CLR are better than their two-level counterparts, again confirming that these levels are complementary.
We get a further boost, especially for tail entities, by also including TC (type cosine) in the combinations (lines 22-23). This demonstrates the potential advantage of having a common representation space for entities and types. Our best model, ELR+SWLR+CLR+TC (line 22), which we refer to as MuLR in the other tables, beats our initial baselines (ELR and AGG-FIGER) by large margins; e.g., for tail entities the improvements are more than 8% in micro F1.

Table 3 shows type macro F1 for MuLR (ELR+SWLR+CLR+TC) and two baselines. There are 11 head types (those with ≥ 3000 train entities) and 36 tail types (those with < 200 train entities). These results again confirm the superiority of our multi-level models over the baselines: AGG-FIGER and ELR, the best single-level model baseline.
Analysis
Unknown vs. known entities. To analyze the complementarity of character and word level representations, as well as to compare our models and the baselines in a more fine-grained way, we divide test entities into known entities (at least one word of the entity's name appears in a train entity) and unknown entities (the complement). There are 45,000 (resp. 15,000) known (resp. unknown) test entities. Table 4 shows that the CNN works only slightly better (by 0.3%) than NSL on known entities, but works much better on unknown entities (by 3.3%), justifying our preference for deep learning CLR models. As expected, BOW works relatively well for known entities and very poorly for unknown entities. SWLR beats the CLR models as well as BOW. The reason is that in our setup, word embeddings are induced on the entire corpus using an unsupervised algorithm. Thus, even for many words that did not occur in train, SWLR has access to informative representations of words. The joint model, SWLR+CLR(CNN), is significantly better than BOW+CLR(NSL), again due to the limits of BOW. SWLR+CLR(CNN) is better than SWLR on unknown entities.
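The known/unknown split itself is straightforward to reproduce; a sketch under the definition above (names are ours):

def split_known_unknown(test_names, train_names):
    # An entity is "known" if at least one word of its name occurs
    # in the name of some train entity; otherwise it is "unknown".
    train_words = {w for name in train_names for w in name.split()}
    known, unknown = [], []
    for name in test_names:
        bucket = known if any(w in train_words for w in name.split()) else unknown
        bucket.append(name)
    return known, unknown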
Case study of LIVING-THING. To understand the interplay of different levels better, we perform a case study of the type LIVING-THING. Living beings that are not humans belong to this type.
WLRs incorrectly assign "Walter Leaf" (PERSON) and "Along Came A Spider" (MUSIC) to LIVING-THING because these names contain a word referring to a LIVING-THING ("leaf", "spider"), but the entity itself is not a LIVING-THING. In these cases, the averaging of embeddings that WLR performs is misleading. The CLR(CNN) types these two entities correctly because their names contain character ngram/shape patterns that are indicative of PERSON and MUSIC. ELR incorrectly assigns "Zumpango" (CITY) and "Lake Kasumigaura" (LOCATION) to LIVING-THING because these entities are rare and words associated with living things (e.g., "wildlife") dominate in their contexts. However, CLR(CNN) and WLR enable the joint model to type the two entities correctly: "Zumpango" because of the informative suffix "-go" and "Lake Kasumigaura" because of the informative word "Lake".
While some of the remaining errors of our best system MuLR are due to the inherent difficulty of entity typing (e.g., it is difficult to correctly type a one-word entity that occurs once and whose name is not informative), many other errors are due to artifacts of our setup. First, ClueWeb/FACC1 is the result of an automatic entity linking system and any entity linking errors propagate to our models. Second, due to the incompleteness of Freebase (Yaghoobzadeh and Schütze, 2015), many entities in the FIGMENT dataset are incompletely annotated, resulting in correctly typed entities being evaluated as incorrect.
Adding another source: description-based embeddings. While in this paper we focus on the contexts and names of entities, there is another textual source of information about entities in KBs that we can also make use of: descriptions of entities. We extract Wikipedia descriptions of FIGMENT entities, filtering out the entities (∼40,000 out of ∼200,000) without a description.
We then build a simple entity representation by averaging the embeddings of the top k words (wrt tf-idf) of the description (henceforth, AVG-DES). 9 This representation is used as input in Figure 1 to train the MLP. We also train our best multi-level model as well as the joint model of the two on this smaller dataset. Since the descriptions come from Wikipedia, we use 300-dimensional GloVe (URL, 2016a) embeddings pretrained on Wikipedia+Gigaword to get more coverage of words. For MuLR, we still use the embeddings we trained before.

9 k = 20 gives the best results on dev.
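A sketch of how AVG-DES could be computed with off-the-shelf tooling (the use of scikit-learn and the variable names are our assumptions; the paper only specifies top-k tf-idf words and embedding averaging):

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def avg_des_vectors(descriptions, glove, k=20, dim=300):
    # Average the GloVe vectors of the top-k tf-idf words of each description.
    tfidf = TfidfVectorizer()
    X = tfidf.fit_transform(descriptions)
    vocab = np.array(tfidf.get_feature_names_out())
    reps = []
    for row in X:
        weights = row.toarray().ravel()
        top_words = vocab[np.argsort(-weights)[:k]]
        vecs = [glove[w] for w in top_words if w in glove]
        reps.append(np.mean(vecs, axis=0) if vecs else np.zeros(dim))
    return np.vstack(reps)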
Results are shown in Table 5. While for head entities MuLR works marginally better, the difference is very small for tail entities. The joint model of the two (by concatenation of vectors) improves the micro F1, with a clear boost for tail entities. This suggests that for tail entities, the contextual and name information is not enough by itself, and some keywords from descriptions can be really helpful. Integrating more complex description-based embeddings, e.g., by using a CNN (Xie et al., 2016), may improve the results further. We leave this for future work.
Conclusion
In this paper, we have introduced representations of entities on different levels: character, word and entity. The character-level representation is learned from the entity name. The word-level representation is computed from the embeddings of the words w_i in the entity name, where the embedding of w_i is derived from the corpus contexts of w_i. The entity-level representation of entity e_i is derived from the corpus contexts of e_i. Our experiments show that each of these levels contributes complementary information for the task of fine-grained typing of entities. The joint model of all three levels beats the state-of-the-art baseline by large margins. We further showed that extracting some keywords from Wikipedia descriptions of entities, when available, can considerably improve entity representations, especially for rare entities. We believe that our findings can be transferred to other tasks where entity representation matters.

Table 6: Significance-test results for the accuracy measure for all, head and tail entities. If the result for the model in a row is significantly larger than the result for the model in a column, then the value in the corresponding (row, column) cell is * and otherwise 0.
Figure 2: Example architecture for the character-level CNN with max pooling. The input is "Lipofen". Character embedding size is three.

Figure 4: t-SNE result of entity-level representations.
Table 1: Hyperparameters of different models. w is the filter size, n is the number of CNN feature maps for each filter size, d_c is the character embedding size, d_h is the LSTM hidden state size, and h_mlp is the number of hidden units in the MLP.
Table 2: Accuracy (acc), micro (mic) and macro (mac) F1 on test for all, head and tail entities.

types:      all   head  tail
AGG-FIGER  .566  .702  .438
ELR        .621  .784  .480
MuLR       .669  .811  .541

Table 3: Type macro average F1 on test for all, head and tail types. MuLR = ELR+SWLR+CLR+TC.
                 all   known? yes  known? no
CLR(NSL)        .484   .521        .341
CLR(CNN)        .494   .524        .374
BOW             .346   .435        .065
SWLR            .590   .612        .499
BOW+NSL         .497   .535        .358
SWLR+CLR(CNN)   .594   .616        .508

Table 4: Micro F1 on test of character and word level models for all, known ("known? yes") and unknown ("known? no") entities.
entities:   all   head  tail
AVG-DES    .773  .791  .745
MuLR       .825  .846  .757

Table 5: Micro average F1 results of MuLR and the description-based model and their joint model.
Subword models have properties of both character-level models (subwords are character ngrams) and of word-level models (they do not cross boundaries between words). They probably could be put in either category, but in our context fit the word-level category better because we see the granularity level with respect to the entities and not words.
We use Blocks (van Merriënboer et al., 2015).
3 cistern.cis.lmu.de/figment/
4 See github.com/xiaoling/figer for FIGER.
Acknowledgments. This work was supported by DFG (SCHU 2246/8-2).
References

Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2015. Improved transition-based parsing by modeling characters instead of words with LSTMs. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 349-359, Lisbon, Portugal, September. Association for Computational Linguistics.

Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. CoRR, abs/1607.04606.

Kurt D. Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the ACM SIGMOD International Conference on Management of Data, SIGMOD 2008, Vancouver, BC, Canada, June 10-12, 2008, pages 1247-1250.

Kris Cao and Marek Rei. 2016. A joint model for word embedding and word morphology. In Proceedings of the 1st Workshop on Representation Learning for NLP, pages 18-26, Berlin, Germany, August. Association for Computational Linguistics.

Xinxiong Chen, Lei Xu, Zhiyuan Liu, Maosong Sun, and Huan-Bo Luan. 2015. Joint learning of character and word embeddings. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015, Buenos Aires, Argentina, July 25-31, 2015, pages 1236-1242.

Jason Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. Transactions of the Association for Computational Linguistics, 4:357-370.

Luciano Del Corro, Abdalghani Abujabal, Rainer Gemulla, and Gerhard Weikum. 2015. FINET: Context-aware fine-grained named entity typing. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 868-878, Lisbon, Portugal, September. Association for Computational Linguistics.

Cícero Nogueira dos Santos and Victor Guimarães. 2015. Boosting named entity recognition with neural character embeddings. CoRR, abs/1505.05008.

Cícero Nogueira dos Santos and Bianca Zadrozny. 2014. Learning character-level representations for part-of-speech tagging. In Proceedings of the 31th International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014, pages 1818-1826.

Wei Fang, Jianwen Zhang, Dilin Wang, Zheng Chen, and Ming Li. 2016. Entity disambiguation by knowledge and text jointly embedding. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 260-269, Berlin, Germany, August. Association for Computational Linguistics.

Evgeniy Gabrilovich, Michael Ringgaard, and Amarnag Subramanya. 2013. FACC1: Freebase annotation of ClueWeb corpora.

Zellig S. Harris. 1954. Distributional structure. Word, 10:146-162.

Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. 2016. Character-aware neural language models. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, pages 2741-2749.

Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260-270, San Diego, California, June. Association for Computational Linguistics.

Xiao Ling and Daniel S. Weld. 2012. Fine-grained entity recognition. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, July 22-26, 2012, Toronto, Ontario, Canada.

Wang Ling, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015a. Two/too simple adaptations of word2vec for syntax problems. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1299-1304, Denver, Colorado, May-June. Association for Computational Linguistics.

Wang Ling, Chris Dyer, Alan W Black, Isabel Trancoso, Ramon Fermandez, Silvio Amir, Luis Marujo, and Tiago Luis. 2015b. Finding function in form: Compositional character models for open vocabulary word representation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1520-1530, Lisbon, Portugal, September. Association for Computational Linguistics.

Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064-1074, Berlin, Germany, August. Association for Computational Linguistics.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781.

Arvind Neelakantan and Ming-Wei Chang. 2015. Inferring missing entity type instances for knowledge base completion: New dataset and methods. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 515-525, Denver, Colorado, May-June. Association for Computational Linguistics.

Xiang Ren, Wenqi He, Meng Qu, Clare R. Voss, Heng Ji, and Jiawei Han. 2016. Label noise reduction in entity typing by heterogeneous partial-label embedding. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016, pages 1825-1834.

Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, and Sebastian Riedel. 2016. An attentive neural architecture for fine-grained entity type classification. pages 69-74, June.

Richard Socher, Danqi Chen, Christopher D. Manning, and Andrew Y. Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013, December 5-8, 2013, Lake Tahoe, Nevada, United States, pages 926-934.

Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowledge. In Proceedings of the 16th International Conference on World Wide Web, WWW 2007, Banff, Alberta, Canada, May 8-12, 2007, pages 697-706.

Yaming Sun, Lei Lin, Duyu Tang, Nan Yang, Zhenzhou Ji, and Xiaolong Wang. 2015. Modeling mention, context and entity with neural networks for entity disambiguation. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015, Buenos Aires, Argentina, July 25-31, 2015, pages 1333-1339.

URL. 2016a. GloVe project. http://nlp.stanford.edu/projects/glove.

URL. 2016b. Lemur project. http://lemurproject.org/clueweb12/FACC1.

Laurens Van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(2579-2605):85.

Bart van Merriënboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde-Farley, Jan Chorowski, and Yoshua Bengio. 2015. Blocks and fuel: Frameworks for deep learning. CoRR, abs/1506.00619.

Zhigang Wang and Juan-Zi Li. 2016. Text-enhanced representation learning for knowledge graph. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI 2016, New York, NY, USA, 9-15 July 2016, pages 1293-1299.

Zhen Wang, Jianwen Zhang, Jianlin Feng, and Zheng Chen. 2014. Knowledge graph and text jointly embedding. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1591-1601, Doha, Qatar, October. Association for Computational Linguistics.

Ruobing Xie, Zhiyuan Liu, Jia Jia, Huanbo Luan, and Maosong Sun. 2016. Representation learning of knowledge graphs with entity descriptions. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, pages 2659-2665.

Yadollah Yaghoobzadeh and Hinrich Schütze. 2015. Corpus-level fine-grained entity typing using contextual information. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 715-725, Lisbon, Portugal, September. Association for Computational Linguistics.

Yadollah Yaghoobzadeh and Hinrich Schütze. 2016. Intrinsic subspace evaluation of word embedding representations. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 236-246, Berlin, Germany, August. Association for Computational Linguistics.

Yadollah Yaghoobzadeh, Heike Adel, and Hinrich Schütze. 2017. Noise mitigation for neural entity typing and relation extraction. In EACL, Valencia, Spain.

Ikuya Yamada, Hiroyuki Shindo, Hideaki Takeda, and Yoshiyasu Takefuji. 2016. Joint learning of the embedding of words and entities for named entity disambiguation. pages 250-259, August.

Dani Yogatama, Daniel Gillick, and Nevena Lazic. 2015. Embedding methods for fine grained entity type classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 291-296, Beijing, China, July. Association for Computational Linguistics.

Mohamed Amir Yosef, Sandro Bauer, Johannes Hoffart, Marc Spaniol, and Gerhard Weikum. 2012. HYENA: hierarchical type classification for entity names. In COLING 2012, 24th International Conference on Computational Linguistics, Proceedings of the Conference: Posters, 8-15 December 2012, Mumbai, India, pages 1361-1370.

Xiang Zhang and Yann LeCun. 2015. Text understanding from scratch. CoRR, abs/1502.01710.

Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. pages 649-657.
| [] |
[
"Harnessing Multilingual Resources to Question Answering in Arabic",
"Harnessing Multilingual Resources to Question Answering in Arabic"
] | [
"Khalid Alnajjar \nRootroo Ltd Helsinki\nFinland\n",
"Mika Hämäläinen \nRootroo Ltd Helsinki\nFinland\n"
] | [
"Rootroo Ltd Helsinki\nFinland",
"Rootroo Ltd Helsinki\nFinland"
] | [] | The goal of the paper is to predict answers to questions given a passage of Qur'an. The answers are always found in the passage, so the task of the model is to predict where an answer starts and where it ends. As the initial data set is rather small for training, we make use of multilingual BERT so that we can augment the training data by using data available for languages other than Arabic. Furthermore, we crawl a large Arabic corpus that is domain specific to religious discourse. Our approach consists of two steps, first we train a BERT model to predict a set of possible answers in a passage. Finally, we use another BERT based model to rank the candidate answers produced by the first BERT model. | 10.48550/arxiv.2205.08024 | [
"https://arxiv.org/pdf/2205.08024v1.pdf"
] | 248,834,374 | 2205.08024 | 9a5e99660526dad8617160122c919c7bf5b279ef |
Harnessing Multilingual Resources to Question Answering in Arabic
16 May 2022
Khalid Alnajjar
Rootroo Ltd Helsinki
Finland
Mika Hämäläinen
Rootroo Ltd Helsinki
Finland
Harnessing Multilingual Resources to Question Answering in Arabic
16 May 2022. Keywords: Question Answering Systems, Artificial Neural Model, Multilingual BERT, Arabic.
The goal of the paper is to predict answers to questions given a passage of the Qur'an. The answers are always found in the passage, so the task of the model is to predict where an answer starts and where it ends. As the initial data set is rather small for training, we make use of multilingual BERT so that we can augment the training data with data available for languages other than Arabic. Furthermore, we crawl a large Arabic corpus that is domain-specific to religious discourse. Our approach consists of two steps: first, we train a BERT model to predict a set of possible answers in a passage; finally, we use another BERT-based model to rank the candidate answers produced by the first BERT model.
Introduction
Question answering is a natural language understanding problem that has received a fair share of attention in the past (Nishida et al., 2019; Rücklé et al., 2020; Asai and Choi, 2021). There are several datasets available for the task (Rajpurkar et al., 2018; Artetxe et al., 2020; Lewis et al., 2020). These datasets cover a very different domain than the one we are interested in in this paper, namely the holy script Qur'an. Qur'anic Arabic itself has also received its share of NLP interest (Sharaf and Atwell, 2012; Dukes and Habash, 2010; Alsaleh et al., 2021); there is even an earlier question answering system for the Qur'an (Abdelnasser et al., 2014). Other historical Arabic texts have also received some research interest (Belinkov et al., 2016; Majadly and Sagi, 2021; Alnajjar et al., 2020).

In this paper, we describe our work on the Qur'an QA 2022 shared task data. The problem the QRCD (Qur'anic Reading Comprehension Dataset) is built to solve is to predict an answer to a question given a passage in the Qur'an. The answer is within the passage, so the task for our model is to find where the answer starts and ends in the given passage.

The problem is a challenging one for several reasons. Firstly, the Arabic language has a greater degree of ambiguity in written form due to the fact that most of the diacritics are left out in writing. This means that several words that are pronounced differently become homographs and have an identical written form. This ambiguity is not a characteristic of the language itself but rather a result of the orthographic conventions. It causes challenges not only in the dataset we are using but also in any pretrained Arabic language models. Secondly, the publicly released part of the QRCD dataset is relatively small, consisting of only 710 samples in the training data and 109 samples in the development data. For this reason, we experiment with multilingual models and training data to alleviate this under-resourced scenario. Thirdly, Arabic is a language with multiple dialects that are vastly different from each other. There is dedicated NLP research for several subdialects such as Tunisian (Ben Abdallah et al., 2020), Palestinian (Jarrar et al., 2014), Gulf (Adouane and Johansson, 2016) and Egyptian Arabic (Habash et al., 2012). These are very different from the Qur'anic Arabic we are focusing on, but they will be present in any large-scale language model trained for Arabic on online corpora. For this reason, we needed to ensure that the model we use in this paper is trained exclusively on Modern Standard Arabic, as it is the closest contemporary variant of the language to Qur'anic Arabic and there are no language models available for Classical Arabic.
Dataset
The QRCD Qur'an question answering dataset consists of 1,337 question-passage-answer triplets, which is split into training (65%), development (10%), and test (25%) sets. As the amount of training data is small (i.e., 710 and 109 samples for training and validation, respectively), we leverage multilingual and cross-lingual resources for question answering tasks while ensuring that the model is exposed to and aware of Islamic concepts. To do so, we crawl multiple Islamic websites related to Tafseer (explanations of the Qur'an) and Fatwas (i.e., rulings or interpretations based on Islamic law for a given query) to build an Islamic-specific corpus. Table 1 lists the web sources we crawled and how many pages were retrieved per source. We crawled all the websites using Scrapy 1 except for quran-tafseer.com, for which we used Pytafseer 2. Additionally, we add the text of the Qur'an itself 3 and existing question answering datasets: SQuAD (Rajpurkar et al., 2018), MLQA (Lewis et al., 2020) and XQuAD (Artetxe et al., 2020). Out of these, MLQA and XQuAD also have Arabic data, which is beneficial when training the model. These datasets follow a slightly different format than the QRCD in terms of JSON. However, they are developed for the exact same task of predicting an answer in a text given a question. Therefore, they can be directly employed as additional training data without the need of reframing the problem.
Approach
In this section, we describe the process of building our artificial neural network model in detail. We base our model on multilingual BERT (Devlin et al., 2019), which is trained on Wikipedia in multiple languages such as English, Spanish and Modern Standard Arabic. The intuition behind utilizing this model instead of an Arabic BERT model such as AraBERT (Antoun et al., 2020) is that there are more question answering datasets available in English than in Arabic; hence, our model would have a better representation for answering general questions related to a given context. An additional perk of using multilingual BERT is that we know it has been trained on Modern Standard Arabic, as opposed to the several dialects that may be present in the training data of Arabic-specific BERT models trained on online data such as the OSCAR corpus (Abadji et al., 2022). Qur'anic Arabic is closer to Modern Standard Arabic than any of the other Arabic dialects, which means that the model will have less noise coming from multiple different dialects, as noise can have undesirable effects on the final results (see Mäkelä et al. (2020)). We also train the model using the Qur'an question answering dataset provided for the shared task to tailor its knowledge to the goal of the shared task. The following subsections elucidate on each of the aforementioned steps along with any preprocessing and post-processing phases.

3 https://github.com/aliftype/quran-data
Domain adaptation
As the model we are basing our work on (i.e., multilingual BERT) is trained on a generic encyclopedia corpus (Wikipedia) and has little exposure to Islamic and Qur'anic concepts, we continue training the multilingual BERT model to adapt it to the domain of this task. In our previous research (Hämäläinen et al., 2021a; Hämäläinen et al., 2021b), we have found that BERT-based models tend to work better if their training data contains text of a domain similar to the downstream task the model is fine-tuned for. Therefore, we believe that domain adaptation is beneficial in this case as well.
We convert the crawled data into a textual corpus, which we clean of non-Arabic text, removing any Arabic diacritics and punctuation using UralicNLP (Hämäläinen, 2019). In the case of Fatwas, we format the text as the question first, followed by the answer provided by the Mufti; for Tafseer, we add the context (i.e., the passage/verse) before the question. Qur'an data is added as it is. The textual corpus is then split into 80% training and 20% validation. We train the base model on the tasks of Masked Language Modeling and Next Sentence Prediction for 3 full epochs on our entire textual corpus.
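One way to realize this continued pretraining is with the Hugging Face transformers library; the following is a sketch under our assumptions (the authors do not publish their training script, and the corpus file name is hypothetical):

from transformers import (BertTokenizerFast, BertForPreTraining, Trainer,
                          TrainingArguments,
                          TextDatasetForNextSentencePrediction,
                          DataCollatorForLanguageModeling)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-cased")
model = BertForPreTraining.from_pretrained("bert-base-multilingual-cased")

# One sentence per line, documents separated by blank lines, as NSP expects.
dataset = TextDatasetForNextSentencePrediction(
    tokenizer=tokenizer, file_path="islamic_corpus_train.txt", block_size=512)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                           mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mbert-islamic", num_train_epochs=3),
    train_dataset=dataset,
    data_collator=collator)
trainer.train()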
Fine-tuning for Question Answering
Here, we fine-tune our BERT model for the task of Qur'an question answering. The data used for fine-tuning comes both from the QRCD dataset itself and the additional QA datasets (MLQA, SQuAD and XQuAD). The purpose of the other QA datasets is to make the model learn the task of question answering better and to make it use multilingual information better for this task. In essence, this should improve the results for Arabic even though a majority of the training data is in a different language.
We append a fully-connected dense layer that accepts BERT's hidden states and outputs two vectors of predictions. These vectors are predictions for the start and end positions of the predicted answer in the context. We use the Adam algorithm (Kingma and Ba, 2014) with decoupled weight decay regularization (Loshchilov and Hutter, 2017) to optimize the parameters of the model, with cross entropy loss as the loss function.
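A sketch of this architecture in PyTorch (a reading of the description above, not the authors' code; the learning rate is an assumption):

import torch
import torch.nn as nn

class QuranQAModel(nn.Module):
    def __init__(self, bert, hidden_size=768):
        super().__init__()
        self.bert = bert                              # domain-adapted multilingual BERT
        self.qa_outputs = nn.Linear(hidden_size, 2)   # start and end logits

    def forward(self, input_ids, attention_mask,
                start_positions=None, end_positions=None):
        hidden = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        start_logits, end_logits = self.qa_outputs(hidden).split(1, dim=-1)
        start_logits = start_logits.squeeze(-1)
        end_logits = end_logits.squeeze(-1)
        loss = None
        if start_positions is not None:
            ce = nn.CrossEntropyLoss()
            loss = (ce(start_logits, start_positions)
                    + ce(end_logits, end_positions)) / 2
        return loss, start_logits, end_logits

# optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)  # lr is our assumption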
Predicting Answers and Post-processing them
When inferring answers, we clamp the results to the number of tokens in the context and ignore any special tokens prior to applying the softmax function on the predicted positions.
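A sketch of this decoding step for a single example (top_n and tensor names are ours):

import torch

def predict_spans(start_logits, end_logits, special_tokens_mask,
                  num_context_tokens, top_n=5):
    # Mask special tokens and positions beyond the context, apply softmax,
    # and rank (start, end) pairs with end >= start by joint probability.
    mask = special_tokens_mask.bool()
    mask[num_context_tokens:] = True
    p_start = torch.softmax(start_logits.masked_fill(mask, float("-inf")), dim=-1)
    p_end = torch.softmax(end_logits.masked_fill(mask, float("-inf")), dim=-1)
    scores = torch.triu(p_start.unsqueeze(1) * p_end.unsqueeze(0))
    n = scores.size(0)
    best = scores.flatten().topk(top_n).indices
    return [(int(i) // n, int(i) % n) for i in best]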
We apply a post-processing step on the top N generated answers. The goal of this step is to 1) ensure that the length of the answer corresponds to the type of question being asked and 2) eliminate overlapping predictions.
Different types of questions require answers of different lengths; for instance, answers to "who"-questions will most of the time be shorter (typically a single word is sufficient) than answers to "why"-questions (which require further elaboration consisting of several tokens). For this reason, we apply some data analysis on the Quran QA dataset to find out which questions are present in the dataset and what the minimum, average and maximum lengths of their answers are. We use the Farasa segmenter (Abdelali et al., 2016) to process the data for this analysis.
As a given type of question might be expressed in different ways, we have applied some manual clustering to group all asked questions in the dataset into 8 types based on the interrogative pronouns used. Table 2 presents the question types, how many times they were present in the dataset and statistics on their answers. All predicted answers that are shorter than the average answer length are extended to either the nearest full stop that marks the end of the verse or the average length, whichever is shorter.
When predicting multiple answers for a question, the model commonly predicts overlapping answers. In such cases, we merge them together by taking the smaller start position and the maximum end position.
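Both post-processing rules can be expressed compactly; a sketch where avg_len maps a question type to its average answer length in tokens (names are ours):

def postprocess(answers, question_type, avg_len, context_tokens):
    # answers: list of (start, end) token spans, sorted by model score.
    # 1) Extend short answers to the nearest verse-final full stop or to
    #    the average length for this question type, whichever is shorter.
    extended = []
    for start, end in answers:
        if end - start < avg_len[question_type]:
            stop = next((i for i in range(end, len(context_tokens))
                         if context_tokens[i] == "."), len(context_tokens) - 1)
            end = min(stop, start + avg_len[question_type])
        extended.append((start, end))
    # 2) Merge overlapping spans: smallest start, largest end.
    merged = []
    for start, end in sorted(extended):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged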
Similarity Recommendation
We noticed during our observations of the Qur'an QA dataset that some questions are detailed and elaborate further on what is being sought as an answer. Such elaborations indicate that the most semantically similar verse to the question is probably its answer. For this reason, we use the second version of AraBERT Large (Antoun et al., 2020) to extract features for the question and all verses in the given passage. Thereafter, we find the most similar verse to the question by applying cosine similarity on the extracted features. The most similar verse is appended to the list of answers if it was not already predicted by the model and there are fewer than 5 predicted answers.
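A sketch of this recommendation step (mean-pooling the hidden states as the feature vector is our assumption; the AraBERT v2 large model identifier is the one published by its authors):

import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("aubmindlab/bert-large-arabertv2")
encoder = AutoModel.from_pretrained("aubmindlab/bert-large-arabertv2")

def embed(text):
    # Mean-pool the last hidden states as a sentence-level feature vector.
    with torch.no_grad():
        out = encoder(**tok(text, return_tensors="pt", truncation=True))
    return out.last_hidden_state.mean(dim=1).squeeze(0)

def most_similar_verse(question, verses):
    q = embed(question)
    sims = torch.stack([torch.cosine_similarity(q, embed(v), dim=0)
                        for v in verses])
    return verses[int(sims.argmax())]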
Results and Evaluation
We experiment with different models and techniques for predicting answers. First, we test different BERT models and compare them to our custom model. Secondly, we investigate the effects of fine-tuning our model with different training and validation question answering datasets along with the Quran QA dataset. Lastly, we assess the benefits of post-processing the predictions and including the most similar verse to the question as an answer. In our tests, we evaluate the models based on the metrics considered in the shared task, namely partial Reciprocal Rank (pRR), exact match and F1 (Rajpurkar et al., 2016).

Table 4 shows the different BERT models that we have tested using only the Quran QA dataset, where we use the train and validation splits during the training phase and test the models on the development split. Comparing KUISAIL's base and large models suggests that bigger models improve performance. However, larger models require a longer time to train, and for this reason we opted for a base model. Despite using a smaller multilingual model as a base model, adapting it to the domain of this task has clearly improved the quality of its predictions. All results presented after this point use our custom BERT model.

In Table 5 we can see the results of our two systems on the official test set of the shared task. When we compared our system on a question level to the median values across all the submissions to the task, we found that over half of the time our best system achieves better scores than the median value. However, our best model had the highest possible score among all submissions around 15% of the time. Interestingly, our worst model had the highest possible score among all the submissions 17.6% of the time despite having poorer overall performance.

Table 3 lists the different settings we experimented with and their evaluation results on the development split. All the models had the Quran QA dataset as the validation dataset during the training phase, and they have been trained for 3 epochs. Comparing the first and second settings, we see that including other question answering datasets during the training phase improves the predictions. Fine-tuning the model that has been exposed to other question answering datasets further with only the Quran QA dataset outperforms using the Quran QA dataset solely, which demonstrates the great importance of utilizing relevant linguistic resources in other languages and applying domain adaptation. In the 4th experimental setting, we include the Quran QA development split in the training dataset to cover as many cases as possible given that the amount of training data is very small, despite this being a non-recommended practice.
Our experiments point out that post-processing the predicted answers, to ensure that they are of an adequate length based on the question type and that no overlapping answers are predicted, boosts the results from a pRR of 0.54 to 0.7. Including the most similar verse to the question as a possible answer raises the results slightly and does not affect them negatively. From our observations, models #6 and #7 sometimes predict different answers where one of them is correct. To benefit from both models and include their variations in the answers, we consider answers produced by both of them and remove any overlapping answers during the post-processing phase. We have submitted two runs to the shared task, corresponding to experiments 6 and 7.
Looking at examples of answers generated by our models in Table 6 illustrates cases where the predictions are fully accurate, producing an exact match (e.g., example #2), and partially correct (e.g., examples #1, #3 and #4). For the partially correct predictions, the model predicts either lengthier or shorter answers, which could be due to the post-processing phase. However, we find that the length of gold answers is subjective. The fifth case is an example of a wrong prediction; nonetheless, the top prediction includes two nouns and the question asks for people (whom), which tells us that the model made its best guess given the context and was very close.
Conclusions
In conclusion, we have embraced multilingual models and question answering resources to build a question answering model for the Qur'an. Our results indicate that applying domain adaptation and fine-tuning the model with relevant datasets increases the performance of the models, especially in the case of limited training data like this one. As the models predict the start and end positions of the answer in the context, it is very likely that the predictions are off by a few tokens. Post-processing the predictions to correspond to the expected answer length per question type and merging any overlapping cases gave a huge boost to the quality of predictions.
Table 2: The different question types, their frequency and statistics on the length of their corresponding answers (in tokens) in the training and development dataset.
Table 3: Experimental settings and their performances.
Table 4: Different BERT models trained and evaluated solely on the Quran QA dataset.
        pRR    Exact match  F1@1
Run 6   0.392  0.113        0.354
Run 7   0.409  0.092        0.364

Table 5: Results on the test set.
Table 6: Examples of input contexts and questions to the system along with the gold answers and what the model has predicted.
1 https://github.com/scrapy/scrapy
2 https://github.com/Quran-Tafseer/pytafseer
References

Abadji, J., Ortiz Suarez, P., Romary, L., and Sagot, B. (2022). Towards a Cleaner Document-Oriented Multilingual Crawled Corpus. arXiv e-prints, arXiv:2201.06642, January.

Abdelali, A., Darwish, K., Durrani, N., and Mubarak, H. (2016). Farasa: A fast and furious segmenter for Arabic. In 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 11-16. Association for Computational Linguistics.

Abdelnasser, H., Ragab, M., Mohamed, R., Mohamed, A., Farouk, B., El-Makky, N., and Torki, M. (2014). Al-Bayan: An Arabic question answering system for the holy Quran. In Proceedings of the EMNLP 2014 Workshop on Arabic Natural Language Processing (ANLP), pages 57-64, Doha, Qatar, October. Association for Computational Linguistics.

Adouane, W. and Johansson, R. (2016). Gulf Arabic linguistic resource building for sentiment analysis. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 2710-2715, Portorož, Slovenia, May. European Language Resources Association (ELRA).

Alnajjar, K., Hämäläinen, M., Partanen, N., and Rueter, J. (2020). Automated prediction of medieval Arabic diacritics. arXiv preprint arXiv:2010.05269.

Alsaleh, A., Atwell, E., and Altahhan, A. (2021). Quranic verses semantic relatedness using AraBERT. In Proceedings of the Sixth Arabic Natural Language Processing Workshop, pages 185-190, Kyiv, Ukraine (Virtual), April. Association for Computational Linguistics.

Antoun, W., Baly, F., and Hajj, H. (2020). AraBERT: Transformer-based model for Arabic language understanding. In Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection, pages 9-15, Marseille, France, May. European Language Resource Association.

Asai, A. and Choi, E. (2021). Challenges in information-seeking QA: Unanswerable questions and paragraph retrieval. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1492-1504, Online, August. Association for Computational Linguistics.

Belinkov, Y., Magidow, A., Romanov, M., Shmidman, A., and Koppel, M. (2016). Shamela: A large-scale historical Arabic corpus. In Proceedings of the Workshop on Language Technology Resources and Tools for Digital Humanities (LT4DH), pages 45-53, Osaka, Japan, December. The COLING 2016 Organizing Committee.

Ben Abdallah, N., Kchaou, S., and Bougares, F. (2020). Text and speech-based Tunisian Arabic sub-dialects identification. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 6405-6411, Marseille, France, May. European Language Resources Association.

Dukes, K. and Habash, N. (2010). Morphological annotation of Quranic Arabic. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10), Valletta, Malta, May. European Language Resources Association (ELRA).

Habash, N., Eskander, R., and Hawwari, A. (2012). A morphological analyzer for Egyptian Arabic. In Proceedings of the Twelfth Meeting of the Special Interest Group on Computational Morphology and Phonology, pages 1-9, Montréal, Canada, June. Association for Computational Linguistics.

Hämäläinen, M., Alnajjar, K., Partanen, N., and Rueter, J. (2021a). Never guess what I heard... rumor detection in Finnish news: a dataset and a baseline. In Proceedings of the Fourth Workshop on NLP for Internet Freedom: Censorship, Disinformation, and Propaganda, pages 39-44, Online, June. Association for Computational Linguistics.

Hämäläinen, M., Patpong, P., Alnajjar, K., Partanen, N., and Rueter, J. (2021b). Detecting depression in Thai blog posts: a dataset and a baseline. In Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021), pages 20-25.

Hämäläinen, M. (2019). UralicNLP: An NLP library for Uralic languages. Journal of Open Source Software.

Inoue, G., Alhafni, B., Baimukan, N., Bouamor, H., and Habash, N. (2021). The interplay of variant, size, and task type in Arabic pre-trained language models. In Proceedings of the Sixth Arabic Natural Language Processing Workshop, Kyiv, Ukraine (Online), April. Association for Computational Linguistics.
Building a corpus for palestinian Arabic: a preliminary study. M Jarrar, N Habash, D Akra, N Zalmout, Proceedings of the EMNLP 2014 Workshop on Arabic Natural Language Processing (ANLP). the EMNLP 2014 Workshop on Arabic Natural Language Processing (ANLP)Doha, QatarAssociation for Computational LinguisticsJarrar, M., Habash, N., Akra, D., and Zalmout, N. (2014). Building a corpus for palestinian Arabic: a preliminary study. In Proceedings of the EMNLP 2014 Workshop on Arabic Natural Language Pro- cessing (ANLP), pages 18-27, Doha, Qatar, October. Association for Computational Linguistics.
Adam: A method for stochastic optimization. D P Kingma, J Ba, In arXivKingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. In arXiv.
Decoupled weight decay regularization. I Loshchilov, F Hutter, In arXivLoshchilov, I. and Hutter, F. (2017). Decoupled weight decay regularization. In arXiv.
Dynamic ensembles in named entity recognition for historical Arabic texts. M Majadly, T Sagi, Proceedings of the Sixth Arabic Natural Language Processing Workshop. the Sixth Arabic Natural Language Processing WorkshopKyiv; UkraineMajadly, M. and Sagi, T. (2021). Dynamic ensem- bles in named entity recognition for historical Ara- bic texts. In Proceedings of the Sixth Arabic Natu- ral Language Processing Workshop, pages 115-125, Kyiv, Ukraine (Virtual), April. Association for Com- putational Linguistics.
Wrangling with non-standard data. E Mäkelä, K Lagus, L Lahti, T Säily, M Tolonen, M Hämäläinen, S Kaislaniemi, T Nevalainen, Proceedings of the Digital Humanities in the Nordic Countries 5th Conference Riga. the Digital Humanities in the Nordic Countries 5th Conference RigaLatviaMäkelä, E., Lagus, K., Lahti, L., Säily, T., Tolonen, M., Hämäläinen, M., Kaislaniemi, S., Nevalainen, T., et al. (2020). Wrangling with non-standard data. In Proceedings of the Digital Humanities in the Nordic Countries 5th Conference Riga, Latvia, October 21- 23, 2020. CEUR-WS. org.
¡i¿ayatec¡/i¿: Building a reusable verse-based test collection for arabic question answering on the holy qur'an. R Malhas, T Elsayed, ACM Trans. Asian Low-Resour. Lang. Inf. Process. 196octMalhas, R. and Elsayed, T. (2020). ¡i¿ayatec¡/i¿: Building a reusable verse-based test collection for arabic question answering on the holy qur'an. ACM Trans. Asian Low-Resour. Lang. Inf. Process., 19(6), oct.
Answering while summarizing: Multi-task learning for multihop QA with evidence extraction. K Nishida, K Nishida, M Nagata, A Otsuka, I Saito, H Asano, J Tomita, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsNishida, K., Nishida, K., Nagata, M., Otsuka, A., Saito, I., Asano, H., and Tomita, J. (2019). Answering while summarizing: Multi-task learning for multi- hop QA with evidence extraction. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 2335-2345, Florence, Italy, July. Association for Computational Linguis- tics.
Mul-tiCQA: Zero-shot transfer of self-supervised text matching models on a massive scale. A Rücklé, J Pfeiffer, I Gurevych, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)OnlineAssociation for Computational LinguisticsRücklé, A., Pfeiffer, J., and Gurevych, I. (2020). Mul- tiCQA: Zero-shot transfer of self-supervised text matching models on a massive scale. In Proceed- ings of the 2020 Conference on Empirical Meth- ods in Natural Language Processing (EMNLP), pages 2471-2486, Online, November. Association for Computational Linguistics.
KUI-SAIL at SemEval-2020 task 12: BERT-CNN for offensive speech identification in social media. A Safaya, M Abdullatif, Yuret , D , December. International Committee for Computational Linguistics. BarcelonaProceedings of the Fourteenth Workshop on Semantic EvaluationSafaya, A., Abdullatif, M., and Yuret, D. (2020). KUI- SAIL at SemEval-2020 task 12: BERT-CNN for of- fensive speech identification in social media. In Pro- ceedings of the Fourteenth Workshop on Semantic Evaluation, pages 2054-2059, Barcelona (online), December. International Committee for Computa- tional Linguistics.
QurAna: Corpus of the quran annotated with pronominal anaphora. A.-B Sharaf, E Atwell, Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12). the Eighth International Conference on Language Resources and Evaluation (LREC'12)Istanbul, Turkey, MayEuropean Language Resources Association (ELRASharaf, A.-B. and Atwell, E. (2012). QurAna: Corpus of the quran annotated with pronominal anaphora. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 130-137, Istanbul, Turkey, May. European Language Resources Association (ELRA).
Language Resource References. Language Resource References
On the Cross-lingual Transferability of Monolingual Representations. Mikel Artetxe, Ruder, Sebastian, Dani Yogatama, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsArtetxe, Mikel and Ruder, Sebastian and Yogatama, Dani. (2020). On the Cross-lingual Transferability of Monolingual Representations. Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics.
BERT: Pretraining of Deep Bidirectional Transformers for Language Understanding. Jacob Devlin, Chang , Ming-Wei Lee, Kenton Toutanova, Kristina , Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies1Devlin, Jacob and Chang, Ming-Wei and Lee, Ken- ton and Toutanova, Kristina. (2019). BERT: Pre- training of Deep Bidirectional Transformers for Lan- guage Understanding. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers).
MLQA: Evaluating Cross-lingual Extractive Question Answering. Patrick Lewis, Oguz, Barlas, Ruty Rinott, Sebastian Riedel, Holger Schwenk, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsLewis, Patrick and Oguz, Barlas and Rinott, Ruty and Riedel, Sebastian and Schwenk, Holger. (2020). MLQA: Evaluating Cross-lingual Extractive Ques- tion Answering. Proceedings of the 58th Annual Meeting of the Association for Computational Lin- guistics.
Ayatec: building a reusable verse-based test collection for arabic question answering on the holy qur'an. Rana Malhas, Tamer Elsayed, ACM Transactions on Asian and Low-Resource Language Information Processing. TALLIPMalhas, Rana and Elsayed, Tamer. (2020). Ayatec: building a reusable verse-based test collection for arabic question answering on the holy qur'an. ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP).
SQuAD: 100,000+ questions for machine comprehension of text. P Rajpurkar, J Zhang, K Lopyrev, P Liang, Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. the 2016 Conference on Empirical Methods in Natural Language ProcessingAustin, TexasAssociation for Computational LinguisticsRajpurkar, P., Zhang, J., Lopyrev, K., and Liang, P. (2016). SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Lan- guage Processing, pages 2383-2392, Austin, Texas, November. Association for Computational Linguis- tics.
Know What You Don't Know: Unanswerable Questions for SQuAD. Pranav Rajpurkar, Robin Jia, Percy Liang, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsShort Papers2Rajpurkar, Pranav and Jia, Robin and Liang, Percy. (2018). Know What You Don't Know: Unanswer- able Questions for SQuAD. Proceedings of the 56th Annual Meeting of the Association for Computa- tional Linguistics (Volume 2: Short Papers).
Sense Perception Common Sense Relationships

Ndapa Nakashole (nnakashole@eng.ucsd.edu)
Computer Science and Engineering, University of California, San Diego, La Jolla, CA 92093
Often missing in existing knowledge bases of facts are relationships that encode common sense knowledge about unnamed entities. In this paper, we propose to extract novel, common sense relationships pertaining to sense perception concepts such as sound and smell.
Introduction
We seek to extract novel common sense relationships, with a focus on concepts that are discernible by sense, for example, sound and smell. There are various natural language understanding tasks where this type of knowledge is useful: consider the problem of co-reference resolution as it occurs in the following sentences: (s1) As the cat approached the dog, it started barking furiously; (s2) As the cat approached the dog, it started meowing furiously. We can easily determine that in (s1) the pronoun "it" refers to the dog, whereas in (s2) "it" refers to the cat. However, for a machine reading method to correctly resolve co-reference in (s1) and (s2), it requires access to background knowledge that asserts that barking and meowing are sounds produced by dogs and cats, respectively. This type of knowledge is what we aim to extract in this paper. One of the factors impeding progress in common sense knowledge acquisition is the lack of labeled data. Prior work has shown that it can be straightforward to obtain training data for identifying relationships between named entities such as companies and their headquarters, or people and their birth places (Havasi et al., 2007; Tandon et al., 2011; Bollacker et al., 2008; Hoffart et al., 2012).
Examples of such relationships can be found in semi-structured formats on the Web (Wu and Weld, 2008; Wang and Cohen, 2008). This is not the case for common sense relationships. Our contributions in this work are three-fold. First, we propose to extract novel relationships commonly absent in existing knowledge bases. Second, we propose a method for generating labeled data by leveraging large corpora and yes/no crowd-sourcing questionnaires. Third, using the resulting labeled data, we train both a linear model and memory neural network models, obtaining high accuracy on the task of extracting these previously under-explored relationships. To focus our task, we consider three relations pertaining to sense perception of sound and smell. Namely: 1) soundSourceRelation, 2) soundSceneRelation, and 3) smellSentimentRelation.
Sound-Source Relationship
The sound-source relationship represents information about which objects produce which sounds, for example, that planes and birds are capable of flying, the wind blows, and geckos bark. Obtaining sufficient labeled data to learn an extractor for this relationship is non-trivial; we propose one approach in the next section.
Labeled Data Generation
One option for obtaining labeled data is to do a cold call on a crowd-sourcing platform by asking crowd workers to list examples of sounds and their sources. However, such an approach requires crowd workers to think of examples without clues or memory triggers. This is time consuming and error prone. Additionally, this means that the monetary cost could be substantial. We propose to exploit a large corpus to obtain preliminary labeled data. This enables us to only need crowd workers to filter the data through a series of "yes/no/notsure" questions. These types of questions require little effort from crowd workers while mitigating the amount of noisy input that one could get from open-ended, cold-call questions.
To pose filters to crowd workers in the form of "yes/no/notsure" questions, we need a list of plausible sound-source pairs. To this end, we propose a lightly supervised corpus-based technique. First, we identify which phrases refer to sounds using a high yield, but potentially noisy, pattern. In particular, we apply the following pattern to a large corpus [1]: "sound of <y>". The result is a large collection of occurrences such as "sound of singing children". This step produced a list of 134,471 unique phrases that potentially refer to sounds. To evaluate accuracy, we randomly selected a sample of 500 phrases and asked 3 crowd workers per phrase, on Mechanical Turk, to say "yes/no/notsure" if they agree the phrase refers to a sound concept. By the majority vote measure, 73.4% of the 500 phrases were considered true mentions of sounds, with a moderate agreement rate of 0.51 Fleiss κ.
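The following is a minimal sketch of this pattern-mining step, together with the bigram sound-source pairing heuristic described in the next paragraph. The sentences below are illustrative stand-ins; the paper runs the pattern over ClueWeb09, whose parsing and tokenization pipeline is not shown here.

```python
import re
from collections import Counter

# Illustrative stand-in sentences; the actual pipeline scans ClueWeb09.
corpus = [
    "We heard the sound of singing children in the park.",
    "The sound of birds chirping woke me up.",
    "He recorded the sound of cars honking downtown.",
    "She liked the sound of squealing brakes in old films.",
]

# High-yield pattern "sound of <y>", capturing a two-word phrase.
pattern = re.compile(r"sound of (\w+ \w+)", re.IGNORECASE)
phrases = Counter()
for sentence in corpus:
    for match in pattern.finditer(sentence):
        phrases[match.group(1).lower()] += 1

def to_pair(phrase):
    # Keep bigrams "noun V-ing" or "V-ing noun" as (sound, source) pairs.
    first, second = phrase.split()
    if second.endswith("ing"):
        return (second, first)   # "birds chirping" -> (chirping, birds)
    if first.endswith("ing"):
        return (first, second)   # "squealing brakes" -> (squealing, brakes)
    return None

pairs = [pair for pair in (to_pair(p) for p in phrases) if pair]
print(pairs)  # e.g., [('singing', 'children'), ('chirping', 'birds'), ...]
```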
This annotation result indicates that a substantial number of the phrases generated by the pattern indeed refer to sound concepts. We therefore use these phrases to generate a list of plausible sound-source pairs. One important observation we made was that about 20,000 (15%) of the 134,471 phrases are bi-grams of the form "verb noun" or "noun verb", where in both cases the verb is in the gerund or present participle V-ing form, for example, birds chirping, cars honking, squealing brakes, etc. From phrases of this kind, we create verb-noun pairs that we treat as plausible sound-source pairs, where the verb is the sound and the noun is the source. We then asked crowd workers to decide if the source (noun) produces the sound (verb). Thus from "birds chirping" we generate the question "Is chirping a sound produced by birds?". Negative examples include "surrounding nature" and "standing ovation", i.e., standing is not a sound made by ovation. We generated 634 such questions, on which we obtained a moderate inter-annotator agreement rate of Fleiss κ = 0.57; see Table 1. We use the resulting labeled data to train two types of learning methods.

Table 1: Fleiss κ inter-annotator agreement rates for the three relations on yes/no type crowd-sourcing tasks.

Relation                 Fleiss κ
soundSourceRelation      0.57
soundSceneRelation       0.35
smellSentimentRelation   0.43
Linear Learning Model
The learning problem for the sound-source relationship is as follows: given a bi-gram phrase of the form "verb noun" or "noun verb", we wish to classify yes or no whether a given noun, denoted by w_src, produces the verb, denoted by w_snd, as a sound. As a linear solution to this problem, we train a logistic regression classifier. The features we use are the vectors representing the word embeddings of w_src and w_snd, denoted by v_src and v_snd. In our experiments, we use the 300-dimensional Google News pre-trained embeddings [2]. There are several ways in which we combine v_src and v_snd into a single feature vector:

Vector concatenation: v = concat(v_src, v_snd), with |v| = |v_src| + |v_snd|.

LSTM encoder: v = lstm(v_src, v_snd), where an LSTM (Hochreiter and Schmidhuber, 1997) recurrent neural network is used to encode the phrase containing v_src and v_snd; |v| = h, where h is the hidden layer size of the neural network.

Source minus sound: v = v_src − v_snd, with |v| = |v_src| = |v_snd|.

Sound minus source: v = v_snd − v_src, with |v| = |v_src| = |v_snd|.
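The sketch below illustrates these feature combinations feeding a logistic regression classifier. The embeddings and labeled pairs are illustrative stand-ins (random vectors rather than the actual word2vec vectors, toy pairs rather than the crowd-sourced data), and the LSTM encoder variant is omitted for brevity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 300  # matching the Google News embedding dimensionality
vocab = ["dog", "barking", "cat", "meowing", "ovation", "standing"]
emb = {w: rng.normal(size=d) for w in vocab}  # random stand-ins for word2vec

def features(src, snd, mode="concat"):
    v_src, v_snd = emb[src], emb[snd]
    if mode == "concat":
        return np.concatenate([v_src, v_snd])  # |v| = 2d
    if mode == "src_minus_snd":
        return v_src - v_snd                   # |v| = d
    if mode == "snd_minus_src":
        return v_snd - v_src                   # |v| = d
    raise ValueError(mode)

# Toy labeled pairs (source, sound, label); label 1 means the source
# produces the sound. These are illustrative, not the crowd-sourced data.
data = [("dog", "barking", 1), ("cat", "meowing", 1),
        ("ovation", "standing", 0), ("dog", "meowing", 0)]
X = np.stack([features(s, v) for s, v, _ in data])
y = np.array([label for _, _, label in data])

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict(X))
```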
Memory Networks Learning Model
In addition to the variations of the linear model, we also trained a non-linear model in the form of memory networks (Sukhbaatar et al., 2015). Memory networks have been recently introduced; they combine their inference component with a memory component. The memory component serves as a knowledge base or history vault to recall words or facts from the past. For the task of relation extraction, the memory network model learns a scoring function to rank relevant memories (words) with respect to how much they express a given relationship. This is done for a given argument pair as a query, i.e., a sound-source pair. At prediction time, the model finds k relevant memories (words) according to the scoring function and conditions its output on these memories. In our experiments, we explore different values of k, effectively changing how many memories (words) the model conditions on. We report results for up to k = 3, as we did not see improvements for larger values of k.

Table 2: Accuracy of the linear models (LM) and memory networks models (MM) on the sound-source relation.
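To make the scoring-and-conditioning step concrete, the following sketch implements a single memory hop with top-k conditioning. All vectors are random stand-ins rather than learned parameters, and the final prediction layer is only indicated; it is a sketch of the mechanism, not the trained model.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)
d = 32
memories = rng.normal(size=(10, d))  # encoded context words (random stand-ins)
query = rng.normal(size=d)           # encoded (sound, source) query pair

attention = softmax(memories @ query)        # learned scoring in the real model
k = 3
top_k = np.argsort(attention)[-k:]           # the k memories conditioned on
output = attention[top_k] @ memories[top_k]  # weighted read from memory

# A trained output layer would map (output + query) to a yes/no decision;
# a random projection stands in for it here.
logit = (output + query) @ rng.normal(size=d)
print(top_k, float(1 / (1 + np.exp(-logit))))
```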
Sound-Source Evaluation
Both the linear model and the memory networks models were implemented using TensorFlow. For the memory networks, we implemented the end-to-end version as described in (Weston et al., 2014; Sukhbaatar et al., 2015). Of the 634 crowd-sourced labeled examples described in Section 2.1, we used 100 as test data and the rest as training data. Model parameters such as the hidden layer size of the memory networks were tuned using cross-validation on the training data. As shown in Table 2, we obtain high accuracy across all models. The best performing model is a linear model with an LSTM encoding of the sound phrases, achieving an accuracy of 90%. Surprisingly, we could not obtain better results with the memory networks model. Increasing the memory size or the number of hops (how often we iterate over the memories) did not help. One possible reason is the size of our training data: in previous work (Weston et al., 2014; Sukhbaatar et al., 2015), the memory networks were trained on 1,000 or more examples per problem type, whereas our training data is half that size. Nevertheless, the memory networks module still produces good accuracy, with best performance of 87%.
Sound-Scene Relationship
The sound-scene relationship represents information about which sounds are found in which scenes. For example, birds chirping can be found in a forest. Therefore, this kind of information can also be used in context recognition systems (Eronen et al., 2006), in addition to providing common sense knowledge that could be useful in language understanding tasks.
Labeled Data Generation. We would like to obtain labeled data in the form of scenes and their sounds. For example, (beach, waves crashing), (construction, hammering), (street, sirens), (street, honking cars). To obtain this type of labeled data, we again would like to only use "yes/no/notsure" crowd-sourcing questions. To generate plausible sound-scene pairs, first we find all sentences that mention at least one scene and one sound concept.
To detect sound concepts, we use the approach described in Section 2.1. To detect mentions of scenes, we specified a list of 36 example scenes, which includes scenes such as beach, park, and airport; most of our scenes are part of the list of acoustic scenes from a scene classification challenge [3]. The full list of scenes is in the supplementary data accompanying this submission. For every sentence that mentions both an acoustic scene and a sound concept, we apply a dependency parser [4]. This step produces dependencies that form a directed graph, with words being nodes and dependencies being edges. Dependency graph shortest paths between entities have been found to be a good indicator of relationships between entities (Xu et al., 2015; Nakashole et al., 2013b). We use shortest paths as features in order to classify sound-scene pairs. To obtain training data, we sort the paths by frequency, that is, how often we have seen the path occur with different sound-scene pairs. We then consider pairs that occur with frequent shortest paths to be plausible sound-scene pairs, which we can present to crowd workers in "yes/no/notsure" questions. We randomly selected 584 sound-scene pairs, and the corresponding sentences that mention them, which were then presented to crowd workers in questions. The inter-annotator agreement rate on this task is Fleiss κ = 0.35; see Table 1.
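A minimal sketch of the shortest-path step is shown below, using networkx over a hand-written stand-in parse rather than actual dependency parser output; the sentence, edges, and word pair are illustrative.

```python
import networkx as nx

# Hand-written stand-in for a dependency parse of:
# "At the beach you can hear the waves crashing."
edges = [("hear", "you"), ("hear", "beach"), ("hear", "waves"),
         ("waves", "crashing"), ("beach", "at"), ("beach", "the")]
graph = nx.Graph(edges)  # dependencies as an undirected word graph

scene, sound = "beach", "crashing"
path = nx.shortest_path(graph, scene, sound)
print(path)  # ['beach', 'hear', 'waves', 'crashing']
# Paths that recur across many sentences mark plausible sound-scene pairs,
# and the path itself becomes the feature fed to the classifier.
```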
Learning Models and Evaluation. We use the learning models described in Sections 2.2 and 2.3. For the linear model, we consider three options for features. Shortest Paths (SP): an LSTM encoding of the dependency shortest path. Sentence (S): an LSTM encoding of the sentence. SP + S: encodings of both the shortest path and the sentence are used as features. For the memory network models, we considered using the contents of both the shortest paths and the sentences to produce memories. We use 100 of the 584 labeled examples for testing, and the rest for training. The shortest paths performed better; for space reasons, we omit the results of using sentences as memories. As shown in Table 3, the linear model with the shortest path achieves the best accuracy of 81%. However, the best performing memory networks model, with 3 memory hops, is not significantly worse at 80% accuracy.
Smell-Sentiment Relationship
For the smell-sentiment relationship, the goal is to extract information about which smells are considered pleasant, unpleasant or neutral. In general, sentiment is both subjective and context dependent. However, as we show through crowdsourced annotations, there is substantial consensus even on sentiment of smells.
Labeled Data Generation. First we generate a list of plausible smells, following a similar approach to Section 2.1. That is, we search for the pattern "smell of <y>" in the ClueWeb corpus. The result is a large collection of occurrences such as "smell of rotten eggs" or "smell of cherry blossoms". From this collection, we randomly selected a sample of 500 phrases and asked 3 crowd workers per phrase on Mechanical Turk to say "yes/no/notsure" if they agree the phrase refers to a smell concept. By the majority vote metric, 89.9% of the 500 phrases are true mentions of smells, with a somewhat low agreement rate of 0.33 Fleiss κ. Having verified that our list of phrases contains a substantial number of smell concepts, we then use these phrases to evaluate sentiment of smells in a different Mechanical Turk task. We present a phrase within a sentence context. We then asked crowd workers to choose if the phrase refers to a smell that is "pleasant/unpleasant/neutral/notsure/notasmell". We generated 600 such questions, on which we obtained a moderate inter-annotator agreement rate of Fleiss κ = 0.43; see Table 1. While this is not a yes/no task, it is still a simple multiple choice task with the same advantages of the yes/no tasks described earlier.
Learning Models and Evaluation. We again use the learning models described in Sections 2.2 and 2.3. For the linear model, we consider two options for features. LSTM encoder: an LSTM encoding of the smell phrase. Vector addition: a vector addition encoding of the smell phrase. For the memory network models, the contents of the sentence that mentions the phrase are stored as memories. We use 100 of the 600 labeled examples for testing, and the rest for training. As can be seen in Table 4, the linear model with LSTM encoded phrases achieved the highest accuracy of 84%.
Conclusion
Cyc (Lenat, 1995) and ConceptNet (Havasi et al., 2007) are well-known examples of attempts to build knowledge bases of everyday common sense knowledge. These projects are decades-long manual efforts involving either experts or crowd-sourcing. Other knowledge bases focus on facts about named entities such as people, locations, and companies (Bollacker et al., 2008; Hoffart et al., 2012).
In this paper, we extracted novel common sense relations. To obtain labeled data, we proposed a combination of large corpora and multiple choice crowd-sourced questions. These types of questions require little effort from crowd workers while mitigating the amount of noise one might get from open-ended questions. We have also proposed and trained models on this data, achieving high accuracy for all relations. Scaling up our approach to more relations is an exciting future direction for our work. We believe our technique can scale given its minimally supervised nature.
Table 3: Accuracy on the sound-scene relation.

Table 4: Accuracy on the smell-sentiment relation.

Learning Model        Accuracy
LM: LSTM encoder      0.84
LM: vector addition   0.81
MM: 1 hop             0.82
MM: 3 hops            0.82
[1] In our experiments, we used the English part of ClueWeb09: http://lemurproject.org/clueweb09/
[2] https://code.google.com/archive/p/word2vec/
[3] http://www.cs.tut.fi/sgn/arg/dcase2016/
[4] https://pypi.python.org/pypi/practnlptools/1.0
References

Yoshua Bengio, P. Simard, and Paolo Frasconi. 1994. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, Special Issue on Recurrent Neural Networks.

Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A collaboratively created graph database for structuring human knowledge. In SIGMOD, pages 1247-1250.

Sergey Brin. 1998. Extracting patterns and relations from the world wide web. In WebDB, pages 172-183.

Jason P. C. Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. TACL, 4:357-370.

Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel P. Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493-2537.

Cícero Nogueira dos Santos and Victor Guimarães. 2015. Boosting named entity recognition with neural character embeddings. CoRR, abs/1505.05008.

Antti J. Eronen, Vesa T. Peltonen, Juha T. Tuomi, Anssi P. Klapuri, Seppo Fagerlund, Timo Sorsa, Gaëtan Lorho, and Jyri Huopaniemi. 2006. Audio-based context recognition. IEEE Transactions on Audio, Speech, and Language Processing, 14:321-329.

Christiane Fellbaum. 1998. A semantic network of English verbs. In WordNet: An Electronic Lexical Database, pages 69-104. The MIT Press.

Jenny Rose Finkel, Trond Grenager, and Christopher D. Manning. 2005. Incorporating non-local information into information extraction systems by Gibbs sampling. In ACL.

James Hammerton. 2003. Named entity recognition with long short-term memory. In HLT-NAACL, pages 172-175.

Catherine Havasi, Robert Speer, and Jason Alonso. 2007. ConceptNet 3: a flexible, multilingual semantic network for common sense knowledge. In RANLP, pages 27-29.

Marti A. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In COLING, pages 539-545.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.

Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bordino, Hagen Fürstenau, Manfred Pinkal, Marc Spaniol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. 2011. Robust disambiguation of named entities in text. In EMNLP, pages 782-792.

Johannes Hoffart, Fabian M. Suchanek, Klaus Berberich, and Gerhard Weikum. 2012. YAGO2: A spatially and temporally enhanced knowledge base from Wikipedia. Artificial Intelligence, 194:28-61.

Anurag Kumar, Bhiksha Raj, and Ndapandula Nakashole. 2017. Discovering sound concepts and acoustic relations in text. In ICASSP, pages 631-635. IEEE.

Matthieu Labeau, Kevin Löser, and Alexandre Allauzen. 2015. Non-lexical architecture for fine-grained POS tagging. In EMNLP, pages 232-237.

John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, pages 282-289.

Douglas B. Lenat. 1995. Cyc: A large-scale investment in knowledge infrastructure. Communications of the ACM, 38(11).

Wang Ling, Chris Dyer, Alan W. Black, Isabel Trancoso, Ramon Fermandez, Silvio Amir, Luís Marujo, and Tiago Luís. 2015. Finding function in form: Compositional character models for open vocabulary word representation. In EMNLP, pages 1520-1530.

Tom M. Mitchell, William W. Cohen, Estevam R. Hruschka Jr., Partha Pratim Talukdar, Justin Betteridge, Andrew Carlson, Bhavana Dalvi Mishra, Matthew Gardner, Bryan Kisiel, Jayant Krishnamurthy, Ni Lao, Kathryn Mazaitis, Thahir Mohamed, Ndapandula Nakashole, Emmanouil Antonios Platanios, Alan Ritter, Mehdi Samadi, Burr Settles, Richard C. Wang, Derry Tanti Wijaya, Abhinav Gupta, Xinlei Chen, Abulhair Saparov, Malcolm Greaves, and Joel Welling. 2015. Never-ending learning. In AAAI, pages 2302-2310.

Ndapandula Nakashole and Tom M. Mitchell. 2014. Language-aware truth assessment of fact candidates. In ACL, pages 1009-1019.

Ndapandula Nakashole and Tom M. Mitchell. 2015. A knowledge-intensive model for prepositional phrase attachment. In ACL, pages 365-375.

Ndapandula Nakashole and Gerhard Weikum. 2012. Real-time population of knowledge bases: opportunities and challenges. In AKBC, pages 41-45.

Ndapandula Nakashole, Martin Theobald, and Gerhard Weikum. 2011. Scalable knowledge harvesting with high precision and high recall. In WSDM, pages 227-236.

Ndapandula Nakashole, Tomasz Tylenda, and Gerhard Weikum. 2013a. Fine-grained semantic typing of emerging entities. In ACL, pages 1488-1497.

Ndapandula Nakashole, Gerhard Weikum, and Fabian M. Suchanek. 2013b. Discovering semantic relations from the web and organizing them with PATTY. SIGMOD Record, 42(2):29-34.

Ndapandula T. Nakashole. 2012. Automatic extraction of facts, relations, and entities for web-scale knowledge base population.

Lev-Arie Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In CoNLL, pages 147-155.

Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, and Sebastian Riedel. 2016. An attentive neural architecture for fine-grained entity type classification. arXiv preprint arXiv:1604.05525.

Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. End-to-end memory networks. In NIPS, pages 2440-2448.

Niket Tandon, Gerard de Melo, and Gerhard Weikum. 2011. Deriving a web-scale common sense fact database. In AAAI.

Niket Tandon, Gerard de Melo, and Gerhard Weikum. 2014. Acquiring comparative commonsense knowledge from the web. In AAAI, pages 166-172.

Richard C. Wang and William W. Cohen. 2008. Iterative set expansion of named entities using the web. In ICDM, pages 1091-1096.

Jason Weston, Sumit Chopra, and Antoine Bordes. 2014. Memory networks. arXiv preprint arXiv:1410.3916.

Fei Wu and Daniel S. Weld. 2008. Automatically refining the Wikipedia infobox ontology. In WWW, pages 635-644.

Yan Xu, Lili Mou, Ge Li, Yunchuan Chen, Hao Peng, and Zhi Jin. 2015. Classifying relations via long short term memory networks along shortest dependency paths. In EMNLP, pages 1785-1794.
Published as a conference paper at ICLR 2020

LEARNING TO RETRIEVE REASONING PATHS OVER WIKIPEDIA GRAPH FOR QUESTION ANSWERING
Akari Asai, Kazuma Hashimoto (k.hashimoto@salesforce.com), Hannaneh Hajishirzi (hannaneh@cs.washington.edu), Richard Socher (rsocher@salesforce.com), Caiming Xiong (cxiong@salesforce.com)
† University of Washington, ‡ Salesforce Research, § Allen Institute for Artificial Intelligence
Answering questions that require multi-hop reasoning at web-scale necessitates retrieving multiple evidence documents, one of which often has little lexical or semantic relationship to the question. This paper introduces a new graph-based recurrent retrieval approach that learns to retrieve reasoning paths over the Wikipedia graph to answer multi-hop open-domain questions. Our retriever model trains a recurrent neural network that learns to sequentially retrieve evidence paragraphs in the reasoning path by conditioning on the previously retrieved documents. Our reader model ranks the reasoning paths and extracts the answer span included in the best reasoning path. Experimental results show state-of-the-art results in three open-domain QA datasets, showcasing the effectiveness and robustness of our method. Notably, our method achieves significant improvement in HotpotQA, outperforming the previous best model by more than 14 points.
INTRODUCTION
Open-domain Question Answering (QA) is the task of answering a question given a large collection of text documents (e.g., Wikipedia). Most state-of-the-art approaches for open-domain QA (Chen et al., 2017; Wang et al., 2018a; Lee et al., 2018; Yang et al., 2019) leverage non-parameterized models (e.g., TF-IDF or BM25) to retrieve a fixed set of documents, from which an answer span is extracted by a neural reading comprehension model. Despite the success of these pipeline methods in single-hop QA, whose questions can be answered based on a single paragraph, they often fail to retrieve the required evidence for answering multi-hop questions, e.g., the question in Figure 1. Multi-hop QA (Yang et al., 2018) usually requires finding more than one evidence document, one of which often has little lexical overlap with or semantic relationship to the original question. However, retrieving a fixed list of documents independently does not capture relationships between evidence documents through bridge entities that are required for multi-hop reasoning.
Recent open-domain QA methods learn end-to-end models to jointly retrieve and read documents (Seo et al., 2019; Lee et al., 2019). These methods, however, face challenges for entity-centric questions since compressing the necessary information into an embedding space does not capture lexical information in entities. Cognitive Graph (Ding et al., 2019) incorporates entity links between documents for multi-hop QA to extend the list of retrieved documents. This method, however, compiles a fixed list of documents independently and expects the reader to find the reasoning paths.
In this paper, we introduce a new recurrent graph-based retrieval method that learns to retrieve evidence documents as reasoning paths for answering complex questions. Our method sequentially retrieves each evidence document, given the history of previously retrieved documents to form several reasoning paths in a graph of entities. Our method then leverages an existing reading comprehension model to answer questions by ranking the retrieved reasoning paths. The strong interplay between the retriever model and reader model enables our entire method to answer complex questions by exploring more accurate reasoning paths compared to other methods. To be more specific, our method (sketched in Figure 2) constructs the Wikipedia paragraph graph using Wikipedia hyperlinks and document structures to model the relationships between paragraphs. Our retriever trains a recurrent neural network to score reasoning paths in this graph by maximizing the likelihood of selecting a correct evidence paragraph at each step and fine-tuning paragraph BERT encodings. Our reader model is a multi-task learner to score each reasoning path according to its likelihood of containing and extracting the correct answer phrase. We leverage data augmentation and negative example mining for robust training of both models.
Our experimental results show that our method achieves state-of-the-art results on the HotpotQA full wiki and HotpotQA distractor settings (Yang et al., 2018), outperforming the previous state-of-the-art methods by more than 14 points absolute gain on the full wiki setting. We also evaluate our approach on SQuAD Open (Chen et al., 2017) and Natural Questions Open (Lee et al., 2019) without changing any architectural designs, achieving results better than or comparable to the state of the art, which suggests that our method is robust across different datasets. Additionally, our framework provides interpretable insights into the underlying entity relationships used for multi-hop reasoning.
RELATED WORK
Neural open-domain question answering Most current open-domain QA methods use a pipeline approach that includes a retriever and reader. Chen et al. (2017) incorporate a TF-IDF-based retriever with a state-of-the-art neural reading comprehension model. Subsequent work improves the heuristic retriever by re-ranking retrieved documents (Wang et al., 2018a;b; Lee et al., 2018; Lin et al., 2018). The performance of these methods is still bounded by the performance of the initial retrieval process. In multi-hop QA, non-parameterized retrievers face the challenge of retrieving all the relevant documents, one or some of which are lexically distant from the question. Recently, Lee et al. (2019) and Seo et al. (2019) introduce fully trainable models that retrieve a few candidates directly from large-scale Wikipedia collections. All these methods find evidence documents independently, without knowledge of previously selected documents or relationships between documents, which results in failing to conduct multi-hop retrieval.
Retrievers guided by entity links Most relevant to our work are recent studies that attempt to use entity links for multi-hop open-domain QA. Cognitive Graph (Ding et al., 2019) retrieves evidence documents offline, and trains a reading comprehension model to jointly predict possible answer spans and next-hop spans to extend the reasoning chain. Instead, we train our retriever to find reasoning paths directly. Concurrent with our work, Entity-centric IR (Godbole et al., 2019) uses entity linking for multi-hop retrieval. Unlike our method, this method does not learn to retrieve reasoning paths sequentially, nor study the interplay between retriever and reader. Moreover, while the previous approaches require a system to encode all possible nodes, our beam search decoding process only encodes the nodes on the reasoning paths, which significantly reduces the computational costs. PullNet (Sun et al., 2019) learns to retrieve question-aware sub-graphs from text corpora and knowledge bases (e.g., Freebase), while we focus on open-domain QA solely based on text.
Multi-step (iterative) retrievers Similar to our recurrent retriever, multi-step retrievers explore multiple evidence documents iteratively. Multi-step reasoner repeats the retrieval process for a fixed number of steps, interacting with a reading comprehension model by reformulating the query in a latent space to enhance retrieval performance. Feldman & El-Yaniv (2019) similarly retrieve evidence paragraphs over multiple iterations; however, none of these methods leverage the graph structure of the documents during the iterative retrieval process. In addition, all of these multi-step retrieval methods do not accommodate arbitrary steps of reasoning, and the termination condition is hard-coded. In contrast, our method leverages the Wikipedia graph to retrieve documents that are lexically or semantically distant to questions, and is adaptive to any reasoning path length, which leads to significant improvements over previous work on HotpotQA and SQuAD Open.
OPEN-DOMAIN QUESTION ANSWERING OVER WIKIPEDIA GRAPH
Overview This paper introduces a new graph-based recurrent retrieval method (Section 3.1) that learns to find evidence documents as reasoning paths for answering complex questions. We then extend an existing reading comprehension model (Section 3.2) to answer questions given a collection of reasoning paths. Our method uses a strong interplay between the retrieving and reading steps, such that the retrieval method learns to retrieve a set of reasoning paths that narrow down the search space for our reader model, making the pipeline robust. Figure 2 sketches the overview of our QA model. We use Wikipedia for open-domain QA, where each article is divided into paragraphs, resulting in millions of paragraphs in total. Each paragraph p is considered as our retrieval target. Given a question q, our framework aims at deriving its answer a by retrieving and reading reasoning paths, each of which is represented with a sequence of paragraphs: E = [p_i, ..., p_k]. We formulate the task by decomposing the objective into the retriever objective S_retr(q, E), which selects reasoning paths E relevant to the question, and the reader objective S_read(q, E, a), which finds the answer a in E:
arg max_{E,a} S(q, E, a)   s.t.   S(q, E, a) = S_retr(q, E) + S_read(q, E, a).   (1)
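As a toy illustration of this decomposition, the snippet below ranks (reasoning path, answer) candidates by the summed score. All numbers are made up; in the actual system S_retr comes from the recurrent retriever (Section 3.1) and S_read from the reader model (Section 3.2).

```python
# Made-up scores for two (reasoning path, answer) candidates.
candidates = [
    {"E": ["p1", "p2"], "a": "answer_a", "s_retr": 0.9, "s_read": 0.8},
    {"E": ["p3"],       "a": "answer_b", "s_retr": 0.7, "s_read": 0.3},
]
best = max(candidates, key=lambda c: c["s_retr"] + c["s_read"])  # Eq. (1)
print(best["E"], best["a"])
```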
LEARNING TO RETRIEVE REASONING PATHS
Our method learns to retrieve reasoning paths across a graph structure. Evidence paragraphs for a complex question do not necessarily have lexical overlaps with the question, but one of them is likely to be retrieved, and its entity mentions and the question often entail another paragraph (e.g., Figure 1). To perform such multi-hop reasoning, we first construct a graph of paragraphs, covering all the Wikipedia paragraphs. Each node of the Wikipedia graph G represents a single paragraph p_i.
Constructing the Wikipedia graph Hyperlinks are commonly used to construct relationships between articles on the web, usually maintained by article writers, and are thus useful knowledge resources. Wikipedia articles are connected by internal hyperlinks, and we use these hyperlinks to construct the directed edges in G. We also consider symmetric within-document links, allowing a paragraph to hop to other paragraphs in the same article. The Wikipedia graph G is densely connected and covers a wide range of topics that provide useful evidence for open-domain questions. This graph is constructed offline and is reused throughout training and inference for any question.
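A minimal sketch of this offline construction is shown below, assuming toy inputs; a real build parses a full Wikipedia dump. The convention of linking into the first paragraph of a target article is an assumption made here for illustration, not necessarily the paper's exact linking rule.

```python
from collections import defaultdict

# Paragraphs are identified as (article_title, paragraph_index).
hyperlinks = {("Article_A", 0): ["Article_B"]}          # links found in text
article_paragraphs = {"Article_A": 2, "Article_B": 3}   # paragraphs per article

graph = defaultdict(set)
# Directed hyperlink edges: paragraph -> first paragraph of the linked article
# (illustrative convention; see the lead-in above).
for (title, idx), targets in hyperlinks.items():
    for target in targets:
        if target in article_paragraphs:
            graph[(title, idx)].add((target, 0))
# Symmetric within-document edges among paragraphs of the same article.
for title, n in article_paragraphs.items():
    for i in range(n):
        for j in range(n):
            if i != j:
                graph[(title, i)].add((title, j))

print(dict(graph))
```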
THE GRAPH-BASED RECURRENT RETRIEVER
General formulation with a recurrent retriever We use a Recurrent Neural Network (RNN) to model the reasoning paths for the question q. At the t-th time step (t ≥ 1), our model selects a paragraph p_i among candidate paragraphs C_t given the current hidden state h_t of the RNN. The initial hidden state h_1 is independent of any questions or paragraphs, and is based on a parameterized vector. We use BERT's [CLS] token representation to independently encode each candidate paragraph p_i along with q. We then compute the probability P(p_i | h_t) that p_i is selected. The RNN selection procedure captures relationships between paragraphs in the reasoning path by conditioning on the selection history. The process is terminated when [EOE], the end-of-evidence symbol, is selected, allowing the model to capture reasoning paths of arbitrary length for each question. More specifically, the process of selecting p_i at the t-th step is formulated as follows:
$$w_i = \mathrm{BERT}_{\mathrm{[CLS]}}(q, p_i) \in \mathbb{R}^d, \tag{2}$$
$$P(p_i \mid h_t) = \sigma(w_i \cdot h_t + b), \tag{3}$$
$$h_{t+1} = \mathrm{RNN}(h_t, w_i) \in \mathbb{R}^d, \tag{4}$$
where b ∈ R is a bias term. Motivated by Salimans & Kingma (2016), we normalize the RNN states to control the scale of the logits in Equation (3) and allow the model to learn multiple reasoning paths. The details of Equation (4) are described in Appendix A.1. The next candidate set C_{t+1} is constructed to include paragraphs that are linked from the selected paragraph p_i in the graph. To allow our model to flexibly retrieve multiple paragraphs within C_t, we also add the K-best paragraphs other than p_i (from C_t) to C_{t+1}, based on the probabilities. We typically set K = 1 in this paper.
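To make Equations (2)-(4) concrete, here is a minimal PyTorch sketch of one selection step. The GRUCell stands in for the normalized RNN of Appendix A.1, and the paragraph encodings W are assumed to be precomputed by the BERT [CLS] encoder, so this is an illustrative simplification rather than our exact implementation.

```python
import torch

class RecurrentRetriever(torch.nn.Module):
    def __init__(self, d):
        super().__init__()
        self.h1 = torch.nn.Parameter(torch.zeros(d))  # parameterized initial state
        self.b = torch.nn.Parameter(torch.zeros(1))   # bias b in Eq. (3)
        self.rnn = torch.nn.GRUCell(d, d)             # stand-in for the normalized RNN, Eq. (4)

    def step(self, h_t, W):
        """One retrieval step.
        h_t: (d,) current hidden state.
        W: (num_candidates, d) encodings w_i = BERT_[CLS](q, p_i) from Eq. (2).
        Returns per-candidate probabilities (Eq. (3)), the greedily selected
        index, and the next hidden state (Eq. (4))."""
        probs = torch.sigmoid(W @ h_t + self.b)       # Eq. (3), independent sigmoids
        i = int(torch.argmax(probs))                  # greedy choice for illustration
        h_next = self.rnn(W[i].unsqueeze(0), h_t.unsqueeze(0)).squeeze(0)
        return probs, i, h_next
```

In the full model, the greedy argmax above is replaced by the beam search described next.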
Beam search for candidate paragraphs  It is computationally expensive to compute Equation (2) over millions of possible paragraphs. Moreover, a fully trainable retriever often performs poorly for entity-centric questions such as those in SQuAD, since it does not explicitly maintain lexical information (Lee et al., 2019). To navigate our retriever in the large-scale graph effectively, we initialize the candidate paragraphs with a TF-IDF-based retrieval and guide the search over the Wikipedia graph. In particular, the initial candidate set C_1 includes the F paragraphs with the highest TF-IDF scores with respect to the question. We expand C_t (t ≥ 2) by appending the [EOE] symbol. We additionally use a beam search to explore paths in the directed graph. We define the score of a reasoning path E = [p_i, ..., p_k] by multiplying the probabilities of selecting the paragraphs: P(p_i|h_1) · · · P(p_k|h_{|E|}). The beam search outputs the top B reasoning paths E = {E_1, ..., E_B} with the highest scores to pass to the reader model, i.e., S(q, E, a) = S_read(q, E, a) for E ∈ E.
In terms of the computational cost, the number of paragraphs processed by Equation (2) is bounded by O(|C_1| + B Σ_{t≥2} |C_t|), where B is the beam size and |C_t| is the average size of C_t over the B hypotheses.
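A minimal sketch of this beam search is given below. The helpers `step_fn` (one application of Equations (3)-(4)) and `links` (outgoing edges in the Wikipedia graph plus the K-best siblings) are hypothetical, and batching, caching, and the t = 1 handling of [EOE] are simplified relative to our actual implementation.

```python
import heapq
import math

def beam_search(init_candidates, links, step_fn, init_state,
                B=8, max_steps=4, EOE="[EOE]"):
    """Return up to B reasoning paths scored by the product of per-step
    selection probabilities (accumulated as negative log-probabilities)."""
    # Each hypothesis: (neg_log_score, path, hidden state, candidate set).
    beams = [(0.0, [], init_state, list(init_candidates))]
    finished = []
    for _ in range(max_steps):
        expanded = []
        for neg, path, h, cands in beams:
            for p in cands + [EOE]:
                prob, h_next = step_fn(p, h)            # Eq. (3) and Eq. (4)
                new_neg = neg - math.log(prob + 1e-12)
                if p == EOE:
                    finished.append((new_neg, path))    # path terminated
                else:
                    expanded.append((new_neg, path + [p], h_next, links(p)))
        beams = heapq.nsmallest(B, expanded, key=lambda x: x[0])
        if not beams:
            break
    return [path for _, path in heapq.nsmallest(B, finished, key=lambda x: x[0])]
```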
3.1.2 TRAINING OF THE GRAPH-BASED RECURRENT RETRIEVER

Data augmentation  We train our retriever in a supervised fashion using evidence paragraphs annotated for each question. For multi-hop QA, we have multiple evidence paragraphs for each question, and a single paragraph for single-hop QA. We first derive a ground-truth reasoning path g = [p_1, ..., p_{|g|}] using the annotated data available in each dataset. p_{|g|} is set to [EOE] for the termination condition.
To relax and stabilize the training process, we augment the training data with additional reasoning paths - not necessarily the shortest paths - that can derive the answer. In particular, we add a new training path g_r = [p_r, p_1, ..., p_{|g|}] by adding a paragraph p_r ∈ C_1 that has a high TF-IDF score and is linked to the first paragraph p_1 in the ground-truth path g. Adding these new training paths helps at test time, when the first paragraph in the reasoning path does not necessarily appear among the paragraphs that initialize the Wikipedia search using the heuristic TF-IDF retrieval.
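As an illustration, the following sketch constructs the augmented paths; the helpers `tfidf_topk` (returning the C_1 candidates for a question) and `out_links` (returning the hyperlink targets of a paragraph) are hypothetical names for this example.

```python
def augment_training_paths(question, gold_path, tfidf_topk, out_links):
    """Add paths g_r = [p_r, p_1, ..., p_|g|] for each high-TF-IDF
    paragraph p_r in C_1 that links to the first gold paragraph p_1."""
    paths = [gold_path]
    p1 = gold_path[0]
    for p_r in tfidf_topk(question):
        if p_r not in gold_path and p1 in out_links(p_r):
            paths.append([p_r] + gold_path)
    return paths
```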
Negative examples for robustness Our graph-based recurrent retriever needs to be trained to discriminate between relevant and irrelevant paragraphs at each step. We therefore use negative examples along with the ground-truth paragraphs; to be more specific, we use two types of negative examples: (1) TF-IDF-based and (2) hyperlink-based ones. For single-hop QA, we only use the type (1). For multi-hop QA, we use both types, and the type (2) is especially important to prevent our retriever from being distracted by reasoning paths without correct answer spans. We typically set the number of the negative examples to 50.
Loss function  For the sequential prediction task, we estimate P(p_i|h_t) independently in Equation (3) and use the binary cross-entropy loss to maximize the probability values of all the possible paths. Note that using the widely-used cross-entropy loss with the softmax normalization over C_t is not desirable here; maximizing the probabilities of g and g_r would contradict each other. More specifically, the loss function of g at the t-th step is defined as follows:
$$L_{\mathrm{retr}}(p_t, h_t) = -\log P(p_t \mid h_t) - \sum_{\tilde{p} \in \tilde{C}_t} \log\left(1 - P(\tilde{p} \mid h_t)\right), \tag{5}$$
where C̃_t is the set of negative examples described above, and includes [EOE] for t < |g|. We exclude p_r from C̃_1 for the sake of our multi-path learning. The loss is also defined with respect to g_r in the same way. All the model parameters, including those in BERT, are jointly optimized.
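A minimal sketch of Equation (5) for a single step, assuming the selection probabilities have already been computed by Equation (3):

```python
import torch

def retriever_loss(p_pos, p_negs):
    """Binary cross-entropy over independent selection probabilities (Eq. (5)).
    p_pos: scalar tensor, probability of the ground-truth paragraph at step t.
    p_negs: (num_negatives,) tensor, probabilities of the negative paragraphs
        (including [EOE] for non-terminal steps)."""
    eps = 1e-12
    return -torch.log(p_pos + eps) - torch.log(1.0 - p_negs + eps).sum()
```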
3.2 READING AND ANSWERING GIVEN REASONING PATHS
Our reader model first verifies each reasoning path in E, and finally outputs an answer span a from the most plausible reasoning path. This interplay is effective in making our framework robust; this is further discussed in Appendix A.3. We model the reader as multi-task learning of (1) reading comprehension, which extracts an answer span from a reasoning path E using a standard approach (Seo et al., 2017; Xiong et al., 2017), and (2) reasoning path re-ranking, which re-ranks the retrieved reasoning paths by computing the probability that the path includes the answer.
For the reading comprehension task, we use BERT, where the input is the concatenation of the question text and the text of all the paragraphs in E. This lets our reader fully leverage the self-attention mechanism across the concatenated paragraphs in the retrieved reasoning paths; this paragraph interaction is crucial for multi-hop reasoning (Wang et al., 2019a).
We share the same model for re-ranking, and use the BERT's [CLS] representation to estimate the probability of selecting E to answer the question:
$$P(E \mid q) = \sigma(w_n \cdot u_E) \quad \mathrm{s.t.} \quad u_E = \mathrm{BERT}_{\mathrm{[CLS]}}(q, E) \in \mathbb{R}^D, \tag{6}$$
where w_n ∈ R^D is a weight vector. At inference time, we select the best evidence E_best ∈ E by P(E|q), and output the answer span by S_read:
$$E_{\mathrm{best}} = \arg\max_{E \in \mathbf{E}} P(E \mid q), \qquad S_{\mathrm{read}} = \arg\max_{i, j,\ i \leq j} P_i^{\mathrm{start}} P_j^{\mathrm{end}}, \tag{7}$$
where P_i^start and P_j^end denote the probabilities that the i-th and j-th tokens in E_best are the start and end positions, respectively, of the answer span; they are computed by standard span prediction layers on top of BERT.
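A sketch of the inference step in Equation (7), with placeholder tensors standing in for the reader outputs (this is not our exact implementation, which also handles yes/no answers; see Appendix A.4):

```python
import torch

def read_best_path(path_probs, start_logits, end_logits, max_span_len=30):
    """path_probs: (num_paths,) tensor of P(E|q) for the retrieved paths.
    start_logits/end_logits: (num_paths, seq_len) span logits per path.
    Returns the best path index and the (start, end) span with i <= j."""
    best = int(torch.argmax(path_probs))                    # E_best in Eq. (7)
    p_start = torch.softmax(start_logits[best], dim=0)
    p_end = torch.softmax(end_logits[best], dim=0)
    # Joint probabilities P_i^start * P_j^end for all spans.
    joint = p_start.unsqueeze(1) * p_end.unsqueeze(0)       # (seq_len, seq_len)
    mask = torch.triu(torch.ones_like(joint))               # enforce i <= j
    mask = mask - torch.triu(torch.ones_like(joint), diagonal=max_span_len)
    joint = joint * mask                                    # cap the span length
    i, j = divmod(int(torch.argmax(joint)), joint.size(1))
    return best, (i, j)
```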
Training examples To train the multi-task reader model, we use the ground-truth evidence paragraphs used for training our retriever. It is known to be effective in open-domain QA to use distantly supervised examples, which are not originally associated with the questions but include expected answer strings (Chen et al., 2017;Wang et al., 2018a;Hu et al., 2019). These distantly supervised examples are also effective to simulate the inference time process. Therefore, we combine distantly supervised examples from a TF-IDF retriever with the original supervised examples. Following the procedures in Chen et al. (2017), we add up to one distantly supervised example for each supervised example. We set the answer span as the string that matches a and appears first.
To train our reader model to discriminate between relevant and irrelevant reasoning paths, we augment the original training data with additional negative examples to simulate incomplete evidence. In particular, we add paragraphs that appear to be relevant to the given question but actually do not contain the answer. For multi-hop QA, we select one ground-truth paragraph including the answer span, and swap it with one of the TF-IDF top-ranked paragraphs. For single-hop QA, we simply replace the single ground-truth paragraph with TF-IDF-based negative examples that do not include the expected answer string. For the distorted evidence Ẽ, we aim at minimizing P(Ẽ|q).
Multi-task loss function
The objective is the sum of cross entropy losses for the span prediction and re-ranking tasks. The loss for the question q and its evidence candidate E is as follows:
$$L_{\mathrm{read}} = L_{\mathrm{span}} + L_{\mathrm{no\ answer}} = \left(-\log P^{\mathrm{start}}_{y^{\mathrm{start}}} - \log P^{\mathrm{end}}_{y^{\mathrm{end}}}\right) - \log P_r, \tag{8}$$
where y^start and y^end are the ground-truth start and end indices, respectively. L_no answer corresponds to the loss of the re-ranking model, to discriminate the distorted reasoning paths with no answers. P_r is P(E|q) if E is the ground-truth evidence; otherwise P_r = 1 - P(E|q). We mask the span losses for negative examples, in order to avoid unexpected effects on the span predictions.

4 EXPERIMENTS

4.1 EXPERIMENTAL SETUP

We evaluate our method on three open-domain Wikipedia-sourced datasets: HotpotQA, SQuAD Open, and Natural Questions Open. We target all the English Wikipedia paragraphs for SQuAD Open and Natural Questions Open, and the first paragraph (introductory paragraph) of each article for HotpotQA, following previous studies. More details can be found in Appendix B.

HotpotQA  HotpotQA (Yang et al., 2018) is a human-annotated large-scale multi-hop QA dataset. Each answer can be extracted from a collection of 10 paragraphs in the distractor setting, and from the entire Wikipedia in the full wiki setting. Two evidence paragraphs are associated with each question for training. Our primary target is the full wiki setting due to its open-domain scenario, and we use the distractor setting to evaluate how well our method works in a closed scenario where the two evidence paragraphs are always included. The dataset also provides annotations to evaluate the prediction of supporting sentences, and we adapt our retriever to the supporting fact prediction. Note that this subtask is specific to HotpotQA. More details are described in Appendix A.5.
SQuAD Open SQuAD Open (Chen et al., 2017) is composed of questions from the original SQuAD dataset (Rajpurkar et al., 2016). This is a single-hop QA task, and a single paragraph is associated with each question in the training data.
Natural Questions Open  Natural Questions Open (Lee et al., 2019) is composed of questions from the Natural Questions dataset (Kwiatkowski et al., 2019), 3 which is based on Google Search queries issued independently of the existing articles. A single paragraph is associated with each question, but our preliminary analysis showed that some questions benefit from multi-hop reasoning.
Metrics  We report standard F1 and EM scores for HotpotQA and SQuAD Open, and the EM score for Natural Questions Open, to evaluate the overall QA accuracy in finding the correct answers. For HotpotQA, we also report Supporting Fact F1 (SP F1) and Supporting Fact EM (SP EM) to evaluate the sentence-level supporting fact retrieval accuracy. To evaluate the paragraph-level retrieval accuracy for multi-hop reasoning, we use the following metrics: Answer Recall (AR), which evaluates the recall of the answer string among the top paragraphs (Wang et al., 2018a); Paragraph Recall (PR), which evaluates whether at least one of the ground-truth paragraphs is included among the retrieved paragraphs; and Paragraph Exact Match (P EM), which evaluates whether both of the ground-truth paragraphs for multi-hop reasoning are included among the retrieved paragraphs.
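For concreteness, a sketch of how these paragraph-level metrics can be computed for a single question, with simple dictionary inputs standing in for the retrieved paragraphs:

```python
def retrieval_metrics(retrieved, gold_ids, answer):
    """retrieved: dict mapping paragraph id -> paragraph text (the retrieved set);
    gold_ids: ids of the ground-truth paragraphs; answer: gold answer string."""
    ar = any(answer in text for text in retrieved.values())   # Answer Recall
    pr = any(g in retrieved for g in gold_ids)                # Paragraph Recall
    p_em = all(g in retrieved for g in gold_ids)              # Paragraph EM
    return ar, pr, p_em
```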
Evidence Corpus and the Wikipedia graph  We use English Wikipedia as the evidence corpus and do not use other data such as Google search snippets or external structured knowledge bases. We use several versions of the Wikipedia dumps for the three datasets (see Appendix B.5). To construct the Wikipedia graph, the hyperlinks are automatically extracted from the raw HTML source files. Directed edges are added between a paragraph p_i and all of the paragraphs included in the target article. The constructed graph consists of 32.7M nodes and 205.4M edges. For HotpotQA, we only use the introductory paragraphs, resulting in a graph that includes about 5.2M nodes and 23.4M edges.
Implementation details
We use the pre-trained BERT models with the uncased base configuration (d = 768) for our retriever and the whole word masking uncased large (wwm) configuration (d = 1024) for our readers. We follow Chen et al. (2017) for the TF-IDF-based retrieval model and use the same hyper-parameters. We tuned the most important hyper-parameters, F, the number of initial TF-IDF-based paragraphs, and B, the beam size, mainly on the HotpotQA development set (the effects of increasing F are shown in Figure 5 in Appendix C.3, along with the results with B = 1). If not specified, we set B = 8 for all the datasets, F = 500 for HotpotQA full wiki and SQuAD Open, and F = 100 for Natural Questions Open.

4.2 OVERALL RESULTS

Table 1 compares our method with previous published methods on the HotpotQA development set. Our method significantly outperforms all the previous results across the evaluation metrics under both the full wiki and distractor settings. Notably, our method achieves 14.5 F1 and 14.0 EM gains over the state-of-the-art Semantic Retrieval (Nie et al., 2019) and 10.9 F1 gains over the concurrent Transformer-XH model (Zhao et al., 2020) on full wiki. We can see that our method, even with the BERT base configuration for our reader, significantly outperforms all the previous QA scores. Moreover, our method shows significant improvement in predicting supporting facts in the full wiki setting. We compare the performance of our approach to other models on the HotpotQA full wiki official hidden test set in Table 2. We outperform all the published and unpublished models, including up-to-date work (marked with ♣), by large margins in terms of QA performance.
On SQuAD Open, our model outperforms the concurrent state-of-the-art model (Wang et al., 2019b) by 2.9 F1 and 3.5 EM points, as shown in Table 3. Due to less lexical overlap between questions and paragraphs in Natural Questions, pipelined approaches using term-based retrievers often have difficulty finding the associated articles. Nevertheless, our approach matches the performance of the best end-to-end retriever (ORQA), as shown in Table 4. In addition to its competitive performance, our retriever can be handled on a single GPU machine, while a fully end-to-end retriever in general requires industry-scale computational resources for training (Seo et al., 2019). More results on these two datasets are discussed in Appendix D.
4.3 PERFORMANCE OF REASONING PATH RETRIEVAL
We compare our retriever with competitive retrieval methods for HotpotQA full wiki, with F = 20.
TF-IDF (Chen et al., 2017), the widely used retrieval method that scores paragraphs according to the TF-IDF scores of the question-paragraph pairs; we simply select the top-2 paragraphs. Re-rank (Nogueira & Cho, 2019), which learns to retrieve paragraphs by fine-tuning BERT to re-rank the top F TF-IDF paragraphs; we select the top-2 paragraphs after re-ranking. Re-rank 2hop, which extends Re-rank to accommodate two-hop reasoning: it first adds paragraphs linked from the top TF-IDF paragraphs, and then uses the same BERT model to select the paragraphs. Entity-centric IR, our re-implementation of Godbole et al. (2019), which is related to Re-rank 2hop, but instead of simply selecting the top two paragraphs, re-ranks the possible combinations of the paragraphs that are linked to each other. Cognitive Graph (Ding et al., 2019), which uses the provided prediction results of the Cognitive Graph model on the HotpotQA development dataset. Semantic Retrieval (Nie et al., 2019), which uses the provided prediction results of the state-of-the-art Semantic Retrieval model on the HotpotQA development dataset.

Retrieval results  Table 5 shows that our recurrent retriever yields 8.8 P EM and 9.1 AR gains, leading to an improvement of 10.3 QA EM over Semantic Retrieval. The significant improvement from Re-rank 2hop to Entity-centric IR demonstrates that exploring entity links from the initially retrieved documents helps to retrieve paragraphs with less lexical overlap. On the other hand, comparing our retriever with Entity-centric IR and Semantic Retrieval shows the importance of learning to sequentially retrieve reasoning paths in the Wikipedia graph. It should be noted that our method with F = 20 outperforms all the QA EM scores in Table 1.
4.4 ANALYSIS
We conduct detailed analysis of our framework on the HotpotQA full wiki development set.
Ablation study of our framework  To study the effectiveness of our modeling choices, we compare the performance of variants of our framework. We ablate the retriever with 1) No recurrent module, which removes the recurrence from our retriever, computes the probability of each paragraph to be included in reasoning paths independently, and selects the path with the highest joint probability on the graph; 2) No beam search, which uses a greedy search (B = 1) in our recurrent retriever; 3) No link-based negative examples, which trains the retriever model without adding hyperlink-based negative examples besides the TF-IDF-based negative examples. We ablate the reader model with 1) No reasoning path re-ranking, which outputs the answer only with the best reasoning path from the retriever model, and 2) No negative examples, which trains the model only with the gold paragraphs, removing L_no answer from L_read; during inference, "No negative examples" reads all the paths and outputs the answer with the highest answer probability.

Ablation results  Table 6 shows that removing any of the listed components gives a notable performance drop. The most critical component in our retriever model is the recurrent module, dropping the EM by 17.4 points. As shown in Figure 1, multi-step retrieval often relies on information mentioned in another paragraph. Therefore, without conditioning on the previous time steps, the model fails to retrieve the complete evidence. Training without hyperlink-based negative examples results in the second largest performance drop, indicating that the model can be easily distracted by reasoning paths without a correct answer, and showing the importance of negative sampling for training. Replacing the beam search with the greedy search gives a performance drop of about 4 EM points, which demonstrates that being aware of the graph structure is helpful in finding the best reasoning paths.
The performance drop from removing the reasoning path re-ranking indicates the importance of verifying the reasoning paths in our reader. Not using negative examples to train the reader degrades EM by more than 16 points, due to over-confident predictions, as discussed in Clark & Gardner (2018).
The performance with an off-the-shelf entity linking system  Although hyperlinks are ubiquitous on the web, one question is how well our method works without the curated Wikipedia hyperlinks. We evaluate our method on the development set of HotpotQA full wiki with an off-the-shelf entity linking system (Ferragina & Scaiella, 2011) used to construct the document graph in our method. More details about this experimental setup can be found in Appendix B.7. Table 7 shows that our approach with the entity linking system scores only 2.3 F1 and 2.2 EM lower than with the hyperlinks, still achieving the state of the art. This suggests that our approach is not restricted to the existence of hyperlink information, and that using hyperlinks, when available, is promising.
The effectiveness of arbitrary-step retrieval  The existing iterative retrieval methods fix the number of reasoning steps (Qi et al., 2019; Godbole et al., 2019; Feldman & El-Yaniv, 2019), while our approach accommodates arbitrary steps of reasoning. We also evaluate our method by fixing the length of the reasoning path (L = {1, 2, 3, 4}). Table 8 shows that our adaptive retrieval performs the best, although the length of all the annotated reasoning paths in HotpotQA is two. As discussed in Min et al. (2019b), we also observe that some questions are answerable based on a single paragraph, for which our model flexibly selects a single paragraph and then terminates retrieval.
The effectiveness of the interplay between retriever and reader  Table 6 shows that the interplay between our retriever and reader models is effective. To understand this, we investigate the length of the reasoning paths selected by our retriever and reader, and the final QA performance. Table 9 shows that the average length selected by our reader is notably longer than that by our retriever. Table 9 also presents the EM scores averaged over the questions with a certain length of reasoning path (L = {1, 2, 3}). We observe that our framework performs the best when it selects reasoning paths with L = 3, showing a 63.0 EM score. Based on these observations, we expect that the retriever favors shorter paths, while the reader tends to select a longer and more convincing multi-hop reasoning path to derive an answer string.

Figure 4: Reasoning examples by our retriever (the bottom paragraph) and our reader (two paragraphs connected by a dotted line). Highlighted text denotes a bridge entity, and blue-underlined text represents hyperlinks.
Qualitative examples of retrieved reasoning paths  Finally, we show two examples from HotpotQA full wiki; Appendix C.5 presents more qualitative examples. In Figure 3, our approach successfully retrieves the correct reasoning path and answers correctly, while Re-rank fails. The top two paragraphs next to the graph are the introductory paragraphs of the two entities on the reasoning path, and the paragraph at the bottom shows the wrong paragraph selected by Re-rank. The "Millwall F.C." paragraph has little lexical overlap with the question, and the bridge entity "Millwall" is not stated in the question. Thus, Re-rank chooses a wrong paragraph with high lexical overlap to the given question.
In Figure 4, we compare the reasoning paths ranked highest by our retriever and reader. Although the gold path is included among the top 8 paths selected by the beam search, our retriever model selects a wrong paragraph as the best reasoning path. By re-ranking the reasoning paths, the reader eventually selects the correct reasoning path ("2017-18 Wigan Athletic F.C. season" → "EFL Cup"). This example shows the effectiveness of the strong interplay of our retriever and reader.
5 CONCLUSION
This paper introduces a new graph-based recurrent retrieval approach, which retrieves reasoning paths over the Wikipedia graph to answer multi-hop open-domain questions. Our retriever model learns to sequentially retrieve evidence paragraphs to form the reasoning path. Subsequently, our reader model re-ranks the reasoning paths, and it determines the final answer as the one extracted from the best reasoning path. Our experimental results significantly advance the state of the art on HotpotQA by more than 14 points absolute gain on the full wiki setting. Our approach also achieves the state-of-the-art performance on SQuAD Open and Natural Questions Open without any architectural changes, demonstrating the robustness of our method. Our method provides insights into the underlying entity relationships, and the discrete reasoning paths are helpful in interpreting our framework's reasoning process. Future work involves end-to-end training of our graph-based recurrent retriever and reader for improving upon our current two-stage training.
ACKNOWLEDGMENTS
We acknowledge grants from ONR N00014-18-1-2826, DARPA N66001-19-2-403, NSF (IIS1616112, IIS1252835), and Samsung GRO. We thank Sewon Min, David Wadden, Yizhong Wang, Akhilesh Gotmare, Tong Niu, and UW NLP group and Salesforce research members for their insightful discussions. We would also like to show our gratitude to Melvin Gruesbeck for providing us with the artistic figures presented in this paper. We thank the anonymous reviewers for their helpful and thoughtful comments. Akari Asai is supported by The Nakajima Foundation Fellowship.
APPENDIX

A DETAILS ABOUT MODELING

A.1 A NORMALIZED RNN

We decompose Equation (4) as follows:

$$a_{t+1} = W_r [h_t; w_i] + b_r, \qquad h_{t+1} = \alpha \frac{a_{t+1}}{\|a_{t+1}\|}, \tag{9}$$
where W_r ∈ R^{d×2d} is a weight matrix, b_r ∈ R^d is a bias vector, and α ∈ R is a scalar parameter (initialized with 1.0). We set the global initial state a_1 to a parameterized vector s ∈ R^d, and we also parameterize an [EOE] vector w_[EOE] ∈ R^d for the [EOE] symbol. The use of w_i for both the input and output layers is inspired by Inan et al. (2017) and Press & Wolf (2017). In addition, we align the norm of w_[EOE] with those of w_i by applying the layer normalization (Ba et al., 2016) of the last BERT layer, because w_[EOE] is used along with the BERT outputs. Without this layer normalization, the L2 norms of w_i and w_[EOE] can be quite different, and the model could easily discriminate between them by the difference of the norms.
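A minimal PyTorch sketch of this normalized recurrence (initialization details simplified):

```python
import torch

class NormalizedRNNCell(torch.nn.Module):
    """The recurrence in Eq. (9): a linear update followed by L2
    normalization, rescaled by a learned scalar alpha (init 1.0)."""
    def __init__(self, d):
        super().__init__()
        self.W_r = torch.nn.Linear(2 * d, d)                    # W_r and b_r in Eq. (9)
        self.alpha = torch.nn.Parameter(torch.tensor(1.0))

    def forward(self, h_t, w_i):
        a = self.W_r(torch.cat([h_t, w_i], dim=-1))
        return self.alpha * a / (a.norm(dim=-1, keepdim=True) + 1e-12)
```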
A.2 QUESTION-PARAGRAPH ENCODING IN OUR RETRIEVER COMPONENT
Equation (2) shows that we compute each paragraph representation w_i conditioned on the question q. An alternative approach is to separately encode the paragraphs and the question, to directly retrieve paragraphs (Lee et al., 2019; Seo et al., 2019). However, due to the lack of explicit interactions between the paragraphs and the question, such a neural retriever using question-independent paragraph encodings suffers from compressing the necessary information into fixed-dimensional vectors, resulting in low performance on entity-centric questions (Lee et al., 2019). It has been shown that attention-based paragraph-question interactions improve the retrieval accuracy if the retrieval scale is tractable (Wang et al., 2018a; Lee et al., 2018). There is a trade-off between scalability and accuracy, and this work aims at striking the balance by jointly using the lexical matching retrieval and the graphs, followed by the rich question-paragraph encodings.
A question-independent variant We can also formulate our retriever model by using a questionindependent approach. There are only two simple modifications. First, we reformulate Equation (2) as follows:
$$w_i = \mathrm{BERT}_{\mathrm{[CLS]}}(p_i), \tag{10}$$
where we no longer input the question q together with the paragraphs. Next, we condition the initial RNN state on the question information. More specifically, we compute a question-conditioned state h'_1 by using Equation (4) as follows:
$$w_q = \mathrm{BERT}_{\mathrm{[CLS]}}(q), \tag{11}$$
$$h'_1 = \mathrm{RNN}(h_1, w_q), \tag{12}$$
where w_q is computed by the same BERT encoder as in Equation (10), and h_1 is the original initial state used in our question-dependent approach, as described in Appendix A.1; h'_1 then serves as the initial state. The remaining parts are exactly the same, and we can perform the reasoning path retrieval in the same manner.
A.3 WHY IS THE INTERPLAY IMPORTANT?
Our retriever model learns to predict plausibility of the reasoning paths by capturing the paragraph interactions through the BERT's [CLS] representations, after independently encoding the paragraphs along with the question; this makes our retriever scalable to the open-domain scenario. By contrast, our reader jointly learns to predict the plausibility and answer the question, and moreover, fully leverages the self-attention mechanism across the concatenated paragraphs in the retrieved reasoning paths; this paragraph interaction is crucial for multi-hop reasoning (Wang et al., 2019a). In summary, our retriever is scalable, but the top-1 prediction is not always enough to fully capture multi-hop reasoning to answer the question. Therefore, the additional re-ranking process mitigates the uncertainty and makes our framework more robust.
A.4 HANDLING YES-NO QUESTIONS IN OUR READER COMPONENT
In the HotpotQA dataset, we need to handle yes-no questions as well as extracting answer spans from the paragraphs. We treat the two special types of the answers, yes and no, by extending the re-ranking model in Equation (6). In particular, we extend the binary classification to a multi-class classification task, where the positive "answerable" class is decomposed into the following three classes: span, yes, and no. If the probability of "yes" or "no" is the largest among the three classes, our reader directly outputs the label as the answer, without any span extraction. Otherwise, our reader uses the span extraction model to output the answer.
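A minimal sketch of this three-way classification head (a hypothetical module on top of the BERT [CLS] output, not our exact implementation):

```python
import torch

class AnswerTypeHead(torch.nn.Module):
    """Classifies a reasoning path into {span, yes, no}; if "yes" or "no"
    has the highest probability, it is output directly as the answer,
    otherwise the span extraction model is used."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.classifier = torch.nn.Linear(hidden_dim, 3)  # span / yes / no

    def forward(self, cls_vec):
        return torch.softmax(self.classifier(cls_vec), dim=-1)
```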
A.5 SUPPORTING FACT PREDICTION IN HOTPOTQA
We adapt our recurrent retriever to the subtask of supporting fact prediction in HotpotQA (Yang et al., 2018). The task is to output the sentences that support the answer to the question. Such supporting sentences are annotated for the two ground-truth paragraphs in the training data. Since our framework outputs the most plausible reasoning path E along with the answer, we can add an additional step to select supporting facts (sentences) from the paragraphs in E. We train our recurrent retriever using the training examples for the supporting fact prediction task, where the model parameters are not shared with those of our paragraph retriever. We replace the question-paragraph encoding in Equation (2) with a question-answer-sentence encoding for this task, where the question string is concatenated with its answer string. The answer string is the ground-truth one at training time. We then maximize the probability of selecting the ground-truth sequence of supporting fact sentences, while setting the other sentences as negative examples. At test time, we use the best reasoning path and its predicted answer string from our retriever and reader models to finally output the supporting facts for each question. The supporting fact prediction task is performed after finalizing the reasoning path and the answer for each question, and hence this additional task does not affect the QA accuracy.
B DETAILS ABOUT EXPERIMENTS
B.1 DATASET DETAILS OF HOTPOTQA, SQUAD OPEN AND NATURAL QUESTIONS OPEN
HotpotQA  The HotpotQA training, development, and test datasets contain 90,564, 7,405 and 7,405 questions, respectively. To train our retriever model for the distractor setting, we use the distractor training data, where only the original ten paragraphs are associated with each question. The retriever model trained with this setting is also used in our ablation study as "retriever, no link-based negatives" in Table 6. For the full wiki setting, we train our retriever model with the data augmentation technique and the additional negative examples described in Section 3.1.2. We use the same reader model for both settings, trained with the augmented additional references and the negative examples described in Section 3.2.
SQuAD Open and Natural Questions Open  For SQuAD Open, we use the original training set (containing 78,713 questions) as our training data, and the original development set (containing 10,570 questions) as our test data. For Natural Questions Open, we follow the dataset splits provided by Min et al. (2019a); the training, development and test datasets contain 79,168, 8,757 and 3,610 questions, respectively. For both SQuAD Open and Natural Questions Open, we train our reader on the original examples with the augmented additional negative examples and the distantly supervised examples described in Section 3.2.
B.2 DERIVING GROUND-TRUTH REASONING PATHS
Section 3.1.2 describes our training strategy for our recurrent retriever. We apply the data augmentation technique to HotpotQA and Natural Questions to consider multi-hop reasoning. To derive the ground-truth reasoning path g, we use the ground-truth evidence paragraphs associated with the questions in the training data of each dataset. For SQuAD Open and Natural Questions Open, each training example has only a single paragraph p, and thus it is trivial to derive g as [p, [EOE]]. For the multi-hop case, HotpotQA, we have two ground-truth paragraphs p_1, p_2 for each question. Assuming that p_2 includes the answer string, we set g = [p_1, p_2, [EOE]].
B.3 DETAILS ABOUT NEGATIVE EXAMPLES FOR OUR READER MODEL IN SQUAD OPEN AND NATURAL QUESTIONS OPEN
To train our reader model for SQuAD Open, in addition to the TF-IDF top-ranked paragraphs, we add two types of additional negative examples: (i) paragraphs, which do not include the answer string, from the originally annotated articles, and (ii) "unanswerable" questions from SQuAD 2.0 (Rajpurkar et al., 2018). For Natural Questions Open, we add negative examples of the type (i).
B.4 TRAINING SETTINGS
To use the pre-trained BERT models, we used the public code base, pytorch-transformers, 4 written in PyTorch. 5 For optimization, we used the code base's implementation of the Adam optimizer (Kingma & Ba, 2015), with a weight-decay coefficient of 0.01 for non-bias parameters. A warm-up strategy in the code base was also used, with a warm-up rate of 0.1. Most of the settings follow the defaults. To train our recurrent retriever, we set the learning rate to 3 · 10^{-5} and the maximum number of training epochs to three. The mini-batch size is four; a mini-batch example consists of a question with its corresponding paragraphs. To train our reader model, we set the learning rate to 3 · 10^{-5} and the maximum number of training epochs to two. Empirically, we observe better performance with a larger batch size, as discussed in previous work (Liu et al., 2019; Ott et al., 2018), and thus we set the mini-batch size to 120. A mini-batch example consists of a question with its evidence paragraphs. We will release our code to reproduce our experiments.

B.5 THE WIKIPEDIA DUMPS FOR EACH DATASET

For HotpotQA full wiki, we use the pre-processed English Wikipedia dump from October 2017, provided by the HotpotQA authors. 6 For Natural Questions Open, we use the English Wikipedia dump from December 20, 2018, following Lee et al. (2019) and Min et al. (2019a). For SQuAD Open, we use the Wikipedia dump provided by Chen et al. (2017). Although using a single dump for different open-domain QA datasets is a common practice (Chen et al., 2017; Wang et al., 2018a; Lee et al., 2018), this potentially causes inconsistent or even unfair evaluation across different experimental settings, due to the temporal inconsistency of the Wikipedia articles. More concretely, every Wikipedia article is editable, and as a result a fact can be rephrased or even removed. For instance, a question from the SQuAD development set, "Where does Kenya rank on the CPI scale?", is originally paired with a paragraph from the article on Kenya. Based on a single sentence, "Kenya ranks low on Transparency International's Corruption Perception Index (CPI)", from that paragraph, the annotated answer span is "low." However, this sentence has been rewritten as "Kenya has a high degree of corruption according to Transparency International's Corruption Perception Index (CPI)" in a later version of the same article. 7 This is problematic considering that the major evaluation metrics are based on string matching.
Another problem exists especially in Natural Questions Open. The dataset contains real Google search queries, and some of them reflect temporal trends at the time when the queries were executed. If a query is related to a TV show broadcasted in 2018, we can hardly expect to extract the answer from a dump in 2017.
Thus, although Wikipedia is a useful knowledge source for open-domain QA research, its rapidly evolving nature should be considered more carefully for reproducibility. We will make all of the data, including the pre-processed Wikipedia articles for each experiment, available for future research.
B.6 DETAILS ABOUT INITIAL CANDIDATES C_1 SELECTION

To retrieve the initial candidates C_1 for each question, we use a TF-IDF-based retriever with bi-gram hashing (Chen et al., 2017). For HotpotQA full wiki, we retrieve the top F introductory paragraphs for each question from a corpus including all the introductory paragraphs. For SQuAD Open and Natural Questions Open, we first retrieve 50 Wikipedia articles through the same TF-IDF retriever, and further run another TF-IDF-based paragraph retriever (Clark & Gardner, 2018; Min et al., 2019a) to retrieve F paragraphs in total.
B.7 DETAILS ABOUT ENTITY LINKING EXPERIMENT
We experiment with a variant of our approach, where we incorporate an entity linking system with our framework in place of the Wikipedia hyperlinks. In this experiment, we first retrieve seed paragraphs using TF-IDF (F = 100), and run an off-the-shelf entity linker (TagMe by Ferragina & Scaiella (2011)) over the paragraphs. If the entity linker detects some entities, we retrieve their corresponding Wikipedia articles, and add edges from the seed paragraphs to the entity-linked paragraphs. Once we build the graph, we re-run all of the experiments while the other components remain exactly the same. We use the TagMe official Python wrapper. 8

C ADDITIONAL RESULTS ON HOTPOTQA

C.1 UPPER-BOUND OF OUR RETRIEVAL MODULE

For scalability and computational efficiency, we bootstrap our retrieval module with TF-IDF retrieval; we first retrieve F paragraphs using TF-IDF with the method described in Section B.6 and initialize C_1 with these TF-IDF paragraphs. Although we expand our candidate paragraphs at each time step using the Wikipedia graph, if none of the initial paragraphs is within a few hops of the gold paragraphs, our method is likely to fail to reach them. To estimate the paragraph EM upper-bound, we checked whether the two gold paragraphs are included in the top 20 TF-IDF paragraphs and their hyperlinked paragraphs in the HotpotQA full wiki setting. We found that for 75.4% of the questions, all of the gold paragraphs are included in the collection of the TF-IDF paragraphs and the hyperlinked paragraphs. It should also be noted that when we only consider the TF-IDF retrieval results, the upper-bound drops to 35.1%, which suggests that TF-IDF-based retrieval cannot effectively discover paragraphs multiple hops away, due to little lexical overlap. When we increase F to 100 and 500, the upper-bound reaches 84.1% and 89.2%, respectively.
C.2 PER-CATEGORY QUESTION ANSWERING AND RETRIEVAL PERFORMANCE ON HOTPOTQA FULL WIKI
In HotpotQA, there are two types of questions, bridge and comparison. While comparison-type questions explicitly mention the two entities related to the given question, in bridge-type questions the bridge entities are rarely explicitly stated. This makes it hard for a retrieval system to discover the paragraphs entailed by the bridge entities only. We evaluate the question answering and paragraph retrieval performance for each of the two question types. We compare the PR, P EM and QA EM for each of the two categories with two state-of-the-art models, Cognitive Graph (Ding et al., 2019) and Semantic Retrieval (Nie et al., 2019). Here, we set our initial TF-IDF number F to 500. Table 10 shows that our retriever yields 16.5 P EM gain and 15.1 EM gain over Semantic Retrieval for the challenging bridge-type questions. For the comparison-type questions, our method achieves almost 10 points higher QA EM than Semantic Retrieval. We observed that some of the comparison-type questions can be answered based on a single paragraph, and thus our model selects only one paragraph for some of these comparison-type questions, resulting in lower P EM scores on the comparison-type questions. We show several examples of the questions where we can answer based on a single paragraph in Section C.5.
C.3 ON THE ROBUSTNESS TO THE INCREASE OF THE PARAGRAPHS
As we discussed in Section 3.1.1, we aim at significantly reducing the search space and thus scaling the number of initial TF-IDF candidates. Increasing the number of initially retrieved paragraphs often improves the recall of the evidence paragraphs. On the other hand, increasing the candidate paragraphs introduces additional noise that may distract models and eventually hurt performance (Kratzwald & Feuerriegel, 2018). We compare the performance of three different approaches: (i) ours, (ii) ours (greedy, without reasoning path re-ranking), and (iii) Re-rank.
We increase the number of the TF-IDF-based retrieved paragraphs from 10 to 500 (for Re-rank, we compare the performance up to 200 paragraphs). Figure 5 clearly shows that our approach is robust to the increase of the initial candidate paragraphs, and thus can constantly yield performance gains with more candidate paragraphs. Our approach with the greedy search also shows performance improvements; however, after a certain number, the greedy approach stops improving the performance. Re-rank starts suffering from the noise caused by many distracting paragraphs included in the initial candidate paragraphs at F = 200.
C.4 RESULTS OF QUESTION-INDEPENDENT PARAGRAPH ENCODING FOR OUR RETRIEVER
To show the importance of the question-paragraph encoding in our retriever model, we conduct an experiment on the development set of HotpotQA, replacing it with the question-independent encoding described in Appendix A.2. For a fair comparison, we use the same initial TF-IDF-based retrieval (only for the full wiki setting), hyperlink-based Wikipedia graph, beam search, and reader model (BERT wwm). We train the alternative model without using the data augmentation technique (described in Section 3.1.2).

Table 11: Effects of the question-dependent paragraph encoding: comparing our retriever model with and without the question-dependent encoding. For our question-dependent approach, the full wiki results correspond to "retriever, no link-based negatives" in Table 6, and the distractor results correspond to "Ours (Reader: BERT wwm)" in Table 1, to make the results comparable.

Table 11 shows the results in both the full wiki and distractor settings. As seen in this table, the QA F1 and EM performance significantly deteriorates in the full wiki setting, which demonstrates the importance of the question-dependent encoding for complex and entity-centric open-domain question answering.
We can also see that the performance drop in the distractor setting is much smaller than that in the full wiki setting. This is due to its closed nature; for each question, we are given only ten paragraphs and the two gold paragraphs are always included, which significantly narrows down the search space and makes the retrieval task much easier than in the full wiki setting. Therefore, our recurrent retriever model is likely to discover the gold reasoning paths by the beam search, and our reader model can select the gold paths by the robust re-ranking approach. To verify this hypothesis, we checked the P EM score as a retrieval accuracy in the distractor setting. If we only consider the top-1 path from the beam search, the P EM score of the question-independent model is 12% lower than that of our question-dependent model. However, if we consider all the reasoning paths produced by the beam search, the coverage of the gold paths is almost the same. As a result, our reader model performs similarly with both the question-dependent and question-independent approaches. This additionally shows the robustness of our re-ranking approach.
C.5 MORE QUALITATIVE ANALYSIS ON THE REASONING PATH ON HOTPOTQA FULL WIKI
In this section, we conduct more qualitative analysis on the reasoning paths predicted by our model. Explicitly retrieving plausible reasoning paths and re-ranking the paths provide us interpretable insights into the underlying entity relationships used for multi-hop reasoning.
As shown in Table 9, our model flexibly selects one or more paragraphs for each question. To understand these behaviors, we conduct qualitative analysis on these examples whose reasoning paths are shorter or longer than the original gold reasoning paths.
Reasoning path only with single paragraph First, we show two examples (one is a bridge-type question and the other is a comparison-type question), where our retriever selects single paragraph and terminates without selecting any additional paragraphs.
The bridge-type question in Table 12 shows that, while originally this question requires a system to read two paragraphs, Before I Go to Sleep (film) and Nicole Kidman, our retriever and reader eventually choose Nicole Kidman only. The second paragraph has substantial lexical overlap with the given question, and thus a system may not need to read both of the paragraphs to answer.
The comparison-type question in Table 12 also shows that even comparison-type questions do not always require two paragraphs to answer the questions, and our model only selects one paragraph necessary to answer the given example question. In this example, the question has large lexical overlap with one of the ground-truth paragraph (The Bears and I), resulting in allowing our model to answer the question based on the single paragraph.
Min et al. (2019b) also observed that some of the questions do not necessarily require multi-hop reasoning, although HotpotQA is designed to require it (Yang et al., 2018). In that sense, we can say that our method automatically detects potentially single-hop questions.
Table 12: Two examples of questions for which our model retrieves a reasoning path with only one paragraph. We partly remove sentences irrelevant to the questions. Words in red correspond to the answer strings.

Reasoning path with three paragraphs  All of the HotpotQA questions are authored by annotators who are shown two relevant paragraphs, and thus the length of the ground-truth reasoning paths is always two. On the other hand, as our model accommodates arbitrary steps of reasoning, it often selects reasoning paths longer than the original annotations, as shown in Table 9. When our model selects a longer reasoning path for a HotpotQA question, does it contain paragraphs that provide additional evidence? We show an example in Table 13 to answer this question. Our model selects an additional paragraph, Blue Jeans (Lana Del Rey song), at the first step, and then selects the two annotated gold paragraphs. This first paragraph is strongly relevant to the given question, but does not contain the answer. This additional evidence might help the reader to find the correct bridge entity ("Back to December").
C.6 QUALITATIVE ANALYSIS ON THE REASONING PATH ON HOTPOTQA DISTRACTOR
Although the main focus in this paper is on open-domain QA, we show state-of-the-art performance on the HotpotQA distractor setting as well, with exactly the same architecture. We conduct qualitative analysis to understand our model's behavior in this closed setting, where the two ground-truth paragraphs are always given for each question. Table 14 shows two examples from the HotpotQA distractor setting. In the first example, P1 and P2 are the corresponding ground-truth paragraphs. At the first time step, our retriever does not expect that P2 is related to the evidence to answer the question, as the retriever is not aware of the bridge entity, "Pasek & Paul". If we simply adopt the Re-rank strategy, P3 with the second highest probability is selected, resulting in a wrong paragraph selection. In our framework, our retriever is conditioned on the previous retrieval history and thus, at the second time step, it chooses the correct paragraph, P2, lowering the probability of P3. This clearly shows the effectiveness of our multi-step retrieval method in the closed setting as well. At the third step, our model stops the prediction by outputting [EOE].
In 588 examples (7.9%) of the entire distractor development dataset, the paragraph selection by our graph-based recurrent retriever differs from the top-2 strategy.
We present another example, where only the graph-based recurrent retrieval model succeeds in finding the correct paragraph pair, (P1, P2). The second question in Table 14 shows that at the first time step our retriever successfully selects P1, but does not pay attention to P2 at all, as the retriever is not aware of the bridge entity, "the Russian Civil War". Again, once it is conditioned on P1, which includes the bridge entity, it can select P2 at the second time step. Like this, we can see how our model successfully learns to model relationships between paragraphs for multi-hop reasoning.
D ADDITIONAL RESULTS ON SQUAD OPEN AND NATURAL QUESTIONS OPEN
Although the main focus of this work is on multi-hop open-domain QA, our framework shows competitive performance on the two open-domain QA datasets, SQuAD Open and Natural Questions Open. Both datasets are originally created by assigning a single ground-truth paragraph to each question, and in that sense our framework is not specific to multi-hop reasoning tasks. In this section, we further analyze our experimental results on the two datasets.

Table 15 shows statistics of the lengths of the selected reasoning paths in our SQuAD Open experiment. This table is analogous to Table 9 for our HotpotQA experiments. We can clearly see that our recurrent retriever always outputs a single paragraph for each question if we only use the top-1 predictions. This is because our retriever model for this dataset is trained with the single-paragraph annotations. Our beam search can find longer reasoning paths, and as a result, the re-ranking process in our reader model sometimes selects reasoning paths including two paragraphs. The trend is consistent with that in Table 9. However, selecting more than one paragraph does not have a big impact; we observed only a 0.1% F1/EM improvement over our method with the path length restricted to one (based on the same experiment with L = 1 in Table 8). Considering that SQuAD is a single-hop QA dataset, this result matches our intuition. Table 15 also shows the results on Natural Questions Open, where we see the same trend again. Thanks to the ground-truth path augmentation technique, our recurrent retriever model prefers longer reasoning paths than those on SQuAD Open. We observed a 1% EM improvement over the L = 1 baseline on Natural Questions Open, and next we show an example to discuss why our reasoning path approach can be effective on this dataset.

Table 16 shows one example where our model finds a multi-hop reasoning path effectively in Natural Questions Open (development set). The question "who sang the original version of killing me so" has relatively little lexical overlap with the originally annotated paragraph (Killing Me Softly with His Song (V) in Table 16). Moreover, there are several entities named "killing me softly" in Wikipedia, because many artists cover the song. To answer this question correctly, our retriever first selects Roberta Flack (I), and then hops to the originally annotated paragraph, Killing Me Softly with His Song (V). Our reader further verifies this reasoning path and extracts the correct answer from Killing Me Softly with His Song (V).

Table 16: An example from Natural Questions Open. The bold text represents titles and paragraph indices (e.g., (I) denotes that the paragraph is an introductory paragraph). The highlighted phrase represents a bridge entity and the text in red represents an answer span.

This example shows that even without gold reasoning path annotations, our model trained on the augmented examples learns to retrieve multi-hop reasoning paths from the entire Wikipedia.
These detailed experimental results on the two other open-domain QA datasets demonstrate that our framework learns to retrieve reasoning paths flexibly with evidence sufficient to answer a given question, according to each dataset's nature.
Figure 1: An example of an open-domain multi-hop question from HotpotQA. Paragraph 2 is unlikely to be retrieved using TF-IDF retrievers due to little lexical overlap with the given question.
Figure 2: Overview of our framework.
Figure 3: Reasoning examples by our model (two paragraphs connected by a dotted line) and Re-rank (the bottom two paragraphs). Highlighted text denotes a bridge entity, and blue-underlined text represents hyperlinks.
Table 14: Two examples from the HotpotQA distractor development set. Highlighted text shows the bridge entities for multi-hop reasoning, and the words in red denote the predicted answer. Numbers in parentheses are the paragraph's selection probabilities at retrieval steps 1-3.

P1: A Christmas Story: The Musical is a musical version of the film "A Christmas Story" ... The musical has music and lyrics written by Pasek & Paul and the book by Joseph Robinette. (0.98, 0.00, 0.00)
P2: Benj Pasek and Justin Paul, known together as Pasek and Paul, are an American songwriting duo and composing team for musical theater, films, and television. ... they won both the Golden Globe and Academy Award for Best Original Song for the song "City of Stars". (0.08, 0.89, 0.00)
P3: "La La Land" is a song recorded by American singer Demi Lovato. It was written by Lovato, Joe Jonas, Nick Jonas and Kevin Jonas and produced by the Jonas Brothers alongside John Fields, for Lovato's debut studio album.

Question: Alexander Kerensky was defeated and destroyed by the Bolsheviks in the course of a civil war that ended when?
P1: The Socialist Revolutionary Party, or Party of Socialists-Revolutionaries, was a major political party in early 20th century Russia and a key player in the Russian Revolution. ... The anti-Bolshevik faction of this party, known as the Right SRs, which remained loyal to the Provisional Government leader Alexander Kerensky, was defeated and destroyed by the Bolsheviks in the course of the Russian Civil War and subsequent persecution. (0.95, 0.00, 0.00)
P2: The Russian Civil War (November 1917 - October 1922) was a multi-party war in the former Russian Empire immediately after the Russian Revolutions of 1917, as many factions vied to determine Russia's political future.
Table 1: HotpotQA development set results: QA and SP (supporting fact prediction) results on HotpotQA's full wiki and distractor settings. "-" denotes no results are available.

Models | full wiki QA (F1/EM) | full wiki SP (F1/EM) | distractor QA (F1/EM) | distractor SP (F1/EM)
Semantic Retrieval (Nie et al., 2019) | 58.8 / 46.5 | 71.5 / 39.9 | - | -
GoldEn Retriever (Qi et al., 2019) | 49.8 / - | 64.6 / - | - | -
Cognitive Graph (Ding et al., 2019) | 49.4 / 37.6 | 58.5 / 23.1 | - | -
DecompRC (Min et al., 2019c) | 43.3 / - | - | 70.6 / - | -
MUPPET (Feldman & El-Yaniv, 2019) | 40.4 / 31.1 | 47.7 / 17.0 | - | -
DFGN (Xiao et al., 2019) | - | - | 69.2 / 55.4 | -
QFE (Nishida et al., 2019) | - | - | 68.7 / 53.7 | 84.7 / 58.8
Baseline (Yang et al., 2018) | 34.4 / 24.7 | 41.0 / 5.3 | 58.3 / 44.4 | 66.7 / 22.0
Transformer-XH (Zhao et al., 2020) | 62.4 / 50.2 | 71.6 / 42.2 | - | -
Ours (Reader: BERT wwm) | 73.3 / 60.5 | 76.1 / 49.3 | 81.2 / 68.0 | 85.2 / 58.6
Ours (Reader: BERT base) | 65.8 / 52.7 | 75.0 / 47.9 | 73.3 / 59.4 | 84.6 / 57.4
Table 2: HotpotQA full wiki test set results: official leaderboard results (on November 6, 2019) on the hidden test set of the HotpotQA full wiki setting. Work marked with ♣ appeared after September 25; * denotes anonymous submissions.

Models | QA (F1/EM) | SP (F1/EM)
Semantic Retrieval | 57.3 / 45.3 | 70.8 / 38.7
GoldEn Retriever | 48.6 / 37.9 | 64.2 / 30.7
Cognitive Graph | 48.9 / 37.1 | 57.7 / 22.8
Entity-centric IR | 46.3 / 35.4 | 43.2 / 0.06
MUPPET | 40.3 / 30.6 | 47.3 / 16.7
DecompRC | 40.7 / 30.0 | - / -
QFE | 38.1 / 28.7 | 44.4 / 14.2
Baseline | 32.9 / 24.0 | 37.7 / 3.9
HGN* ♣ | 69.2 / 56.7 | 76.4 / 50.0
MIR+EPS+BERT* ♣ | 64.8 / 52.9 | 72.0 / 42.8
Transformer-XH* | 60.8 / 49.0 | 70.0 / 41.7
Ours | 73.0 / 60.0 | 76.4 / 49.1

Table 3: SQuAD Open results: we report F1 and EM scores on the test set of SQuAD Open, following previous work.

Models | F1 | EM
multi-passage (Wang et al., 2019b) | 60.9 | 53.0
ORQA (Lee et al., 2019) | - | 20.2
BM25+BERT (Lee et al., 2019) | - | 33.2
Weaver (Raison et al., 2018) | - | 42.3
RE3 (Hu et al., 2019) | 50.2 | 41.9
MUPPET (Feldman & El-Yaniv, 2019) | 46.2 | 39.3
BERTserini (Yang et al., 2019) | 46.1 | 38.6
DENSPI-hybrid (Seo et al., 2019) | 44.4 | 36.2
MINIMAL (Min et al., 2018) | 42.5 | 34.7
Multi-step Reasoner (Das et al., 2019) | 39.2 | 31.9
Paragraph Ranker (Lee et al., 2018) | - | 30.2
R3 (Wang et al., 2018a) | 37.5 | 29.1
DrQA (Chen et al., 2017) | - | 29.3
Ours | 63.8 | 56.5

Table 4: Natural Questions Open results: we report EM scores on the test and development sets of Natural Questions Open, following previous work.

Models | Dev EM | Test EM
ORQA (Lee et al., 2019) | 31.3 | 33.3
Hard EM (Min et al., 2019a) | 28.8 | 28.1
BERT + BM25 (Lee et al., 2019) | 24.8 | 26.5
Ours | 31.7 | 32.6

Table 5: Retrieval evaluation: comparing our retrieval method with other methods across Answer Recall, Paragraph Recall, Paragraph EM, and QA EM metrics.

Models | AR | PR | P EM | EM
Ours (F = 20) | 87.0 | 93.3 | 72.7 | 56.8
TF-IDF | 39.7 | 66.9 | 10.0 | 18.2
Re-rank | 55.1 | 85.9 | 29.6 | 35.7
Re-rank 2hop | 56.0 | 70.1 | 26.1 | 38.8
Entity-centric IR | 63.4 | 87.3 | 34.9 | 42.0
Cognitive Graph | 76.0 | 87.6 | 57.8 | 37.6
Semantic Retrieval | 77.9 | 93.2 | 63.9 | 46.5
Table 6: Ablation study: evaluating different variants of our model on HotpotQA full wiki.

Table 7: Performance with different link structures: comparing our results on the HotpotQA full wiki development set when we use an off-the-shelf entity linking system instead of the Wikipedia hyperlinks.

Settings (F = 100)            F1    EM
with hyperlinks               72.4  59.5
with entity linking system    70.1  57.3

Table 8: Performance with different reasoning path lengths: comparing the performance with different path lengths on HotpotQA full wiki. L-step retrieval sets the number of reasoning steps to a fixed number.

Settings (F = 100)            F1    EM
Adaptive retrieval            72.4  59.5
L-step retrieval, L = 1       45.8  35.5
L-step retrieval, L = 2       71.4  58.5
L-step retrieval, L = 3       70.1  57.7
L-step retrieval, L = 4       66.3  53.9
Table 9: Statistics of the reasoning paths: the average length and the distribution of lengths of the reasoning paths selected by our retriever and reader for HotpotQA full wiki. Avg. EM represents QA EM performance.

(F = 100)       Retriever   Reader   EM
Avg. # of L     1.96        2.21
with L = 1      539         403      31.2
with L = 2      6,639       5,655    60.0
with L = 3      227         1,347    63.0
[Figure: qualitative example on HotpotQA full wiki. Question: "When was the football club founded in which Walter Otto Davis played at centre forward?" Our method retrieves the reasoning path Walter Davis (footballer) → Millwall F.C., whose paragraph reads "Millwall Football Club is a professional football club in South East London, ... Founded as Millwall Rovers in 1885." The top two paragraphs selected by Re-rank are instead "Walter Otto Davis was a Welsh professional footballer who played at centre forward for Millwall for ten years in the 1910s." and "Tranmere Rovers Football Club is an English professional association football club founded in 1884, and based in Birkenhead, Wirral."]
Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. End-to-end open-domain question answering with BERTserini. In NAACL (Demonstrations), 2019.

Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In EMNLP, 2018.

Chen Zhao, Chenyan Xiong, Corby Rosset, Xia Song, Paul Bennett, and Saurabh Tiwary. Transformer-XH: Multi-hop question answering with extra hop attention. In ICLR, 2020.
APPENDIX
A DETAILS ABOUT MODELING
A.1 A NORMALIZED RNN
We decompose Equation (4) as follows:
Table 10: Retrieval evaluation: comparing our retrieval method with other methods across Answer Recall, Paragraph Recall, Paragraph EM, and QA EM metrics.

[Figure 5: Robustness to the increase of F. We compare the F1 scores of our model, our model without a beam search, and Re-rank with different numbers of F.]
Table 14: Two examples from the HotpotQA distractor development set. Highlighted text shows the bridge entities for multi-hop reasoning, and the words in red denote the predicted answer.

                 SQuAD Open            Natural Questions Open
               Retriever   Reader     Retriever   Reader
Avg. # of L    1.00        1.08       1.23        1.54
L = 1          10,570      9,759      6,719       4,047
L = 2          0           811        2,038       4,702
L = 3          0           0          0           8
Footnotes:
- Appendix A.2 discusses the motivation, and Appendix C.4 shows results with an alternative approach.
- We use train/dev/test splits provided by Min et al. (2019a), which can be downloaded from https://drive.google.com/file/d/1qsN5Oyi_OtT2LyaFZFH26vT8Sqjb89-s/view.
- https://github.com/huggingface/pytorch-transformers
- https://pytorch.org/
- https://hotpotqa.github.io/wiki-readme.html
- https://en.wikipedia.org/wiki/Kenya on October 25, 2019
- https://github.com/marcocor/tagme-python
We evaluate the question answering and paragraph retrieval performance for each of the two question types. We compare the PR, P EM and QA EM for each of the two categories with two state-of-the-art models, Cognitive Graph (Ding et al., 2019) and Semantic Retrieval (Nie et al., 2019). Here, we set our initial TF-IDF number F to 500. Table 10 shows that our retriever yields 16.5 P EM gain and 15.1 EM gain over Semantic Retrieval for the challenging bridge-type questions. For the comparison-type questions, our method achieves almost 10 point higher QA EM than Semantic Retrieval, leading to the improvement of 10.3 QA EM over Semantic Retrieval.
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv:1607.06450, 2016.

Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. Reading Wikipedia to answer open-domain questions. In ACL, 2017.

Christopher Clark and Matt Gardner. Simple and effective multi-paragraph reading comprehension. In ACL, 2018.

Rajarshi Das, Shehzaad Dhuliawala, Manzil Zaheer, and Andrew McCallum. Multi-step retriever-reader interaction for scalable open-domain question answering. In ICLR, 2019.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL, 2019.

Ming Ding, Chang Zhou, Qibin Chen, Hongxia Yang, and Jie Tang. Cognitive graph for multi-hop reading comprehension at scale. In ACL, 2019.

Yair Feldman and Ran El-Yaniv. Multi-hop paragraph retrieval for open-domain question answering. In ACL, 2019.

Paolo Ferragina and Ugo Scaiella. Fast and accurate annotation of short texts with wikipedia pages. IEEE Software, 29(1):70-75, 2011.

Ameya Godbole, Dilip Kavarthapu, Rajarshi Das, Zhiyu Gong, Abhishek Singhal, Xiaoxiao Yu, Mo Guo, Tian Gao, Hamed Zamani, Manzil Zaheer, and Andrew McCallum. Multi-step entity-centric information retrieval for multi-hop question answering. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, 2019.

Minghao Hu, Yuxing Peng, Zhen Huang, and Dongsheng Li. Retrieve, read, rerank: Towards end-to-end multi-document reading comprehension. In ACL, 2019.

Hakan Inan, Khashayar Khosravi, and Richard Socher. Tying word vectors and word classifiers: A loss framework for language modeling. In ICLR, 2017.

Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.

Bernhard Kratzwald and Stefan Feuerriegel. Adaptive document retrieval for deep question answering. In EMNLP, 2018.

Tom Kwiatkowski, Jennimaria Palomaki, Olivia Rhinehart, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, et al. Natural questions: a benchmark for question answering research. TACL, 2019.
Q [bridge]: Before I Go to Sleep stars an Australian actress, producer and occasional what?
Before I Go to Sleep (film): Before I Go to Sleep is a 2014 mystery psychological thriller film written and directed by Rowan Joff and based on the 2011 novel of the same name by S. J. Watson. An international co-production between the United Kingdom, the United States, France, and Sweden, the film stars Nicole Kidman, Mark Strong, Colin Firth, and Anne-Marie Duff.
Nicole Kidman: Nicole Mary Kidman is an Australian actress, producer and occasional singer. She is the recipient of several awards, including an Academy Award, two Primetime Emmy Awards, a BAFTA Award, three Golden Globe Awards, and the Silver Bear for Best Actress.
Annotated reasoning path: Before I Go to Sleep (film) → Nicole Kidman
Predicted reasoning path: Nicole Kidman

Q [comparison]: In between The Bears and I and Oceans which was released on July 31, 1974, by Buena Vista Distribution?
The Bears and I: The Bears and I is a 1974 American drama film directed by Bernard McEveety and written by John Whedon. The film stars Patrick Wayne, Chief Dan George, Andrew Duggan, Michael Ansara and Robert Pine. The film was released on July 31, 1974, by Buena Vista Distribution.
Oceans (film): Oceans is a 2009 French nature documentary film directed, produced, co-written, and narrated by Jacques Perrin, with Jacques Cluzaud as co-director.
Annotated reasoning path: The Bears and I, Oceans (film)
Predicted reasoning path: The Bears and I

Q: Yoann Lemoine, a French video director, has created music videos for Lana Del Rey, Katy Perry, and an orchestral country pop ballad by which top pop artist?
Yoann Lemoine: Yoann Lemoine (born 16 March 1983) is a French music video director, graphic designer and singer-songwriter. His most notable works include his music video direction for Katy Perry's "Teenage Dream", Taylor Swift's single "Back to December", Lana Del Rey's "Born to Die" and Mystery Jets' "Dreaming of Another World".
Back to December: "Back to December" is a song written and recorded by American singer/songwriter Taylor Swift for her third studio album "Speak Now" (2010). "Back to December" is considered an orchestral country pop ballad and its lyrics are a remorseful plea for forgiveness for breaking up with a former lover.
Blue Jeans (Lana Del Rey song): "Blue Jeans" is a song by American singer-songwriter Lana Del Rey for her second studio album "Born to Die" (2012). Produced by Emile Haynie, the song was written by Del Rey, Haynie, and Dan Heath. Charting across Europe and Asia, "Blue Jeans" reached the top 10 in Belgium, Poland, and Israel. The second was shot and directed by Yoann Lemoine, featuring film noir elements and crocodiles.
Annotated reasoning path: Yoann Lemoine → Back to December
Predicted reasoning path: Blue Jeans (Lana Del Rey song) → Yoann Lemoine → Back to December

Table 13: An example question where our model predicts a reasoning path of length three. Our model expects that the question is answerable based on the last paragraph of the annotated path.

Q: Which songwriting duo composed music for "La La Land", and created lyrics for "A Christmas Story: The Musical"?

Q: who sang the original version of killing me softly
Roberta Flack (I): Roberta Cleopatra Flack (born February 10, 1937) is an American singer. She is known for her No. 1 singles "The First Time Ever I Saw Your Face", "Killing Me Softly with His Song"...
Killing Me Softly with His Song (V): The song was written in collaboration with Lori Lieberman, who recorded the song in late 1971. In 1973 it became a number-one hit in the US and Canada for Roberta Flack. Many artists have covered the song...
Annotated reasoning path: Killing Me Softly with His Song (V)
Predicted reasoning path: Roberta Flack (I) → Killing Me Softly with His Song (V)
Improving Recurrent Neural Networks For Sequence Labelling
June 9, 2016
Marco Dinarelli marco.dinarelli@ens.fr
Isabelle Tellier isabelle.tellier@univ-paris3.fr
LaTTiCe (UMR 8094), CNRS, ENS Paris, Université Sorbonne Nouvelle - Paris 3
PSL Research University, USPC, Université Sorbonne Paris Cité
1 rue Maurice Arnoux, 92120 Montrouge, France
In this paper we study different types of Recurrent Neural Networks (RNN) for sequence labeling tasks. We propose two new variants of RNNs integrating improvements for sequence labeling, and we compare them to the more traditional Elman and Jordan RNNs. We compare all models, either traditional or new, on four distinct tasks of sequence labeling: two on Spoken Language Understanding (ATIS and MEDIA); and two of POS tagging for the French Treebank (FTB) and the Penn Treebank (PTB) corpora. The results show that our new variants of RNNs are always more effective than the others.
1 Introduction
Recurrent Neural Networks (RNNs) [1][2][3] are neural models able to take some context into account in their decision function. For this reason, they are particularly suitable for several NLP tasks, in particular sequential information prediction [4][5][6][7][8]. In RNNs, the contextual information is provided to the model by a loop connection in the network architecture. This connection makes it possible to use, at the current time step, one or more pieces of information predicted at previous time steps. This architecture is particularly effective for neural networks since it combines the power of distributional representations (or embeddings) with the effectiveness of contextual information.

In the literature about RNNs for NLP, two main variants have been proposed, also called "simple" RNNs: the Elman [2] and the Jordan [1] RNN models. The difference between these models lies in the position of the loop connection giving the network its recurrent character: in the Elman RNN it is in the hidden layer, whereas in the Jordan RNN it connects the output layer to the hidden layer. In the latter case, the recurrent connection allows the network to use, at the current time step, information predicted at previous time steps. In the last few years, these two types of RNNs have been very successful for language modeling [9,10] and for some sequence labeling tasks [6-8, 11, 12].
The intuition at the origin of this article is that embeddings allow a fine and effective modeling not only of words, but also of labels and their dependencies, which are very important for sequence labeling. In this paper, we define two new variants of RNN to achieve this more effective modeling.
In the first variant, the recurrent connection is between the output and the input layers. In other words, this variant gives the labels predicted at previous positions in a sequence as input to the network. Such contextual information is added to the usual input context made of words, and both are used to predict the label at the current position in the sequence. Moreover, we modified the hidden layer activity computation with respect to the Elman and Jordan RNNs, so as to take into account the different information provided by words and labels. Our intuition is that, thanks to label embeddings and to the feature-learning abilities of the hidden layer, this variant models label dependencies more effectively. The second variant we propose combines an Elman RNN and our first variant. This variant can thus exploit both the contextual information provided by the previous states of the hidden layer and the labels predicted at previous positions of a sequence.
High-level schemas of the Elman and Jordan RNNs and of our first variant are shown in Figure 1. The schema of our second variant can be obtained by adding the recursion of the first one to the Elman architecture. In this figure, w is the input word, y is the predicted label, and E, H, O and R are the parameter matrices between each pair of layers; they will be described in detail in the next section. Before that, it is worth discussing the advantages our variants can bring with respect to traditional RNN architectures, and explaining why they are expected to provide better modeling abilities. First, since the output at previous positions is given as input to the network at the current position, the contextual information flows across the whole network, affecting each layer's input and output at both forward and backward phases. In contrast, in Elman and Jordan RNNs, not all layers are affected at both phases.
A second advantage of our variants comes from label embeddings. Indeed, the first layer of our RNNs is just a look-up table mapping sparse "one-hot" representations into distributional representations. Since in our variants the output of the network at previous steps is given as input at the current step, the mapping from sparse representations to embeddings involves both words and labels. Label embeddings can be pre-trained from data as is usually done for words. Pre-trained word embeddings, e.g. with word2vec, have already shown their ability to capture very attractive syntactic and semantic properties [13,14]. Using label embeddings, the same properties can be learned for labels as well. More importantly, using several predicted labels as embeddings provides a more effective modeling of label dependencies via the internal state of the network, which is the hidden layer's output.
Another advantage of using label embeddings and several previous labels as context is an increased robustness of the model to prediction mistakes. This effect comes from the syntactic and semantic properties that embeddings can encode [14].
All these advantages are also supported by our second variant, which uses both a label embedding context, like the first variant, and the loop connection at the hidden layer, like an Elman RNN.
All RNNs in this article are studied in their forward, backward and bidirectional versions [3]. In order to have a fair and straightforward comparison, we give the results of our new variants of RNNs together with those obtained with our implementation of Elman and Jordan RNNs. These implementations are very close to state-of-the-art, even if we did not implement every optimization feature.
All models are evaluated on four tasks. Two are Spoken Language Understanding (SLU) tasks [15]: ATIS [16] and MEDIA [17], which can both be modeled as sequence labeling problems. Two are POS-tagging tasks, one on the French Treebank (FTB) [18,19] and one on the Penn Treebank (PTB) [20]. Although the results we obtain on these tasks with our implementations are not always better than the state-of-the-art, they provide a stable ranking of the different RNN architectures: at least one of our variants (most of the time, surprisingly, the simpler one) is always better than the Jordan and Elman RNNs.
In the remainder of the paper, we introduce RNNs and describe in more detail the variants proposed in this work (section 2). In section 3, we describe the corpora used for evaluation and all the results obtained, in comparison with state-of-the-art models. In section 4, we draw our conclusions.
2 Improving Recurrent Neural Networks
The RNNs we consider in this work have the same architecture also used for Feedforward Neural Network Language Models (NNLM), described in [21]. In this architecture, we have four layers: input, embedding, hidden and output. Words are given as input to the network as indexes, corresponding to their position in a dictionary V .
The index of a word is used to select its embedding (or distributional representation) in a real-valued matrix E ∈ R^{|V|×N}, |V| being the size of the dictionary and N the dimensionality of the embeddings (a parameter to be chosen). We name E(v(w_t)) the embedding of the word w given as input at position t of a sequence. v(w_t) = i is the index of the word w_t in the dictionary; it can alternatively be seen as a "one-hot" vector representation (the vector is zero everywhere except at position v(w_t), where it is 1).
In contrast to NNLM, RNNs have one more connection, the recursive connection, between two layers, depending on the type of RNN. As mentioned previously, Elman RNNs have a recursion loop in the hidden layer. Since this layer encodes the internal representation of the input to the network, the recurrent connection of an Elman network allows the network to keep "in memory" the words used as input at previous positions in the sequence. Jordan RNNs have instead a recursion between the output and the hidden layer. This means that a Jordan RNN can take previously predicted labels into account to predict the label at the current position in a sequence. For every type of RNN, we call R the matrix of parameters of the recursion connection.
Our implementations of Jordan and Elman RNNs follow the literature [1][2][3].
2.1 RNN Learning
Learning the described RNNs consists in learning the parameters Θ = (E, H, O, R) between each pair of layers (see Figure 1); we omit biases to keep notations lighter. We use a cross-entropy cost function between the expected label c_t and the predicted label y_t at position t in the sequence, plus an L2 regularization term [22]:
C = -c_t · log(y_t) + (λ/2) |Θ|^2    (1)
λ is a hyper-parameter of the model. Since y_t is a probability distribution over output labels, we can also view the output of an RNN as the probability of the predicted label y_t: P(y_t | I, Θ). I is the input given to the network plus the contextual information provided by the recurrent connection. For the Elman RNN, I_Elman = w_{t-w} ... w_t ... w_{t+w}, h_{t-1}, that is, the word input context and the output of the hidden layer at the previous position in the sequence. For the Jordan RNN, I_Jordan = w_{t-w} ... w_t ... w_{t+w}, y_{t-1}, that is, the same word input context as the Elman RNN plus the label predicted at the previous position. We associate the following decision function to predict the label at position t in a sequence:

label_t = argmax_{l_t ∈ L} P(l_t | I, Θ)    (2)

where l_t is a particular discrete label. We use the back-propagation algorithm and stochastic gradient descent with momentum [22] for learning the weights Θ.
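To make this concrete, here is a minimal numpy sketch (ours, not the authors' code) of the cost of Eq. (1) and the decision rule of Eq. (2); λ = 0.003 anticipates the value reported later for the experiments.

import numpy as np

def cost(y, c, params, lam=0.003):
    # y: softmax output P(.|I, Theta); c: one-hot gold label vector;
    # params: list of the weight matrices in Theta.
    l2 = sum(np.sum(W ** 2) for W in params)   # |Theta|^2
    return -np.sum(c * np.log(y)) + 0.5 * lam * l2

def decide(y):
    # label_t = argmax_{l in L} P(l | I, Theta)
    return int(np.argmax(y))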
2.2 Learning Variants
An important choice for learning RNN models concerns the back-propagation algorithm. Indeed, because of the recurrent nature of their architecture, RNNs should properly be learned with the Back-Propagation Through Time algorithm (BPTT) [23]. The BPTT algorithm basically consists in unfolding the recurrent architecture for a chosen number of steps and then learning the network as a standard feed-forward network. This is supposed to allow RNNs to learn arbitrarily long past contexts. However, [10] has shown that RNNs for language modeling learn best with just 5 time steps in the past. This may be due to the fact that, at least in NLP tasks, the past information kept "in memory" by the network via the recurrent architecture actually fades away after some time steps. Moreover, in many NLP tasks, using an arbitrarily long context on either the input or the output side does not guarantee better performance, as increasing the context size also increases the noise. Since BPTT is considerably more expensive than the traditional back-propagation algorithm, [7] preferred to use an explicit output context in Jordan RNNs and to learn the model with the traditional back-propagation algorithm, unsurprisingly without losing performance.
In this work we use the same variant as [7]. When using an explicit context of output labels from previous time steps, the hidden layer activity of a Jordan RNN is computed as:
h_t = Σ(I_t · H + [y_{t-c+1} y_{t-c+2} ... y_{t-1}] · R)    (3)
where c is the size of the history of previous labels that we explicitly use as context to predict the next label, and [·] indicates the concatenation of vectors.
All the modifications applied to the Jordan RNN so far can be applied in a similar way to the Elman RNN.
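As an illustration of Eq. (3), the following numpy sketch computes the Jordan hidden activity with an explicit label context; all names and shapes are illustrative assumptions, not the authors' implementation, and previous predictions are assumed to be kept as sparse label vectors.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def jordan_hidden(I_t, prev_labels, H, R):
    # I_t: concatenated word embeddings, shape (d_in,)
    # prev_labels: list of sparse label vectors, each shape (n_labels,)
    # H: (d_in, n_hidden); R: (len(prev_labels) * n_labels, n_hidden)
    label_ctx = np.concatenate(prev_labels)   # [y_{t-c+1} ... y_{t-1}]
    return sigmoid(I_t @ H + label_ctx @ R)   # word and label inputs are summed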
2.3 New Variants of RNN
As mentioned in the introduction, the new variants of RNN proposed in this work present two differences with respect to traditional RNNs: i) the recurrent connection is from the output to the input layer, meaning that predicted labels are converted into embeddings in the same way as words; ii) the hidden layer activities are computed in a slightly different way. Indeed, in Elman and Jordan RNNs the contextual information provided by the recurrent connection is summed to the input information (see equation 3 above). In our variants of RNNs instead, word and label embeddings are concatenated and provided to the hidden layer as different inputs.
The most interesting consequence of these modifications is that output labels are mapped into distributional representations, as is usually done for input items. Indeed, the first layer of our network is just a mapping from sparse "one-hot" representations to distributional representations. Such a mapping results in fine-grained features and attractive syntactic and semantic properties, as shown by word2vec and similar works [13]. Such representations can be learned from data the same way as for words. In the simplest case, this can be done by using sequences of output labels. When structured information is available, such as syntactic parse trees or structured semantic annotations like named entities or entity relations, more sophisticated embeddings can be learned. In this work, we learn label embeddings using the sequences of output labels associated with word sequences in annotated data. It is worth noting that the idea of using label embeddings was introduced by [24] in the context of dependency parsing. In this paper, we focus on the use of several label embeddings as context, thus encoding label dependencies, which are very important in sequence labeling tasks.
Using the same notation as above, we name E_w the embedding matrix for words and E_l the embedding matrix for labels. We name I_t = [E_w(v(w_{t-w})) ... E_w(v(w_t)) ... E_w(v(w_{t+w}))] the concatenation of the vectors representing the input words when processing position t of a sequence, while L_t = [E_l(v(y_{t-c+1})) E_l(v(y_{t-c+2})) ... E_l(v(y_{t-1}))] is the concatenation of the vectors representing the output labels predicted at the previous c steps. The hidden layer activities are computed as:

h_t = Σ([I_t L_t] · H)

where Σ is the sigmoid activation function [22], [·] means the concatenation of the two matrices, and we omit biases to keep notations lighter. The remaining layer activities, as well as the error computation and back-propagation, are computed in the same way as in traditional RNNs.
Note that in this variant of RNN there is no R matrix at the recurrent connection. The recurrent connection here means that the output is given back as input to the network: it is thus converted explicitly from the probability distribution given by the softmax into a label index, which is used in turn to select a label embedding from the matrix E_l. Basically, the role played by matrix R in the Elman and Jordan RNNs is played by matrix E_l in our variant.
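The corresponding I-RNN computation can be sketched as follows (again with illustrative names, not the authors' code); note that the only operation applied to labels before H is a row selection in E_l:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def irnn_hidden(I_t, prev_label_ids, E_l, H):
    # I_t: 2w+1 concatenated word embeddings, shape ((2w+1)*D,)
    # prev_label_ids: indexes of the c previously predicted labels
    # E_l: (n_labels, D) label embedding matrix; H: ((2w+1+c)*D, n_hidden)
    L_t = np.concatenate([E_l[i] for i in prev_label_ids])  # label embeddings
    return sigmoid(np.concatenate([I_t, L_t]) @ H)          # one shared matrix H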
Another important benefit of having the recursion between the output and input layers is robustness. This is a direct consequence of using embeddings for output labels. Since we use several predicted labels as context at each position t (see L_t above), at least in the later stages of learning (when the model is close to the final optimum) it is unlikely to have several mistakes in the same context. Even then, thanks to the properties of distributed representations [14], wrong labels have representations very similar to the correct ones. Taking an example cited in [14]: using Paris instead of Rome has no effect for many NLP tasks, as both are proper nouns for POS-tagging, locations for named entity recognition, etc. Distributed representations for labels provide the same robustness on the output side. In general, Jordan RNNs cannot provide the same robustness. We can interpret the hidden activity computation in a Jordan RNN in two ways.
On the one hand, if we interpret a Jordan RNN as using sparse label representations as input to the hidden layer, such representations are either "one-hot" representations of labels or the probability distributions given as output by the softmax at the output layer. In the first case, a mistake clearly has more effect than in the second, as the only non-zero value is in a wrong position. But when probability distributions are used, we have found that most of the probability mass is peaked on one or a few labels, which thus does not provide much more softness than a "one-hot" representation. In this interpretation, sparse labels are an additional input for the hidden layer, and the matrix R on the recurrent connection plays the same role for labels as H does for input words; the two resulting products are then summed to compute the total input of the hidden layer.
On the other hand, we can see the multiplication of the sparse representation of labels in equation 3 as the selection of a label embedding from matrix R (multiplying a "one-hot" representation by a matrix is equivalent to selecting one row of the matrix). Even in this interpretation, there is a substantial difference between the Jordan RNN and our variant of RNN, I-RNN henceforth (I stands for Improved). In order to understand this difference in detail, we focus on the equations for computing the hidden activities. For the Jordan RNN we have:

h_t = Σ(I_t · H + [y_{t-c+1} y_{t-c+2} ... y_{t-1}] · R)

For I-RNN we have:

h_t = Σ([I_t L_t] · H)

where L_t is [E_l(v(y_{t-c+1})) E_l(v(y_{t-c+2})) ... E_l(v(y_{t-1}))], that is, the concatenation of the previous label embeddings.

In this second interpretation, in the Jordan RNN labels are not directly involved in the computation of the total input to the hidden layer, since they are not multiplied by matrix H; only the input context I_t is. The result of this multiplication is summed to the result of the label embedding selection, performed as y_{t-i} · R, i = 1 ... c-1. Finally, the hidden non-linear function Σ is applied. Mixing input processing I_t · H and label processing y_{t-i} · R with a sum can make sense for the tasks for which the Jordan network was designed [1], as output units were of the same nature as input units (speech signal). However, we believe that it does not express words-labels interactions sufficiently well in NLP tasks. Also, since y_{t-i} · R is an embedding selection for labels, labels and words are not processed in the same way: I_t is already made of word embeddings, which are further transformed by I_t · H, and this further transformation is not applied to label embeddings.

In I-RNN, in contrast, sparse labels are first converted into embeddings with E_l(v(y_{t-i})), i = 1 ... c, and their concatenation results in L_t. This matrix is further concatenated to the input context I_t, and the result of the concatenation is multiplied by matrix H to compute the total input to the hidden layer (we remind the reader that we are omitting biases to keep notation lighter). Finally, the hidden non-linear function Σ is applied. This means that the information provided by the input context I_t is not mixed with L_t by a sum as in the Jordan RNN: these two pieces of information are given to the hidden layer as separate inputs. More precisely, the concatenation of I_t and L_t is performed neuron-wise, that is, each hidden neuron receives as input all context words and all context labels, encoding them as an internal feature of the network. We thus let the hidden layer itself learn label interactions and words-labels interactions. This is indeed in agreement with the "philosophy" of neural networks, where feature design is turned into feature learning. Since words and labels have a different nature in sequence labeling tasks, we believe that modeling their interactions in this way is more effective. In order to make the results obtained with different RNNs comparable, we use the same number of hidden neurons for all RNNs. In I-RNN, each hidden neuron thus receives much more information as input than the Jordan and Elman hidden neurons.
In order to make the explanation clearer, the I-RNN architecture is detailed in Figure 2. Symbols have the same meaning as in the equations of the paper; the only exception is that labels are indicated in the figure with an upper-case L. Matrix H is replicated at the hidden layer computation, meaning that all neurons receive the whole input [I_t L_t], which is made of 2w+1+c D-dimensional embeddings: 2w+1 word embeddings and c label embeddings. CAT is the concatenation operator. Note that the concatenations of embeddings are performed in two different steps only for the sake of clarity and for coherence with the equations in the paper; all concatenations can be performed in one step.
Our second variant of RNN combines the characteristics of an Elman RNN and of our first variant. In this variant, the only difference with the first one is the computation of the hidden layer activities, where we use the concatenation of the c previous hidden layer states in addition to the information already used in the first variant:
h_t = Σ([I_t L_t] · H + [h_{t-c+1} h_{t-c+2} ... h_{t-1}] · R)
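A sketch of this second variant (I+E-RNN), under the same illustrative conventions as the previous ones: the [I_t L_t] input goes through H as in I-RNN, while the previous hidden states come back through R as in an Elman network.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ie_rnn_hidden(I_t, L_t, prev_hidden, H, R):
    # prev_hidden: list of previous hidden states, each shape (n_hidden,)
    # H: ((2w+1+c)*D, n_hidden); R: (len(prev_hidden) * n_hidden, n_hidden)
    h_ctx = np.concatenate(prev_hidden)
    return sigmoid(np.concatenate([I_t, L_t]) @ H + h_ctx @ R)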
2.4 Forward, Backward and Bidirectional RNNs
All RNNs described in this work are studied in their forward, backward and bidirectional versions [3]. Forward RNNs work as described so far. Backward RNNs have exactly the same architecture; the only difference is that they process sequences in reverse order, from the end to the beginning. Backward RNNs can thus be used to predict future labels in a sequence. Bidirectional RNNs use both past and future information to predict the next label: both words and labels in our variants and in Jordan RNNs, or both words and hidden layers in Elman RNNs. When labeling sequences with bidirectional RNNs, a backward network is first used to predict labels backward. The bidirectional RNN then processes sequences in the forward direction, using past contextual information as usual, together with future contextual information provided by the states and labels predicted by the backward network. The hidden layer of the bidirectional version of our first variant of RNNs is thus computed as:
h_t = Σ([I_t L^p_t L^f_t] · H)

where L^p_t = L_t as introduced above, while L^f_t = [E_l(v(y_{t+1})) ... E_l(v(y_{t+c-1})) E_l(v(y_{t+c}))] is the concatenation of the vectors representing the c future labels predicted by the backward model. The bidirectional version of our second variant is very similar; refer to [3] for details.
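The two-pass decoding described above can be sketched as follows; backward_tag and forward_step are hypothetical helpers wrapping a trained backward model and one forward step of our variant.

def bidirectional_tag(words, backward_tag, forward_step, c):
    # Tag `words` with a forward model that also sees the c future labels
    # predicted by a separately trained backward model.
    future = backward_tag(words)            # right-to-left predictions
    past = []
    for t in range(len(words)):
        y_t = forward_step(words, t,
                           past[-c:],                 # L^p_t: past predicted labels
                           future[t + 1:t + 1 + c])   # L^f_t: future labels
        past.append(y_t)
    return past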
2.5 Recurrent Neural Network Complexity
We provide here an analysis of the complexity of each model in terms of the number of parameters involved. For the Jordan RNN we count:

|V| × D + ((2w+1) D + c |O|) × |H| + |H| × |O|

where |V| is the size of the input dictionary, D the dimensionality of the embeddings, |H| and |O| the sizes of the hidden and output layers respectively, w the size of the window of words used as context on the input side, and c the size of the label context, which is multiplied by the dimensionality of the output label dictionary |O|.
With the same symbols, in an Elman RNN and in our first variant we have, respectively:

|V| × D + ((2w+1) D + c |H|) × |H| + |H| × |O|

and

|V| × D + |O| × D + ((2w+1) D + c D) × |H| + |H| × |O|

The only difference between the Jordan and Elman RNNs lies in the factors c|O| and c|H|. Their difference in complexity thus depends on the size of the output layer (Jordan) with respect to the size of the hidden layer (Elman). Since in sequence labeling tasks the hidden layer is often bigger, the Elman RNN is more complex than the Jordan RNN. The difference between the Jordan RNN and our first variant lies in the factors |O| × D and cD. The first is due to the label embeddings (we use embeddings of the same size D for words and labels), the second to the use of such embeddings as input to the hidden layer. Since D and |O| often have sizes in the same order of magnitude, and thanks to the use of vectorized operations on matrices, we did not notice any substantial difference in training and testing time between the Jordan RNN and our first variant. This simple analysis also shows that our first variant roughly needs the same number of connections in the hidden layer as a Jordan RNN; our first variant is thus architecturally equivalent to a Jordan RNN.
In contrast, for the second variant we have:

|V| × D + |O| × D + ((2w+1) D + c D + c |H|) × |H| + |H| × |O|

The additional term c|H| is due to the same recurrent connection as in an Elman RNN. Using vectorized operations for matrix calculations, we found the second variant slower in both training and testing time by a factor of 1.15 with respect to the other RNNs. The same complexity analysis holds for backward RNNs. Bidirectional RNNs are even more complex: without deriving any new formula, we note that they are slower than their corresponding forward/backward models by a factor of roughly 1.5.
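These counts can be checked with a few lines of arithmetic; the sizes below follow the MEDIA settings of Section 3.2, except the output size O = 100, which is only an illustrative assumption (the number of labels for MEDIA is not stated here).

# Parameter counts of this section (biases omitted, as in the text).
V, D, H_, O, w, c = 2000, 200, 100, 100, 3, 6   # illustrative sizes

jordan = V * D + ((2 * w + 1) * D + c * O) * H_ + H_ * O
elman  = V * D + ((2 * w + 1) * D + c * H_) * H_ + H_ * O
i_rnn  = V * D + O * D + ((2 * w + 1) * D + c * D) * H_ + H_ * O
ie_rnn = V * D + O * D + ((2 * w + 1) * D + c * D + c * H_) * H_ + H_ * O

print(jordan, elman, i_rnn, ie_rnn)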
3 Evaluation
3.1 Corpora
We used four distinct corpora:
The Air Travel Information System (ATIS) task [16] has been designed to automatically provide flight information in SLU systems. The semantic representation is frame-based and the goal is to find the correct frame and the corresponding semantic slots. For example, for the sentence "I want the flights from Boston to Philadelphia today", the correct frame is FLIGHT and the words Boston, Philadelphia and today must be annotated with the concepts DEPARTURE.CITY, ARRIVAL.CITY and DEPARTURE.DATE, respectively.
ATIS is a relatively simple task dating from 1993. The training set is made of 4978 sentences taken from the "context independent" data in the ATIS-2 and ATIS-3 corpora. The test set is made of 893 sentences, taken from the ATIS-3 NOV93 and DEC94 datasets. There are no official development data provided with this corpus; we have thus taken a part of the training data at random to play this role (please see [16] for more details on the ATIS corpus).

The French corpus MEDIA [17] has been created to evaluate SLU systems providing tourist information, in particular hotel information in France. It is composed of 1250 dialogues acquired with a Wizard-of-OZ protocol where 250 speakers have applied 5 hotel reservation scenarios. The dialogues have been annotated following a rich semantic annotation. In addition to this rich annotation, another difficulty lies in the coreferences: some words cannot be correctly annotated without information about previous dialog turns. For example, in the sentence "Yes, the one at less than fifty euros per night", the one refers to a hotel previously introduced in the dialog. Statistics on training, development and test data from this corpus are shown in Table 1.
Both ATIS and MEDIA can be modeled as sequence labeling tasks using the BIO chunking notation [25]. Several works have compared results on ATIS [6][7][8][26]. [8] is the only work providing results on MEDIA with RNNs; it also provides results obtained with CRFs [27], allowing an interesting comparison.
The French Treebank (FTB) corpus is presented in [18]. The version we use for POS-tagging is exactly the same as in [19]. In contrast to them, who reach the best result on this task with an external lexicon, we do not use any external resource here. Statistics on the FTB corpus are shown in Table 2.
The Penn Treebank (PTB) corpus is presented in [20]. In order to have a direct comparison with previous works [...]
3.2 RNNs Parameters Settings
In order to compare with some published works on the ATIS and MEDIA tasks, we use the same dimensionality settings as [6], [7] and [8]: embeddings have 200 dimensions and the hidden layer has 100 dimensions. We also use the same context size for words, w = 3, and we use c = 6 as the label context size in our variants and in the Jordan RNN. We use the same tokenization, basically consisting of lowercasing words.
In contrast, our models use the sigmoid activation function at the hidden layer and the L2 regularization, while [6], [7], [8] and [26] use the rectified linear activation function and the dropout regularization.

For the FTB POS-tagging task we have used 200-dimensional embeddings, a 300-dimensional hidden layer, again w = 3 for the context on the input side, and 6 context labels on the output side. The bigger hidden layer gave better results in validation experiments, due to the larger word dictionary of this task with respect to the others: roughly 25000 words for the FTB against 2000 for MEDIA and 1300 for ATIS. In contrast to [19], which used several features of words (prefixes, suffixes, capitalization information, etc.), we only performed a simple tokenization to reduce the size of the input dictionary: all numbers have been mapped to a conventional symbol (NUM), and nouns not corresponding to proper names and starting with a capital letter have been converted to lowercase. We preferred this simple tokenization without rich features because our goal in this work is not to obtain the best results ever, but to compare Jordan and Elman RNNs with our variants and to show that our variants work better for sequence labeling. Adding many features and/or building sophisticated models would make the message less clear, as results would probably be better, but the improvements could be attributed to the rich and sophisticated models instead of to the model itself.
For the PTB POS-tagging task we use exactly the same settings and pre-processing as for the FTB task, except that we used 400 hidden neurons. During validation we found that this works better, again due to the size of the dictionary which is 45000 for this task (after pre-processing).
We trained all RNNs with exactly the same protocol: i) we first train neural language models to obtain word and label embeddings. This language model is like the one in [21], except that it uses both words/labels in the past and in the future to predict the next word/label. ii) We then train all RNNs using the embeddings obtained at the previous step. We train the RNN for word embeddings for 20 epochs, the RNN for label embeddings for 10 epochs, and the RNNs for sequence labeling for 20 epochs. The number of epochs has been roughly optimized on development data. At the end of training we keep the model which gave the best tagging accuracy on development data. We also roughly optimized on development data the learning rate and the regularization parameter λ; the best values found are 0.5 and 0.003, respectively.
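As an illustration, one update of the optimizer can be sketched as follows; the learning rate and λ are the values reported above, while the momentum coefficient 0.9 is our assumption, since its value is not reported here.

import numpy as np

def sgd_momentum_step(W, grad, velocity, lr=0.5, mu=0.9, lam=0.003):
    # One stochastic gradient descent step with momentum; the L2 term of
    # Eq. (1) contributes lam * W to the gradient. mu = 0.9 is assumed.
    velocity = mu * velocity - lr * (grad + lam * W)
    return W + velocity, velocity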
3.3 Training and Tagging Time
Since the implementations of the RNNs used in this work are prototypes, it does not make sense to compare them with the state-of-the-art in terms of training and tagging time. However, it is worth providing training times, at least to give an idea and to allow a comparison among the different RNNs.
As explained in section 2.5, our first variant I-RNN and the Jordan RNN have the same complexity. Also, since the size of the hidden layer is in the same order of magnitude as the size of the output layer (i.e. the number of labels), the Elman RNN also has roughly the same complexity as the Jordan RNN. This is reflected in the training times.
The training time for label embeddings is always relatively short, as the size of the output layer, that is, the number of labels, is always relatively small. This training time can thus vary from a few minutes for ATIS and MEDIA to less than 1 hour for the FTB corpus.
Training word embeddings is also very fast on ATIS and MEDIA, taking less than 30 minutes for ATIS and roughly 40 minutes for MEDIA. In contrast, training word embeddings on the FTB corpus takes roughly 5 days, and roughly 6 days for the PTB word embeddings.
Training the RNN taggers takes roughly the same time as training the word embeddings, as the size of the word dictionary is the dimension that most affects the computational complexity of the softmax used to predict the next label.
The second variant of RNN proposed in this work, I+E-RNN, is slower, as it is more complex in terms of the number of parameters. Training I+E-RNN on ATIS and MEDIA takes roughly 45 minutes and 1 hour, respectively. In contrast, training I+E-RNN on the FTB and PTB corpora takes roughly 6 and 7 days, respectively.
We did not keep track of tagging times; however, they are always negligible with respect to training times and always measured in a few minutes. All times provided here were obtained with a 1.7 GHz CPU, single process.
3.4 Sequence Labeling Results
The evaluation of all the models described here is shown in Tables 4, 5, 6 and 7, in terms of F1 measure for ATIS and MEDIA and of accuracy for the FTB and PTB. Our implementations of the Elman and Jordan RNNs are indicated in the tables as E-RNN and J-RNN. Our new variants are indicated as I-RNN and I+E-RNN. Table 4 shows results on the ATIS corpus. In the higher part of the table we show results obtained using only words as input. In the lower part, results are obtained using both words and the word classes available for this task. Such classes concern city names, airports and time expressions, and allow the models to generalize from specific words triggering concepts: for example, the cities of Boston and Philadelphia in the example above are mapped to the class CITY-NAME, so that a model which has never seen Boston during the training phase, but has seen at least one city name, can still annotate Boston as a departure city thanks to some discriminative context, such as the preposition from. Note that our results on the ATIS corpus are not always comparable with those published in the literature because: i) models published in the literature use a rectified linear activation function (f(x) = max(0, x)) at the hidden layer and the dropout regularization, while our models use the sigmoid activation function and the L2 regularization; ii) for the experiments on ATIS we have used roughly 18% of the training data for development, and we thus used a smaller training set.
iii) The works we compare with do not always give details on how the classes available for the task have been integrated into their models. iv) Layer dimensionality and hyper-parameter settings do not always match those of published works. In fact, to avoid running too many experiments, we based our settings on known works, but this does not allow a straightforward comparison with other published works.
Despite this, the message we want to convey in this work still holds, for two reasons: i) some of the results obtained with the Elman and Jordan RNNs are close to, or even better than, the state-of-the-art, so they are not weak baselines; ii) we provide a fair comparison of our variants with traditional Elman and Jordan RNNs.
The results in the higher part of Table 4 show that the best model on the ATIS task, with these settings, is the Elman RNN of [26]. Note that it is not clear how the improvements of [26] with respect to [7] (which is in part by the same authors) have been obtained. Indeed, in [7] the authors obtain the best result with a Jordan RNN, while in [26] an Elman RNN gets the best performance. In our experiments, using the same experimentation protocol as [26], we could not reach the same performances. We conclude that the differences between our results and those in [26] are due to the reasons mentioned above. Beyond this, we note that our Elman and Jordan RNN implementations are equivalent to those of [7]. Also, our first variant of RNNs, I-RNN, obtains the second best result (93.84 in bold), which is the best result we could reach. Our second variant is roughly equivalent to a Jordan RNN on this task.
The results in the lower part of Table 4 (Classes), obtained with both words and classes as input, are substantially better than those obtained with words only. They roughly follow the same pattern, except that in this case our Jordan RNN is slightly better than our second variant. The first variant I-RNN again obtains the best result among our implementations (95.21 in bold). In this case also, we attribute the differences with respect to published results to the different settings mentioned above. For comparison, in this part of the table we also show results obtained with CRFs.
On the ATIS task, using either words or both words and classes as input, results are always quite high: this task is relatively simple. Label dependencies can be easily modeled, as there is basically no segmentation of concepts over several consecutive words (one concept corresponds to one word). In this setting, the potential of our new variants of RNNs cannot be fully exploited. This limitation is confirmed by the results obtained by [8] and [26] using CRFs. Indeed, RNNs do not take the whole sequence of labels into account in their decision function. In contrast, CRFs use a global decision function taking all the possible labelings of a given input sequence into account to predict the best sequence of labels. The fact that CRFs are less effective than RNNs on ATIS is a clear sign that label dependency modeling is relatively simple in this task.
As can be seen in Table 5, CRFs are in general much more effective than Jordan and Elman RNNs on MEDIA. This outcome could be expected as RNNs use a local decision function not able to take long label dependencies into account. We can also see that our implementations of Elman and Jordan RNNs are comparable, even better in the case of Elman RNN, with state-of-the-art RNNs of [8].
More importantly, results on the MEDIA task shows that in this particular experimental settings where taking label dependencies into account is crucial, our new variants are remarkably more effective than both our implementations and state-of-the-art implementations [8] of Elman and Jordan RNNs. This holds for forward, backward and bidirectional RNNs. Moreover, the bidirectional version of our variants of RNNs outperforms even CRF. We attribute this effectiveness to a better modeling of label dependencies, due to label embeddings. Table 6 shows the results obtained on the POS-tagging task of the FTB corpus. On this task, we compare our RNN implementations to the state-of-the-art results obtained in [19] with the model M Elt 0 f r . We would like to underline that M Elt 0 f r , when it does not use external resources like in the model obtaining the best absolute result in [19], nevertheless uses several features associated with words that provide an advantage over features used in our RNNs. As can be expected thus, M Elt 0 f r outperforms the RNNs. Results in table 6 shows that forward and backward RNNs are quite close to each other on this task, I-RNN, providing a little improvement. In contrast, the bidirectional version of I-RNN and I+E-RNN provide a significant improvement over Jordan and Elman RNNs. Table 7 shows the results obtained on the WSJ POS tagging task. We also provide the results of [28] and [29] for comparison with the state-of-the-art. This is just for a matter of comparison, as the results shown in those works were achieved with rich features and more sophisticated models. Instead, we can roughly compare our results with those of [5], which were also obtained with neural networks. In particular, we can compare with the model called NN+SLL. Note that this is a rough comparison as the model of [5], though not a RNN, integrates capitalisation features and uses a convolution and a max-over-time layer to encode large context information.
Results for POS tagging on the PTB corpus roughly confirm the conclusions reached for FTB results, with the only difference that in this case I+E-RNN is slightly better than I-RNN. Our RNNs don't improve the state-of-the-art, but are all more effective than the model of [5]. This result is particularly important, as it shows that RNNs, even without using a sophisticated encoding of the context like the model NN+SLL in [5], are intrinsecally a better model for sequence labeling. This claim is enforced also by the fact that NN+SLL of [5] implements a global probability computation strategy similar to CRF (SLL stands for Sentence Level Likelihood), while all RNNs presented here use a local decision function (see equation 2). Again, the ranking of RNN models on the PTB POS-tagging task is stable, the variants of RNNs proposed in this work being more effective than traditional Elman and Jordan RNNs.
We note that we have also performed experiments on ATIS and MEDIA without pre-training the label embeddings. The results are not substantially different from those obtained with pre-trained label embeddings. Indeed, on relatively small data, it is not rare to obtain similar or even better results without pre-training, since learning effective embeddings requires a relatively large amount of data. [8] also reports results obtained with embeddings pre-trained with word2vec and without embedding pre-training, and reaches very similar conclusions. More generally, reaching roughly the same results on a difficult task like MEDIA without label embedding pre-training is a clear sign that our variants of RNNs are superior to traditional RNNs because they use a context made of label embeddings: the gain with respect to Elman and Jordan RNNs cannot be attributed in this case to the use of pre-trained embeddings. On relatively larger corpora like FTB and PTB, label embedding pre-training seems to provide a slight improvement.
Finally, we have also run experiments modifying Jordan and Elman RNNs so as to model word and label interactions more like I-RNN does, that is, word and label embeddings (or hidden states in the Elman RNN) are concatenated instead of being summed together. The results obtained were not substantially different from those obtained with the traditional Jordan and Elman RNNs, and in any case I-RNN always performed best, still keeping a large gain over Elman and Jordan RNNs on the MEDIA task. The explanation of this outcome is that keeping word and label embeddings separated, and then multiplying both by the matrix H to compute hidden activities, as we do in I-RNN, is more effective than concatenating I_t · H and y[t−1] · R, as we did for the Jordan RNN, and analogously with the previous hidden layer for the Elman RNN. This is not surprising, as in this case I-RNN also applies one more transformation, multiplying the label embeddings by H to compute the total input to the hidden layer.
It is somewhat surprising that I-RNN systematically outperforms I+E-RNN, since the latter model integrates more information at the hidden layer and should thus be able to take advantage of both Elman and I-RNN characteristics. While an analysis explaining this outcome is not trivial, our interpretation is that using two recursions in an RNN actually gives redundant information to the model. Indeed, the output of the hidden layer keeps the internal state of the network, which is the internal (distributed) representation of the input n-gram of words around position t and the previous c labels. The recursion at the hidden layer allows the network to keep this information "in memory" and to use it at the next step t+1. However, with the recursion of I-RNN, the previous c labels are also given explicitly as input to the hidden layer. This may be redundant, and may lead the model to learn an increased amount of noise. A similar idea of hybrid RNN model has been tested in [26] without showing a clear advantage over Elman and Jordan RNNs.
What can be said in general from the results obtained on all the presented tasks is that RNN architectures using a label embedding context can model label dependencies more effectively, even when these dependencies are relatively simple (as in the ATIS and POS-tagging tasks). The two variants of RNNs proposed in this work, in particular the I-RNN variant, are for this reason more effective than Elman and Jordan RNNs on sequence labeling tasks.
Comparison of Jordan-RNN and I-RNN label representations
We compare Jordan RNN and I-RNN label representations under the interpretation where the Jordan RNN hidden activity computation uses sparse labels as an additional input to the hidden layer, as explained in Section 2.3. As also explained in the same section, under the other interpretation I-RNN has the advantage of performing an additional transformation on labels, and of giving word and label embeddings as separate inputs to the hidden layer. Under the first interpretation, the advantage of using label embeddings in I-RNN, instead of "one-hot" or probability distribution representations like in Jordan RNNs, is an increased amount of signal flowing across the network. A semantic interpretation of the interaction of these two representations with the network is not trivial. Indeed, in the probability representation output by the softmax in a Jordan RNN, the different dimensions are just probabilities associated to different labels. In contrast, in the label embeddings used in I-RNN, the dimensions are distributional features, related to how a particular label is used in particular label contexts. A direct comparison between these two representations is thus not really meaningful.
Instead, we performed a simple analysis of the magnitude of the values found in the probability distributions used as label representations in Jordan RNNs, on the development data of the MEDIA corpus. We summarize this analysis as follows:
1. 7843 out of 11051 (about 71%) of the time, the maximum value is greater than 0.9;
2. 9928 out of 11051 (about 90%) of the time, the sum of the 3 highest probabilities is greater than 0.9;
3. excluding the 3 highest probabilities, the remaining values in the distribution are very small (less than 0.001).

This simple analysis shows that the probability distributions used as label representations in Jordan RNNs do not provide much more information to the network than a "one-hot" representation, and inject little signal into the network. This problem is somewhat similar to the "vanishing gradient" problem [31]: as the network learns, the probability mass gets concentrated on few dimensions and all the other values become very small, limiting learning. This problem becomes more evident as label dependency modeling becomes more important for the task. On an absolute scale, however, it is less serious than the vanishing gradient problem, as Jordan RNNs still reach competitive performances.
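The statistics above are straightforward to compute; the sketch below shows one plausible way to do so, assuming the softmax outputs have been collected into a matrix (the array names and shapes are our assumptions, not from the paper):

```python
import numpy as np

def probability_concentration_stats(probs):
    """Summarize how concentrated a set of softmax label distributions is.

    probs: array of shape (num_predictions, num_labels), each row one
    probability distribution (e.g. Jordan RNN outputs on development data).
    Returns the fractions corresponding to the three observations above.
    """
    sorted_probs = np.sort(probs, axis=1)[:, ::-1]   # each row sorted descending
    max_above_09 = np.mean(sorted_probs[:, 0] > 0.9)            # observation 1
    top3_above_09 = np.mean(sorted_probs[:, :3].sum(axis=1) > 0.9)  # observation 2
    tail_below_001 = np.mean(sorted_probs[:, 3:] < 0.001)       # observation 3
    return max_above_09, top3_above_09, tail_below_001
```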
Conclusions
In this paper we have studied different architectures of Recurrent Neural Networks for sequence labeling tasks. We have proposed two new variants of RNNs to better model label dependencies, and we have compared these variants to the traditional Elman and Jordan architectures, explaining the advantages they provide over previous RNNs. We have evaluated all RNNs, new and traditional, on four different tasks: two Spoken Language Understanding tasks and two POS-tagging tasks. The results show that, even though RNNs do not always improve the state-of-the-art, our new variants always outperform the traditional Elman and Jordan RNNs.
Figure 1: High-level schema of the main RNNs studied in this work.
Figure 2: Detailed architecture of I-RNN. Symbols have the same meaning as in the equations in the paper, the only exception being that labels are indicated in the figure with an upper-case L. Note that the matrix H is replicated at the hidden layer computation, meaning that all neurons receive the whole [It Lt] input, which is made of 2·w+1+c D-dimensional embeddings: 2w+1 word embeddings and c label embeddings. CAT is the concatenation operator.
Table 2: Statistics on the FTB corpus used for POS-tagging

[…] semantic ontology. Semantic components can be combined to create complex semantic labels.
Following [28], [29] and [5], we split the data as they do: sections 0-18 are used for training, 19-21 for validation and 22-24 for testing. Statistics on the PTB corpus are shown in Table 3.
We use the rectified linear activation function and dropout regularization [22] [30].

Data set      Sections   Sentences   Tokens     Unknown
Training      0-18       38,219      912,344    0
Development   19-21      5,527       131,768    4,467
Test          22-24      5,462       129,654    3,649

Table 3: Statistics of the training, development and test data of the Penn Treebank corpus
Table 4: Results of SLU on the ATIS corpus

Model       forward   backward   bidirectional
[8] E-RNN   81.94%    -          -
[8] J-RNN   83.25%    -          -
[8] CRF     -         -          86.00%
E-RNN       82.64%    82.61%     83.13%
J-RNN       83.06%    83.74%     84.29%
I-RNN       84.91%    86.28%     86.71%
I+E-RNN     84.58%    85.84%     86.21%

Table 5: Results of SLU on the MEDIA corpus (F1 measure)
Table 6: Results of POS-tagging on the FTB

Model        forward   backward   bidirectional
[28]         -         -          97.24%
[29]         -         -          97.33%
[5] NN+SLL   -         -          96.37%
E-RNN        96.75%    96.76%     96.75%
J-RNN        96.71%    96.69%     96.77%
I-RNN        96.75%    96.72%     96.90%
I+E-RNN      96.73%    96.71%     96.93%

Table 7: Results of POS-tagging on the PTB (accuracy)
The "one-hot" representation of an element at position i in a dictionary V is a vector of size |V | where the i-th component has the value 1 whereas all the others are 0.
y_t = argmax_{j ∈ 1,...,|L|} P(y_t^j | I, Θ)    (2)

We also find [7] quite easy to understand, even for readers not familiar with RNNs.
Sometimes in POS-tagging, models mistake verbs for nouns. They make such errors because some particular verbs occur in the same contexts as nouns (e.g., "the sleep is important"), and so have similar representations.
The word input context is thus made of w words on the left and w words on the right of the word at a given position t, plus the word at t itself, which gives a total of 2w+1 input words.
For example, the label localisation can be combined with city, relative-distance, general-relative-place, street, etc.

[19] also provides results without using the external lexicon.
Our implementations are basically written in Octave (https://www.gnu.org/software/octave/).
References

1. Jordan, M.I.: Serial order: A parallel, distributed processing approach. In: Elman, J.L., Rumelhart, D.E. (eds.) Advances in Connectionist Theory: Speech. Erlbaum, Hillsdale, NJ (1989)
2. Elman, J.L.: Finding structure in time. Cognitive Science 14 (1990) 179-211
3. Schuster, M., Paliwal, K.: Bidirectional recurrent neural networks. Trans. Sig. Proc. 45 (1997) 2673-2681
4. Collobert, R., Weston, J.: A unified architecture for natural language processing: Deep neural networks with multitask learning. In: Proceedings of the 25th International Conference on Machine Learning (ICML '08), New York, NY, USA, ACM (2008) 160-167
5. Collobert, R., Weston, J., Bottou, L., Karlen, M., Kavukcuoglu, K., Kuksa, P.: Natural language processing (almost) from scratch. J. Mach. Learn. Res. 12 (2011) 2493-2537
6. Yao, K., Zweig, G., Hwang, M.Y., Shi, Y., Yu, D.: Recurrent neural networks for language understanding. Interspeech (2013)
7. Mesnil, G., He, X., Deng, L., Bengio, Y.: Investigation of recurrent-neural-network architectures and learning methods for spoken language understanding. In: Interspeech 2013 (2013)
8. Vukotic, V., Raymond, C., Gravier, G.: Is it time to switch to word embedding and recurrent neural networks for spoken language understanding? In: InterSpeech, Dresden, Germany (2015)
9. Mikolov, T., Karafiát, M., Burget, L., Cernocký, J., Khudanpur, S.: Recurrent neural network based language model. In: INTERSPEECH 2010, 11th Annual Conference of the International Speech Communication Association, Makuhari, Chiba, Japan (2010) 1045-1048
10. Mikolov, T., Kombrink, S., Burget, L., Cernocký, J., Khudanpur, S.: Extensions of recurrent neural network language model. In: ICASSP, IEEE (2011) 5528-5531
11. Xu, W., Auli, M., Clark, S.: CCG supertagging with a recurrent neural network. In: Proceedings of ACL-IJCNLP 2015, Beijing, China, Volume 2: Short Papers (2015) 250-255
12. Zennaki, O., Semmar, N., Besacier, L.: Unsupervised and lightly supervised part-of-speech tagging using recurrent neural networks. In: Proceedings of PACLIC 29, Shanghai, China (2015)
13. Mikolov, T., Chen, K., Corrado, G., Dean, J.: Efficient estimation of word representations in vector space. CoRR abs/1301.3781 (2013)
14. Mikolov, T., Yih, W., Zweig, G.: Linguistic regularities in continuous space word representations. In: Human Language Technologies: Conference of the North American Chapter of the Association of Computational Linguistics (2013) 746-751
15. De Mori, R., Bechet, F., Hakkani-Tur, D., McTear, M., Riccardi, G., Tur, G.: Spoken language understanding: A survey. IEEE Signal Processing Magazine 25 (2008) 50-58
16. Dahl, D.A., Bates, M., Brown, M., Fisher, W., Hunicke-Smith, K., Pallett, D., Pao, C., Rudnicky, A., Shriberg, E.: Expanding the scope of the ATIS task: The ATIS-3 corpus. In: Proceedings of the Workshop on Human Language Technology (HLT '94), Stroudsburg, PA, USA, ACL (1994) 43-48
17. Bonneau-Maynard, H., Ayache, C., Bechet, F., Denis, A., Kuhn, A., Lefèvre, F., Mostefa, D., Qugnard, M., Rosset, S., Servan, S., Vilaneau, J.: Results of the French EVALDA-MEDIA evaluation campaign for literal understanding. In: LREC, Genoa, Italy (2006) 2054-2059
18. Abeillé, A., Clément, L., Toussenel, F.: Building a treebank for French. In: Treebanks: Building and Using Parsed Corpora. Springer (2003) 165-188
19. Denis, P., Sagot, B.: Coupling an annotated corpus and a lexicon for state-of-the-art POS tagging. Lang. Resour. Eval. 46 (2012) 721-736
20. Marcus, M.P., Santorini, B., Marcinkiewicz, M.A.: Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics 19 (1993) 313-330
21. Bengio, Y., Ducharme, R., Vincent, P., Jauvin, C.: A neural probabilistic language model. Journal of Machine Learning Research 3 (2003) 1137-1155
22. Bengio, Y.: Practical recommendations for gradient-based training of deep architectures. CoRR abs/1206.5533 (2012)
23. Werbos, P.: Backpropagation through time: what it does and how to do it. Proceedings of the IEEE 78 (1990) 1550-1560
24. Chen, D., Manning, C.: A fast and accurate dependency parser using neural networks. In: Proceedings of EMNLP 2014, Doha, Qatar, ACL (2014) 740-750
25. Ramshaw, L., Marcus, M.: Text chunking using transformation-based learning. In: Proceedings of the 3rd Workshop on Very Large Corpora, Cambridge, MA, USA (1995) 84-94
26. Mesnil, G., Dauphin, Y., Yao, K., Bengio, Y., Deng, L., Hakkani-Tur, D., He, X., Heck, L., Tur, G., Yu, D., Zweig, G.: Using recurrent neural networks for slot filling in spoken language understanding. IEEE/ACM Transactions on Audio, Speech, and Language Processing (2015)
27. Lafferty, J., McCallum, A., Pereira, F.: Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In: Proceedings of ICML 2001, Williamstown, MA, USA (2001) 282-289
28. Toutanova, K., Klein, D., Manning, C.D., Singer, Y.: Feature-rich part-of-speech tagging with a cyclic dependency network. In: Proceedings of HLT-NAACL 2003, Morristown, NJ, USA, ACL (2003) 173-180
29. Shen, L., Satta, G., Joshi, A.: Guided learning for bidirectional sequence classification. In: Proceedings of ACL 2007, Prague, Czech Republic (2007) 760-767
30. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15 (2014) 1929-1958
31. Hochreiter, S., Bengio, Y., Frasconi, P., Schmidhuber, J.: Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. In: Kremer, Kolen (eds.) A Field Guide to Dynamical Recurrent Neural Networks. IEEE Press (2001)
C-NMT: A Collaborative Inference Framework for Neural Machine Translation

Yukai Chen, Roberta Chiaro, Enrico Macii (Interuniversity Department of Regional and Urban Studies and Planning, Politecnico di Torino, Turin, Italy), Massimo Poncino, Daniele Jahier Pagliari (Department of Control and Computer Engineering, Politecnico di Torino, Turin, Italy)

arXiv:2204.04043 (https://arxiv.org/pdf/2204.04043v1.pdf), DOI: 10.1109/iscas48785.2022.9937603

Abstract: Collaborative Inference (CI) optimizes the latency and energy consumption of deep learning inference through the inter-operation of edge and cloud devices. Albeit beneficial for other tasks, CI has never been applied to the sequence-to-sequence mapping problem at the heart of Neural Machine Translation (NMT). In this work, we address the specific issues of collaborative NMT, such as estimating the latency required to generate the (unknown) output sequence, and show how existing CI methods can be adapted to these applications. Our experiments show that CI can reduce the latency of NMT by up to 44% compared to a non-collaborative approach.

Index Terms: Machine Translation, Collaborative Inference
I. INTRODUCTION AND RELATED WORKS
Deep learning (DL) obtains outstanding results in many Artificial Intelligence (AI) tasks that are relevant for embedded systems, such as computer vision and natural language processing (NLP). In order to deploy DL models on embedded devices, research and industry are increasingly resorting to Collaborative Inference (CI) [2]-[4], a paradigm that combines edge and cloud computing in an attempt to improve performance and energy efficiency. In a CI system, deep learning inference executions are distributed among a set of collaborating edge and cloud devices, with policies based on their relative compute speeds and on their current state (workload, connection speed, etc.).
Seminal works in this field mainly targeted Convolutional Neural Networks (CNNs) for computer vision [4]-[8]. One of the earliest approaches is [4], which partitions a CNN execution layer-wise between edge and cloud devices, trying to minimize its latency or energy consumption. The underlying principle is that feature tensor sizes tend to shrink for deeper layers in a CNN. Therefore, computing a few layers at the edge reduces the amount of data that needs to be sent to the cloud, and consequently the time/energy costs for transmission, possibly yielding a lower overall cost compared to pure edge and pure cloud processing. The optimal split point is adapted at runtime based on the connection latency and bandwidth, and on the load of the cloud server. This approach is extended in [6], where additionally the CNN is modified to favor partitioned execution, by inserting layers that compress and decompress the feature maps respectively before and after transmitting them to the cloud. Further layer-wise partitioning approaches for feed-forward networks are found in [5], which manages multiple partition points for cases where tensor sizes are not monotonically decreasing (e.g., autoencoders), and in [7], which combines partitioning with an inference early-stopping mechanism for additional speed-ups. The authors of [8] extend these concepts to more than two offloading levels (e.g., end-device, edge gateway and cloud).
More recently, the CI paradigm has also been applied in [9], [10] to the processing of variable-length input sequences using Recurrent Neural Networks (RNNs), and [11] extended the approach to more than two offloading levels and multiple devices in each level. Both works, however, focused solely on sequence-to-class problems, such as text classification and search. In contrast, Neural Machine Translation (NMT), one of the most important DL-based tasks for smart embedded devices [12], belongs to the family of so-called sequence-to-sequence (seq2seq) problems, where both inputs and outputs are variable-length sequences. Previous works have shown that, when dealing with variable-length sequences, the optimal (edge or cloud) device to execute an inference depends strongly on the sequence length, which influences the computational cost [10], [11]. However, while for sequence-to-class problems the length of the input sequence is always known beforehand, in a seq2seq task the computation cost also depends on the (unknown) length of the output.
In this work, we address this problem for the specific case of NMT. We show that low-cost regression models can efficiently predict the length of an output translation given the length of the input sentence, thus enabling the successful application of CI techniques. With experiments on 3 datasets and 3 DL models, based on both RNNs and Transformers [1], we show that our proposed Collaborative-NMT (C-NMT) can reduce the average inference latency by up to 44% compared to purely edge-based and cloud-based approaches, and by up to 21% compared to a "naive" approach that does not account for the output length. To the best of our knowledge, ours is the first work applying CI to a seq2seq problem; we are also the first to study Transformer models from the point of view of CI.
II. PROPOSED C-NMT FRAMEWORK
In this work, we propose C-NMT, a CI strategy aimed at optimizing the latency of NMT, one of the most relevant seq2seq tasks. We build upon the work of [10], [11], which highlights two key peculiarities of (sequence-to-class) NLP tasks that influence the optimal CI decisions. First, input/output sizes are small: encoding a sentence with the dictionary index of each word does not require more than 2 bytes per word. Thus, differently from CNNs, intermediate tensors tend to be larger than inputs, which means that partitioning the execution between edge and cloud is not beneficial, as it does not help reduce data transmission costs. Instead, the optimal strategy consists in mapping an entire inference either to the edge or to the cloud. Second, the length of the processed inputs is a key parameter to take into account when deciding whether to run an inference at the edge or in the cloud, as it strongly influences the compute time. In Section II-A we analyze how these observations extend to NMT and the novel challenges introduced by this task, showing in particular that estimating the computational complexity of a translation is more complicated, due to the fact that the length of the output sequence is unknown. In Section II-B we then propose a simple yet effective way to solve these challenges.
A. CI for Seq2Seq Deep Learning Models
Fig. 1a shows the most common architecture for seq2seq problems, the so-called encoder/decoder. The system is composed of two separate neural networks: the encoder (blue block) processes the variable-length input X = x^<1>, ..., x^<N> (e.g., a sentence in English) and converts it into a fixed-size, high-dimensional vector representation, called the context. The end of the input sequence is signaled to the encoder by a special <EOS> symbol. The context is then fed to the decoder (green block), whose goal is to produce the output sequence Y = y^<1>, ..., y^<M> (e.g., the translation of the input in German). Notice that, in general, N ≠ M. More specifically, so-called autoregressive decoding is used in NMT, where the decoder iteratively takes as input a partially translated sentence (initially null), together with the encoder's context, and predicts the next token in the translation.
State-of-the-art models for implementing encoders and decoders are RNNs and Transformers. Here, we briefly discuss them from a computational standpoint, leaving out the details of their functionality, which can be found in [1], [13].
RNNs are composed of one or more cells, such as the Long Short-Term Memory (LSTM), that perform the same set of operations on each step of the input sequence, as shown in Fig. 1b. Each step requires the output of the previous one, i.e., the hidden and cell state vectors (h_i and c_i). The last cell state is used as the context in encoders, whereas for decoding, hidden states are further processed with one or more fully-connected layers and softmax activations to produce word probabilities. As analyzed in [10], [11], the data dependency among subsequent steps makes the inference time of RNNs linearly dependent on the processed sequence length. This highlights a key problem of CI for RNN-based NMT: estimating the total execution time of both encoder and decoder is key to making correct edge/cloud mapping decisions, but while the compute time of the encoder depends linearly on the (known) input sentence length N, the decoder RNN's execution time depends on the output length M, which is unknown prior to the completion of the translation.

A similar issue also arises for Transformers. These models include several layers, but the most computationally critical is self-attention [1], [14], shown in Fig. 1c. For each input element, this layer generates three vectors called query (q_i), key (k_i) and value (v_i) through learned linear mappings, omitted in the figure for space reasons. The scalar product of each query with all keys, followed by a softmax, produces the so-called attention weights w_ji. Finally, the i-th output is generated by summing together all v_j's, each weighted by the corresponding w_ji. In the figure, the flow of operations to generate the first two outputs is shown by red and green arrows, respectively. State-of-the-art Transformers combine multiple such structures (so-called attention heads) for higher accuracy. As for RNNs, Transformer encoders typically use the output corresponding to the last (or first) input, further processed by fully-connected layers, as the context.

The complexity of self-attention is quadratic in the input length due to the query-key products; however, differently from RNNs, the processing of different sequence elements can be parallelized [14]. Consequently, for relatively short input sequences (< 100 tokens) and considering a highly parallel platform (e.g., an embedded GPU), we found that the inference time of Transformer encoders is approximately constant w.r.t. N. In contrast, autoregressive decoding, which is implemented in Transformers with masked attention [1], imposes a strict dependency among subsequent tokens, i.e., the i-th predicted word is needed as input for predicting the (i+1)-th, limiting parallelization. In practice, the execution of the decoder has to be repeated M times, which: 1) makes it significantly slower than the encoder, and 2) makes the total translation time once again linearly dependent on the output length M. This is clearly shown in Fig. 2a, which reports the total translation time of a Transformer as a function of M, for an embedded (red) and a cloud (green) GPU. The model, dataset and devices are detailed in Sec. III. Dots represent the average execution time for all outputs of the same length in the dataset, while colored bands represent standard deviations.
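To make the computation pattern concrete, the following is a minimal PyTorch sketch of the single-head scaled dot-product self-attention described above (function and variable names are ours; the learned mappings producing queries, keys and values are reduced to one projection matrix each). The (N × N) score matrix is what makes the cost quadratic in the input length:

```python
import torch

def self_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention.

    x:  (N, d) sequence of N input embeddings
    Wq, Wk, Wv: (d, d) learned projection matrices
    Returns the (N, d) attended outputs.
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv            # queries, keys, values
    scores = q @ k.T / (k.shape[-1] ** 0.5)     # (N, N) matrix: quadratic in N
    weights = torch.softmax(scores, dim=-1)     # attention weights w_ji
    return weights @ v                          # weighted sum of the values
```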
The complexity of self-attention is quadratic in the input length due to query-key products; however, differently from RNNs, the processing of different sequence elements can be parallelized [14]. Consequently, for relatively short input sequences (< 100 tokens) and considering a highly parallel platform (e.g., an embedded GPU) we found that the inference time of Transformer encoders is approximately constant w.r.t. N . In contrast, autoregressive decoding, which is implemented in Transformers with masked attention [1], imposes a strict dependency among subsequent tokens, i.e., the i-th predicted word is needed as input for predicting the (i + 1)-th, limiting parallelization. In practice, the execution of the decoder has to be repeated M times, which: 1) makes it significantly slower than the encoder and 2) makes the total translation time once again linearly dependent on the output length M . This is clearly shown in Fig. 2a, which reports the total translation time of a Transformer as a function of M , for an embedded (red) and a cloud (green) GPU. The model, dataset and devices are detailed in Sec. III. Dots represent the average execution time for all outputs of the same length in the dataset, while colored bands represent standard deviations. Fig. 2b shows visually the idea behind C-NMT. Given the analysis of Section II-A, the dependency of the compute time of an NMT model on the input/output lengths approximately defines a plane in the (N, M, T exe ) space, possibly horizontal with respect to N in case of transformers. Clearly, the slopes of this plane on the z-axis are smaller for a fast cloud device than for an edge one. However, running inference on the former has an additional latency cost related to input/output transmission (T tx ), which shifts up the "cloud" execution time, as shown by the yellow arrow. This generates an interesting tradeoff from the point of view of CI since shorter input/output sequences are processed faster at the edge (Edge Region) whereas cloud offloading becomes convenient only for longer input/outputs (Cloud Region). The intersection of the two planes, and therefore the optimal inference device for a given input depends on N and M , on the relative speed of the involved devices, and on the time-varying T tx . Mathematically, C-NMT selects the target device for inference d tgt as:
B. Linear N-to-M Mapping
d_tgt = d_e  if T_exe,e(N, M) ≤ T_tx + T_exe,c(N, M);  d_c  otherwise    (1)
where the suffixes e and c indicate edge and cloud, respectively. Given the compact encoding of inputs/outputs in NMT discussed above, in this work we model T_tx as being dominated by the connection's round-trip time, and as roughly independent of N and M. As shown also in [10], [11], although this is an approximation, it yields quite accurate CI decisions. Concerning T_exe, given the analysis of Sec. II-A, we model it as a linear function of N and M, that is, T_exe,i = α_N,i · N + α_M,i · M + β_i, where i ∈ {e, c} and α_N/M,i, β_i depend on the compute power of device d_i and on the NN model. These parameters can be computed with a once-for-all offline characterization.

The most critical quantity in the above equation is M, which only becomes known after the completion of a translation. However, for the particular case of NMT, it is reasonable to assume that there is some correlation between the length of an input sentence and that of its translation. As an example, Fig. 3 shows the average M for a given N and the corresponding standard deviation for three different language pairs. The caption reports the excellent regression scores obtained by a simple linear model relating the two quantities. These results show that, even for very different languages, such as Chinese and English, an accurate estimate of the output length can be obtained with a simple linear N-to-M mapping. This is the strategy used in our proposed CI system, which eventually estimates T_exe,i as:
T_exe,i = α_N,i · N + α_M,i · (γ · N + δ) + β_i    (2)
where γ and δ are correcting factors that only depend on the target language pair, and are independent of the device and of the neural network model. The need for this correction is evident from Fig. 3, which clearly shows that γ < 1 is needed to account for the lower verbosity of English (EN) with respect to French (FR) in Fig. 3b, and of Chinese (ZH) with respect to English in Fig. 3c.
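Putting Eqs. (2) and (1) together, the runtime decision reduces to a handful of scalar operations. The following Python sketch illustrates it; the parameter packing and all names are our assumptions, not from the paper:

```python
def estimate_exe_time(n_in, alpha_n, alpha_m, beta, gamma, delta):
    """Eq. (2): predicted execution time for an input of length n_in,
    with the output length estimated via the linear N-to-M mapping."""
    m_est = gamma * n_in + delta
    return alpha_n * n_in + alpha_m * m_est + beta

def choose_device(n_in, edge_params, cloud_params, lang_params, t_tx):
    """Eq. (1): run at the edge if it is no slower than cloud + transmission.

    edge_params/cloud_params: (alpha_n, alpha_m, beta) per device
    lang_params: (gamma, delta) for the target language pair
    t_tx: current transmission-time estimate
    """
    t_edge = estimate_exe_time(n_in, *edge_params, *lang_params)
    t_cloud = estimate_exe_time(n_in, *cloud_params, *lang_params)
    return "edge" if t_edge <= t_tx + t_cloud else "cloud"
```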
C. Implementation Details
After the offline characterization of the target NN model, the C-NMT decision has negligible overhead, as it simply consists of evaluating (2) and (1). As for T_tx, although this quantity is roughly independent of N and M, it still changes over time due to the variability of the edge-cloud connection signal quality and data traffic. As in [11], we attach timestamps to each inference request/response sent to/from the cloud to obtain a recent estimate of T_tx. However, on end-nodes (e.g., smartphones), translation tasks are typically performed sporadically, rendering the timestamp mechanism ineffective. For this reason, we consider a system where the edge device is a gateway that aggregates the requests of multiple end-nodes and can therefore be assumed to be almost continuously fed with inference requests. The C-NMT decision then becomes whether to perform inference locally at the gateway or in a more powerful cloud server.
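The paper does not detail how the collected timestamps are turned into a single T_tx estimate; one plausible minimal realization is an exponentially weighted moving average over the measured round-trip times, as sketched below (the EWMA form, the smoothing factor and all names are our assumptions):

```python
class RttEstimator:
    """Tracks a smoothed round-trip-time estimate from timestamped
    cloud request/response pairs (EWMA smoothing is our assumption)."""

    def __init__(self, alpha=0.2, initial_rtt=0.05):
        self.alpha = alpha          # smoothing factor
        self.rtt = initial_rtt     # seconds

    def update(self, t_sent, t_received):
        """Update the estimate with one new round-trip measurement."""
        sample = t_received - t_sent
        self.rtt = (1 - self.alpha) * self.rtt + self.alpha * sample
        return self.rtt
```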
III. EXPERIMENTAL RESULTS
We assess the effectiveness of C-NMT considering: i) an edge gateway (GW) made of an NVIDIA Jetson TX2, including a Pascal GPU with 256 CUDA cores, and ii) a cloud server equipped with a dual Intel Xeon E5-2630 @ 2.40GHz, 128GB of RAM and an NVIDIA Titan XP GPU. Both devices run Linux and perform inference with PyTorch. For repeatability, the network connection between the devices is simulated using 2 real round-trip-time (T_rtt) profiles taken from RIPE Atlas [15], assuming a constant and symmetric bandwidth of 100 Mbps. The simulation time vs. T_rtt traces are shown in Fig. 4, and refer to the following RIPE Atlas query: meas id: 1437285; probe id: 6222; Date = 03/05/2018; Time = 3-7 p.m. (CP1), 7:30-12:30 a.m. (CP2).

The experiment consists in sending 100k translation requests to the GW, which uses C-NMT to decide, for each input, whether to process it locally or offload it to the cloud. The T_exe model of (2) is fitted on the results of 10k inferences per device, with inputs not included in the 100k set.
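As a reference, the once-for-all characterization described above can be realized as an ordinary least-squares fit over the profiled (N, M, latency) triples of each device; a minimal sketch (variable names are ours):

```python
import numpy as np

def fit_latency_model(n_lens, m_lens, times):
    """Least-squares fit of T_exe ≈ alpha_N*N + alpha_M*M + beta from
    profiled (input length, output length, latency) samples of one device."""
    X = np.column_stack([n_lens, m_lens, np.ones(len(n_lens))])
    (alpha_n, alpha_m, beta), *_ = np.linalg.lstsq(X, times, rcond=None)
    return alpha_n, alpha_m, beta
```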
We repeat the experiment for 3 different NMT architectures and datasets: i) a 2-layer BiLSTM model [16] with a hidden size of 500, tested on the IWSLT'14 German-English (DE-EN) corpus [17]; ii) a single-layer Gated Recurrent Unit (GRU) RNN [18] with hidden size 256, tested on the OPUS-100 French-English (FR-EN) corpus [19]; iii) the "MarianMT" attention-based Transformer [20], tested on the OPUS-100 English-Chinese (EN-ZH) corpus [19]. For each dataset, the correcting factors γ and δ in (2) are computed on the ground-truth (N, M_real) pairs in the corpus, where M_real may in general differ from the output length M produced by the NMT model. Furthermore, when computing γ and δ, we remove outliers (e.g., wrongly matched sentence pairs) following the pre-filtering rules described in [21].

Table I reports the obtained results. As baselines for comparison, we consider the 2 single-device approaches, i.e., the scenarios in which all 100k inputs are processed in the GW or in the server. Moreover, to have an ideal lower bound on latency, we consider an Oracle policy capable of always selecting the fastest inference device, without being affected by the sources of sub-optimality of C-NMT, such as the imperfect N-to-M regression, the linear T_exe model, the outdated T_tx estimates, etc. Lastly, we compare C-NMT against a CI strategy that uses the same mapping policy, but simply assumes M equal to the average output length of the reference dataset when estimating T_exe. We call this approach Naive, and we use it to show the positive impact of N-to-M mapping in C-NMT. The results in the table are reported as percentage variations in the total ex. time for the 100k inferences with respect to the 3 baselines (single devices and Oracle), where negative/positive numbers indicate an ex. time reduction/increase, respectively.
The results show that, by mapping each translation either to the GW or to the server based on the input and (predicted) output lengths, C-NMT is able to significantly reduce the execution time compared to purely edge-based and cloud-based approaches. The total time reduction is up to 26%, 44% and 36% for DE-EN, FR-EN and EN-ZH translations, respectively. As expected, the benefit of C-NMT with respect to a cloud-based approach is larger with the first connection profile (CP1), which is slower on average and therefore makes cloud offloading sub-optimal most of the time, except for very long sentences. The opposite reasoning applies to the comparison with a pure edge computing approach.
Also as expected, the overhead of C-NMT with respect to an Oracle policy is larger for the EN-ZH Transformer than for the two RNNs. This is because, as analyzed in Sec. II-A, decoding dominates the total latency of Transformer-based NMT on GPU platforms; the T_exe estimate for this type of model therefore relies more heavily on the unknown M, and suffers more from the approximate N-to-M mapping. Lastly, C-NMT is significantly more effective than the Naive approach (up to a 21% larger execution time reduction, on the DE-EN dataset), except for EN-ZH translation with CP2, where the two approaches achieve very similar results.
IV. CONCLUSIONS
We have presented C-NMT, the first collaborative inference framework for deep learning-based NMT. We have tested our approach on RNNs and Transformers, the two state-of-the-art architectures for this type of problem, demonstrating significant execution time reductions (up to 44%) with respect to any static mapping solution. Future work will focus on more advanced output-length estimation methods.
Fig. 1: Encoder/decoder for seq2seq mapping and key layers.

Fig. 2: (a) Linear dependency of the total inference time on the output length for a Transformer. Scores of a linear fit: Jetson R² = 0.99, MSE = 0.13 ms; Titan R² = 0.85, MSE = 1.2 ms. (b) General principle of C-NMT.
Fig. 3: Regression models for the output length estimate. IWSLT'14 DE-EN: R²-score = 0.99, MSE = 0.57; OPUS-100 FR-EN: R²-score = 0.99, MSE = 0.15; OPUS-100 EN-ZH: R²-score = 0.99, MSE = 0.73.
Fig. 4: Connection profiles.
Table I: Execution time variation (in %) for two different variable connection profiles.

                      Connection Profile (CP) 1            Connection Profile (CP) 2
Dataset   Strategy    vs. GW    vs. Server   vs. Oracle    vs. GW    vs. Server   vs. Oracle
DE-EN     Naive       +11.74    -4.78        +29.17        -16.16    -5.28        +13.25
          C-NMT       -13.55    -26.15       +0.11         -24.34    -17.65       +0.15
FR-EN     Naive       -5.74     -40.80       +8.03         -7.15     -32.13       +15.46
          C-NMT       -12.29    -44.32       +1.24         -18.00    -41.06       +1.13
EN-ZH     Naive       -17.11    -8.08        +15.49        -36.31    -10.41       +8.51
          C-NMT       -21.17    -12.46       +9.83         -35.66    -10.58       +8.77
REFERENCES

[1] A. Vaswani et al., "Attention is all you need," Adv. Neural Inf. Process. Syst., vol. 30, 2017.
[2] J. Chen et al., "Deep learning with edge computing: A review," Proc. IEEE, vol. 107, no. 8, pp. 1655-1674, 2019.
[3] F. Daghero et al., "Energy-efficient deep learning inference on edge devices," in Hardware Accelerator Systems for Artificial Intelligence and Machine Learning. Elsevier, 2020, ch. 8, pp. 1-53.
[4] Y. Kang et al., "Neurosurgeon: Collaborative intelligence between the cloud and mobile edge," in Proc. ASPLOS, 2017, pp. 615-629.
[5] A. E. Eshratifar et al., "JointDNN: An efficient training and inference engine for intelligent mobile cloud computing services," IEEE TMC, vol. 20, no. 2, pp. 565-576, 2021.
[6] A. E. Eshratifar et al., "BottleNet: A deep learning architecture for intelligent mobile cloud computing services," arXiv:1902.01000, 2019.
[7] E. Li et al., "Edge AI: On-demand accelerating deep neural network inference via edge computing," IEEE TWC, vol. 19, no. 1, pp. 447-457, 2020.
[8] Y. Huang et al., "DeePar: A hybrid device-edge-cloud execution framework for mobile deep learning applications," in Proc. INFOCOM, 2019, pp. 892-897.
[9] D. Jahier Pagliari et al., "Optimal input-dependent edge-cloud partitioning for RNN inference," in Proc. 26th IEEE ICECS, 2019, pp. 442-445.
[10] D. Jahier Pagliari et al., "Input-dependent edge-cloud mapping of recurrent neural networks inference," in Proc. DAC, 2020, pp. 1-6.
[11] D. Jahier Pagliari et al., "CRIME: Input-dependent collaborative inference for recurrent neural networks," IEEE TC, Early Access, 2020.
[12] B. Turovsky, "Found in translation: More accurate, fluent sentences in Google Translate," November 2016.
[13] I. Goodfellow et al., Deep Learning. The MIT Press, 2016.
[14] A. Ivanov et al., "Data movement is all you need: A case study on optimizing transformers," arXiv:2007.00072, 2020.
[16] G. Klein et al., "OpenNMT: Open-source toolkit for neural machine translation," in Proc. ACL, 2017, pp. 67-72.
[17] M. Cettolo et al., "Report on the 11th IWSLT evaluation campaign," in Proc. IWSLT, 2014, pp. 2-17.
[19] B. Zhang et al., "Improving massively multilingual neural machine translation and zero-shot translation," arXiv:2004.11867, 2020.
[20] T. Wolf et al., "HuggingFace's Transformers: State-of-the-art natural language processing," arXiv:1910.03771, 2019.
[21] M. Banón et al., "ParaCrawl: Web-scale acquisition of parallel corpora," in Proc. ACL, 2020, pp. 4555-4567.
Detecting dementia in Mandarin Chinese using transfer learning from a parallel corpus

Bai Li (University of Toronto; Vector Institute, Toronto, Canada), Yi-Te Hsu (University of Toronto; Vector Institute; Academia Sinica, Taipei, Taiwan), Frank Rudzicz (University of Toronto; Vector Institute; Toronto Rehabilitation Institute, Toronto, Canada)

arXiv:1903.00933 (https://arxiv.org/pdf/1903.00933v1.pdf), DOI: 10.18653/v1/n19-1199

Abstract: Machine learning has shown promise for automatic detection of Alzheimer's disease (AD) through speech; however, efforts are hampered by a scarcity of data, especially in languages other than English. We propose a method to learn a correspondence between independently engineered lexicosyntactic features in two languages, using a large parallel corpus of out-of-domain movie dialogue data. We apply it to dementia detection in Mandarin Chinese, and demonstrate that our method outperforms both unilingual and machine translation-based baselines. This appears to be the first study that transfers feature domains in detecting cognitive decline.
Introduction
Alzheimer's disease (AD) is a neurodegenerative disease affecting 5.7 million people in the US (Alzheimer's Association, 2018), and is the most common cause of dementia. Although no cure yet exists, early detection of AD is crucial for an effective treatment to delay or prepare for its effects (Dubois et al., 2016). One of the earliest symptoms of AD is speech impairment, including difficulty in finding words and changes to grammatical structure (Taler and Phillips, 2008). These early signs can be detected by having the patient perform a picture description task, such as the Cookie Theft task from the Boston Diagnostic Aphasia Examination (Goodglass and Kaplan, 1983).
Previous models have applied machine learning to automatic detection of AD. For example, Fraser et al. (2016) extracted a wide variety of lexicosyntactic and acoustic features to classify AD and obtained 82% accuracy on the DementiaBank (DB) dataset. However, clinical studies of AD are expensive, so datasets of patient data are often scarce. Noorian et al. (2017) augmented DB with a much larger corpus of normative data and improved the classification accuracy to 93% on DB. Similar linguistic differences between healthy and AD speech have been observed in Mandarin Chinese (Lai et al., 2009), but machine learning has not yet been applied to detecting AD in Mandarin.

Daume III (2007) proposed a simple way of combining features in different domains, assuming that the same features are extracted in each domain. In our case, ensuring consistency of features across domains is challenging because of the grammatical differences between Mandarin and English. For example, Mandarin doesn't have determiners or verb tenses, and has classifiers, which don't exist in English (Chao, 1965). Another method trains a classifier jointly on multiple domains with different features in each domain, by learning a projection to a common subspace (Duan et al., 2012). However, this method only accepts labelled samples in each domain, and cannot make use of unlabelled, out-of-domain data. Other work from our broader group (Fraser et al., 2019) combined English and French data by extracting features based on conceptual "information units" rather than words, thus limiting the effects of multilingual differences.
In the current work, we train an unsupervised model to detect dementia in Mandarin, requiring only the English DB dataset and a large parallel Mandarin-English corpus of normative dialogue. We extract lexicosyntactic features in Mandarin and English using separate pipelines, and use the OpenSubtitles corpus of bilingual parallel movie dialogues to learn a correspondence between the different feature sets. We combine this correspondence model with a classifier trained on DB to predict dementia on Mandarin speech. To evaluate our system, we apply it to a dataset of speech from Mandarin speakers with dementia, and demonstrate that our method outperforms several baselines.

Figure 1: Diagram of our model. We train two separate models: the first is trained on OpenSubtitles and learns to map Mandarin features to English features; the second is trained on DementiaBank and predicts dementia given English features. During evaluation, the two models are combined to predict dementia in Mandarin.
Datasets
We use the following datasets:
• DementiaBank (Boller and Becker, 2005): a corpus of Cookie Theft picture descriptions, containing 241 narrations from healthy controls and 310 from patients with dementia. Each narration is professionally transcribed and labelled with part-of-speech tags. In this work, we use only the narration transcripts, and neither the part-of-speech tags nor the raw acoustics.
• Lu Corpus (MacWhinney et al., 2011): contains 49 patients performing the Cookie Theft picture description, category fluency, and picture naming tasks in Taiwanese Mandarin. The picture description narrations were human-transcribed; patient diagnoses are unspecified, but the patients exhibit various degrees of dementia.
• OpenSubtitles2016 (Lison and Tiedemann, 2016): a corpus of parallel dialogues extracted from movie subtitles in various languages. We use the Traditional Chinese / English language pair, which contains 3.3 million lines of dialogue.
The Lu Corpus is missing specifics of diagnosis, so we derive a dementia score for each patient using the category fluency and picture naming tasks. For each category fluency task, we count the number of unique items named; for the picture naming tasks, we score the number of pictures correctly named, awarding partial credit if a hint was given. We apply PCA to the scores across all tasks, and assign the first principal component to be the dementia score for each patient. This gives a relative ordering of all patients for degree of dementia, which we treat as the ground-truth for evaluating our models.
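A minimal sketch of this scoring procedure, assuming the per-patient task scores have already been assembled into a matrix (the standardization step and all variable names are our assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def dementia_scores(task_scores):
    """task_scores: (num_patients, num_tasks) array of fluency counts and
    picture-naming scores. Returns one scalar per patient: the projection
    of that patient's scores onto the first principal component."""
    z = StandardScaler().fit_transform(task_scores)  # standardization is our choice
    return PCA(n_components=1).fit_transform(z).ravel()
```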
Methodology
Feature extraction
We extract a variety of lexicosyntactic features in Mandarin and English, including the type-token ratio, the number of words per sentence, and the proportions of various part-of-speech tags. A detailed description of the features is provided in the supplementary materials (Table 2). In total, we extract 143 features in Mandarin and 185 in English. To reduce sparsity, we remove features in both languages that are constant for more than half of the dataset.
Due to the size of the OpenSubtitles corpus, it was computationally infeasible to run feature extraction on the entire corpus. Therefore, we randomly select 50,000 narrations from the corpus, where each narration consists of between 1 and 50 contiguous lines of dialogue (about the length of a Cookie Theft narration).
For English, we train a logistic regression classifier to distinguish dementia from healthy controls on DB, using our features as input. Using L1 regularization and 5-fold CV, our model achieves 77% classification accuracy on DB. This is slightly lower than the 82% accuracy reported by Fraser et al. (2016), but our model does not use any acoustic features as input.
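A minimal sketch of this classifier, with the L1 penalty and 5-fold cross-validation mentioned above (the solver choice and variable names are our assumptions):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# X_en: (num_narrations, num_english_features); y: 0 = control, 1 = dementia
clf = LogisticRegression(penalty="l1", solver="liblinear")  # solver choice is ours
accuracy = cross_val_score(clf, X_en, y, cv=5, scoring="accuracy").mean()
clf.fit(X_en, y)  # final model reused downstream by the transfer pipeline
```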
Feature transfer
Next, we use the OpenSubtitles corpus to train a model to transform Mandarin feature vectors to English feature vectors. For each target English feature, we train a separate ElasticNet linear regression (Zou and Hastie, 2005), using the Mandarin features of the parallel text as input. We perform a hyperparameter search independently for each target feature, using 3-fold CV to minimize the MSE.
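The per-feature regressions can be sketched as follows; the grid of alpha and l1_ratio values is an assumed search space, and the array sizes are reduced stand-ins for the 50,000 parallel narrations with 143 Mandarin and 185 English features:

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X_zh = rng.normal(size=(500, 20))   # Mandarin features (reduced stand-in)
Y_en = rng.normal(size=(500, 5))    # parallel English features (stand-in)

param_grid = {"alpha": [0.01, 0.1, 1.0], "l1_ratio": [0.1, 0.5, 0.9]}
models, r2_train = [], []
for j in range(Y_en.shape[1]):
    # Independent 3-fold CV search per target feature, minimizing MSE.
    search = GridSearchCV(ElasticNet(max_iter=5000), param_grid,
                          cv=3, scoring="neg_mean_squared_error")
    search.fit(X_zh, Y_en[:, j])
    best = search.best_estimator_
    models.append(best)
    r2_train.append(best.score(X_zh, Y_en[:, j]))  # training-set R^2
```

The training-set R² values collected here are reused later for joint feature selection.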
Regularization
Although the output of the ElasticNet regressions may be given directly to the logistic regression model to predict dementia, this method has two limitations. First, the model considers each target feature separately and cannot take advantage of correlations between target features. Second, it treats all target features equally, even though some are noisier than others. We introduce two regularization mechanisms to address these drawbacks: reduced rank regression and joint feature selection.
Reduced rank regression
Reduced rank regression (RRR) trains a single linear model to predict all the target features: it minimizes the sum of MSE across all target features, with the constraint that the rank of the linear mapping is bounded by some given R (Izenman, 1975). Following recommended procedures (Davies, 1982), we standardize the target features and find the best value of R with cross-validation. However, this procedure did not significantly improve results, so it was not included in our best model.
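For reference, one standard formulation of RRR computes the full ordinary-least-squares solution and then projects it onto the leading principal directions of the fitted values; the sketch below assumes centered inputs and standardized targets, and the array sizes are illustrative:

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """OLS fit followed by a rank-R projection of the coefficient matrix.
    Assumes X and Y are centered (and Y standardized, as in the paper)."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)   # (p, q) full-rank solution
    Y_hat = X @ B_ols
    # Principal directions of the fitted values give the optimal projection.
    _, _, Vt = np.linalg.svd(Y_hat, full_matrices=False)
    V_r = Vt[:rank].T                               # (q, R)
    return B_ols @ V_r @ V_r.T                      # rank-R coefficient matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 40))
Y = rng.normal(size=(1000, 25))
B = reduced_rank_regression(X, Y, rank=5)           # choose R by cross-validation
```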
Joint feature selection
A limitation of the above models is that they are not robust to noisy features. For example, if some English feature is useful for predicting dementia, but cannot be accurately predicted using the Mandarin features, then including this feature might hurt the overall performance. A desirable English feature in our pipeline needs to not only be useful for predicting dementia in English, but also be reconstructable from Mandarin features.
We modify our pipeline as follows. After training the ElasticNet regressions, we sort the target features by their R² (coefficient of determination) measured on the training set, where higher values indicate a better fit. Then, for each K between 1 and the number of features, we select only the top K features and re-train the DB classifier (Section 3.1) to use only those features as input. The results of this experiment are shown in Figure 2.
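The selection loop itself is straightforward; the sketch below assumes the per-feature training R² values from the correspondence model are available (random stand-ins here), and it scores each K with DB accuracy for simplicity, whereas the paper picks K by downstream Spearman's ρ:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# r2_train[j]: training-set R^2 of the ElasticNet for English feature j;
# X_db, y_db: DementiaBank features and labels (random stand-ins here).
rng = np.random.default_rng(0)
n_feats = 185
r2_train = rng.uniform(0, 1, size=n_feats)
X_db = rng.normal(size=(551, n_feats))
y_db = rng.integers(0, 2, size=551)

order = np.argsort(r2_train)[::-1]          # features by decreasing R^2
best_K, best_acc = 0, -np.inf
for K in range(1, n_feats + 1):
    top_k = order[:K]
    clf = LogisticRegression(penalty="l1", solver="liblinear")
    acc = cross_val_score(clf, X_db[:, top_k], y_db, cv=5).mean()
    if acc > best_acc:
        best_K, best_acc = K, acc
```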
Experiments
Baseline models
We compare our system against two simple baselines:
1. Unilingual baseline: using the Mandarin features, we train a linear regression to predict the dementia score directly. We take the mean across 5 cross-validation folds.
2. Translate baseline: the other intuitive way to generate English features from a Mandarin corpus is translation. We use Google Translate 2 to translate each Mandarin transcript to English. Then, we extract features from the translated English text and feed them to the dementia classifier described in Section 3.1.
Evaluation metric
We evaluate each model by computing Spearman's rank-order correlation ρ (Spearman, 1904) between the ground-truth dementia scores and the model's predictions. This measures the model's ability to rank the patients from the highest to the lowest severity of dementia, without requiring a threshold value. Our best model achieves a Spearman's ρ of 0.549, beating the translate baseline (n = 49, p = 0.06). Joint feature selection appears to be crucial: the model performs worse than the baselines if we use all of the features, regardless of whether we predict each target feature independently or all at once with reduced rank regression. RRR does not outperform the baseline model, probably because it fails to account for the noisy target features in the correspondence model and treats every feature as equally important. We did not attempt to use joint feature selection and RRR at the same time, because the multiplicative combination of the hyperparameters K and R would produce a multiple comparisons problem on the small validation set.
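Computing this metric is a one-liner with SciPy; the arrays below are illustrative stand-ins for the 49 Lu corpus patients:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
true_scores = rng.normal(size=49)                           # PCA-derived ground truth
pred_scores = true_scores + rng.normal(scale=0.8, size=49)  # model predictions

rho, pval = spearmanr(true_scores, pred_scores)
print(f"Spearman's rho = {rho:.3f} (p = {pval:.3f})")
```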
Experimental results
Using joint feature selection, we find that the best score is achieved with K = 13 target features (Figure 2). With K < 13, performance suffers because the DementiaBank classifier is not given enough information to make accurate classifications. With K > 13, the accuracy of the DementiaBank classifier improves; however, the overall performance degrades because the classifier is given noisy features with low R² coefficients. A list of the top features is given in Table 3 in the supplementary materials.
In our experiments, the correspondence model worked better when absolute counts were used for the Chinese CFG features (e.g., the number of NP → PN productions in the narration) rather than ratio features (e.g., the proportion of CFG productions that were NP → PN). When ratios were used for source features, the R² coefficients for many target features decreased. A possible explanation is that the narrations have varying lengths, and dividing features by the length introduces a nonlinearity that adversely affects our linear models. However, more experimentation is required to examine this hypothesis.

Figure 3: Ablation experiment in which varying numbers of OpenSubtitles samples were used for training. The error bars indicate the two-standard-deviation confidence interval.
Ablation study
Next, we investigate how many parallel OpenSubtitles narrations were necessary to learn the correspondence model. We choose training sample sizes from 10 to 50,000 and, for each size, we train and evaluate the whole model end to end 10 times with different random seeds (Figure 3). As expected, Spearman's ρ increased as more samples were used, but only 1,000-2,000 samples were required to achieve performance comparable to the full model.
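The ablation protocol can be sketched as follows. Here train_and_evaluate is a hypothetical stand-in for the full pipeline (sampling narrations, fitting the correspondence model, and scoring on the Lu corpus), replaced by a toy saturating curve so the snippet runs:

```python
import numpy as np

def train_and_evaluate(n_samples, seed):
    """Hypothetical stand-in: sample `n_samples` OpenSubtitles narrations,
    fit the correspondence model, and return Spearman's rho on the Lu corpus.
    A toy saturating curve plus noise substitutes for the real pipeline."""
    rng = np.random.default_rng(seed)
    return 0.55 * (1 - np.exp(-n_samples / 800)) + rng.normal(scale=0.03)

sample_sizes = [10, 100, 1000, 2000, 10000, 50000]
results = {}
for n in sample_sizes:
    rhos = [train_and_evaluate(n, seed) for seed in range(10)]  # 10 seeds per size
    results[n] = (np.mean(rhos), 2 * np.std(rhos))              # mean, 2-std bars

for n, (mean, bar) in results.items():
    print(f"n={n:>6}: rho = {mean:.3f} +/- {bar:.3f}")
```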
Conclusion
We propose a novel method to use a large parallel corpus to learn mappings between engineered features in two languages. Combined with a dementia classifier model for English speech, we constructed a model to predict dementia in Mandarin Chinese. Our method achieves state-of-the-art results for this task and beats baselines based on unilingual models and Google Translate. It is successful despite the stark differences between English and Mandarin, and the fact that the parallel corpus is out-of-domain for the task. Lastly, our method does not require any Mandarin data for training, which is important given the difficulty of acquiring sensitive clinical data.
Future work will investigate the use of automatic speech recognition to reduce the need for manual transcripts, which are impractical in a clinical setting. Also, our model only uses lexicosyntactic features, and ignores acoustic features (e.g., pause duration) which are significant for dementia detection in English. Finally, it remains to apply this method to other languages, such as French and Swedish (Fraser et al., 2019), for which datasets have recently been collected.
Table 2: Lexicosyntactic features extracted from Mandarin and English narrations. We used wordfreq (Speer et al., 2018) for word frequency statistics and Stanford CoreNLP for POS tagging and constituency parsing in both languages (Klein and Manning, 2003; Levy and Manning, 2003). Our features are similar to the set of features used by Fraser et al. (2016), which the reader can refer to for a more thorough description.

English (185 features)
• Narrative length: number of words and sentences in the narration.
• Vocabulary richness: type-token ratio, moving average type-token ratio (with window sizes of 10, 20, 30, 40, and 50 words), Honoré's statistic, and Brunét's index.
• Frequency metrics: mean word frequencies for all words, nouns, and verbs.
• POS counts: counts and ratios of nouns, verbs, inflected verbs, determiners, demonstratives, adjectives, adverbs, function words, interjections, subordinate conjunctions, and coordinate conjunctions; also some special ratios such as pronoun/noun and noun/verb ratios.
• Syntactic complexity: counts and mean lengths of clauses, T-units, dependent clauses, and coordinate phrases as computed by Lu's syntactic complexity analyzer (Lu, 2010).
• Tree statistics: max, median, and mean heights of all CFG parse trees in the narration.
• CFG ratios: ratio of CFG production rule counts for each of the 100 most common CFG productions from the constituency parse tree.

Mandarin (143 features)
• Narrative length: number of sentences, number of characters, and mean sentence length.
• Frequency metrics: type-token ratio, mean and median word frequencies.
• POS counts: for each part-of-speech category, its count in the utterance and its ratio over the number of tokens; also some special ratios such as pronoun/noun and noun/verb ratios.
• Tree statistics: max, median, and mean heights of all CFG parse trees in the narration.
• CFG counts: number of occurrences of each of the 60 most common CFG production rules from the constituency parse tree.
Figure 2: Accuracy of the DementiaBank classifier model and Spearman's ρ on the Lu corpus, using only the top K English features ordered by R² on the OpenSubtitles corpus. Spearman's ρ is maximized at K = 13, achieving a score of ρ = 0.549. DementiaBank accuracy generally increases with more features.

Table 3: Top English features for joint feature selection, ordered by R² coefficients on the OpenSubtitles corpus. The top-performing model uses the first 13 features.

# | Feature Name | R²
1 | Number of words | 0.894
2 | Number of sentences | 0.828
3 | Brunét's index | 0.813
4 | Type token ratio | 0.668
5 | Moving average TTR (50 word window) | 0.503
6 | Moving average TTR (40 word window) | 0.461
7 | Moving average TTR (30 word window) | 0.411
8 | Average word length | 0.401
9 | Moving average TTR (20 word window) | 0.360
10 | Moving average TTR (10 word window) | 0.328
11 | NP → PRP | 0.294
12 | Number of nouns | 0.233
13 | Mean length of clause | 0.225
14 | PP → IN NP | 0.224
15 | Total length of PP | 0.222
16 | Complex nominals per clause | 0.220
17 | Noun ratio | 0.213
18 | Pronoun ratio | 0.208
19 | Number of T-units | 0.207
20 | Number of PP | 0.205
21 | Number of function words | 0.198
22 | Subordinate / coordinate clauses | 0.193
23 | Mean word frequency | 0.193
24 | Number of pronouns | 0.191
25 | Average NP length | 0.188
1 The feature extraction pipeline is open-source, available at https://github.com/SPOClab-ca/COVFEFE. The lex and lex_chinese pipelines were used for English and Chinese, respectively.
2 https://translate.google.com/
Acknowledgements
We thank Kathleen Fraser and Nicklas Linz for their helpful comments and earlier collaboration which inspired this project.
Alzheimer's Association et al. 2018. 2018 Alzheimer's disease facts and figures. Alzheimer's & Dementia, 14(3):367-429.
Francois Boller and James Becker. 2005. DementiaBank database guide. University of Pittsburgh.
Yuen Ren Chao. 1965. A grammar of spoken Chinese. Univ of California Press.
Hal Daume III. 2007. Frustratingly easy domain adaptation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 256-263.
PT Davies. 1982. Procedures for reduced-rank regression. Applied Statistics, pages 244-255.
Lixin Duan, Dong Xu, and Ivor W Tsang. 2012. Learning with augmented features for heterogeneous domain adaptation. In Proceedings of the 29th International Conference on Machine Learning, pages 667-674. Omnipress.
Bruno Dubois, Harald Hampel, Howard H Feldman, Philip Scheltens, Paul Aisen, Sandrine Andrieu, Hovagim Bakardjian, Habib Benali, Lars Bertram, Kaj Blennow, et al. 2016. Preclinical Alzheimer's disease: definition, natural history, and diagnostic criteria. Alzheimer's & Dementia, 12(3):292-323.
Kathleen C. Fraser, Nicklas Linz, Bai Li, Kristina Lundholm Fors, Frank Rudzicz, Alexandra Konig, Jan Alexandersson, Philippe Robert, and Dimitrios Kokkinakis. 2019. Multilingual prediction of Alzheimer's disease through domain adaptation and concept-based language modelling. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics.
Kathleen C Fraser, Jed A Meltzer, and Frank Rudzicz. 2016. Linguistic features identify Alzheimer's disease in narrative speech. Journal of Alzheimer's Disease, 49(2):407-422.
Harold Goodglass and Edith Kaplan. 1983. Boston diagnostic examination for aphasia, 2nd edition. Lea and Febiger, Philadelphia, Pennsylvania.
Alan Julian Izenman. 1975. Reduced-rank regression for the multivariate linear model. Journal of Multivariate Analysis, 5(2):248-264.
Dan Klein and Christopher D Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics-Volume 1, pages 423-430. Association for Computational Linguistics.
Yi-hsiu Lai, Hsiu-hua Pai, et al. 2009. To be semantically-impaired or to be syntactically-impaired: Linguistic patterns in Chinese-speaking persons with or without dementia. Journal of Neurolinguistics, 22(5):465-475.
Roger Levy and Christopher Manning. 2003. Is it harder to parse Chinese, or the Chinese treebank? In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics-Volume 1, pages 439-446. Association for Computational Linguistics.
Pierre Lison and Jörg Tiedemann. 2016. OpenSubtitles2016: Extracting large parallel corpora from movie and TV subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation.
Xiaofei Lu. 2010. Automatic analysis of syntactic complexity in second language writing. International Journal of Corpus Linguistics, 15(4):474-496.
Brian MacWhinney, Davida Fromm, Margaret Forbes, and Audrey Holland. 2011. AphasiaBank: Methods for studying discourse. Aphasiology, 25(11):1286-1307.
Zeinab Noorian, Chloé Pou-Prom, and Frank Rudzicz. 2017. On the importance of normative data in speech-based assessment. In Proceedings of Machine Learning for Health Care Workshop (NIPS MLHC).
Charles Spearman. 1904. The proof and measurement of association between two things. The American Journal of Psychology, 15(1):72-101.
Robyn Speer, Joshua Chin, Andrew Lin, Sara Jewett, and Lance Nathan. 2018. Luminosoinsight/wordfreq: v2.2. doi:10.5281/zenodo.1443582.
Vanessa Taler and Natalie A Phillips. 2008. Language performance in Alzheimer's disease and mild cognitive impairment: a comparative review. Journal of Clinical and Experimental Neuropsychology, 30(5):501-556.
Hui Zou and Trevor Hastie. 2005. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):301-320.
| [
"https://github.com/SPOClab-ca/COVFEFE."
] |
[
"Unsupervised domain-agnostic identification of product names in social media posts",
"Unsupervised domain-agnostic identification of product names in social media posts"
] | [
"Nicolai Pogrebnyakov nicolaip@cbs.dk \nCopenhagen Business School Frederiksberg\nDenmark\n"
] | [
"Copenhagen Business School Frederiksberg\nDenmark"
] | [] | Product name recognition is a significant practical problem, spurred by the greater availability of platforms for discussing products such as social media and product review functionalities of online marketplaces. Customers, product manufacturers and online marketplaces may want to identify product names in unstructured text to extract important insights, such as sentiment, surrounding a product. Much extant research on product name identification has been domain-specific (e.g., identifying mobile phone models) and used supervised or semisupervised methods. With massive numbers of new products released to the market every year such methods may require retraining on updated labeled data to stay relevant, and may transfer poorly across domains. This research addresses this challenge and develops a domain-agnostic, unsupervised algorithm for identifying product names based on Facebook posts. The algorithm consists of two general steps: (a) candidate product name identification using an off-the-shelf pretrained conditional random fields (CRF) model, part-of-speech tagging and a set of simple patterns; and (b) filtering of candidate names to remove spurious entries using clustering and word embeddings generated from the data. | 10.1109/bigdata.2018.8622119 | [
"https://arxiv.org/pdf/1812.04662v1.pdf"
] | 54,478,232 | 1812.04662 | c6bc7a3db6dba4f123ae6700f7f76a55908c7c14 |
Unsupervised domain-agnostic identification of product names in social media posts
Nicolai Pogrebnyakov nicolaip@cbs.dk
Copenhagen Business School Frederiksberg
Denmark
named entity recognition, social media, product names, Facebook
Product name recognition is a significant practical problem, spurred by the greater availability of platforms for discussing products such as social media and product review functionalities of online marketplaces. Customers, product manufacturers and online marketplaces may want to identify product names in unstructured text to extract important insights, such as sentiment, surrounding a product. Much extant research on product name identification has been domain-specific (e.g., identifying mobile phone models) and used supervised or semisupervised methods. With massive numbers of new products released to the market every year such methods may require retraining on updated labeled data to stay relevant, and may transfer poorly across domains. This research addresses this challenge and develops a domain-agnostic, unsupervised algorithm for identifying product names based on Facebook posts. The algorithm consists of two general steps: (a) candidate product name identification using an off-the-shelf pretrained conditional random fields (CRF) model, part-of-speech tagging and a set of simple patterns; and (b) filtering of candidate names to remove spurious entries using clustering and word embeddings generated from the data.
Introduction
Prospective customers often form opinions about a product they are contemplating to purchase or experience by reading reviews of or discussions around that product. These reviews and discussions occur in online marketplaces such as Amazon, online forums as well as social media, such as Twitter and Facebook. In many cases opinions formed by customers from these sources serve as a basis for decisions on whether they proceed with purchasing or experiencing the product.
Being able to recognize product names can play an important role in helping consumers orient themselves in the product assortment. It may also assist companies in identifying which products are popular with customers and why. Marketplaces can use this information to decide which products to carry. Automated identification of product names can be especially important in circumstances where discussion of a product is not attached to the product itself (as is the case in a marketplace, where reviews are written on product page and it is thus known which product a review belongs to), but rather contained in a stream of text, such as on forums or social media.
Accordingly, research on product name identification has been advancing. However, much of extant research has been specific to a particular domain and has relied on supervised or semi-supervised methods. At the same time, there are situations where domain specificity can be a hindrance, for instance when evaluating sentiment on products from a large online marketplace. Additionally, the rate of new product introduction is fast-paced: according to one estimate, about 30,000 new consumer products are launched each year [1]. At such a rate, it can be challenging to provide new labeled training data to sustain the accuracy of product name recognition algorithms. And while in some domains or industries product names follow predictable, systematic conventions, in others that is not the case, hampering the use of rule-based identification. In some circumstances, such as chemical compounds, rule-based identification should, at first glance, be possible because of standardized nomenclatures. However, in practice these are often not followed, again precluding the use of simple heuristics in product name identification [2].
Domain-agnostic, unsupervised approaches to product name recognition may help alleviate these problems. Such approaches can be deployed in a "cold-start" fashion to analyze online reviews and social media discussions. This might be valuable in itself, as well as provide input for other algorithms, including supervised ones, that can further refine product name identification. Furthermore, being able to perform this task in an unsupervised manner helps scale information extraction [3]. However, domain-agnostic product name identification is complicated by a complete lack of convention for naming products. This holds not only across different domains, or industries, but also, as noted above, within industries and often within companies. For example, in the automotive industry most models of Ford cars are common dictionary words, such as "Ford Fusion", but there are exceptions, e.g., "Ford F-150". By contrast, car manufacturer BMW's models mostly follow a "letter digit" pattern (e.g., "BMW M4").
This study develops an unsupervised algorithm for identifying product names from social media posts on Facebook. A product is understood here as a physical product or a service that is either sold to individual customers directly (such as mobile phones), or that individual customers have direct experience with and can identify (e.g., an aircraft model such as Boeing 747). This research formulates a small set of assumptions based on which the algorithm was developed. Under these assumptions, product name identification consists of two broad stages: (1) generate candidate product names associated with company names using simple patterns, and (2) calculate several measures for each candidate name and perform clustering on these measures within each company to filter out spurious candidate names. This study makes the following contributions:
• Develop an unsupervised, domain-agnostic algorithm for product name identification using a spectrum of unsupervised machine learning methods, including word embeddings and clustering.
• Annotate a subset of product names, discuss variations in product names (which were observed even within individual companies) and evaluate the performance of the algorithm on these annotated names.
Related Work
Product name identification is an instance of a broader task of named entity recognition (NER), which aims to identify specific types of entities in a text [4]. Well-researched NER problems include recognition of proper names, names of organizations and locations [5]. Popular approaches to NER include (a) literal matching of entities within text to a dictionary, also called a gazetteer, (b) specifying a set of rules that describe an entity (e.g., "N<number>", or the letter N followed by a number, such as "N85", to match some Nokia phone models), and (c) trained models, such as conditional random fields (CRF), where a model is trained on pre-labeled data using a set of features such as individual characters within a word, capitalization etc. [2,4,6]. While dictionary-based approaches are easier to implement, not all entities are or can reasonably be codified in a dictionary. Without a dictionary NER becomes a non-trivial problem because of ambiguities associated with text structure, semantics and spelling [4]. These ambiguities are especially pronounced in user-generated content such as social media posts and online forum discussions, which can be particularly noisy [7,8].
Research on identification of product names has been performed in multiple domains, ranging from consumer electronics [6,9,10] to programming languages and libraries [5] to chemical compound names [2]. Extant research has primarily used supervised or semi-supervised methods.
In particular, [6] used all three approaches to NER discussed above on a task that combined NER with normalization (resolving an entity to an entry in a product name catalog), and a method that combined all three achieved accuracy of 0.307. [4] used a rules-based approach to identify several categories of entities, including product names, from web pages and normalize them. Their method achieved F1 score of 0.73 across all entity categories. [5] classified software-related entities from the online community StackOverflow into five categories (such as "programming language", "API") by training a CRF model, which achieved the F1 score of 0.782. [2] used an incomplete dictionary of chemical compound names to automatically generate random names with distributions similar to those in the dictionary, then trained a CRF model with that additionally generated data. Their model achieved F1 scores of 0.882-0.945 depending on the source of data used (patents, web pages etc.).
Entity identification in social media and other usergenerated content is more challenging than in formal texts such as news items, as indicated above. Existing NER CRF models pretrained on news items (e.g., Stanford NER [11]) appear to significantly leverage word capitalization, which is inconsistent in social media content such as Facebook and Twitter posts. To address this challenge, [12] combined K-nearest neighbors classifier with a CRF model and achieved F1 score of 0.725 on product name identification. [9] performed both NER and normalization of mobile phone names, and the NER portion consisted of two steps. First, a list of candidate product names was generated, which included some noise, or spurious items that were not in fact product names. At the second step a twoclass CRF model was trained to differentiate true product names from the noise. That method achieved F1 score of 0.85. [3] also used a two-stage approach that first generated candidate names, followed by selection of product names based on the probability of term co-occurrences, achieving the F1 score of 0.86 on a dataset of pharmaceutical products. [7] used Freebase dictionaries to generate distributions of different entities over entity types (including products), then used topic modeling (specifically, LabeledLDA) to determine the type of an entity. Learning distributions of entities bears similarities to [2], albeit it is different in that while [2] used the learned distributions to generate additional, synthetic product names, [7] used the distributions directly in modeling. Their approach allowed classifying not only entities from the dictionaries, but also entities not encountered in the dictionaries, and achieved F1 score of 0.53 in identifying product names.
While the approaches discussed above have mostly used supervised or semi-supervised approaches (except [3]), unsupervised methods can be an important addition to the landscape of methods for product name identification [2]. They can also help advance related research on products, including attitudes of customers to various products with sentiment analysis, summarization of product reviews, identification of product features and entity normalization to name just a few. Given the proliferation of products themselves, these research areas could be helped by the ability to automatically identify product names in text.
Data
The algorithm was developed using posts from Facebook. Facebook was chosen for several reasons. First, most companies have presence on the platform, and for most consumer-facing companies such presence has turned into a necessity [13,14]. Second, product names to be identified are mentioned both in company's own postings as well as user discussions on company pages, affording a rich environment for identifying product names. Added to this is the wide reach of Facebook, which had 1.94 billion monthly active users as of October 2018 [15]. Finally, using Facebook data complements extant research on product name identification, which tends to focus on product reviews from marketplaces such as Amazon, as well as product forums.
The dataset of posts was created in a stepwise process. Since this research focused on products that companies and consumers can discuss on their pages, 14 consumer-facing industries were identified. A list of the 100 largest companies in each of these industries was collected from Dow Jones's Factiva service (http://global.factiva.com). Because many companies sell products under a brand name different from the company's name, a list of brands operated by each company was obtained from Reuters (http://www.reuters.com/sectors). After collecting Facebook pages for each brand and removing Facebook pages with no posts, 239 Facebook pages were retained in the dataset. For each page, posts both by companies (page owners) and by users on that page were collected. Only English-language posts were retained. Examples of posts are shown in Table 1.
The resulting dataset contained 314,773 posts by companies and 1,427,178 posts by users for these companies. The distribution of the number of posts by page is shown in Fig. 1.
Problem formulation
A. Problem formulation
Given a set $S$ of sentences in social media posts, the task is to identify in $S$ two sets, $C$ and $P$, where $C = \{c_1, c_2, \ldots, c_n\}$ is a set of company names and $P = \{P_{c_1}, P_{c_2}, \ldots, P_{c_n}\}$ is a set of sets of product names, with $P_{c_i} = \{p_{1,c_i}, p_{2,c_i}, \ldots, p_{j,c_i}\}$, $i = 1, \ldots, n$, being the set of $j$ product names for company $c_i$.
The identification of C is performed with a pretrained CRF model (whose accuracy is not assessed here) and only the identification of P is the focus of this research. However, given the hierarchical relationship between C and P, the result is a two-level taxonomy with C as first-order elements and P as second-order elements.
B. Assumptions
The algorithm focuses only on the first word in product names P. Thus, if a product name contained several words, such as "ThinkPad 10", only the first word ("ThinkPad") was retained. While this might be seen as a limitation of the algorithm, many product names only contain one word (e.g., Ford's "F-150"), and for multi-word names this algorithm allows capturing names of product families that can then serve as input for identifying specific instances within these product families.
Three assumptions were formulated to facilitate the generation of candidate names from posts as explained below in the algorithm description. While these assumptions have simplified the development of the algorithm, subsequent improvements of the algorithm may focus on relaxing these assumptions.
Assumption 1. At least some of the mentions of product names in the social media posts will be in the form "$c_m\, p_{n,c_m}$", i.e., a company name followed by a product name, for example: "Ford Explorer".
Assumption 2. The probability of occurrence of a spurious candidate name (false positive) matching the pattern "$c_m\, p_{n,c_m}$", for example: "Microsoft's CEO", is low.
Assumption 3. When part-of-speech (POS) tagging is applied to posts, product names will be labeled as proper nouns in at least some of the cases.
The algorithm produces a two-level taxonomy with company names as first-order elements and product names associated with each company as second-order elements.
Product name identification algorithm
The algorithm for product name identification consists of the following stages.
I. Candidate name generation.
1. POS tagging and company name identification with a pretrained CRF model. Each post was tagged with a part-ofspeech tagger [16] and an off-the-shelf CRF model pretrained on news articles [11]. The latter was used to identify company names. This served as input for the next step where both parts of speech and company names were used.
2. Pattern-based candidate name identification.
In the tagged sentences, the following patterns were searched: "<Organization Name> <Proper Noun>" and "<Organization Name> <Possessive 's> <Proper Noun>". An example of the first pattern is "Microsoft Windows" and of the second, "Apple's iPhone". Whenever such a pattern was encountered, both the company name (tagged <Organization Name>) and the candidate product name (<Proper Noun>) were recorded. This resulted in a set of company names $C$ and candidate product names $P_c$ for each company $c$; the remainder of the process focused on filtering spurious entries from the list of candidate names.
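A minimal sketch of this pattern search over pre-tagged tokens follows; the token triples and the tag names (NNP, POS, ORGANIZATION) mirror Penn Treebank and Stanford NER conventions, and the toy sentence is illustrative:

```python
# tokens: (word, pos, ner) triples produced by the POS tagger and the
# pretrained CRF NER model in step 1 (this toy input is illustrative).
tokens = [("Apple", "NNP", "ORGANIZATION"), ("'s", "POS", "O"),
          ("iPhone", "NNP", "O"), ("is", "VBZ", "O"), ("out", "RB", "O")]

candidates = []
for i, (word, pos, ner) in enumerate(tokens):
    if ner != "ORGANIZATION":
        continue
    # Pattern 1: <Organization> <Proper Noun>, e.g. "Microsoft Windows"
    if i + 1 < len(tokens) and tokens[i + 1][1] == "NNP" and tokens[i + 1][2] == "O":
        candidates.append((word, tokens[i + 1][0]))
    # Pattern 2: <Organization> 's <Proper Noun>, e.g. "Apple's iPhone"
    if (i + 2 < len(tokens) and tokens[i + 1][1] == "POS"
            and tokens[i + 2][1] == "NNP"):
        candidates.append((word, tokens[i + 2][0]))

print(candidates)  # [('Apple', 'iPhone')]
```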
II. Filtering.
3. Removal of misspelled and infrequent entries.
Candidate names that occurred with less than 10% frequency within a given company were removed; upon inspection, many such entries proved to be misspelled or otherwise spurious. Additionally, companies with only one candidate product name were removed from the list, as the subsequent clustering step requires at least two candidate names per company.
4. Generation of word embeddings.
Word embeddings were created from the entire posts dataset using word2vec [17]. Each word in the dataset was lemmatized before embeddings were created, and each word was encoded as a vector of length 100. Word embeddings were used to create one of the metrics in the following step.
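With gensim, this step might look as follows; the window size, min_count and worker settings are assumed defaults, and only the 100-dimensional vector size comes from the paper:

```python
from gensim.models import Word2Vec

# sentences: the lemmatized posts, one list of tokens per sentence
# (a toy corpus stands in for the 1.7M-post dataset).
sentences = [["ford", "explorer", "drive", "great"],
             ["new", "ford", "fiesta", "review"]] * 100

w2v = Word2Vec(sentences, vector_size=100, window=5, min_count=1, workers=4)
vec = w2v.wv["ford"]          # 100-dimensional embedding, as in the paper
```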
5. Metrics calculation.
For each candidate product name, the following metrics were calculated: (a) the cosine similarity between the embedding vectors of the candidate name and the name of the company to which it was allocated; (b) the inverse document frequency (IDF) score over the entire posts dataset; and (c) the term frequency (TF) of the candidate name in other companies' posts. These metrics are used in the clustering step that follows.
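A sketch of these three metrics is shown below. The exact IDF formula is not specified in the paper, so the log(N/(1+df)) variant used here is an assumption, as are the helper names and the toy inputs:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def candidate_metrics(name, company, emb, doc_freq, n_docs, company_counts):
    """Three filtering metrics for a candidate `name` proposed for `company`.
    emb: word -> embedding vector; doc_freq: word -> number of posts
    containing it; company_counts: company -> {candidate: count}."""
    sim = cosine(emb[name.lower()], emb[company.lower()])
    idf = np.log(n_docs / (1 + doc_freq.get(name.lower(), 0)))
    tf_other = sum(cnt.get(name, 0)
                   for c, cnt in company_counts.items() if c != company)
    return sim, idf, tf_other

# Toy example with random vectors standing in for the word2vec embeddings.
rng = np.random.default_rng(0)
emb = {w: rng.normal(size=100) for w in ("ford", "explorer")}
m = candidate_metrics("Explorer", "Ford", emb,
                      doc_freq={"explorer": 1200}, n_docs=1_700_000,
                      company_counts={"Ford": {"Explorer": 80},
                                      "Jeep": {"Explorer": 3}})
```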
6. Clustering on the metrics. By now the process has resulted in a list of companies, candidate product names in these companies and three metrics for each candidate name. At this step spectral clustering [18] was applied to the metrics. Two clusters were used, with the intent that one of these clusters would contain product names to be retained and the other cluster names to be excluded. Product names were first clustered on IDF, and items in the cluster with the higher average IDF score were retained. Words with higher IDF scores, which occur less frequently in the dataset, were deemed to more likely to pertain to product names than words with more frequent occurrences. The remaining names (if two or more remained) were clustered on TF, and items in the cluster with the lower average TF score were retained. This is because words that were more specific to the company under consideration, rather than to other companies, were deemed to pertain to product names. Finally, the remaining names (again, if there were two or more) were clustered on cosine similarity between the candidate product name and company name. Items in the cluster with the higher average score were retained, as it was deemed that words that occur more often in the same context as the name of the company are names of products of that company. Fig. 2 shows the graphic representation of the algorithm.
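The clustering cascade might be implemented as follows; keep_by_cluster and the toy metric values are illustrative, and retaining a cluster by comparing cluster means follows the description above (on real data each cluster typically contains several candidates, whereas this tiny input mainly exercises the code path):

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def keep_by_cluster(names, values, keep_high):
    """Split candidates into two clusters on a single metric and keep the
    cluster whose mean is higher (keep_high=True) or lower (False)."""
    v = np.asarray(values, dtype=float).reshape(-1, 1)
    labels = SpectralClustering(n_clusters=2, random_state=0).fit_predict(v)
    means = [v[labels == k].mean() for k in (0, 1)]
    target = int(np.argmax(means)) if keep_high else int(np.argmin(means))
    return [n for n, l in zip(names, labels) if l == target]

# Toy metrics per candidate: (IDF, TF in other companies, cosine similarity).
metrics = {"Explorer": (5.1, 3, 0.62), "Fiesta": (4.8, 1, 0.57),
           "CEO": (1.2, 40, 0.30), "Detroit": (2.0, 22, 0.25)}

names = list(metrics)
# Rare words across the dataset are more likely to be product names.
names = keep_by_cluster(names, [metrics[n][0] for n in names], keep_high=True)
if len(names) > 1:   # words specific to other companies are unlikely products
    names = keep_by_cluster(names, [metrics[n][1] for n in names], keep_high=False)
if len(names) > 1:   # words used in the company's context are likely products
    names = keep_by_cluster(names, [metrics[n][2] for n in names], keep_high=True)
print(names)
```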
Experimental evaluation
A. Evaluation measures
To assess the accuracy of the identification algorithm, posts of four companies from different industry domains were used for testing: Boeing, Ford, Lenovo and Microsoft (see Table II).
Each company had different conventions for naming products. Looking at just these four companies, it is clear that it would be difficult to create a purely rules-based product name identifier. Even within a single company there are typically variations in product naming, and these variations are substantially more pronounced across companies. Posts from the test companies were tokenized into sentences and uploaded to Amazon's Mechanical Turk (MTurk) service. There, users were asked to identify whether a sentence contained any product names and, if so, to specify the product name. For example, in the sentence "The Ford Fiesta will take your family further" the users labeled "Fiesta" or "Ford Fiesta" as the product name (company names were stripped from MTurk results). Names of products identified by MTurk users were then compared to the names generated by the algorithm for these four companies to calculate precision, recall and F1 scores.
B. Results
1) Cross-domain test cases
The results of product name identification for each of the test companies are shown in Table III. Across all test companies, precision is higher than recall: the algorithm errs on the side of caution, preferring to produce false negatives rather than to misclassify an entity as a product name when in fact it is not. Indeed, in two of the four cases (Boeing and Ford) the number of false negatives is greater than the number of true positives, and no false positives were produced at all. For Lenovo, the number of true positives (40) is still greater than that of false negatives, but the number of false negatives is substantial (30), and there was a small number of false positives. In the case of Microsoft, the number of true positives is only marginally greater than that of false negatives (17 vs. 16, respectively), and there is also a sizable number of false positives (13).
2) Effect of dataset size
To check whether similar results could be achieved with a smaller dataset, a number of tests were run using random subsamples of the dataset of varying sizes. The subsamples ranged from 10% to 100% of the full dataset. The tests also compared the effect of different clustering methods (step 6 of the algorithm) on performance. The results are shown in Fig. 3.
The results indicate that the best performance is achieved with spectral clustering and that dataset size matters. With spectral clustering, using larger datasets improved the F1 score from 0.175 at 10% of the full dataset to 0.563 with the full dataset.
Discussion
This paper developed an algorithm for unsupervised identification of product names from social media posts. The algorithm consisted of two broad stages: generation of candidate product names from the corpus of social media posts and filtering the candidate names to output a list of company names, with each company name being accompanied by product names. Three assumptions were formulated to facilitate the generation of candidate product names, and it was noted that future studies may focus on relaxing these assumptions.
The candidate generation stage used an off-the-shelf CRF model pretrained on news articles to identify company names, which were used in pattern-based identification of candidate product names. Word embeddings were created, and were used at the filtering stage to calculate similarity scores between embedding vectors of each candidate product name and the corresponding company name. IDF and TF scores for each candidate product name were also calculated, and these scores were used in clustering to separate product names from spurious entries.
The algorithm exhibited an emphasis on recall rather than precision, at least in the four test cases. This could be beneficial if the algorithm is used in conjunction with other product name identification methods. For example, it can be used to generate features for other algorithms such as CRF, support vector machines (SVM), as well as provide labeled data for model training.
The balance between precision and recall was not even across the test cases, however. One likely detriment to recall is that some product names were never labeled as proper nouns by the POS tagger and therefore could not match the candidate patterns at step 2 of the algorithm. This was particularly pronounced in the case of Boeing, which had the lowest recall of the four test companies; examples of product names missed by the algorithm include "747" and "F-15". For these products, Assumption 3 did not hold. A potential future improvement for such situations is to add an "OrganizationName Number" pattern to the candidate name identification at step 2, or another pattern that relaxes Assumption 3. As for precision, the lowest score was observed for Microsoft, due to a large number of false positives. This likely resulted from a violation of Assumption 2, whereby the algorithm included in the list of candidate names many entities that were not, in fact, product names. By contrast, the recall scores for the other three test companies suggest that Assumption 1 held.
Dataset size proved to play an important role in improving the algorithm's accuracy. At first glance, this process can be applied to a dataset of any size. However, there are several benefits from having a large dataset. First, it provides a wider range of sentences, and thus a greater chance that product names will be mentioned in the context appropriate for pattern matching at step 2, and be included in candidate names. Second, it helps to train a word embeddings model that is used to determine relationships between candidate product names and company names at step 6. Finally, the performance of spectral clustering proved to increase with dataset size.
The algorithm has several advantages:
• Identified product names are linked with company names in a two-level taxonomy. For example, "Explorer" (a second-order element) is a product name of the "Ford" company (a first-order element).
• Since the algorithm is unsupervised and domain-agnostic, it can be extended to identify products of companies not included in the dataset. All that is needed are Facebook posts discussing the company's products, e.g. from the company's official Facebook page or an informal "fan" Facebook page. In a similar vein, it can also be used to generate updated lists of products as they are released.
• It is not required that the Facebook posts come from company-related pages (official or not) to identify products of that company. Other discussions of the company's products may be used instead (although presumably a company's products are more likely to be discussed on company-related pages).
At the same time, there are several limitations of the algorithm:
• It retains only the first word of the product name. However, for all four test companies the proportion of one-word product names (e.g., "G460") relative to multi-word ones (e.g., "Visual Studio 2010") was over 50%, ranging from 59.3% for Lenovo to 100% for Boeing; in other words, in most cases the first word uniquely identifies a product and adding extra words does not add discriminatory power.
• It focuses only on English-language text. The ability to transfer the algorithm to other languages depends on the availability of off-the-shelf CRF models for company name tagging (which do exist, pretrained for multiple languages), as well as a sufficient number of social media posts discussing the company and its products (as dataset size was shown to affect name identification accuracy).
Future research can focus on probing deeper and addressing the above limitations, as well as extending the algorithm to perform other, related tasks such as normalization, including recognition of acronyms of product names.
Fig. 1. Distribution of the number of posts for the Facebook pages included in the dataset (individual page names on the horizontal axis are not shown).

Fig. 2. Graphical representation of the algorithm: candidate name generation followed by filtering, producing the list of company names $C$ and a list of product names $P_c$ for each $c$ in $C$.

Table 1. Examples of posts by companies and users, with company and product names highlighted.
Post source | Post text
Company | Are you an Amazon[company] Prime[product] member getting your shopping done early?
Company | The unique yellow of the Hyundai[company] Veloster[product] will help you stand out wherever you go.
User | To my great dissatisfaction I found out that the Sony[company] Z3[product] is not allowed to be leased.
User | I'm going to fly soon from Montreal to Frankfurt on board of one of your Airbus[company] A340-300[product].

Table 2. Companies and their product names used for testing; true product names were collected from Amazon's Mechanical Turk analysis of Facebook posts.
[1] C. Nobel, "Clay Christensen's Milkshake Marketing," accessed October 2018.
[2] S. Yan, W. S. Spangler, and Y. Chen, "Chemical name extraction based on automatic training data generation and rich feature set," IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 10, no. 5, pp. 1218-1233, 2013.
[3] O. Etzioni, M. Cafarella, D. Downey, A.-M. Popescu, T. Shaked et al., "Unsupervised named-entity extraction from the web: An experimental study," Artificial Intelligence, vol. 165, no. 1, pp. 91-134, 2005.
[4] R. Ananthanarayanan, V. Chenthamarakshan, P. M. Deshpande, and R. Krishnapuram, "Rule based synonyms for entity extraction from noisy text," in Proceedings of the Second Workshop on Analytics for Noisy Unstructured Text Data, pp. 31-38, 2008.
[5] D. Ye, Z. Xing, C. Y. Foo, Z. Q. Ang, J. Li et al., "Software-specific named entity recognition in software engineering social content," in Proceedings of the 23rd IEEE International Conference on Software Analysis, Evolution, and Reengineering (SANER), pp. 90-101, 2016.
[6] S. Wu, Z. Fang, and J. Tang, "Accurate product name recognition from user generated content," in Proceedings of the 12th IEEE International Conference on Data Mining Workshops (ICDMW), pp. 874-877, 2012.
[7] A. Ritter, S. Clark, and O. Etzioni, "Named entity recognition in tweets: an experimental study," in Proceedings of the Conference on Empirical Methods in Natural Language Processing, pp. 1524-1534, 2011.
[8] N. Pogrebnyakov and E. Maldonado, "Didn't roger that: Social media message complexity and situational awareness of emergency responders," International Journal of Information Management, vol. 40, pp. 166-174, 2018.
[9] Y. Yao and A. Sun, "Mobile phone name extraction from internet forums: a semi-supervised approach," World Wide Web, vol. 19, no. 5, pp. 783-805, 2016.
[10] S. Agrawal, K. Chakrabarti, S. Chaudhuri, and V. Ganti, "Scalable ad-hoc entity extraction from text collections," Proceedings of the VLDB Endowment, vol. 1, no. 1, pp. 945-957, 2008.
[11] J. R. Finkel, T. Grenager, and C. Manning, "Incorporating non-local information into information extraction systems by Gibbs sampling," in Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pp. 363-370, 2005.
[12] X. Liu, S. Zhang, F. Wei, and M. Zhou, "Recognizing named entities in tweets," in Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pp. 359-367, 2011.
[13] S. Iankova, I. Davies, C. Archer-Brown, B. Marder, and A. Yau, "A comparison of social media marketing between B2B, B2C and mixed business models," Industrial Marketing Management, in press.
[14] N. Pogrebnyakov, "A cost-based explanation of gradual, regional internationalization of multinationals on social networking sites," Management International Review, vol. 57, no. 1, pp. 37-64, 2017.
[15] SocialBakers, "All Facebook statistics in one place," accessed October 2018; https://www.socialbakers.com/statistics/facebook/.
[16] S. Bird and E. Loper, "NLTK: the natural language toolkit," in Proceedings of the ACL 2004 on Interactive Poster and Demonstration Sessions, p. 31, 2004.
[17] R. Rehurek and P. Sojka, "Software framework for topic modelling with large corpora," in Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, 2010.
[18] A. Y. Ng, M. I. Jordan, and Y. Weiss, "On spectral clustering: Analysis and an algorithm," in Proceedings of Advances in Neural Information Processing Systems, pp. 849-856, 2002.
| [] |
[
"Learning robust speech representation with an articulatory-regularized variational autoencoder",
"Learning robust speech representation with an articulatory-regularized variational autoencoder"
] | [
"Marc-Antoine Georges marc-antoine.georges@grenoble-inp.fr \nGIPSA-lab\nUniv. Grenoble Alpes\nCNRS\n38000GrenobleFrance\n\nUniv. Grenoble Alpes\nCNRS\nLPNC\n38000GrenobleFrance\n",
"Laurent Girin \nGIPSA-lab\nUniv. Grenoble Alpes\nCNRS\n38000GrenobleFrance\n",
"Jean-Luc Schwartz \nGIPSA-lab\nUniv. Grenoble Alpes\nCNRS\n38000GrenobleFrance\n",
"Thomas Hueber \nGIPSA-lab\nUniv. Grenoble Alpes\nCNRS\n38000GrenobleFrance\n"
] | [
"GIPSA-lab\nUniv. Grenoble Alpes\nCNRS\n38000GrenobleFrance",
"Univ. Grenoble Alpes\nCNRS\nLPNC\n38000GrenobleFrance",
"GIPSA-lab\nUniv. Grenoble Alpes\nCNRS\n38000GrenobleFrance",
"GIPSA-lab\nUniv. Grenoble Alpes\nCNRS\n38000GrenobleFrance",
"GIPSA-lab\nUniv. Grenoble Alpes\nCNRS\n38000GrenobleFrance"
] | [] | It is increasingly considered that human speech perception and production both rely on articulatory representations. In this paper, we investigate whether this type of representation could improve the performances of a deep generative model (here a variational autoencoder) trained to encode and decode acoustic speech features. First we develop an articulatory model able to associate articulatory parameters describing the jaw, tongue, lips and velum configurations with vocal tract shapes and spectral features. Then we incorporate these articulatory parameters into a variational autoencoder applied on spectral features by using a regularization technique that constraints part of the latent space to follow articulatory trajectories. We show that this articulatory constraint improves model training by decreasing time to convergence and reconstruction loss at convergence, and yields better performance in a speech denoising task. | 10.21437/interspeech.2021-1604 | [
"https://arxiv.org/pdf/2104.03204v1.pdf"
] | 233,168,856 | 2104.03204 | b1f5a9a4956ff03c47426f06de31173656a79ed1 |
Learning robust speech representation with an articulatory-regularized variational autoencoder
Marc-Antoine Georges marc-antoine.georges@grenoble-inp.fr
GIPSA-lab
Univ. Grenoble Alpes
CNRS
38000GrenobleFrance
Univ. Grenoble Alpes
CNRS
LPNC
38000GrenobleFrance
Laurent Girin
GIPSA-lab
Univ. Grenoble Alpes
CNRS
38000GrenobleFrance
Jean-Luc Schwartz
GIPSA-lab
Univ. Grenoble Alpes
CNRS
38000GrenobleFrance
Thomas Hueber
GIPSA-lab
Univ. Grenoble Alpes
CNRS
38000GrenobleFrance
Learning robust speech representation with an articulatory-regularized variational autoencoder
Index Terms: speech production, representation learning, variational autoencoder, articulatory model, speech enhancement
It is increasingly considered that human speech perception and production both rely on articulatory representations. In this paper, we investigate whether this type of representation could improve the performances of a deep generative model (here a variational autoencoder) trained to encode and decode acoustic speech features. First we develop an articulatory model able to associate articulatory parameters describing the jaw, tongue, lips and velum configurations with vocal tract shapes and spectral features. Then we incorporate these articulatory parameters into a variational autoencoder applied on spectral features by using a regularization technique that constrains part of the latent space to follow articulatory trajectories. We show that this articulatory constraint improves model training by decreasing time to convergence and reconstruction loss at convergence, and yields better performance in a speech denoising task.
Introduction
Motor and perceptuo-motor theories of speech perception involve internal motor simulation processes [1,2] which may be particularly recruited when perceiving speech in adverse (e.g. noisy) conditions [3]. Similarly, most computational models of speech motor control rely on an explicit access to motor representations, first when recovering the motor commands required to reach an acoustic target (inverse internal model), and then when simulating the acoustic consequences of articulatory gestures (direct internal model) [4,5]. Inspired by human cognition and neurophysiology, the integration of motor/articulatory priors and constraints in automatic speech processing systems has motivated several studies, for increasing the robustness of automatic speech recognition (ASR) systems in noise [6,7,8], for allowing a better control of text-to-speech (TTS) synthesis [9], or for designing voice restoration and pronunciation training systems [10]. Modeling the complex relationships between phonetic targets, articulatory movements and speech acoustics is also essential to build a computational model of speech perception and production [11,12].
With both these technological and fundamental research goals in mind, we deal in this paper with the automatic learning of latent representations from the raw speech audio signal. We focus on deep generative models and in particular on the variational autoencoder (VAE) model [13,14], which can be seen as a probabilistic version of a deep autoencoder. The VAE has been shown to be able to learn relevant latent representations by disentangling dimensions like speaker identity or phonetic features [15,16]. This model has already been successfully used in a variety of speech processing applications, e.g. [17,18,19,20]. In line with speech perception theory, we propose in this paper to introduce prior articulatory information into the representation learning process. This information is first derived from an articulatory model built from in-vivo recordings of a reference speaker. It is then transferred at training time via an additional regularization term in the loss function of the VAE. An overview of the proposed architecture is shown in Figure 1. With this model, we address the two following research questions: 1) Can prior articulatory knowledge speed up the speech representation learning process? 2) Can prior articulatory knowledge make the learned latent representation more robust to noise? To address these questions, we compared the proposed articulatory-regularized VAE to a conventional one in terms of convergence speed at training time and on a speech denoising task.
To the best of our knowledge, the introduction of prior articulatory constraints for representation learning has been proposed in only very few studies. In [21], a set of vocal tract variables derived from the articulatory phonology theory [22] is used to constrain the latent space of a (deterministic) autoencoder. The performance is evaluated by measuring the accuracy of the reconstructed articulatory features, whereas in the present study we focus on the quality of the reconstructed speech signal. In [23], a normalizing flow technique is used to constrain the latent spaces of two autoencoders respectively processing articulatory and audio data. However, the evaluation is mostly qualitative. Therefore, the present study proposes the first VAE model regularized by prior articulatory knowledge and used to learn robust latent representations from the audio speech signal.
Methodology
Acoustic and articulatory data
The following experiments were conducted on two datasets. Each dataset is composed of parallel audio and electromagnetic articulography (EMA) recordings (sustained vowels, vowel-consonant-vowel sequences, words, sentences). The first dataset, PB2007, consists of 1,109 items (15 minutes of speech) produced by a reference speaker (PB, male). EMA data were recorded using the Carstens 2D EMA system (AG200). Six coils were placed on the tongue tip, blade and dorsum, the upper and lower lips, and the jaw (lower incisor). The acquired trajectories were low-pass filtered at 20 Hz and down-sampled from 200 Hz to 100 Hz. We denote by y an individual resulting vector of EMA data, of dimension 12. The second corpus, BY2014,¹ includes 925 items (45 minutes of speech) produced by another reference speaker (BY, male). Articulatory trajectories were recorded using the 3D NDI Wave system with 9 coils (3 on the tongue, 4 on the lips, 1 on the jaw, 1 on the velum). They were low-pass filtered at 20 Hz and down-sampled from 200 Hz to 100 Hz. The 3D coordinates of the 9 EMA coils were finally projected onto the midsagittal plane, resulting in a 14-dimensional vector y.
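As a concrete illustration of this preprocessing step, the following is a minimal sketch assuming SciPy and a zero-phase 4th-order Butterworth filter (the filter order and function name are our assumptions, not stated in the paper):

```python
from scipy.signal import butter, filtfilt

def preprocess_ema(coils, fs_in=200, fs_out=100, cutoff=20.0):
    """Low-pass filter EMA trajectories at `cutoff` Hz and down-sample.
    coils: array of shape (n_frames, n_channels) sampled at fs_in Hz."""
    b, a = butter(4, cutoff / (fs_in / 2), btype="low")  # normalized cutoff
    smoothed = filtfilt(b, a, coils, axis=0)             # zero-phase filtering
    return smoothed[:: fs_in // fs_out]                  # 200 Hz -> 100 Hz
```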
As required by the LPCNet vocoder [24], which is used in this study to reconstruct an audio speech signal from the output of the VAE, each audio recording was converted into a sequence of 18 Bark-scale cepstral coefficients [25], using a 20-ms sliding analysis window with a 10-ms frame shift. The resulting vector of audio features is denoted x.
Building the articulatory model
For each of the two reference speakers, we first built an articulatory model using the general methodology originally proposed by [26], slightly adapted in [27] in order to process EMA articulatory data. In the present study, we consider the six following articulatory parameters (represented in Figure 1): Jaw Height (JH), Tongue Body (TB), Tongue Dorsum (TD), Tongue Tip (TT), Lip Protrusion (LP), Lip Height (LH), Velum (VL, for the BY speaker only). Here "articulatory parameters" means that these parameters are interpretable in terms of articulatory control/function in speech production [28]. The core principle is to extract the latent dimensions of tongue and lip movements after removing the contribution of the jaw, using a so-called "guided PCA". More precisely, the JH parameter (and the corresponding value for all the articulatory observations of the dataset) is defined as the first principal component (PC) of the jaw movement. The contribution of the jaw to the movement of the tongue is then estimated using a linear regression between the JH values and the coordinates of the 3 EMA coils attached to the tongue. Once this contribution is estimated and removed, the parameters TB and TD are defined as the first two PCs of the joint movement of the tongue dorsum and back. A linear regression between TB and TD on one hand and the tongue tip coils on the other hand provides a residual movement of the tongue, freed from the contribution of the jaw, the tongue dorsum and the tongue back. The TT parameter is finally defined as the first PC of this residual movement. A similar procedure is used for extracting the LP and LH lip parameters and the corresponding values for both datasets. For the BY2014 corpus, the VL parameter is simply defined as the first PC of the EMA coils attached to the velum. In summary, at the end of this guided-PCA analysis, we have a linear transformation to go from a vector of EMA parameters y to an articulatory vector a = [JH, TB, TD, TT, LP, LH] (for the PB2007 dataset) or a = [JH, TB, TD, TT, LP, LH, VL] (for the BY2014 dataset), and vice versa. For convenience in the following, we denote by a(x) the articulatory vector corresponding to the cepstral vector x. Because this set of parameters is reduced in size compared to the EMA parameters, the articulatory information is "compressed" into a low-dimensional vector, which is appropriate for our later application of an articulatory constraint on the VAE latent space (which is also expected to be of low dimension).

¹ Available online at http://doi.org/10.5281/zenodo.154083
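To make the guided-PCA procedure concrete, here is a minimal scikit-learn sketch of the tongue-related steps; the function name, array shapes and the use of ordinary least squares are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def guided_pca(jaw, tongue_back_dorsum, tongue_tip):
    """Toy sketch of the guided PCA: jaw is (n, 2), the tongue arrays hold
    the coil coordinates. Returns the JH, TB, TD and TT trajectories."""
    # JH: first principal component of the jaw movement
    jh = PCA(n_components=1).fit_transform(jaw)                  # (n, 1)

    # Remove the jaw contribution from the tongue coils (linear regression)
    reg = LinearRegression().fit(jh, tongue_back_dorsum)
    tongue_res = tongue_back_dorsum - reg.predict(jh)

    # TB, TD: first two PCs of the jaw-free tongue back/dorsum movement
    tb_td = PCA(n_components=2).fit_transform(tongue_res)        # (n, 2)

    # Remove jaw + TB/TD contributions from the tongue tip, then take TT
    preds = np.hstack([jh, tb_td])
    reg_tip = LinearRegression().fit(preds, tongue_tip)
    tip_res = tongue_tip - reg_tip.predict(preds)
    tt = PCA(n_components=1).fit_transform(tip_res)              # (n, 1)

    return jh, tb_td[:, :1], tb_td[:, 1:], tt
```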
The VAE model
The seminal VAE model introduced in [13,14] is defined by:
p_θ(x, z) = p_θ(x|z) p(z),    (1)
where p(z), the prior distribution of the latent vector z, is a multivariate standard Gaussian distribution, p_θ(x|z) is the (conditional) likelihood function of the observed variable x, and the dimension L of z is (possibly much) lower than the dimension F of x. The parameters of p_θ(x|z) are provided by a deep neural network (DNN), called the decoder network, that takes z as input. θ represents the parameters of this decoder network (e.g., the weights and biases of a multi-layer perceptron). In the present work, p_θ(x|z) is a Gaussian distribution with a diagonal covariance matrix. Because the relationship between z and x is highly nonlinear, the posterior distribution p_θ(z|x) is not analytically tractable. It is thus approximated with a parametric variational distribution q_φ(z|x), a.k.a. the inference model, whose parameters are provided by another DNN (called the encoder network, with weights φ and input x). A usual choice, which we follow here, is to set q_φ(z|x) as a Gaussian distribution with a diagonal covariance matrix. The parameters {θ, φ} are then jointly estimated by maximizing a lower bound of the data log-likelihood function, called the Variational Lower Bound (VLB), given by (for one single data vector):
L(φ, θ, x) = E_{q_φ(z|x)}[log p_θ(x|z)] − D_KL(q_φ(z|x) ‖ p(z)),    (2)
and evaluated on a training dataset (D_KL denotes the Kullback-Leibler divergence). The left term of the VLB represents the reconstruction accuracy of the encoding-decoding process and the right term is a regularization term that ensures some degree of "disentanglement" of the latent vector entries [13]. Maximization of the VLB is done by combining stochastic gradient descent with sampling techniques.
The articulatory-regularized VAE (AR-VAE)
In the present work, each vector x is a vector of Bark-scale cepstral coefficients, of dimension 18, extracted from the audio as described in Section 2.1. The dimension of z will be specified later. To force the latent space of the VAE to fit the articulatory space, we added a third term to the above VLB using the same regularization technique as in [29] (itself inspired by [30]):
L(φ, θ, x) = E_{q_φ(z|x)}[log p_θ(x|z)] − D_KL(q_φ(z|x) ‖ p(z)) + α E_{q_φ(z|x)}[R(z, a(x))].    (3)
The new regularization term R(z, a(x)) ensures that for each speech frame the first entries of the latent vector z remain close to the corresponding entries in the vector of articulatory parameters a(x) defined in Section 2.2. In our experiments, we implemented this regularization term with the mean squared error (MSE):
R(z, a(x)) = ‖z_{1:N} − a(x)‖²,    (4)
where z_{1:N} denotes the subvector made of the first N latent values, with N = 6 or 7 depending on the dataset. This term can be interpreted in statistical modeling terms as an additional Gaussian prior on z_{1:N} with mean vector a(x) and an arbitrary fixed variance. In practice, the two expectations in (3) are replaced with estimates based on Monte Carlo sampling of z (using the well-known reparameterization trick, just as in the seminal VAE [13]). Finally, α is a weighting factor controlling the weight of the articulatory regularization term, which will be varied in our experiments.
Implementation
We implemented the proposed articulatory-regularized VAE model with the following architecture: the encoder was composed of 4 fully connected hidden layers (256, 128, 64 and 32 neurons) and the decoder had the same, but reversed, composition. The size of the latent space (i.e. the size of the output layer of the encoder) was 12 when the model was trained on the PB2007 corpus (including 6 articulatory-regularized dimensions), and it was 14 when the model was trained on the BY2014 corpus (including 7 articulatory-regularized dimensions). Note that half of the z entries are articulatory-constrained, hence they are forced to encode the information in cepstral vectors that is strongly correlated with the articulatory parameters, and the other half are left free to encode "everything else" (e.g. speech source information). The hyperbolic tangent activation function was used for each hidden layer. Model training was done using backpropagation with the Adam optimizer, on mini-batches of 32 observations (pairs of x and a(x) vectors). For each experiment, the datasets were randomly partitioned with 80% of the data used for training and the remaining 20% used for testing. The implementation was done using the PyTorch toolkit [31].
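The following PyTorch sketch summarizes the architecture and training objective described above. It is a minimal sketch, not the authors' released code: it assumes a Gaussian decoder with fixed variance (so the reconstruction term reduces to an MSE) and omits data loading and optimization details.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ARVAE(nn.Module):
    """Sketch of the AR-VAE (dims follow the paper: 18 cepstral
    coefficients in, 12-dim latent space for PB2007, 6 of them regularized)."""
    def __init__(self, x_dim=18, z_dim=12, n_art=6):
        super().__init__()
        self.n_art = n_art  # number of articulatory-regularized latent dims
        dims = [x_dim, 256, 128, 64, 32]
        self.enc = nn.ModuleList(nn.Linear(i, o) for i, o in zip(dims, dims[1:]))
        self.mu, self.logvar = nn.Linear(32, z_dim), nn.Linear(32, z_dim)
        rdims = [z_dim, 32, 64, 128, 256]
        self.dec = nn.ModuleList(nn.Linear(i, o) for i, o in zip(rdims, rdims[1:]))
        self.out = nn.Linear(256, x_dim)

    def forward(self, x):
        h = x
        for layer in self.enc:
            h = torch.tanh(layer(h))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparam. trick
        h = z
        for layer in self.dec:
            h = torch.tanh(layer(h))
        return self.out(h), mu, logvar, z

def arvae_loss(x_hat, x, mu, logvar, z, a, alpha=1.0, n_art=6):
    """Negative VLB as a minimized loss, with the articulatory MSE penalty."""
    recon = F.mse_loss(x_hat, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    art = F.mse_loss(z[:, :n_art], a, reduction="mean")  # Eq. (4) on z_{1:N}
    return recon + kl + alpha * art
```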
Experiments
Model learning speed and accuracy
We first tested whether the introduction of articulatory constraints could speed up the training process. To that purpose, for each of the two datasets and for each value of α taken in {0, 0.1, 0.25, 0.5, 1}, we trained 10 AR-VAE models (with a different initialization each time) for 60 epochs (note that an AR-VAE with α = 0 is equivalent to a conventional VAE). At each epoch, we computed the reconstruction error, defined as the MSE between the reconstructed and true cepstral coefficients, on the test set. For each value of α (and for each dataset), we finally built a smooth version of the learning curve by averaging the reconstruction loss on the test set over the 10 models. These learning curves are presented in Figure 2a. First, we observe that almost all AR-VAEs converge faster than the conventional VAEs (i.e. the blue dashed line is almost always above the other lines). Then, as summarized in Figure 2b, the best final performance is obtained with the proposed AR-VAE on both datasets (i.e. with α = 1 for PB2007 and with α = 0.25 for BY2014). Therefore, adding articulatory constraints does improve representation learning, both in terms of convergence speed and accuracy.
Robustness to noise
We then tested the performance of the proposed AR-VAE on a speech denoising task. To that purpose, babble noise was added to each audio speech signal with different signal-to-noise ratios (SNRs) (no noise, 10 dB, 5 dB and 0 dB). Sequences of acoustic feature vectors (i.e. 18 Bark-scale cepstral coefficients) were extracted from the noisy audio signals with the same method as previously. For each dataset, both the VAE and the AR-VAE were trained to reconstruct the non-noisy version of each acoustic feature vector from its noisy counterpart. For the AR-VAE, we first compared different values of the α parameter on this denoising task. For concision, we report here only the results with α = 1, which provided the best performance among the tested values. As in the first experiment, we trained 10 different VAEs/AR-VAEs (with a different initialization each time) and averaged the results over the 10 runs. The reconstruction error on the test dataset is shown in Figure 3. These results show that the proposed AR-VAE outperforms the conventional VAE on the denoising task for all considered SNRs. The performance difference is more pronounced for lower noise levels.
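A minimal sketch of the noise-mixing step, assuming single-channel waveforms and a babble-noise recording at least as long as the speech (the function name is illustrative):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Add (babble) noise to a speech waveform at a target SNR in dB."""
    noise = noise[: len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Scale the noise so that 10 * log10(p_speech / p_noise_scaled) = snr_db
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + gain * noise
```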
Since the absolute value of the reconstruction loss (which is in an arbitrary unit) is difficult to interpret, we then conducted further evaluations to assess the quality of the denoised speech. First, we evaluated its phonetic content. To that purpose, for each sentence of the test set, and for each considered SNR, we re-synthesized a speech signal using the LPCNet neural vocoder fed by the cepstral coefficients reconstructed by the VAE/AR-VAE, together with the original pitch parameters (period and correlation, extracted from the clean speech sound).² The resulting speech signals were sent to a Hidden Markov Model (HMM) based phonetic decoder, trained on the original (clean) speech signals of the training set (left-to-right, 3-emitting-state, context-independent HMM-GMM, trained using the HTK toolkit and a standard procedure, no language model used). The decoding accuracies (which take into account insertion and deletion errors) are presented in Figure 4. Again, the AR-VAE outperforms the VAE, by a large margin (up to 10%) when processing clean speech, and by a smaller one when processing noisy speech.

Figure 4: Accuracy (mean ± standard deviation) of an HMM-based phonetic decoder when processing speech signals denoised by the conventional VAE and the proposed AR-VAE (with α = 1).

Figure 5: MUSHRA scores obtained for each level of noise and for the conventional VAE (α = 0) and the proposed AR-VAE (with α = 1). For the sake of clarity, we omit the anchor and reference scores. *** and * denote significant differences (p < 0.001 and p < 0.05), and NS denotes non-significant differences.

Finally, we assessed the perceptual quality of the reconstructed speech signal using a MUSHRA test during which participants had to rank a set of audio stimuli by their similarity with a reference sound on a scale from 0 to 100 [32]. We first randomly selected 20 short sentences from the BY2014 corpus (preferred to the PB2007 corpus because of the presence of data on the velum). For each sentence, we generated 6 audio stimuli: a low anchor built by adding babble noise to the original audio speech signal with an SNR of 0 dB and re-synthesized (i.e. vocoded) with LPCNet; a hidden reference built by re-synthesizing the original signal with LPCNet; a third stimulus built by first encoding-decoding the original signal either with the conventional VAE or with the proposed AR-VAE and then synthesizing an audio signal with LPCNet; and four other stimuli built following the same principle but after having first added noise to the original signal with three levels of SNR (10 dB, 5 dB and 0 dB). We recruited 23 native French speakers online via the Prolific Academic platform [33]. Results are reported in Figure 5. To assess the statistical significance of the differences between the MUSHRA scores, we first conducted a Kruskal-Wallis rank-sum test, which showed a significant effect of the SNR factor (p < 0.05). A post-hoc Dunn test then validated a statistically significant increase of performance from VAE to AR-VAE for clean audio (p < 0.001) and for noisy audio with SNR = 10 dB, and showed no significant difference between the two models for SNR = 5 dB and SNR = 0 dB (i.e. very noisy inputs).

² Several sound examples are available at https://georges.ma/p/ar-vae
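This statistical analysis can be reproduced along the following lines, assuming the ratings are collected in a pandas DataFrame and using the scikit-posthocs package for the Dunn test (the dummy data below is purely illustrative):

```python
import numpy as np
import pandas as pd
import scipy.stats as stats
import scikit_posthocs as sp

rng = np.random.default_rng(0)
# Dummy MUSHRA scores for two conditions (23 raters x 20 sentences each)
scores = pd.DataFrame({
    "score": np.concatenate([rng.normal(70, 10, 460), rng.normal(75, 10, 460)]),
    "condition": ["VAE"] * 460 + ["AR-VAE"] * 460,
})
groups = [g["score"].to_numpy() for _, g in scores.groupby("condition")]
h, p = stats.kruskal(*groups)          # omnibus rank-sum test across conditions
dunn = sp.posthoc_dunn(scores, val_col="score", group_col="condition")
```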
Discussion and perspectives
The two experiments reported in Section 3 suggest that articulatory constraints improve the learning of speech representations in a VAE. This study opens interesting perspectives for assessing the combined role of articulatory knowledge and auditory processes in the elaboration of internal representations of speech signals. While the efficiency of such an articulatory-constrained VAE appears clearly in terms of spectrum reconstruction, it remains to explore how the corresponding representations are structured and whether they are able to integrate the concept of articulatory/motor invariance [34] into phonetic representations. We will particularly focus on the way articulatory information in the present VAE could improve the representation of the plosive place of articulation, known to depend on the availability of articulatory/motor information [35].
Another question concerns the way articulatory regularization provided by the articulatory data available for a given speaker could enable this speaker to better process the speech utterances of other speakers. For this aim, we will explore how the regularized VAE performs in a denoising experiment involving multiple speakers. We will also study other VAE architectures in which articulatory-acoustic data for the speaking agent and acoustic-only data for other agents could be learnt together in articulatory-constrained VAE variants.
Finally, the precise developmental schedule along which a child develops her production and perception skills and learns the sounds of her language and the corresponding articulatory trajectories will shed interesting light on some algorithmic choices that should be made in further developments of such articulatory/motor constrained acoustic VAEs.
Conclusion
In this paper, we show how articulatory knowledge can help construct internal representations of auditory speech stimuli, by applying an articulatory regularization to VAEs encoding speech features, enforcing them to adopt mixed articulatory-acoustic representations in their latent space. The additional term in the training loss function, enforcing part of the latent space to follow articulatory parameters, appears to improve learning efficiency, both in terms of learning speed and accuracy, and also improves reconstruction performance in a denoising VAE. A number of perspectives are provided by this new type of joint articulatory-acoustic learning process.
Acknowledgement
This work has been partially supported by MIAI @ Grenoble Alpes (ANR-19-P3IA-0003). The authors would like to thank Pierre Badin and Julien Diard for fruitful discussions.
Figure 1: Schematic view of the proposed articulatory-regularized variational autoencoder.
Figure 2: (a) Evolution of the reconstruction loss on the test set during training. For better visualization, each learning curve is fitted by an exponentially decreasing function. (b) Final performance after convergence at epoch 40 on the test set.
Figure 3: Reconstruction loss (mean ± standard deviation) of the conventional VAE and the proposed AR-VAE on the test set for the speech denoising task.
[1] A. M. Liberman and I. G. Mattingly, "The motor theory of speech perception revised," Cognition, vol. 21, no. 1, pp. 1-36, 1985.
[2] J.-L. Schwartz, A. Basirat, L. Ménard, and M. Sato, "The Perception-for-Action-Control Theory (PACT): A perceptuo-motor theory of speech perception," Journal of Neurolinguistics, vol. 25, no. 5, pp. 336-354, 2012.
[3] J. I. Skipper, J. T. Devlin, and D. R. Lametti, "The hearing ear is always found close to the speaking tongue: Review of the role of the motor system in speech perception," Brain and Language, vol. 164, pp. 77-105, 2017.
[4] J. F. Houde and S. S. Nagarajan, "Speech production as state feedback control," Frontiers in Human Neuroscience, vol. 5, 2011.
[5] F. H. Guenther and T. Vladusich, "A neural theory of speech acquisition and production," Journal of Neurolinguistics, vol. 25, no. 5, pp. 408-422, 2012.
[6] R. C. Rose, J. Schroeter, and M. M. Sondhi, "The potential role of speech production models in automatic speech recognition," The Journal of the Acoustical Society of America, vol. 99, no. 3, pp. 1699-1709, 1996.
[7] C. Castellini, L. Badino, G. Metta, G. Sandini, M. Tavella, M. Grimaldi, and L. Fadiga, "The use of phonetic motor invariants can improve automatic phoneme discrimination," PLoS ONE, vol. 6, no. 9, p. e24055, 2011.
[8] S. King, J. Frankel, K. Livescu, E. McDermott, K. Richmond, and M. Wester, "Speech production knowledge in automatic speech recognition," The Journal of the Acoustical Society of America, vol. 121, no. 2, pp. 723-742, 2007.
[9] Z.-H. Ling, K. Richmond, J. Yamagishi, and R.-H. Wang, "Integrating articulatory features into HMM-based parametric speech synthesis," IEEE Transactions on Audio, Speech, and Language Processing, vol. 17, no. 6, pp. 1171-1185, 2009.
[10] T. Schultz, M. Wand, T. Hueber, D. J. Krusienski, C. Herff, and J. S. Brumberg, "Biosignal-based spoken communication: A survey," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 25, no. 12, pp. 2257-2271, 2017.
[11] R. Laurent, M.-L. Barnaud, J.-L. Schwartz, P. Bessière, and J. Diard, "The complementary roles of auditory and motor information evaluated in a Bayesian perceptuo-motor model of speech perception," Psychological Review, vol. 124, no. 5, pp. 572-602, 2017.
[12] T. Hueber, E. Tatulli, L. Girin, and J.-L. Schwartz, "Evaluating the potential gain of auditory and audiovisual speech predictive coding using deep learning," Neural Computation, pp. 596-625, 2019.
[13] D. P. Kingma and M. Welling, "Auto-encoding variational Bayes," in Proc. of ICLR, Banff, Canada, 2014.
[14] D. J. Rezende, S. Mohamed, and D. Wierstra, "Stochastic backpropagation and approximate inference in deep generative models," in Proc. of ICML, Beijing, China, 2014.
[15] M. Blaauw and J. Bonada, "Modeling and transforming speech using variational autoencoders," in Proc. of Interspeech, San Francisco, CA, 2016.
[16] W. N. Hsu, Y. Zhang, and J. Glass, "Learning latent representations for speech generation and transformation," in Proc. of Interspeech, Stockholm, Sweden, 2017, pp. 1273-1277.
[17] C. Hsu, H. Hwang, Y. Wu, Y. Tsao, and H. Wang, "Voice conversion from non-parallel corpora using variational auto-encoder," in Proc. of APSIPA, Jeju, Korea, 2016.
[18] Y. Bando, M. Mimura, K. Itoyama, K. Yoshii, and T. Kawahara, "Statistical speech enhancement based on probabilistic integration of variational autoencoder and non-negative matrix factorization," in Proc. of ICASSP, Calgary, Canada, 2018.
[19] K. Akuzawa, Y. Iwasawa, and Y. Matsuo, "Expressive speech synthesis via modeling expressions with variational autoencoder," in Proc. of Interspeech, Hyderabad, India, 2018.
[20] S. Leglaive, L. Girin, and R. Horaud, "Semi-supervised multichannel speech enhancement with variational autoencoders and non-negative matrix factorization," in Proc. of ICASSP, Brighton, UK, 2019.
[21] R. Turrisi, R. Tavarone, and L. Badino, "Improving generalization of vocal tract feature reconstruction: From augmented acoustic inversion to articulatory feature reconstruction without articulatory data," in Proc. of IEEE Spoken Language Technology Workshop, Athens, Greece, 2018, pp. 159-166.
[22] J. J. Ohala, C. P. Browman, and L. M. Goldstein, "Towards an articulatory phonology," Phonology Yearbook, vol. 3, pp. 219-252, 1986.
[23] P. Saha and S. Fels, "Learning joint articulatory-acoustic representations with normalizing flows," in Proc. of Interspeech, 2020, pp. 3196-3200.
[24] J.-M. Valin and J. Skoglund, "LPCNet: Improving neural speech synthesis through linear prediction," in Proc. of ICASSP, Brighton, UK, 2019, pp. 5891-5895.
[25] M. Schroeder, B. Atal, and J. Hall, "Objective measure of certain speech signal degradations based on masking properties of human auditory perception," in Frontiers of Speech Communication Research, L. Björn, S. Öhman, and G. Fant, Eds. Academic Press, 1979, pp. 217-229.
[26] S. Maeda, "Compensatory articulation during speech: Evidence from the analysis and synthesis of vocal-tract shapes using an articulatory model," Speech Production and Speech Modelling, pp. 131-149, 1990.
[27] A. Serrurier, P. Badin, A. Barney, L.-J. Boë, and C. Savariaux, "The tongue in speech and feeding: Comparative articulatory modelling," Journal of Phonetics, vol. 40, no. 6, pp. 745-763, 2012.
[28] S. Maeda and K. Honda, "From EMG to formant patterns of vowels: The implication of vowel spaces," Phonetica, vol. 51, no. 1-3, pp. 17-29, 1994.
[29] F. Roche, T. Hueber, M. Garnier, S. Limier, and L. Girin, "Make that sound more metallic: Towards a perceptually relevant control of the timbre of synthesizer sounds using a variational autoencoder," Transactions of the International Society for Music Information Retrieval, accepted, pending publication, 2021.
[30] P. Esling, A. Chemla-Romeu-Santos, and A. Bitton, "Generative timbre spaces with variational audio synthesis," in Proc. of DAFx, Aveiro, Portugal, 2018.
[31] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, "PyTorch: An imperative style, high-performance deep learning library," in Proc. of NeurIPS, Vancouver, Canada, 2019, pp. 8024-8035.
[32] ITU, "Method for the subjective assessment of intermediate quality level of audio systems," International Telecommunication Union, Tech. Rep. ITU-R BS.1534-3, October 2015.
[33] S. Palan and C. Schitter, "Prolific.ac - a subject pool for online experiments," Journal of Behavioral and Experimental Finance, vol. 17, pp. 22-27, 2018.
[34] A. Liberman, F. Cooper, D. Shankweiler, and M. Studdert-Kennedy, "Perception of the speech code," Psychological Review, vol. 74, no. 6, pp. 431-461, 1967.
[35] M.-L. Barnaud, J. Diard, P. Bessière, and J.-L. Schwartz, "COSMO SylPhon: A Bayesian perceptuo-motor model to assess phonological learning," in Proc. of Interspeech, Hyderabad, India, 2018, pp. 3786-3790.
| [] |
[
"SEPP: Similarity Estimation of Predicted Probabilities for Defending and Detecting Adversarial Text",
"SEPP: Similarity Estimation of Predicted Probabilities for Defending and Detecting Adversarial Text"
] | [
"Hoang-Quoc Nguyen-Son \nKDDI Research\nInc. 2-1-15 Ohara356-8502FujiminoSaitamaJapan\n",
"Seira Hidano se-hidano@kddi-research.jp \nKDDI Research\nInc. 2-1-15 Ohara356-8502FujiminoSaitamaJapan\n",
"Kazuhide Fukushima ka-fukushima@kddi-research.jp \nKDDI Research\nInc. 2-1-15 Ohara356-8502FujiminoSaitamaJapan\n",
"Shinsaku Kiyomoto kiyomoto@kddi-research.jp \nKDDI Research\nInc. 2-1-15 Ohara356-8502FujiminoSaitamaJapan\n"
] | [
"KDDI Research\nInc. 2-1-15 Ohara356-8502FujiminoSaitamaJapan",
"KDDI Research\nInc. 2-1-15 Ohara356-8502FujiminoSaitamaJapan",
"KDDI Research\nInc. 2-1-15 Ohara356-8502FujiminoSaitamaJapan",
"KDDI Research\nInc. 2-1-15 Ohara356-8502FujiminoSaitamaJapan"
] | [] | There are two cases describing how a classifier processes input text, namely, misclassification and correct classification. Misclassified texts include both texts with naturally incorrect predictions and adversarial texts, which are generated to fool the classifier, called the victim. Both types are misunderstood by the victim, but they can still be recognized by other classifiers. This induces large gaps in predicted probabilities between the victim and the other classifiers. In contrast, text correctly classified by the victim is often successfully predicted by the others and induces small gaps. In this paper, we propose an ensemble model based on similarity estimation of predicted probabilities (SEPP) that exploits the large gaps in misclassified predictions in contrast to the small gaps in correct classification. SEPP then corrects the incorrect predictions of the misclassified texts. We demonstrate the resilience of SEPP in defending against and detecting adversarial texts through different types of victim classifiers, classification tasks, and adversarial attacks. | null | [
"https://arxiv.org/pdf/2110.05748v2.pdf"
] | 238,634,220 | 2110.05748 | ec7cfed05a607c7201e51e66f11724a14398f304 |
SEPP: Similarity Estimation of Predicted Probabilities for Defending and Detecting Adversarial Text
Hoang-Quoc Nguyen-Son
KDDI Research
Inc. 2-1-15 Ohara356-8502FujiminoSaitamaJapan
Seira Hidano se-hidano@kddi-research.jp
KDDI Research
Inc. 2-1-15 Ohara356-8502FujiminoSaitamaJapan
Kazuhide Fukushima ka-fukushima@kddi-research.jp
KDDI Research
Inc. 2-1-15 Ohara356-8502FujiminoSaitamaJapan
Shinsaku Kiyomoto kiyomoto@kddi-research.jp
KDDI Research
Inc. 2-1-15 Ohara356-8502FujiminoSaitamaJapan
SEPP: Similarity Estimation of Predicted Probabilities for Defending and Detecting Adversarial Text
There are two cases describing how a classifier processes input text, namely, misclassification and correct classification. Misclassified texts include both texts with naturally incorrect predictions and adversarial texts, which are generated to fool the classifier, called the victim. Both types are misunderstood by the victim, but they can still be recognized by other classifiers. This induces large gaps in predicted probabilities between the victim and the other classifiers. In contrast, text correctly classified by the victim is often successfully predicted by the others and induces small gaps. In this paper, we propose an ensemble model based on similarity estimation of predicted probabilities (SEPP) that exploits the large gaps in misclassified predictions in contrast to the small gaps in correct classification. SEPP then corrects the incorrect predictions of the misclassified texts. We demonstrate the resilience of SEPP in defending against and detecting adversarial texts through different types of victim classifiers, classification tasks, and adversarial attacks.
Introduction
Recent deep learning models have reached the human level in many NLP tasks. However, these models are sensitive to changes in the input data. An adversarial text can be generated from an original text so that the original meaning is preserved and human recognition is bypassed. However, adversarial text can fool many victims, such as sentiment analysis (Ren et al., 2019), question answering (Jia and Liang, 2017), and search engines (Gil et al., 2019).
Popular adversarial text defenders are based on adversarial training (Shrivastava et al., 2017; Tramèr et al., 2018) or modification detection (Pruthi et al., 2019). N-gram (Juuti et al., 2018) and text similarity (Nguyen-Son et al., 2019) approaches address the adversarial text detection problem. However, recent generators can generate adversarial text via a very small change from the original by replacing a few words (Ren et al., 2019), a few characters (Gao et al., 2018; Jones et al., 2020), or both. The high duplication in word usage between the original and adversarial texts confuses both the existing defenders and detectors.
Motivation
Text correctly classified by a classifier often induces only small gaps in predicted probabilities relative to other classifiers. An adversary can fool a victim classifier's predictions by generating misclassified text, but it does not fool other classifiers. For instance, we randomly choose a correctly classified text t_1 and its adversarial text t_2 targeting a CNN classifier (Figure 1). The predictions are made with popular deep learning models including CNN (Kim, 2014), BiLSTM, BERT-large (Devlin et al., 2019), RoBERTa-large (Liu et al., 2019), and XLNet-large (Yang et al., 2019). Each prediction is indicated by a pair of positive and negative probabilities. The original text t_1 is negative, so the CNN and the other models correctly predict the text with higher negative than positive values. The adversarial text t_2 changes two words, "script, ace", into their synonyms "hand, genius" using Ren et al. (2019)'s work. The generated text reduces the negative probability of the victim classifier to less than 0.5. However, using the synonyms does not change the overall meaning, so the other models mostly retain their negative predictions. We randomly select a negative text t_3, which is misclassified by the CNN victim, and observe that t_3 has the same characteristic as t_2. In particular, t_3 is predicted as positive by the victim, while the other classifiers still predict it as negative. Based on the gaps in prediction probabilities among the classifiers, we can distinguish correctly classified text from misclassified text.
Contributions
In this paper, we propose an ensemble model based on similarity estimation of predicted probabilities (SEPP) to defend against adversarial texts. Unlike a basic ensemble model, which directly votes on predictions from multiple classifiers, SEPP estimates the similarity in prediction probabilities among the classifiers. The similarity is used to identify the victim classifier and misclassified texts. The probabilities of misclassified texts are corrected by using predictions from other classifiers. We use the same technique to detect adversarial texts.
We conducted experiments with adversarial texts generated by the probability weighted word saliency generator (Ren et al., 2019) that fool a CNN-based sentiment analysis classifier. SEPP recovers the prediction accuracy from 22.9% to 94.0% on an adversarial dataset while keeping 96.6% on the clean dataset. This is better than the 89.6% and 92.6% achieved by adversarial training and ensemble baselines, respectively. Moreover, we detect the adversarial texts at a rate of 96.3%, which outperforms existing work, neural baselines, and ensemble baselines. Other experiments on BiLSTM and BERT yield similar results. SEPP also works well on multiple-class classification tasks and other adversarial attacks. In summary, our contributions are as follows:
• We determined that predictions of various classifiers for misclassified text differ from those of correctly classified text.
• We proposed an ensemble model using similarity estimation of predicted probabilities (SEPP) to detect a victim classifier and misclassified texts. We leveraged this detection to recover the prediction of the victim.
• We reused SEPP to distinguish adversarial text from the original text.
• We evaluated various adversarial texts, which fooled CNN, BiLSTM, and BERT classifiers on binary- and multiple-class classification tasks. The results indicate that SEPP outperforms other existing methods.
Roadmap
The rest of this paper is organized as follows. Section 2 describes related work on adversarial text generation, detection, and defense. Section 3 introduces the SEPP system. The experimental results are shown and analyzed in Section 4. Section 5 summarizes the main points and mentions future work.
2 Related Work
Adversarial Text Generation
Adversarial text generation can be categorized by the extent of the generation:
Paragraph
Juuti et al. (2018) trained a neural model on human-written reviews and generated adversarial texts by topic. Jia and Liang (2017) added a noise sentence to an original paragraph to change a correct result of a question answering system. Wang et al. (2020) changed product categories of a review while keeping the sentiment but fooling a sentiment analysis classifier.
Sentence
Iyyer et al. (2018) generated an adversarial sentence with the desired syntax. They used back-translation to create paraphrased sentence pairs with different syntax. They then designed an attention network to convert a sentence into a paraphrase with the target syntax. Ren et al. (2020) combined a VAE and a GAN to generate large-scale adversarial sentences for a limited training dataset. Han et al. (2020) generated text using an RNN targeting structured prediction models such as dependency parsers or POS taggers.
Phrase
Ribeiro et al. (2018) compiled paraphrased pairs at the phrase level. They then suggested a rule to replace individual phrases in an original text with corresponding phrases in the paraphrased pairs. Liang et al. (2018) inserted or deleted consecutive hot words that affected the predictions of classifiers. Wallace et al. (2019) added a fixed phrase at the beginning of any sentence and optimized it by the gradient of a victim system. They claim that a phrase "zoning tapping fiences" reduces the victim's accuracy from 86.2% to 29.1% on positive samples.
Word
Adversarial text can be created by using various word operations (insertion, deletion, and replacement) to fool AI systems with both white-box and black-box attacks. As an example of a white-box attack, Ebrahimi et al. (2018) operated on hot words that induce a high gradient change in the system. As an example of a black-box attack, Liang et al. (2018) examined occluded words and observed the prediction change. Garg and Ramakrishnan (2020) masked candidate words and chose the top ones predicted by a BERT model. Li et al. (2020) extended this idea to sub-words. Zhang et al. (2019) improved the fluency of word replacement by performing Metropolis-Hastings sampling. The chance of replacement is improved by using a genetic algorithm (Alzantot et al., 2018), particle swarm optimization (Zang et al., 2020), or boundary optimization (Meng and Wattenhofer, 2020). Ren et al. (2019) upgraded the text fluency with synonymous words in WordNet and similar named entities.
Character
Many of the word-based approaches can be applied directly to characters (Liang et al., 2018; Ebrahimi et al., 2018). Moreover, Zhou et al. (2019) recovered the character replacement in an adversarial text. Gil et al. (2019) suggested a method based on a character operator targeting Google search scores. Pruthi et al. (2019), Jones et al. (2020), and Li et al. (2019) manipulated the middle characters of an individual word to preserve the text fluency.
Analysis: The paragraph approach generates flexible adversarial texts. The generation of large hard-to-read text makes it easily recognizable by the N-gram model and readability metrics (Juuti et al., 2018). The sentence approaches preserve the text meaning, but they induce significant changes in text complexity (Nguyen-Son et al., 2019). In the phrase approach, the rules become fragile when we gather sufficient paraphrased pairs. The insertion and deletion of hot phrases into original text induces nonfluent text. The operators on characters introduce misspellings. With the word operators, while insertion and deletion also lead to nonfluent text, replacement produces fluent text. Among these replacements, the WordNet-based approach (Ren et al., 2019) preserves the original meaning more than the other replacements, which are based on word embeddings (Li et al., 2020; Zang et al., 2020). Moreover, this replacement works well on many tasks (binary- or multiple-class classification) and is chosen to conduct the main experiments in this paper.
Adversarial Text Defense
The most popular approach in the defense against adversarial text is adversarial training (Shrivastava et al., 2017), which was previously used in image processing. The adversarial texts are added to the training data before the classifier is retrained. Another approach estimated the similarity between original and adversarial texts on training data. The upper and lower bounds of adversarial data were also approximated (Ye et al., 2020; Huang et al., 2019; Jia et al., 2019) to alleviate such texts. Other defenses identified changes in adversarial texts from their origins at the character level (Jones et al., 2020; Pruthi et al., 2019) or word level. The main drawback of these approaches is that they need to retrain the classifier; thus, they are sensitive to a new kind of adversarial text.
Adversarial Text Detection
Original text is generally more fluent than adversarial text. Existing methods estimate the fluency based on the N-gram model. Juuti et al. (2018) extracted N-gram features based on a variety of text components, including words, parts of speech, and syntactic dependencies. They also measured the text readability using thirteen relative metrics. Our previous work (Nguyen-Son et al., 2019) extracted word N-gram features from both internal information from a training corpus and external information from a website corpus.¹ Text coherence was measured by matching similar words and combining them with the N-gram features. Powerful deep learning models (e.g., BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), and XLNet (Yang et al., 2019)) can be used as reputable detectors since they have proven their performance in most major classification tasks.

¹ https://catalog.ldc.upenn.edu/LDC2006T13
Existing methods extract the difference in word usage between original and adversarial texts. However, recent adversarial texts introduce only minimal changes from the original texts and thus confuse all text-based methods.
Similarity Estimation of Predicted Probabilities
We propose an ensemble model based on similarity estimation of predicted probabilities (SEPP) for defending against adversarial text, as shown in Figure 2.
Training phase
The objective of the training phase is to create two kinds of discriminators. A discriminator Ω_k detects misclassified texts for a victim classifier Υ_k. Another discriminator Ψ detects the victim among candidate classifiers.
Training Misclassification Discriminator Ω k
We describe the training of a misclassification discriminator Ω_k for a victim classifier Υ_k in the following steps. The other misclassification discriminators are trained in the same manner.
• Preparing training texts: We run a victim classifier Υ k to divide clean texts T k into misclassified texts M k and correctly classified texts C k . Adversarial texts A k are then generated from C k by using an existing generator and are added to M k . Each text t in M k and C k is used to extract features for training Ω k (Algorithm 1).
• Measuring similarities: The probability ŷ^c of the predicted class c in Υ_k is compared with the corresponding probabilities in the other classifiers Γ_i. In particular, the similarity is the Manhattan distance between ŷ^c and ŷ_i^c (line 7 of Algorithm 1).
• Counting different predictions: We count the predicted classes of the other classifiers Γ_i that differ from the predicted class c of the victim, which we call the different-prediction count θ (line 9).
• Training the misclassification discriminator: All similarities Λ and the count θ are input into a feedforward neural network to train Ω_k.
In Figure 1, t_1, CNN, and the other classifiers can be used as t, Υ_k, and Γ_i, respectively. t_1 is run on these classifiers to obtain (positive, negative) probabilities ŷ = (0.13, 0.87), ŷ_1 = (0.05, 0.95), etc. The similarities are calculated as Λ = (|0.87 − 0.95| = 0.08, |0.87 − 0.79| = 0.08, ...). All classifiers predict t_1 as negative; therefore, θ = 0. With the small values in Λ and θ, t_1 is most likely to be determined as correctly classified text. With adversarial text t_2, the misclassified text should be detected with large values: Λ = (0.57, ...), θ = 4. Similarly, t_3 should be considered misclassified text with Λ = (0.56, ...), θ = 4.
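A minimal sketch of this feature extraction (Algorithm 1); `predict_proba` is an assumed classifier interface, not part of the paper:

```python
import numpy as np

def extract_features(text, victim, others):
    """Similarity (Λ) and different-prediction count (θ) features for one text.
    Each classifier is assumed to return a class-probability vector."""
    y_hat = victim.predict_proba(text)          # e.g. (0.58, 0.42) for t_2
    c = int(np.argmax(y_hat))                   # class predicted by the victim
    similarities, theta = [], 0
    for clf in others:
        y_i = clf.predict_proba(text)
        similarities.append(abs(y_hat[c] - y_i[c]))  # Manhattan distance on c
        if int(np.argmax(y_i)) != c:                 # different prediction
            theta += 1
    return np.array(similarities + [theta])
```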
Training Victim Discriminator Ψ
We use all misclassified texts to train a victim discriminator Ψ. For each text, individual features are extracted from each candidate victim classifier in the same manner as above. The individual features are concatenated in order and input into another feedforward neural network to train Ψ. When we use t_2 as the input text, individual features (0.57, 0.46, ..., 4) and (0.57, 0.11, ..., 1), etc., are extracted with Υ_1, Υ_2, etc. The concatenated features (0.57, 0.46, ..., 4, 0.57, 0.11, ..., 1, ...) contain high values in the first individual features, so the first classifier should be identified.
Testing phase
A testing sample s of adversarial or original text is run with Ψ to determine the victim Υ_v. Then, the corresponding discriminator Ω_v determines whether s is correctly classified or misclassified. If Ω_v determines s to be a correctly classified sample, we retain the original prediction of Υ_v as the final defense probability. Otherwise, the defense probability is calculated by

ŷ_d = (1/n) Σ_{i=1}^{n} ŷ_i,

where ŷ_i is the probability from the other classifier Γ_i and n is the total number of the other classifiers. For example, if Ψ identifies the victim Υ_1 of adversarial text t_2, Ω_1 detects t_2 as misclassified text.
The prediction of t_2 is updated from positive with ŷ = (0.58, 0.42) to negative with

ŷ_d = ((0.11 + 0.22 + ···)/4, (0.89 + 0.78 + ···)/4) = (0.22, 0.78).

A similar flow is applied to the misclassified text t_3. In the case of the correctly classified text t_1, because this kind of text is already learned by all misclassification discriminators, the text should be identified as correctly classified whatever victim is detected by Ψ.
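Putting the testing phase together, a sketch of the full defense could look as follows; `concat_features`, the discriminator interfaces, and the reuse of `extract_features` from the previous sketch are illustrative assumptions:

```python
import numpy as np

def defend(text, classifiers, victim_disc, misclf_discs):
    """SEPP testing phase: identify the victim, test for misclassification,
    and correct the prediction by averaging the other classifiers."""
    # Victim detection from the concatenated per-classifier features
    v = victim_disc.predict(concat_features(text, classifiers))
    victim = classifiers[v]
    others = [c for i, c in enumerate(classifiers) if i != v]
    feats = extract_features(text, victim, others)  # from the sketch above
    if misclf_discs[v].predict(feats) == "correct":
        return victim.predict_proba(text)   # keep the victim's prediction
    probs = [c.predict_proba(text) for c in others]
    return np.mean(probs, axis=0)           # defense probability ŷ_d
```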
Evaluation
In this section, we present our experimental evaluation of defending and detecting adversarial texts.
Defending against Adversarial Texts
Dataset
We created adversarial texts by using the probability weighted word saliency (PWWS) generator (Ren et al., 2019) on IMDB (binary class) and AGNEWS (four classes). We used the testing data as a clean dataset. The original texts in the clean dataset are replaced with their adversarial counterparts to form an adversarial dataset. We use a ratio of 80/10/10 for the training/development/testing sets. This ratio is reused in further experiments.
Comparison
We compared SEPP with adversarial training (Shrivastava et al., 2017) and ensemble baselines (Opitz and Maclin, 1999). While adversarial training adds adversarial texts and retrains the victims, ensemble learning votes on the predictions from the five individual classifiers (Figure 1). There are two popular ways to vote: averaging the predictions (soft) and selecting the majority class (hard), as sketched below. Table 1 lists the accuracy scores on the testing sets; the development sets reach similar values. SEPP can be trained with different training data (unknown), multiple training data (unsure), or the same training data (known). For example, if the victim is CNN, the different (resp. multiple, same) training data consist of misclassified and correctly classified texts, M_k and C_k (Figure 2), generated with BiLSTM (resp. both BiLSTM and CNN, and CNN).
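For reference, the two voting schemes of the ensemble baseline reduce to a few lines (a sketch; `probs` is a list of per-classifier probability vectors):

```python
import numpy as np
from collections import Counter

def soft_vote(probs):
    """Average the class-probability vectors and pick the arg-max class."""
    return int(np.argmax(np.mean(probs, axis=0)))

def hard_vote(probs):
    """Each classifier casts one vote for its arg-max class; majority wins."""
    votes = [int(np.argmax(p)) for p in probs]
    return Counter(votes).most_common(1)[0][0]
```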
The victim classifier declines significantly when moving from clean to adversarial data. Adversarial training efficiently defends against adversarial text, but it ignores the other misclassified texts. Ensemble learning is well suited to this task, in which adversarial text fools only the victim classifier while the other classifiers remain persistent. SEPP processes both kinds of misclassified texts and achieves high outcomes even with unknown victim classifiers. Moreover, SEPP (unsure) detects the victim classifiers with more than 90% accuracy.
Ablation Studies
We analyzed the contributions of the individual classifiers used in SEPP. The victim CNN is combined with the individual classifiers (Table 2). SEPP is presented with three groups of features: similarities Λ (SEPP-Λ), differences θ (SEPP-θ), and their combination (SEPP). The detection is affected by the performance of each model. In particular, BERT, RoBERTa, and XLNet are better than BiLSTM. SEPP improves the predictions with both individual and combined features.
Attacking the BERT
We conducted other experiments (Table 3) targeting BERT on SST-2 with various attacks at different text levels: character (DeepWordBug (Gao et al., 2018)), character and word (TextBugger), and word (TextFooler). We reused all six pretrained SST-2 classifiers for the ensemble models from the TextAttack framework (Morris et al., 2020), including CNN, LSTM, BERT-base, DistilBERT-base, RoBERTa-base, and ALBERT-base. These attacks change only a few words or characters in the short SST-2 texts.

Detecting Adversarial Texts
We integrated adversarial texts with the original texts to form adversarial/original pairs. These pairs are split into training/development/testing sets with the previous ratio (80/10/10). SEPP detects adversarial texts by extracting the same kind of features as when detecting misclassified texts (see misclassification discriminator Ω_k in Figure 2). We compared SEPP with existing methods in detecting adversarial text, deep neural baselines, and ensemble baselines, as shown in Table 4. The neural baselines were trained on large models with a batch size of 4, a maximum length of 512, and 2 epochs. The learning rates were estimated in a range of 10e−7 to 10e−2; a sketch of this range test follows below. For example, Figure 3 shows the losses as the red line corresponding to the learning rates using the BERT-large model. An optimal learning rate of 1.28e−5 was chosen when the loss was still decreasing, as recommended by Smith (2017). The number of training/test sets is shown in the second row.
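A sketch of this learning-rate range test, in the spirit of Smith (2017); the exponential schedule, step count, and loader interface are our assumptions:

```python
import torch

def lr_range_test(model, loss_fn, loader, lr_min=1e-7, lr_max=1e-2, steps=100):
    """Sweep the learning rate exponentially over one pass and record the
    loss at each step; pick an LR where the loss is still clearly decreasing."""
    optim = torch.optim.Adam(model.parameters(), lr=lr_min)
    gamma = (lr_max / lr_min) ** (1.0 / steps)
    history = []
    for step, (x, y) in zip(range(steps), loader):
        loss = loss_fn(model(x), y)
        optim.zero_grad()
        loss.backward()
        optim.step()
        lr = lr_min * gamma ** (step + 1)       # exponential LR schedule
        for group in optim.param_groups:
            group["lr"] = lr
        history.append((lr, loss.item()))
    return history
```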
The results show that the deep neural and ensemble baselines efficiently enhanced the traditional approaches by more than 10%. SEPP achieves the highest performance on the binary-class datasets and reaches competitive performance on the multi-class dataset.
Human Recognition
We randomly chose 50 adversarial/original pairs in the development set for human recognition. They were shuffled, and each text was displayed to 11 raters who decided whether it was written by a human or generated by a machine (see the survey link in the footnotes). The raters recognized the adversarial texts with 62.1% accuracy on average, with low agreement (κ = −0.039). This recognition accuracy was lower than those of all machine detectors, demonstrating that we need a detector to assist us in recognizing such texts.
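For reference, inter-rater agreement of this kind can be computed with Fleiss' kappa; the sketch below is an assumption about the statistic used, since the paper does not name the kappa variant. It takes a matrix of per-item category counts.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for a (n_items, n_categories) matrix where
    counts[i, j] is the number of raters labeling item i as j.
    Assumes the same number of raters per item."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]                       # raters per item
    p_j = counts.sum(axis=0) / counts.sum()         # category proportions
    P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical example: 3 texts, 11 raters, two labels (human, machine).
print(fleiss_kappa([[6, 5], [5, 6], [7, 4]]))  # near 0 => low agreement
```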
Detecting Adversarial Texts with Unduplicated Replacement
We analyzed the PWWS generator and found that it uses a large number of duplicate word replacements to generate adversarial texts. In particular, each replacement in a development text was reused in 1544.3 training texts on average. We clustered the texts in the development set by ranges of the number of duplicate replacements, as shown in Figure 4, and compared the detection of the top six methods. The low ranges significantly affected the deep learning baselines. In the high ranges, many duplicate replacements also occurred in the training data, offering more chances for detection with these models. However, since SEPP is independent of these replacements, it achieved robust performance even in the low ranges.
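As an illustration of how such a duplicate-replacement statistic can be computed, here is a minimal sketch under the assumption that each adversarial text is stored with its set of (original word, replacement) pairs; the data layout and function name are hypothetical.

```python
from collections import Counter

def replacement_reuse(train_repls, dev_repls):
    """For each development text, total how often its (original,
    replacement) word pairs are reused across training texts.

    train_repls / dev_repls: lists of sets of (orig, repl) tuples,
    one set per adversarial text."""
    # Map each replacement pair to the number of training texts using it.
    usage = Counter(pair for repls in train_repls for pair in set(repls))
    # Per development text: total reuse count over its replacement pairs.
    return [sum(usage[p] for p in repls) for repls in dev_repls]
```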
We used PWWS to generate adversarial texts without reusing previous word replacements and ran the detectors on this dataset (Table 5). While the existing methods and deep neural baselines remained in the random-guess range, SEPP and the ensemble baselines maintained their accuracy at around 92%. We analyzed the learning rate estimation process of the BERT-large model, shown by the blue line in Figure 3. All of the losses were similar to a random baseline (−ln(0.5) = 0.69), and the losses remained at this level after many epochs of training.
Conclusion
In this paper, we proposed an ensemble model based on similarity estimation of predicted probabilities (SEPP) for defending against adversarial text by detecting the victim classifier and correcting misclassified text. SEPP measures the similarity among predictions from multiple classifiers. We evaluated adversarial texts generated by word-based and/or character-based generators. The generated texts targeted popular classifiers (CNN, BiLSTM, and BERT) in binary and multi-class classification. The results show that SEPP outperformed existing work not only in defending against adversarial texts but also in maintaining performance on clean texts. Moreover, we achieved better performance in detecting adversarial texts than existing detectors. Given the generality of the proposed method, it can be straightforwardly applied to detecting other adversarial data such as fake images or forged audio.
Figure 1: Predictions (positive, negative) based on sentiment analysis classifiers.
Moreover, Zhou et al. (2019) recovered the character replacement in an adversarial text. Gil et al. (2019) suggested a method based on a character operator targeting Google search scores. Pruthi et al. (2019), Jones et al. (2020), and Li et al. (2019) manipulated the middle characters of an individual word to preserve text fluency.
Figure 2: Similarity estimation of the predicted probability for defending against adversarial text. Training and testing are shown as solid and dashed lines, respectively.

Algorithm 1: Extracting features.
Input: text t; victim Υ_k; other classifiers Γ = {Γ_i}
Output: extracted features
1: ŷ = getPredict(Υ_k, t)
2: c = argmax ŷ
3: P = {ŷ_i = getPredict(Γ_i, t)}
4: Λ = ∅ // similarity features
5: θ = 0 // different-predictions count feature
6: for ŷ_i ∈ P do ...

• Counting different predictions: We count the predicted classes of the other classifiers Γ_i that differ from the predicted class c of the victim, which gives the different-prediction count θ (line 9).
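A minimal Python sketch of Algorithm 1 follows. Two assumptions are made: getPredict returns a probability vector, and the similarity Λ between the victim's and another classifier's probability vectors is computed here as cosine similarity, an illustrative choice rather than necessarily the paper's measure.

```python
import numpy as np

def extract_features(text, victim, others, predict):
    """Algorithm 1 sketch: similarity features between the victim's
    predicted distribution and each other classifier's, plus the count
    of classifiers whose predicted class differs from the victim's."""
    y_hat = predict(victim, text)              # victim's probabilities
    c = int(np.argmax(y_hat))                  # victim's predicted class
    sims, theta = [], 0
    for clf in others:
        y_i = predict(clf, text)
        # Assumed similarity: cosine between probability vectors.
        sims.append(float(np.dot(y_hat, y_i) /
                          (np.linalg.norm(y_hat) * np.linalg.norm(y_i))))
        if int(np.argmax(y_i)) != c:           # differing prediction
            theta += 1
    return sims + [theta]
```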
Figure 3: Learning rate estimation of the BERT-large model for duplicate and unduplicated replacement generation of adversarial texts.
Figure 4: Detection of adversarial texts that fool the CNN classifier. Duplicate replacement indicates the number of replacements reused in training data.
[Figure 1 content] Example texts and the classifiers' predicted probabilities (positive, negative). The columns appear to correspond, in order, to a correctly classified text, its adversarial counterpart, and a misclassified original text.

Correctly classified / adversarial text: "I am not a big fan of the Spielberg/Cruise version of this film. And so I must throw in with the more humble Latt/Howel version. C Thomas Howel had more heart and more sympathy that Cruise in the lead role (at least in my opinion). Now this is hard to imagine until you strip away everything in the Spielberg version that cost more than a thousand dollars. There would be nothing left, no special effects, no sets, no Cruise."

Misclassified original text: "Caddyshack II is NOTHING compared to the original Caddyshack. But, there are legitimate reasons for it. (1) Rodney Dangerfield was supposed to be the ace of this film BUT he didn't like the script/hand, wanted to change it, his request was denied, so he didn't do the film. (2) It was low budget, Bill Murray had grown to superstar status. Ted Knight passed away in 1986, and Chevy Chase the "so-called ace/genius" of the first movie (although it was Rodney all the way) couldn't be on more than 5 minutes, because it would cost too much to pay him. BUT you had Dan Aykroyd, Robert Stack, Randy Quaid and Jackie Mason, all serviceable substitutes, who none had their best performances."

Classifier | Correctly classified | Adversarial | Misclassified original
CNN (victim) | (0.13, 0.87) | (0.68, 0.32) | (0.58, 0.42)
Bi-LSTM | (0.05, 0.95) | (0.11, 0.89) | (0.12, 0.88)
BERT-large | (0.21, 0.79) | (0.22, 0.78) | (0.12, 0.88)
RoBERTa-large | (0.06, 0.94) | (0.23, 0.77) | (0.09, 0.91)
XLNet-large | (0.30, 0.70) | (0.33, 0.67) | (0.05, 0.95)
Table 2: Combination of classifiers and features in SEPP.
Table 3: Attacking BERT on SST-2.

Table 4: Detecting adversarial texts with duplicate replacement.

Method | IMDB CNN | IMDB BiLSTM | AGNEWS CNN | AGNEWS BiLSTM
#train/#test | 26531/3316 | 30046/3756 | 3376/422 | 2538/317
N-gram | 81.3 | 83.0 | 70.1 | 69.6
Complexity | 80.0 | 82.9 | 73.2 | 68.7
BERT-large | 92.7 | 91.5 | 89.3 | 88.7
RoBERTa-large | 95.0 | 94.9 | 88.4 | 93.8
XLNet-large | 94.6 | 94.9 | 91.0 | 92.3
Ensemble (soft voting) | 94.6 | 96.7 | 94.8 | 97.0
Ensemble (hard voting) | 94.8 | 95.2 | 92.9 | 95.4
SEPP | 96.3 | 97.6 | 94.3 | 96.8
Footnote 6: The survey is available at https://forms.gle/TNRNeYyAcyrt8zF67

[Figure 4 plot: detection accuracy (y-axis, 50%-100%) over duplicate replacement ranges (x-axis: [0,100) #212, [100,200) #160, [200,300) #114, [300,∞) #2830) for BERT-large, RoBERTa-large, XLNet-large, soft voting, hard voting, and SEPP; the labeled points 92.9%, 95.7%, 96.5%, and 96.9% correspond to the four ranges.]
Table 5: Detecting adversarial texts with unduplicated replacement.

Method | IMDB CNN | IMDB BiLSTM | AGNEWS CNN | AGNEWS BiLSTM
#train | 1682 | 2028 | 972 | 1172
N-gram | 51.9 | 52.7 | 55.6 | 56.8
Complexity | 51.1 | 51.2 | 50.8 | 53.4
BERT-large | 50.9 | 51.6 | 56.5 | 63.5
RoBERTa-large | 50.0 | 54.0 | 52.4 | 50.0
XLNet-large | 50.0 | 55.9 | 50.0 | 62.2
Ensemble (soft voting) | 89.1 | 88.6 | 91.9 | 96.6
Ensemble (hard voting) | 89.1 | 89.0 | 94.4 | 95.2
SEPP | 89.6 | 89.8 | 92.7 | 95.9
Footnote 2: https://ai.stanford.edu/~amaas/data/sentiment/
Footnote 3: http://groups.di.unipi.it/~gulli/AG_corpus_of_news_articles.html
Footnote 4: SEPP uses five classifiers (Figure 1), separately trained on IMDB with the suggested configurations, which obtained performance similar to the published results; for example, CNN and RoBERTa-large achieved 88.8% and 96.5% accuracy, respectively.
Footnote 5: Our source code is available at https://github.com/quocnsh/SEPP
References

Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. In EMNLP, pages 2890-2896.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL, pages 4171-4186.

Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. HotFlip: White-box adversarial examples for text classification. In ACL, pages 31-36.

Ji Gao, Jack Lanchantin, Mary Lou Soffa, and Yanjun Qi. 2018. Black-box generation of adversarial text sequences to evade deep learning classifiers. In SPW, pages 50-56.

Siddhant Garg and Goutham Ramakrishnan. 2020. BAE: BERT-based adversarial examples for text classification. In EMNLP, pages 6174-6181.

Yotam Gil, Yoav Chai, Or Gorodissky, and Jonathan Berant. 2019. White-to-black: Efficient distillation of black-box adversarial attacks. In NAACL, pages 1373-1379.

Wenjuan Han, Liwen Zhang, Yong Jiang, and Kewei Tu. 2020. Adversarial attack and defense of structured prediction models. In EMNLP, pages 2327-2338.

Po-Sen Huang, Robert Stanforth, Johannes Welbl, Chris Dyer, Dani Yogatama, Sven Gowal, Krishnamurthy Dvijotham, and Pushmeet Kohli. 2019. Achieving verified robustness to symbol substitutions via interval bound propagation. In EMNLP, pages 4074-4084.

Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. In NAACL, pages 1875-1885.

Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In EMNLP, pages 2021-2031.

Robin Jia, Aditi Raghunathan, Kerem Göksel, and Percy Liang. 2019. Certified robustness to adversarial word substitutions. In EMNLP, pages 4120-4133.

Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Is BERT really robust? A strong baseline for natural language attack on text classification and entailment. In AAAI, pages 8018-8025.

Erik Jones, Robin Jia, Aditi Raghunathan, and Percy Liang. 2020. Robust encodings: A framework for combating adversarial typos. In ACL, pages 2752-2765.

Mika Juuti, Bo Sun, Tatsuya Mori, and N. Asokan. 2018. Stay on-topic: Generating context-specific fake restaurant reviews. In ESORICS, pages 132-151.

Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP, pages 1746-1751.

J. Li, S. Ji, T. Du, B. Li, and T. Wang. 2019. TextBugger: Generating adversarial text against real-world applications. In NDSS.

Linyang Li, Ruotian Ma, Qipeng Guo, Xiangyang Xue, and Xipeng Qiu. 2020. BERT-Attack: Adversarial attack against BERT using BERT. In EMNLP, pages 6193-6202.

Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, and Wenchang Shi. 2018. Deep text classification can be fooled. In IJCAI, pages 4208-4215.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv.

Hui Liu, Yongzheng Zhang, Yipeng Wang, Zheng Lin, and Yige Chen. 2020. Joint character-level word embedding and adversarial stability training to defend adversarial text. In AAAI, pages 8384-8391.

Zhao Meng and Roger Wattenhofer. 2020. A geometry-inspired attack for generating natural language adversarial examples. In COLING.

John Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi. 2020. TextAttack: A framework for adversarial attacks, data augmentation, and adversarial training in NLP. In EMNLP, pages 119-126.

Hoang-Quoc Nguyen-Son, Tran Phuong Thao, Seira Hidano, and Shinsaku Kiyomoto. 2019. Identifying adversarial sentences by analyzing text complexity. In PACLIC, pages 182-190.

David Opitz and Richard Maclin. 1999. Popular ensemble methods: An empirical study. Journal of Artificial Intelligence Research, 11:169-198.

Danish Pruthi, Bhuwan Dhingra, and Zachary C. Lipton. 2019. Combating adversarial misspellings with robust word recognition. In ACL, pages 5582-5591.

Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial examples through probability weighted word saliency. In ACL, pages 1085-1097.

Yankun Ren, Jianbin Lin, Siliang Tang, Jun Zhou, Shuang Yang, Yuan Qi, and Xiang Ren. 2020. Generating natural language adversarial examples on a large scale with generative models. In ECAI.

Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adversarial rules for debugging NLP models. In ACL, pages 856-865.

Ashish Shrivastava, Tomas Pfister, Oncel Tuzel, Joshua Susskind, Wenda Wang, and Russell Webb. 2017. Learning from simulated and unsupervised images through adversarial training. In CVPR, pages 2107-2116.

Leslie N. Smith. 2017. Cyclical learning rates for training neural networks. In WACV, pages 464-472.

F. Tramèr, D. Boneh, A. Kurakin, I. Goodfellow, N. Papernot, and P. McDaniel. 2018. Ensemble adversarial training: Attacks and defenses. In ICLR.

Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial triggers for attacking and analyzing NLP. In EMNLP.

Tianlu Wang, Xuezhi Wang, Yao Qin, Ben Packer, Kang Li, Jilin Chen, Alex Beutel, and Ed Chi. 2020. CAT-Gen: Improving robustness in NLP models via controlled adversarial text generation. In EMNLP, pages 5141-5146.

Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R. Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. In NeurIPS, pages 5753-5763.

Mao Ye, Chengyue Gong, and Qiang Liu. 2020. SAFER: A structure-free approach for certified robustness to adversarial word substitutions. In ACL, pages 3465-3475.

Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2020. Word-level textual adversarial attacking as combinatorial optimization. In ACL, pages 6066-6080.

Huangzhao Zhang, Hao Zhou, Ning Miao, and Lei Li. 2019. Generating fluent adversarial examples for natural languages. In ACL, pages 5564-5569.

Yichao Zhou, Jyun-Yu Jiang, Kai-Wei Chang, and Wei Wang. 2019. Learning to discriminate perturbations for blocking adversarial attacks in text classification. In EMNLP, pages 4906-4915.
| [] |
Show, Don't Tell: Demonstrations Outperform Descriptions for Schema-Guided Task-Oriented Dialogue

Raghav Gupta*, Harrison Lee*, Jeffrey Zhao, Abhinav Rastogi, Yuan Cao, Yonghui Wu
Google Research
raghavgupta@google.com, harrisonlee@google.com

Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, July 10-15, 2022. doi: 10.18653/v1/2022.naacl-main.336. arXiv:2204.04327. https://www.aclanthology.org/2022.naacl-main.336.pdf

Abstract
Building universal dialogue systems that operate across multiple domains/APIs and generalize to new ones with minimal overhead is a critical challenge. Recent works have leveraged natural language descriptions of schema elements to enable such systems; however, descriptions only indirectly convey schema semantics. In this work, we propose Show, Don't Tell, which prompts seq2seq models with a labeled example dialogue to show the semantics of schema elements rather than tell the model through descriptions. While requiring similar effort from service developers as generating descriptions, we show that using short examples as schema representations with large language models results in state-of-the-art performance on two popular dialogue state tracking benchmarks designed to measure zeroshot generalization -the Schema-Guided Dialogue dataset and the MultiWOZ leave-oneout benchmark.
Introduction
Task-oriented dialogue (TOD) systems need to support an ever-increasing variety of services. Since many service developers lack the resources to collect data and train models, zero and few-shot transfer to unseen services is critical to the democratization of dialogue agents.
Recent approaches to generalizable TOD systems primarily rely on combining two techniques: large language models like BERT (Devlin et al., 2019) and T5 (Raffel et al., 2020), and schema-guided modeling, i.e., using natural language descriptions of schema elements (intents and slots) as model inputs to enable transfer to unseen services (Rastogi et al., 2020a,b). Models combining the two currently hold state-of-the-art (SotA) results on dialogue state tracking (DST) (Heck et al., 2020; Lee et al., 2021a; Zhao et al., 2022).

*Equal contribution
However, description-based schema representations have some drawbacks. Writing precise natural language descriptions requires manual effort, and descriptions can be difficult to keep succinct. Descriptions also provide only indirect supervision about how to interact with a service, compared to an example. Furthermore, Lee et al. (2021b) showed that schema-guided DST models are not robust to variations in schema descriptions, causing significant quality drops.
We propose using a single dialogue example with state annotations as an alternative to the description-based schema representation, similar to one-shot priming (Brown et al., 2020), an approach we call Show, Don't Tell (SDT). Through demonstration, we show models the schema semantics rather than tell them through natural language descriptions, as seen in Figure 1. SDT achieves SotA accuracy and generalization to new APIs across both the Schema-Guided Dialogue (SGD) dataset (Rastogi et al., 2020b) and the MultiWOZ leave-one-out (Budzianowski et al., 2018; Lin et al., 2021b) benchmarks, while being more data-efficient and robust to schema variations.
Show, Don't Tell
Following SoTA models, we pose DST as a seq2seq task (Wu et al., 2019;Zhao et al., 2021a) and finetune T5 on DST datasets. The model input consists of a prompt to convey API semantics and context to represent the current dialogue instance. The target contains ground truth belief states corresponding to the context. We compare against two baselines:
• T5-ind (Lee et al., 2021a): Model input comprises a single slot description for the prompt, concatenated with the dialogue history as the context. The target is the value of the single slot in the dialogue state. Model inference is invoked once per slot, i.e., values for different slots are independently decoded.

Figure 1: Illustration of all prompt formats for a payment service for both description-based and Show, Don't Tell models, with independent (top) and sequential (bottom) decoding of dialogue state. For example, the T5-seq prompt lists all slot descriptions (P = "0: The amount of money to send or request 1: Name of the contact or account to make the transaction with 2: Whether the transaction is private or not a) True b) False 3: The source of money used for making the payment a) credit card b) debit card c) app balance"), while the SDT-seq prompt is a labeled example dialogue (P_seq = "[ex] [user] I want to make a payment to Jerry for $82 from my mastercard [system] Confirming you want to pay Jerry $82 with your credit card yes? [user] Yes that's right, make the transaction private too [slot] amount=$82 receiver=Jerry private_visibility=a of a) True b) False payment_method=a of a) credit card b) debit card c) app balance").
• T5-seq (Zhao et al., 2022): Model input comprises the descriptions of all slots as the prompt, concatenated with the dialogue history as the context. The target is the sequence of slot-value pairs in the dialogue state -i.e. the dialogue state is decoded sequentially in a single pass.
We modify the prompt formats above to utilize demonstrations instead of descriptions as described below and illustrated in Figure 1.
• SDT-ind: A prompt P^{ind}_i comprises a single example utterance and the ground truth slot-value pair, formatted as

P^{ind}_i = [ex]; u^{ind}_i; [slot]; sv_i

where u^{ind}_i is a user utterance in which slot i is active (not null), and sv_i is the slot-value pair. [ex] and [slot] are special delimiter tokens, and ; denotes concatenation.
• SDT-seq: A prompt P^{seq} comprises a single labeled dialogue, formatted as

P^{seq} = [ex]; u_1; ...; u_n; [slot]; sv_1; ...; sv_m

where u_j is an utterance and the other symbols are as in the SDT-ind description above. In simple terms, the prompt is constructed by concatenating all utterances in an example dialogue, followed by all slot-value pairs in the dialogue state.
In both the T5-* and SDT-* approaches, the context is the serialized dialogue history for the current dialogue instance. The final model input is formed by concatenating the prompt and the context strings. The target string is the same as for T5-*: a single slot value for *-ind models and the entire turn's belief state for *-seq models.
For both T5-* and SDT-*, we enumerate the categorical slot values in multiple-choice format in the prompt and task models with decoding the multiple choice letter corresponding to the correct categorical value.
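To make the input format concrete, here is a minimal sketch (not the authors' code) that linearizes a demonstration dialogue and its annotated state into an SDT-seq prompt, enumerating categorical values as multiple-choice letters as described above, and appends the serialized context; all function and argument names are hypothetical.

```python
import string

def make_sdt_seq_input(example_turns, example_state, categorical, history):
    """Build the SDT-seq model input:
    '[ex] <demo turns> [slot] <slot=values> <current dialogue history>'.

    example_turns: list of (speaker, utterance) for the demo dialogue.
    example_state: dict slot -> value annotated for the demo dialogue.
    categorical:   dict slot -> list of allowed values (empty if free-form).
    history:       list of (speaker, utterance) for the current instance."""
    demo = " ".join(f"[{spk}] {utt}" for spk, utt in example_turns)
    slots = []
    for slot, value in example_state.items():
        options = categorical.get(slot, [])
        if options:  # multiple-choice linearization, e.g. 'a of a) x b) y'
            letters = string.ascii_lowercase
            choices = " ".join(f"{letters[i]}) {v}"
                               for i, v in enumerate(options))
            value = f"{letters[options.index(value)]} of {choices}"
        slots.append(f"{slot}={value}")
    context = " ".join(f"[{spk}] {utt}" for spk, utt in history)
    return f"[ex] {demo} [slot] {' '.join(slots)} {context}"
```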
More details on prompt design and its impact on performance are provided in Appendix A.
Creating prompt examples: It is imperative that SDT prompts contain enough information to infer the semantics for all slots in a schema. For SDT-ind, we create individual utterances that showcase a single slot. For SDT-seq, we create example dialogues where all slots in the schema are used.
Multi-domain examples: It is not feasible to construct multi-domain demonstrations for every combination of domains. Thus, we stick to single-domain SDT prompts and create separate training instances for each domain present in a multi-domain dialogue turn; for inference, we run inference once per domain and combine the results, as sketched below.
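A minimal sketch of this per-domain handling follows; the single-domain inference function and its return format ({slot: value} per domain) are assumptions for illustration.

```python
def predict_multi_domain(history, domains, predict_state):
    """Run single-domain inference once per domain present in the turn
    and merge the resulting slot-value dictionaries."""
    merged = {}
    for domain in domains:
        # predict_state is assumed to return {slot: value} for one domain.
        merged.update(predict_state(history, domain))
    return merged
```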
Experimental Setup
Datasets: We conduct experiments on two DST benchmarks: Schema-Guided Dialogue (SGD) (Rastogi et al., 2020b) and MultiWOZ 2.1 (Budzianowski et al., 2018; Eric et al., 2020). For MultiWOZ, we evaluate on the leave-one-out setup (Wu et al., 2019; Lin et al., 2021a), where models are trained on all domains but one and evaluated on the holdout domain. Additionally, we apply the recommended TRADE pre-processing script (see footnote 1) for fair comparison with other work. For both datasets, we created concise example dialogues modeled after dialogues observed in the datasets.
Implementation: We train SDT models by finetuning pretrained T5 1.1 checkpoints. For SDT-seq, we select one example dialogue for each service to create a prompt and use that prompt across all dialogue instances of that service, for both training and evaluation. We do the same for SDT-ind but create one prompt per slot instead of per service. Unless otherwise noted, all T5-based models are based on T5-XXL (11B parameters). Appendices B and C contain more details on training and baselines, respectively.

Results

SGD Results

Table 1 contains results on the SGD test set. SDT-seq achieves the highest JGA, outperforming the description-based T5-* models by +1.1%, particularly on unseen services. SDT-ind is comparable to its counterpart T5-ind and better than T5-seq.
Since SDT results vary with the choice of example dialogue provided in the prompt, we created 5 different versions of the prompts for each service using different examples. We report the average JGA across the 5 versions, with 95% confidence intervals from the Student's t-distribution.
We hypothesize that the main advantage of SDT is that the schema semantics are conveyed via demonstration, which is more similar in form to the end task of state tracking and more informative than descriptions. On the other hand, natural language descriptions can be viewed as an intermediary that models must interpret in order to achieve the end goal of slot value prediction.
We see that SDT-seq outperforms SDT-ind and posit that this is because the full dialogue prompts in SDT-seq demonstrate more complex linguistic patterns (e.g., coreference resolution and long-term dependencies) than the single-utterance prompts of SDT-ind. On the other hand, we believe T5-seq does not outperform T5-ind because no additional information is conveyed to the model by concatenating independent descriptions; all else equal, decoding all slots in one pass is more challenging than decoding each slot independently. We also experimented with using up to 5 example dialogues in each prompt of SDT-seq, but accuracy did not increase.

MultiWOZ Results

Table 2 summarizes results for the MultiWOZ 2.1 leave-one-out setup. SDT-seq outperforms T5-seq by +1.5% overall and in 3 of the 5 domains, achieving state-of-the-art performance.
Impact of Model Size
T5's XXL size (11B parameters) may be unsuitable in resource-constrained settings. To understand the impact of model size, we measure SDT's performance on SGD across multiple T5 sizes in Table 3. For the base and large sizes, both SDT variants offer higher JGA than their description-based counterparts, possibly because smaller T5 models are less capable of inferring unseen slots from a description alone, whereas SDT provides more direct supervision. Additionally, SDT-ind outperforms SDT-seq for both smaller sizes, potentially because SDT-seq's prediction task is more complex than that of SDT-ind.
Data Efficiency
To examine the data efficiency of SDT models, we also experiment with training SDT-seq on 0.16% (10-shot), 1%, and 10% of the SGD training data and evaluating on the entire test set. For 10-shot, we randomly sample 10 training dialogues from every service; for 1% and 10%, we sample uniformly across the entire dataset. SDT-seq demonstrates far higher data efficiency than T5-seq (Table 4).
Robustness
Large LMs are often sensitive to the choice of prompt (Zhao et al., 2021b; Reynolds and McDonell, 2021). To this end, we evaluate SDT-seq on the SGD-X benchmark (Lee et al., 2021b), comprising 5 variants with paraphrased slot names and descriptions for every schema (Appendix Figure 4). Note that SDT-seq only makes use of slot names, so variations in descriptions have no effect on it. Table 5 shows SDT-seq achieves the highest average JGA (JGA_{v1-5}) and the lowest schema sensitivity (SS_{JGA}; a lower value indicates higher robustness), making it the most robust of the compared models. While the JGA decline indicates that SDT-seq is somewhat sensitive to how slot names are written, compared to a variant of T5-seq (Zhao et al., 2022) that only uses slot names, it is still more robust based on schema sensitivity, and the relative drop in JGA is nearly equal.
Discussion
Writing descriptions vs. demonstrations
The information provided to SDT is not identical to what is provided to typical schema-guided models, as SDT exchanges natural language descriptions for a demonstration of identifying slots in a dialogue. However, we argue that from the developer's standpoint, creating a single example takes effort similar to writing descriptions, so we consider the methods comparable. Creating the SDT-seq prompts for all 45 services in SGD took an experienced annotator ∼2 hours, compared to ∼1.5 hours for generating all slot descriptions. SDT-ind prompts are even simpler to write because they relax the requirement of creating a coherent dialogue involving all slots. Descriptions can sometimes be easier to generate than a succinct dialogue that covers all slots. However, given the performance gain, example-based prompts may be a better choice for many settings, especially for smaller model sizes and low-resource settings, where the gain over description-based prompts is more pronounced.
Descriptions plus demonstrations
We tried combining both descriptions and a demonstration in a single prompt to try to further improve performance. However, results showed that this did not improve upon using demonstrations alone (see Appendix Table A1 for details).
We hypothesize that demonstrations, along with slot names, already convey slot semantics sufficiently, rendering descriptions extraneous. However, given that using slot names alone underperforms using descriptions (Zhao et al., 2022), the improvement SDT exhibits over using descriptions does not result purely from the use of slot names.
Prompting vs. traditional finetuning
To understand the impact of using a single demonstration as a prompt vs. traditional finetuning, we finetune T5-seq an additional time on the same set of dialogues used in SDT-seq prompts; therefore it has access to both slot descriptions as well as a single demonstration for each service. In this case, T5-seq is provided strictly more information than SDT-seq. T5-seq with finetuning obtains a JGA of 87.7% on SGD, on par with T5-ind but still lower than SDT-seq, suggesting that, when scarce, dialogue examples are better used as prompts (Le Scao and Rush, 2021).
Interestingly, finetuning on up to 5 dialogue examples per service did not improve performance after the first example (Appendix Figure 3).
Error analysis
Figure 2 compares some common error patterns made by T5-seq vs. SDT-seq. The patterns suggest that SDT's demonstrations are helpful when multiple slots in the same domain are similar to each other (#1 in Figure 2) and when slots dissimilar from those seen in training are introduced (#2). However, SDT can sometimes be limited by its prompt. For instance, in #3 it has only seen the "music" value for the event_type slot in the prompt, potentially resulting in under-predicting the categorical values not featured in the example dialogue (e.g. "theater").
Related Work
Prior approaches have framed DST as question answering (Ruan et al., 2020; Ma et al., 2019; Zhang et al., 2021). Many MultiWOZ cross-domain models leverage slot names/descriptions (Wu et al., 2019; Lin et al., 2021a).

Pretrained generative LLMs (Raffel et al., 2020; Brown et al., 2020) have enabled framing NLP tasks as seq2seq problems. Some DST papers (Zhao et al., 2021a; Feng et al., 2021) look at settings with no train-test discrepancy. Many studies explore the efficacy of task-specific prompts (Jiang et al., 2020; Liu et al., 2021). Madotto et al. (2020), among others, prime LMs with examples for dialogue tasks, but without finetuning. Wei et al. (2021) finetune language models to teach them to use prompts to generalize across NLP tasks.
Conclusion
We study the use of demonstrations as LM prompts to convey the semantics of APIs in lieu of natural language descriptions for TOD. While taking similar effort to construct, demonstrations outperform description-based prompts in our experiments across DST datasets (SGD and MultiWOZ), model sizes, and training data sizes, while being more robust to changes in schemata. This work provides developers of TOD systems with more options for API representations to enable transfer to unseen services. In future work, we would like to explore this representation for other TOD tasks (e.g. dialogue management and response generation).
Ethical Considerations
We proposed a more efficient way of building TOD systems by leveraging demonstrations in place of descriptions, leading to increased accuracy with minimal/no data preparation overhead. We conduct our experiments on publicly-available TOD datasets in English, covering domains which are popular for building conversational agents. We hope our work leads to building more accurate TOD systems with similar or less overhead and encourages further research in the area.
A Prompt Design
We experimented with various formats for the SDT prompt before arriving at the final format. Below, we list alternative designs that we tried and their impact on JGA, as evaluated on the SGD test set.
A.1 Categorical value strings vs. multiple choice answers
We found that JGA dropped -2% when we tasked the model with decoding categorical values instead of multiple choice answers -e.g. payment_method=debit card instead of payment_method=b (where b is linked to the value debit card in the prompt as described in Section 2). When tasking the model to decode categorical values, it would often decode related yet invalid values, which we counted as false in our evaluation. For example, instead of debit card, the model might decode bank balance.
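As a complement, here is a minimal sketch of mapping a decoded multiple-choice letter back to its categorical value; decoding letters constrains the output space, which avoids the invalid-value strings described above (the function name and error handling are hypothetical).

```python
import string

def decode_categorical(letter, options):
    """Map a decoded multiple-choice letter (e.g. 'b') back to its
    categorical value; anything that is not a valid choice letter is
    treated as a prediction error and returns None."""
    index = string.ascii_lowercase.find(letter)  # -1 if not a letter
    return options[index] if 0 <= index < len(options) else None

print(decode_categorical("b", ["credit card", "debit card", "app balance"]))
# -> 'debit card'
```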
A.2 Slot IDs vs. slot names
When we delexicalized slot names into slot IDs, JGA dropped -5%. One downside of this approach is that the model loses access to the valuable semantic information conveyed by the slot name. Another downside is that the model cannot distinguish two slots that have the same value in the prompt. For example, if the prompt is "I would like a pet-friendly hotel room with wifi" and the corresponding slots are 1=True (has_wifi) and 2=True (pets_allowed), it is ambiguous which ID refers to which slot. The potential upside of using slot IDs was to remove the dependence on the choice of slot name, but this did not succeed for the reasons above.
A.3 Decoding active slots vs. all slots
We experimented with training the model to decode only active slots, rather than all slots with none values for inactive ones. JGA dropped -0.4%, which we hypothesized might result from greater dissimilarity between the slot-value string in the prompt (which contains all slots by construction) and the target, which only contains a subset of slots.
A.4 In-line annotations vs. dialogue+slots concatenated
We hypothesized that bringing the slot annotation in the prompt closer to where the slot is mentioned in the dialogue might help the model better understand the slot's semantic meaning. We changed the format as follows:

• Original: [ex] [user] I would like a pet-friendly hotel room with wifi [system] I found ... [slot] has_wifi=True
• In-line: [ex] [user] I would like a pet-friendly hotel room with wifi [has_wifi=True] [system] I found ...

However, this decreased JGA by more than -20%. We hypothesized that this was likely due to a mismatch between the prompt's annotations and the target string format, which we did not change.
B SDT Model Details
We used the publicly available T5 checkpoints (see footnote 2). For all experiments, we used a sequence length of 2048, 10% dropout, and a batch size of 16. We used a constant learning rate of 1e−3 or 1e−4. All models were trained for 50k steps or until convergence, and each experiment was conducted on either 64 or 128 TPU v3 chips (Jouppi et al., 2017).
C Baseline Models
For SGD, we compare against SGP-DST (Ruan et al., 2020), MRC+WD-DST (Ma et al., 2019), T5-seq (Zhao et al., 2022) and T5-ind (Lee et al., 2021a).
For MultiWOZ, we compare against TRADE (Wu et al., 2019), SUMBT (Lee et al., 2019), TransferQA (Lin et al., 2021a), and T5-seq. TransferQA is based on T5-large.
Footnote 2: https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md
Table A1: We experiment with prompting using both descriptions and demonstrations (SDT-seq + desc) vs. demonstrations only (SDT-seq) and find that adding descriptions does not improve performance.

Model | All | Seen | Unseen
SDT-seq + desc | 88.6±0.9 | 95.7±0.5 | 86.2±1.0
SDT-seq | 88.8±0.5 | 95.8±0.2 | 86.4±0.7

Figure 3: Results of secondarily finetuning T5-seq with dialogues, to help understand whether prompting or finetuning is more effective. The examples used for finetuning are derived from the set of dialogues used as prompts across the 5 trials of SDT-seq. From this, we observe that prompting with a single dialogue demonstration outperforms few-shot finetuning.

Figure 4: The original schema for a Payment service alongside its closest (v1) and farthest (v5) SGD-X variants, as measured by linguistic distance functions. For the SGD-X benchmark, models are trained on the original SGD dataset and evaluated on the test set, where the original test set schemas are replaced by SGD-X variant schemas.
Figure 2: Comparing common error patterns made by T5-seq vs. SDT-seq. Correct and incorrect predictions are colored in red and blue, respectively.
[Figure 1 (top) content: T5-ind prompts are single slot descriptions, e.g., P1 = "amount: The amount of money to send or request", P2 = "receiver: Name of the contact or account to make the transaction with", ...; SDT-ind prompts are single labeled utterances, e.g., P^{ind}_1 = "[ex] [user] I need to transfer 125 dollars [slot] amount=125 dollars", P^{ind}_2 = "[ex] [user] Make the transfer to Victoria. [slot] receiver=Victoria", ...]
Table 1: SDT achieves state-of-the-art JGA as evaluated on the SGD test set, performing especially well on unseen services. *Data augmentation/special rules applied.

Model | All | Seen | Unseen
MRC+WD-DST* | 86.5 | 92.4 | 84.6
T5-seq | 86.4 | 95.8 | 83.3
T5-ind | 87.7 | 95.3 | 85.2
SDT-ind | 87.5±0.9 | 95.2±0.7 | 85.0±1.4
SDT-seq | 88.8±0.5 | 95.8±0.2 | 86.4±0.7
Table 2: SDT-seq outperforms T5-seq on the MultiWOZ 2.1 cross-domain (leave-one-out) benchmark. Results for TRADE, SUMBT, and TransferQA from Kumar et al. (2020), Campagna et al. (2020), and Lin et al. (2021a), respectively.

Model | Attraction | Hotel | Restaurant | Taxi | Train | Avg
TRADE | 20.1 | 14.2 | 12.6 | 59.2 | 22.4 | 25.7
SUMBT | 22.6 | 19.8 | 16.5 | 59.5 | 22.5 | 28.2
TransferQA | 31.3 | 22.7 | 26.3 | 61.9 | 36.7 | 35.8
T5-seq | 76.1 | 28.6 | 69.8 | 87.0 | 60.4 | 64.4
SDT-seq | 74.4 | 33.9 | 72.0 | 86.4 | 62.9 | 65.9

Table 3: SGD test set JGA across T5's Base, Large, and XXL sizes. SDT's advantage is especially prominent on smaller model sizes.

Table 4: Data efficiency experiments on the SGD test set. SDT-seq's example-based prompt approach is better suited to low-resource settings than T5-seq's description-based prompts.

Table 5: Robustness evaluation on the SGD-X test sets. *Results from Lee et al. (2021b). #Result of using T5-seq with only slot names and no descriptions, from Zhao et al. (2022).
Footnote 1: https://github.com/budzianowski/multiwoz#dialog-state-tracking
References

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc.

Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gašić. 2018. MultiWOZ - a large-scale multi-domain Wizard-of-Oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016-5026, Brussels, Belgium. Association for Computational Linguistics.

Giovanni Campagna, Agata Foryciarz, Mehrad Moradshahi, and Monica Lam. 2020. Zero-shot transfer learning with synthesized data for multi-domain dialogue state tracking. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 122-132, Online. Association for Computational Linguistics.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Mihail Eric, Rahul Goel, Shachi Paul, Abhishek Sethi, Sanchit Agarwal, Shuyang Gao, Adarsh Kumar, Anuj Goyal, Peter Ku, and Dilek Hakkani-Tur. 2020. MultiWOZ 2.1: A consolidated multi-domain dialogue dataset with state corrections and state tracking baselines. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 422-428, Marseille, France. European Language Resources Association.

Yue Feng, Yang Wang, and Hang Li. 2021. A sequence-to-sequence approach to dialogue state tracking. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1714-1725, Online. Association for Computational Linguistics.

Michael Heck, Carel van Niekerk, Nurul Lubis, Christian Geishauser, Hsien-Chin Lin, Marco Moresi, and Milica Gasic. 2020. TripPy: A triple copy strategy for value independent neural dialog state tracking. In Proceedings of the 21th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 35-44, 1st virtual meeting. Association for Computational Linguistics.

Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How can we know what language models know?

Norman P. Jouppi, Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, Suresh Bhatia, Nan Boden, Al Borchers, Rick Boyle, Pierre-luc Cantin, et al. 2017. In-datacenter performance analysis of a tensor processing unit. SIGARCH Comput. Archit. News, 45(2):1-12.

Adarsh Kumar, Peter Ku, Anuj Goyal, Angeliki Metallinou, and Dilek Hakkani-Tur. 2020. MA-DST: Multi-attention-based scalable dialog state tracking. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):8107-8114.

Teven Le Scao and Alexander Rush. 2021. How many data points is a prompt worth? In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2627-2636, Online. Association for Computational Linguistics.

Chia-Hsuan Lee, Hao Cheng, and Mari Ostendorf. 2021a. Dialogue state tracking with a language model using schema-driven prompting. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4937-4949, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Harrison Lee, Raghav Gupta, Abhinav Rastogi, Yuan Cao, Bin Zhang, and Yonghui Wu. 2021b. SGD-X: A benchmark for robust generalization in schema-guided dialogue systems.

Hwaran Lee, Jinsik Lee, and Tae-Yoon Kim. 2019. SUMBT: Slot-utterance matching for universal and scalable belief tracking. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5478-5483, Florence, Italy. Association for Computational Linguistics.

Zhaojiang Lin, Bing Liu, Andrea Madotto, Seungwhan Moon, Paul Crook, Zhenpeng Zhou, Zhiguang Wang, Zhou Yu, Eunjoon Cho, Rajen Subba, and Pascale Fung. 2021a. Zero-shot dialogue state tracking via cross-task transfer.

Zhaojiang Lin, Bing Liu, Seungwhan Moon, Paul Crook, Zhenpeng Zhou, Zhiguang Wang, Zhou Yu, Andrea Madotto, Eunjoon Cho, and Rajen Subba. 2021b. Leveraging slot descriptions for zero-shot cross-domain dialogue state tracking. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5640-5648, Online. Association for Computational Linguistics.

Xiao Liu, Yanan Zheng, Zhengxiao Du, Ming Ding, Yujie Qian, Zhilin Yang, and Jie Tang. 2021. GPT understands, too.

Yue Ma, Zengfeng Zeng, Dawei Zhu, Xuan Li, Yiying Yang, Xiaoyuan Yao, Kaijie Zhou, and Jianping Shen. 2019. An end-to-end dialogue state tracking system with machine reading comprehension and wide & deep classification.

Andrea Madotto, Zihan Liu, Zhaojiang Lin, and Pascale Fung. 2020. Language models as few-shot learner for task-oriented dialogue systems. CoRR, abs/2008.06239.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer.

Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2020a. Schema-guided dialogue state tracking task at DSTC8.

Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2020b. Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):8689-8696.

Laria Reynolds and Kyle McDonell. 2021. Prompt programming for large language models: Beyond the few-shot paradigm.

Yu-Ping Ruan, Zhen-Hua Ling, Jia-Chen Gu, and Quan Liu. 2020. Fine-tuning BERT for schema-guided zero-shot dialogue state tracking.

Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2021. Finetuned language models are zero-shot learners.

Chien-Sheng Wu, Andrea Madotto, Ehsan Hosseini-Asl, Caiming Xiong, Richard Socher, and Pascale Fung. 2019. Transferable multi-domain state generator for task-oriented dialogue systems.

Yang Zhang, Vahid Noroozi, Evelina Bakhturina, and Boris Ginsburg. 2021. SGD-QA: Fast schema-guided dialogue state tracking for unseen services.

Jeffrey Zhao, Raghav Gupta, Yuan Cao, Dian Yu, Mingqiu Wang, Harrison Lee, Abhinav Rastogi, Izhak Shafran, and Yonghui Wu. 2022. Description-driven task-oriented dialog modeling.

Jeffrey Zhao, Mahdis Mahdieh, Ye Zhang, Yuan Cao, and Yonghui Wu. 2021a. Effective sequence-to-sequence dialogue state tracking. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7486-7493, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Calibrate before use: Improving few-shot performance of language models. Tony Z Zhao, Eric Wallace, Shi Feng, Dan Klein, Sameer Singh, Tony Z. Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. 2021b. Calibrate before use: Improv- ing few-shot performance of language models.
Universal, Unsupervised, Uncovered Sentiment Analysis

David Vilares (david.vilares@udc.es), Carlos Gómez-Rodríguez (carlos.gomez@udc.es) and Miguel A. Alonso (miguel.alonso@udc.es)
Grupo LyS, Departamento de Computación, Universidade da Coruña, Campus de A Coruña s/n, 15071 A Coruña, Spain

17 Jun 2016. arXiv:1606.05545. DOI: 10.1016/j.knosys.2016.11.014

Abstract

We present a novel unsupervised approach for multilingual sentiment analysis driven by compositional syntax-based rules. On the one hand, we exploit some of the main advantages of unsupervised algorithms: (1) the interpretability of their output, in contrast with most supervised models, which behave as a black box, and (2) their robustness across different corpora and domains. On the other hand, by introducing the concept of compositional operations and exploiting syntactic information in the form of universal dependencies, we tackle one of their main drawbacks: their rigidity on data that are differently structured depending on the language. Experiments show an improvement both over existing unsupervised methods, and over state-of-the-art supervised models when evaluating outside their corpus of origin. The system is freely available.1
Introduction
Semantic composition is a natural process for humans when understanding the sentiment of an opinion. In the sentence 'He is not very handsome, but he has something that I really like', humans have the ability to infer that the word 'very' emphasizes 'handsome', 'not' affects the whole expression 'very handsome', and 'but' decreases the relevance of 'He is not very handsome' and increases that of 'he has something that I really like'. Based on this, a human could justify a positive overall sentiment for that sentence.
Our main contribution is the introduction of the first universal and unsupervised model for compositional sentiment analysis (SA) driven by syntax-based rules. We introduce a formalism for compositional operations, allowing the creation of arbitrarily complex rules to tackle relevant phenomena for SA, for any language and syntactic dependency annotation. A set of practical universal operations is evaluated on different corpora and languages. The model outperforms existing unsupervised approaches, as well as state-of-the-art compositional supervised models (Socher et al., 2013) in domain-transfer settings, and shows that the operations can be shared across languages, as they are defined using part-of-speech (PoS) tags and dependency types under the universal guidelines of Petrov et al. (2011) and McDonald et al. (2013).
Related work
A naïve approach to emulating the comprehension of the meaning of multiword phrases for SA consists in using n-grams with n > 1 (Pang et al., 2002). The approach is limited by the curse of dimensionality, although crawling data from the target domain can help to reduce that problem (Kiritchenko et al., 2014). Joshi and Penstein-Rosé (2009) went a step further and proposed generalized dependency triplets as features for subjectivity detection, capturing non-local relations. Socher et al. (2012) modeled a recursive neural network that learns compositional vector representations for phrases and sentences of arbitrary syntactic type and length. Socher et al. (2013) presented an improved recursive deep model for SA over dependency trees, and trained it on a sentiment treebank tagged using Amazon Mechanical Turk, pushing the state of the art up to 85.4% on the Pang and Lee (2005) dataset. Kalchbrenner et al. (2014) showed how convolutional neural networks (CNN) can be used for semantic modeling of sentences. The model implicitly captures local and non-local relations without the need of a parse tree, and it can be adapted to any language, as long as enough data is available. Severyn and Moschitti (2015) showed the effectiveness of a CNN in a SemEval SA shared task (Rosenthal et al., 2015), although crawling tens of millions of messages was first required to achieve state-of-the-art results.
In spite of being powerful and accurate, supervised approaches also present drawbacks. Firstly, they behave as a black box. Secondly, they do not perform as well in domain-transfer applications (Aue and Gamon, 2005; Pang and Lee, 2008). Finally, feature and hyper-parameter engineering can be costly in time and resources.
When these limitations need to be addressed, unsupervised (rule-based) approaches are useful. In this line, Turney (2002) proposed an unsupervised learning algorithm to calculate the semantic orientation (SO) of a word. Taboada et al. (2011) presented a lexical rule-based approach to handle relevant linguistic phenomena such as intensification, negation, 'but' clauses and irrealis. Thelwall et al. (2012) released SentiStrength, a multilingual unsupervised system for micro-text SA that handles negation and intensification, among other web linguistic phenomena. Regarding syntax-based approaches, the few described in the literature are language-dependent. Jia et al. (2009) define a set of syntax-based rules for handling negation in English. Vilares et al. (2015a) propose a syntactic SA method, but limited to Spanish reviews and Ancora trees (Taulé et al., 2008).
In brief, most unsupervised approaches are language-dependent, and those that can manage multilinguality, such as SentiStrength, cannot apply semantic composition.
Unsupervised Compositional SA
Dependency graphs
Let w = w_1, ..., w_n be a sentence, where each word occurrence w_i ∈ W is assigned a PoS tag t_i ∈ T.

Definition 1. A dependency tree for w is an edge-labeled directed tree T = (V, E), where V = {0, 1, 2, ..., n} is the set of nodes and E ⊆ V × D × V is the set of labeled arcs. Each arc, of the form (i, d, j), corresponds to a syntactic dependency between the words w_i and w_j, where i is the index of the head or parent word, j is the index of the dependent or child word, and d is the dependency type representing the kind of syntactic relation between them. Following standard practice, we use node 0 as a dummy root node that acts as the head of the syntactic root(s) of the sentence.
We will write i −d→ j as shorthand for (i, d, j) ∈ E, and we will omit the dependency types when they are not relevant. Given a dependency tree T = (V, E) and a node i ∈ V, we define the following functions to obtain the context of node i (a code sketch of these helpers follows the list):
• ancestor_T(i, δ) = {k ∈ V : there is a path of length δ from k to i in T}, i.e., the singleton set containing the δ-th ancestor of i (or the empty set if there is no such node),
• children_T(i) = {k ∈ V | i → k}, i.e., the set of children of node i,
• lm-branch_T(i, d) = min{k ∈ V | i −d→ k}, i.e., the set containing the leftmost among the children of i whose dependencies are labeled d (or the empty set if there is no such node).
Operations for compositional SA
Our compositional SA system will associate an SO value σ_i to each node i in the dependency tree of a sentence, representing the SO of the subtree rooted at i. The system will use a set of compositional operations to propagate changes to the semantic orientations of the nodes in the tree. Once all the relevant operations have been executed, the SO of the sentence will be stored as σ_0, i.e., the semantic orientation of the root node.
A compositional operation is triggered when a node in the tree matches a given condition (related to its associated PoS tag, dependency type and/or word form); and it is applied to a scope of one or more nodes calculated from the trigger node by ascending a number of levels in the tree and then applying a scope function. More formally, we define our operations as follows:
Definition 2. Given a dependency tree T = (V, E), a compositional operation is a tuple o = (τ, C, δ, π, S) such that:
• τ : R → R is a transformation function to apply on the SO (σ) of nodes,
• C : V → {true, false} is a predicate that determines whether a node in the tree will trigger the operation,
• δ ∈ N is a number of levels that we need to ascend in the tree to calculate the scope of o,
• π is a priority that will be used to break ties when several operations coincide on a given node, and
• S is a scope calculation function that will be used to determine the nodes affected by the operation.
In practice, our system defines C(i) by means of sets of words, tags and/or dependency types, such that the operation is triggered if w_i, t_i and/or the head dependency of i are in those sets. Compositional operations where C(i) is defined using universal tags and dependency types only are universal and can be used across languages.
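Under that reading, C(i) reduces to membership tests. A hedged sketch, reusing the attrs accessor assumed above:

```python
# C(i) in practice: membership tests over word forms, PoS tags and dependency
# types; a None set means "no restriction on this field".
def make_cond(words=None, tags=None, deps=None):
    def cond(tree, i):
        form, tag, dep = tree.attrs(i)
        return ((words is None or form in words) and
                (tags is None or tag in tags) and
                (deps is None or dep in deps))
    return cond
```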
We propose two options for the transformation function τ (both sketched in code after the list):

• shift_α(x) = x − α if x > 0, and x + α if x < 0, where α ∈ R is the shifting factor and x ∈ R.

• weighting_β(x) = x × (1 + β), where β ∈ R is the weighting factor and x ∈ R.
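Both functions are straightforward to encode. A minimal sketch, with the assumption that a neutral SO (x = 0) is left unchanged by shift, since the definition above only covers x > 0 and x < 0:

```python
# The two transformation functions encoded as closures.
def shift(alpha):
    def tau(x):
        if x > 0:
            return x - alpha
        if x < 0:
            return x + alpha
        return x  # assumption: a neutral SO (x = 0) is left unchanged
    return tau

def weighting(beta):
    return lambda x: x * (1 + beta)
```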
The scope calculation function, S, allows us to calculate the nodes of T whose SO is affected by the transformation τ. For this purpose, if the operation was triggered by a node i, we apply S to ancestor_T(i, δ), i.e., the δ-th ancestor of i (if it exists), which we call the destination node of the operation. The proposed scopes are as follows (see also Figure 1; a code sketch follows the list):

• dest (destination node): The transformation τ is applied directly to the SO of ancestor_T(i, δ) (see Figure 1.a).
• lm-branch_d (branch of d): The affected nodes are lm-branch_T(ancestor_T(i, δ), d) (see Figure 1.b).
• rc_n (n right children): τ affects the SO of the n smallest indexes of {j ∈ children_T(ancestor_T(i, δ)) | j > i} (see Figure 1.c).
• lc_n (n left children): The transformation affects the n largest elements of {j ∈ children_T(ancestor_T(i, δ)) | j < i} (see Figure 1.d).
• subjr (first subjective right branch): The affected node is min{j ∈ children_T(ancestor_T(i, δ)) | j > i ∧ σ_j ≠ 0} (see Figure 1.e).
• subjl (first subjective left branch): The affected node is max{j ∈ children_T(ancestor_T(i, δ)) | j < i ∧ σ_j ≠ 0} (see Figure 1.f).
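The following sketch encodes these scopes over the DepTree helpers above. The calling convention (tree, destination node a = ancestor_T(i, δ), trigger node i, and the current SO table sigma) is an assumption made for illustration; the union combinator covers the compound scopes used by the rules later in the paper.

```python
# Scope functions: each returns the set of affected nodes given the tree, the
# destination node a, the trigger node i, and the SO table sigma.
def dest(tree, a, i, sigma):
    return {a}

def lm_branch_scope(d):
    return lambda tree, a, i, sigma: tree.lm_branch(a, d)

def rc(n):  # the n right children closest to the trigger
    def scope(tree, a, i, sigma):
        right = sorted(j for j in tree.children(a) if j > i)
        return set(right[:n])
    return scope

def lc(n):  # the n left children closest to the trigger
    def scope(tree, a, i, sigma):
        left = sorted(j for j in tree.children(a) if j < i)
        return set(left[-n:])
    return scope

def subjr(tree, a, i, sigma):  # first subjective (SO != 0) right branch
    cands = [j for j in tree.children(a) if j > i and sigma.get(j, 0) != 0]
    return {min(cands)} if cands else set()

def subjl(tree, a, i, sigma):  # first subjective (SO != 0) left branch
    cands = [j for j in tree.children(a) if j < i and sigma.get(j, 0) != 0]
    return {max(cands)} if cands else set()

def union(*scopes):
    # compound scopes such as dest ∪ lm-branch_acomp, used by the rules below
    return lambda tree, a, i, sigma: set().union(
        *(s(tree, a, i, sigma) for s in scopes))
```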
Compositional operations can be defined for any language or dependency annotation criterion. While it is possible to add rules for language-specific phenomena if needed (see §3.3), in this paper we focus on universal rules to obtain a truly multilingual system.2
An algorithm for unsupervised SA
To execute the operations and calculate the SO of each node in the dependency tree of the sentence, we start by initializing the SO of each word using a subjective lexicon, as traditional unsupervised approaches do (Turney, 2002). Some options to obtain multilingual subjectivity lexica are SentiStrength (subjective data for up to 34 languages) or the Chen and Skiena (2014) approach, which introduced a method for building sentiment lexicons for 136 languages. Our implementation supports the lexicon format of SentiStrength, which can be plugged directly into the system. Additionally, we provide the option to create different dictionary entries depending on PoS tags to avoid conflicts between homonymous words (e.g. 'I'm fine' versus 'They gave me a fine').
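As a sketch of this initialization step: the tab-separated term/score layout assumed below is illustrative, since the actual SentiStrength files follow their own conventions.

```python
# SO initialization from a sentiment lexicon (sketch). The "term<TAB>score"
# layout assumed here is illustrative, not the real SentiStrength format.
def load_lexicon(path):
    lexicon = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.strip().split("\t")
            if len(parts) >= 2:
                lexicon[parts[0].lower()] = float(parts[1])
    return lexicon

def init_sigma(tree, lexicon):
    # Unknown words (and the dummy root, node 0) start as neutral (SO = 0).
    sigma = {0: 0.0}
    for i, w in tree.words.items():
        sigma[i] = lexicon.get(w.lower(), 0.0)
    return sigma
```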
Then, we traverse the parse tree in post-order, applying Algorithm 1 to update semantic orientations when visiting each node i (a Python sketch of the procedure is given below, after the XML element description). In this algorithm, O is the set of compositional operations defined in our system, A_i is a priority queue of the compositional operations to be applied at node i (because i is their destination node), and Q_i is another priority queue of compositional operations to be queued for upper levels at node i (as i is not yet their destination node). ⊕ denotes the operation that merges two priority queues, push inserts o in a priority queue, and pop pulls the operation with the highest priority (ties are broken by giving preference to the operation that was queued earlier).
Algorithm 1 Compute SO of a node
1: procedure COMPUTE(i, O, T)
2:   A_i ← []
3:   Q_i ← []
⊲ Enqueue operations triggered by node i:
4:   for o = (τ, C, δ, π, S) in O do
5:     if C(i) then
6:       if δ > 0 then
7:         push((τ, C, δ, π, S), Q_i)
8:       else
9:         push((τ, C, δ, π, S), A_i)
⊲ Enqueue operations coming from child nodes:
10:  for c in children_T(i) do
11:    for o = (τ, C, δ, π, S) in Q_c do
12:      if δ − 1 = 0 then
13:        push((τ, C, δ − 1, π, S), A_i)
14:      else
15:        push((τ, C, δ − 1, π, S), Q_i)
⊲ Execute operations that have reached their destination node:
16:  while A_i is not empty do
17:    o = (τ, C, δ, π, S) ← pop(A_i)
18:    for j in S(i) do
19:      σ_j ← τ(σ_j)
⊲ Join the SOs of node i and its children:
20:  σ_i ← σ_i + Σ_{c ∈ children_T(i)} σ_c

At a practical level, the set of compositional operations is specified using a simple XML file:

• <forms>: Indicates the tokens to be taken into account for the condition C that triggers the operation. Regular expressions are supported.
• <dependency>: Indicates the dependency types taken into account for C.
• <postags>: Indicates the PoS tags that must match to trigger the rule.
• <rule>: Defines the operation to be executed when the rule is triggered.
• <levelsup>: Defines the number of levels to ascend from i before applying o.
• <priority>: Defines the priority of o in case more than one operation needs to be applied over i (a larger number implies a higher priority).
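The following is a hedged Python sketch of Algorithm 1, building on the DepTree, transformation and scope helpers sketched earlier. The heap-based priority queues, the Operation tuple and the extra trigger field are illustrative choices rather than the authors' implementation.

```python
import heapq
from collections import namedtuple

# Illustrative encoding of o = (tau, C, delta, pi, S); `trigger` records the
# node that fired the operation, so scopes can still see it at the destination.
Operation = namedtuple("Operation", "tau cond delta priority scope trigger",
                       defaults=(None,))

_tick = [0]  # insertion counter: breaks priority ties in FIFO order

def _push(heap, op):
    _tick[0] += 1
    heapq.heappush(heap, (-op.priority, _tick[0], op))

def compute(i, ops, tree, sigma, queued):
    """Post-order visit of node i (sketch of Algorithm 1). `queued[c]` holds
    the operations pushed up from child c that have not reached their
    destination yet."""
    active, pending = [], []
    # Enqueue operations triggered by node i:
    for op in ops:
        if op.cond(tree, i):
            op = op._replace(trigger=i)
            _push(pending if op.delta > 0 else active, op)
    # Enqueue operations coming from the children of i:
    for c in tree.children(i):
        for _, _, op in queued.pop(c, []):
            op = op._replace(delta=op.delta - 1)
            _push(active if op.delta == 0 else pending, op)
    # Execute operations whose destination node is i:
    while active:
        _, _, op = heapq.heappop(active)
        for j in op.scope(tree, i, op.trigger, sigma):
            sigma[j] = op.tau(sigma[j])
    # Join the SOs of node i and its children:
    sigma[i] += sum(sigma[c] for c in tree.children(i))
    queued[i] = pending
```

Calling compute on every node of the tree in post-order and then reading sigma[0] yields the SO of the whole sentence, mirroring the description above.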
NLP tools for universal unsupervised SA
The following resources serve as our starting point to carry out state-of-the-art universal, unsupervised and syntactic sentiment analysis.
The system by Gimpel et al. (2011) is used for tokenizing. Although it was initially intended for English tweets, we have observed that it also performs robustly on many other language families (Romance, Slavic, etc.).
For part-of-speech tagging we rely on the free distribution of the Toutanova and Manning (2000) tagger. Dependency parsers are built using MaltParser (Nivre et al., 2007) and MaltOptimizer (Ballesteros and Nivre, 2012). We trained a set of taggers and parsers for different languages using the universal tag and dependency sets (Petrov et al., 2011; McDonald et al., 2013). Table 1 shows their performance under standard metrics. The tagging3 and parsing4 models are also available.
Defining compositional operations
We presented above a formalism to define arbitrarily complex compositional operations for unsupervised SA over a dependency tree. In this section, we show the definition of the most important rules that we used to evaluate our system. In practical terms, this implies studying how syntactic constructions that modify the sentiment of an expression are represented in the annotation formalism used for training the dependency parser, in this case Universal Dependencies. We use examples following those universal guidelines, since they are available for more than 40 languages and, as shown in §6, the same rules can be competitive across different languages.
Intensification
Intensification amplifies or diminishes the sentiment of a word or phrase. Simple cases of this phenomenon are 'I have huge problems' or 'This is a bit disappointing'. Traditional lexicon-based methods handle most of these cases with simple heuristics (e.g. amplifying or diminishing the sentiment of the word following an intensifier). However, ambiguous cases might appear where such lexical heuristics are not sufficient. For example, 'huge' can be a subjective adjective introducing its own SO (e.g. 'The house is huge'), but also an amplifier when it modifies a subjective noun or adjective (e.g. 'I have huge problems', where it makes 'problems' more negative).
Universal compositional operations overcome this problem without the need of any heuristic. A dependency tree already shows the behavior of a word within a sentence thanks to its dependency type, and it shows the role of a word independently of the language. Figure 2 shows graphically how universal dependencies represent the cases discussed above. Formally, the operation for these forms of intensification is: (weighting_β, w ∈ intensifiers ∧ t ∈ {ADV, ADJ} ∧ d ∈ {advmod, amod, nmod}, 1, 3, dest ∪ lm-branch_acomp), with the value of β depending on the strength of the intensifier as given by the sentiment lexicon.
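Using the encodings sketched earlier, this operation could be instantiated as follows; generating one Operation per intensifier (so each carries its own β from the lexicon) is an illustrative design choice, not the authors' exact setup.

```python
# The intensification operation, one Operation instance per intensifier so
# that each carries its own weighting factor beta from the lexicon.
def intensification_ops(intensifiers):  # intensifiers: word -> beta
    return [Operation(tau=weighting(beta),
                      cond=make_cond(words={w}, tags={"ADV", "ADJ"},
                                     deps={"advmod", "amod", "nmod"}),
                      delta=1, priority=3,
                      scope=union(dest, lm_branch_scope("acomp")))
            for w, beta in intensifiers.items()]
```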
'But' clauses
Compositional operations can also be defined to manage more challenging cases, such as clauses introduced by 'but', considered as a special case of intensification by authors such as Brooke et al. (2009) or Vilares et al. (2015a). It is assumed that the main clause connected by 'but' becomes less relevant for the reader (e.g. 'It is expensive, but I love it'). Figure 3 shows our proposed compositional operation for this clause, formally: (weighting_β, w ∈ {but} ∧ t ∈ {CONJ} ∧ d ∈ {cc}, 1, 1, subjl) with β = −0.25. Note that the priority of this operation (π = 1) is smaller than that of intensification (π = 3), since we first need to process intensifiers, which are local phenomena, before resolving adversatives, which have a larger scope.
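With the same encodings, the 'but' operation is a single instance; only the helper names are assumptions carried over from the sketches above.

```python
# The 'but' operation from the tuple above (beta = -0.25, priority 1).
but_op = Operation(tau=weighting(-0.25),
                   cond=make_cond(words={"but"}, tags={"CONJ"}, deps={"cc"}),
                   delta=1, priority=1, scope=subjl)
```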
Negation
Negation is one of the most challenging phenomena to handle in SA, since its semantic scope can be non-local (e.g. 'I do not plan to make you suffer'). Existing unsupervised lexical approaches are limited to considering a snippet to guess the scope of negation. Thus, it is likely that they include in the scope terms that should not be negated from a semantic point of view. Dependency types help us to determine which nodes should act as negation and which should be its scope of influence. For brevity, we only illustrate some relevant negation cases and instructional examples in Figure 4. Formally, the proposed compositional operation to tackle most forms of negation under universal guidelines is: (shift_α, w ∈ negations ∧ t ∈ U ∧ d ∈ {neg}, 1, 2, dest ∪ lm-branch_attr ∪ lm-branch_acomp ∪ subjr), where U represents the universal tag set. The priority of negation (π = 2) is between those of intensification and 'but' clauses, because its scope can be non-local but it does not go beyond an adversative conjunction.
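A sketch of this operation with the earlier helpers. Note that t ∈ U (any universal tag) translates into no tag restriction, and that this section does not fix α, so the value below is only an illustrative placeholder.

```python
# The negation operation: a shift over a compound scope.
def negation_op(negations, alpha=4.0):  # alpha value is an assumption
    return Operation(tau=shift(alpha),
                     cond=make_cond(words=negations, deps={"neg"}),
                     delta=1, priority=2,
                     scope=union(dest, lm_branch_scope("attr"),
                                 lm_branch_scope("acomp"), subjr))
```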
Irrealis
Irrealis denotes linguistic phenomena used to refer to non-factual actions, such as conditional, subjunctive or desiderative sentences (e.g. 'He would have died if he hadn't gone to the doctor'). It is a very complex phenomenon to deal with, and systems are usually unable to tackle this issue, or they simply define rules to ignore sentences containing a list of irrealis stop-words (Taboada et al., 2011). We do not address this phenomenon in detail in this study, but only propose a rule to deal with 'if' constructions (e.g. 'if I die [...]' or 'if you are happy [...]'), considering that the phrase that contains it should be ignored in the final computation. Formally: (weighting_β, w ∈ {if} ∧ t ∈ U ∧ d ∈ {mark}, 2, 3, dest ∪ subjr). Its graphical representation would be very similar to intensification (see Figures 1.a and 1.e). Figure 5 represents an analysis of our introductory sentence 'He is not very handsome, but he has something that I really like', showing how compositional operations accurately capture semantic composition.
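As a sketch, and noting that β = −1 makes weighting zero out the affected SO (one plausible reading of "ignored", though the text does not fix the exact value), the 'if' rule could be instantiated as:

```python
# The 'if' rule (delta = 2, priority 3); beta = -1 is an assumed value that
# zeroes the SO of the affected phrase under weighting.
if_op = Operation(tau=weighting(-1.0),
                  cond=make_cond(words={"if"}, deps={"mark"}),
                  delta=2, priority=3, scope=union(dest, subjr))
```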
Discussion
It is hard to measure the coverage of our rules and the potential of these universal compositional operations, since it is possible to define arbitrarily complex operations for as many relevant linguistic phenomena as desired. In this line, Poria et al. (2014) define a set of English sentic patterns to determine how sentiment flows from concept to concept in a variety of situations (e.g. relations of complementation, direct nominal objects, relative clauses, ...) over a dependency tree following the De Marneffe and Manning (2008) guidelines.
Experimental results
We compare our algorithm with respect to existing approaches on three languages: English, Spanish and German. The availability of corpora and other unsupervised SA systems for English and Spanish enables us to perform a richer comparison than in the case of German, where we only have an ad-hoc corpus.
We compare our algorithm with respect to two of the most popular and widely used unsupervised systems: (1) SO-CAL (Taboada et al., 2011), a language-dependent system available for English and Spanish, guided by lexical rules at the morphological level, and (2) SentiStrength, a multilingual system that does not apply any PoS tagging or parsing step in order to be able to perform multilingual analysis, relying instead on a set of subjectivity lexica, snippet-based rules and treatment of non-grammatical phenomena (e.g. character replication). Additionally, for the Spanish evaluation, we also took into account the system developed by Vilares et al. (2015a), an unsupervised syntax-based approach available for Spanish but, in contrast to ours, heavily language-dependent.
For comparison against state-of-the-art supervised approaches, we consider the deep recursive neural network presented by Socher et al. (2013), trained on a movie sentiment treebank (English). To the best of our knowledge, there are no semantic compositional supervised methods for Spanish and German.

Figure 5: Analysis of a sentence applying universal unsupervised prediction. For the sake of clarity, the real post-order traversal is not illustrated. Instead we show an (in this case) equivalent computation by applying all operations with a given priority, π, at the same time, irrespective of the node. Semantic orientation, intensification and negation values are extracted from the dictionaries of Taboada et al. (2011). Phase a) shows how the intensification is computed on the branches rooted at 'handsome' and 'like'. Phase b) shows how the negation shifts the semantic orientation of the attribute (again, the branch rooted at 'handsome'). Phase c) illustrates how the 'but' clause diminishes the semantic orientation of the main sentence, in particular the semantic orientation of the attribute, the first left subjective branch of its head. Elements that are not playing a role in a specific phase appear dimmed. One of the interesting points in this example comes from illustrating how three different phenomena involving the same branch (the attribute 'handsome') are addressed properly thanks to the assigned π.
Accuracy is used as the evaluation metric for two reasons: (1) it is adequate for measuring the performance of classifiers when the chosen corpora are balanced and (2) the selected systems for comparison also report their results using this metric.
Resources
We selected the following standard English corpora for evaluation:

• Taboada and Grieve (2004) corpus: A general-domain collection of 400 long reviews (200 positive, 200 negative) about hotels, movies, computers or music, among other topics, extracted from epinions.com.
• Pang and Lee (2004) corpus: A corpus of 2 000 long movie reviews (1 000 positive, 1 000 negative).
• Pang and Lee (2005) corpus: A corpus of short movie reviews (sentences). In particular, we used the test split used by Socher et al. (2013), removing the neutral sentences, as they did, for the binary classification task (total: 1 821 subjective sentences).
To show the universal capabilities of our system we include an evaluation for Spanish using the corpus presented by Brooke et al. (2009) (200 positive and 200 negative long reviews from ciao.es). For German, we rely on a dataset of 2 000 reviews (1 000 positive and 1 000 negative reviews) extracted from Amazon.
As subjectivity lexica, we use the same dictionaries used by SO-CAL for both English and Spanish. For German, we use the German SentiStrength dictionaries (Momtazi, 2012) instead, as the Brooke et al. (2009) dictionaries are not available for languages other than Spanish or English.
Comparison to unsupervised approaches

Table 2 compares the performance of our model with respect to SentiStrength5 and SO-CAL on the Taboada and Grieve (2004) corpus. With respect to SO-CAL, results show that our handling of negation and intensification provides better results (outperforming SO-CAL by 3.25 percentage points overall). With respect to SentiStrength, our system achieves better performance on long reviews.

Table 3 compares these three unsupervised systems on the Pang and Lee (2004) corpus, showing the robustness of our approach across different domains. Our system again performs better than SO-CAL for negation and intensification (although it does not behave as well when dealing with irrealis, probably due to the need of more complex compositional operations to handle this phenomenon), and also better than SentiStrength on long movie reviews.

Table 3: Accuracy (%) on the Pang and Lee (2004) test set.

Table 4 compares the performance of our universal approach on a different language (Spanish) with respect to: Spanish SentiStrength (Vilares et al., 2015b), the Spanish SO-CAL (Brooke et al., 2009) and a syntactic language-dependent system inspired by the latter (Vilares et al., 2015a). We used exactly the same set of compositional operations as for English (only changing the lists of word forms for negation, intensification and 'but' clauses, as explained in §3.2). Our universal system again outperforms SentiStrength and SO-CAL in its Spanish version. The system also obtains results very similar to those reported by Vilares et al. (2015a), even though their system is language-dependent and its set of rules is fixed and written specifically for Spanish.

In order to check the validity of our approach for languages other than English and Spanish, we have considered the case of German. It is worth noting that the authors of this article have no notions of German at all. In spite of this, we have been able to create a state-of-the-art unsupervised SA system by integrating an existing sentiment lexicon into the framework that we propose in this article.
We use the German SentiStrength system (Momtazi, 2012) for comparison. The use of the German SentiStrength dictionary allows us to show how our system is robust when using different lexica. Experimental results show an accuracy of 72.75% on the Amazon review dataset when all rules are included, while SentiStrength reports 69.95%. Again, adding first negation (72.05%) and then intensification (72.85%) as compositional operations produced relevant improvements over our baseline (69.85%). The results are comparable to those obtained for other languages, using a dataset of comparable size, reinforcing the robustness of our approach across different domains, languages, and base dictionaries.
Comparison to supervised approaches
Supervised systems are usually unbeatable on the test portion of the corpus with which they have been trained. However, in real applications, a sufficiently large training corpus matching the target texts in terms of genre, style, length, etc. is often not available, and the performance of supervised systems has proven controversial in domain-transfer applications (Aue and Gamon, 2005). Table 5 compares our universal unsupervised system to Socher et al. (2013) on a number of corpora: (1) the collection used in the evaluation of the Socher et al. system (Pang and Lee, 2005), (2) a corpus of the same domain, i.e., movies (Pang and Lee, 2004), and (3) the Taboada and Grieve (2004) collection. Socher et al.'s system provides sentence-level polarity classification with five possible outputs: very positive, positive, neutral, negative, very negative. Since the Pang and Lee (2004) and Taboada and Grieve (2004) corpora are collections of long reviews, we needed to derive the global sentiment of each text: we count the number of outputs of each class (very positive and very negative count double, positive and negative count one, and neutral counts zero) and take the majority class; in the case of a tie, the text is classified as negative.6

The experimental results show that our approach obtains better results on corpora (2) and (3). It is worth mentioning that our unsupervised compositional approach outperformed the supervised model not only on an out-of-domain corpus, but also on another dataset of the same domain (movies) as the one where the neural network was trained and evaluated. This reinforces the usefulness of an unsupervised approach for applications that need to analyze texts coming from different domains, styles or dates, when there is a lack of labeled data to train supervised classifiers for all of them. As expected, Socher et al. (2013) is unbeatable by an unsupervised approach on the test set of the corpus where it was trained. However, our unsupervised algorithm also performs very robustly on this dataset.
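The aggregation rule just described is simple enough to state in a few lines; the {−2, ..., 2} integer encoding of the five sentence-level outputs is an assumption made for illustration.

```python
# Document-level aggregation: sentence outputs in {-2, -1, 0, 1, 2} (very
# negative .. very positive); very positive/negative count double, positive
# and negative count one, neutral counts zero, and ties resolve to negative.
def aggregate(sentence_outputs):
    pos = sum(2 if s == 2 else 1 for s in sentence_outputs if s > 0)
    neg = sum(2 if s == -2 else 1 for s in sentence_outputs if s < 0)
    return "positive" if pos > neg else "negative"
```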
Conclusion

In this article, we have described, implemented and evaluated a novel model for universal and unsupervised sentiment analysis driven by a set of syntactic rules for semantic composition. Existing unsupervised approaches are purely lexical, their rules are heavily dependent on the language concerned, or they do not consider any kind of natural language processing step in order to be able to handle different languages, relying on shallow rules instead.
To overcome these limitations, we introduce, from a theoretical and practical point of view, the concept of compositional operations to define arbitrarily complex semantic relations between different nodes of a dependency tree. Universal part-of-speech tagging and dependency parsing guidelines make it feasible to create multilingual sentiment analysis compositional operations that effectively address semantic composition over natural language sentences. The system is not restricted to any corpus or language, and by simply adapting or defining new operations it can be adapted to any other PoS tag or dependency annotation criteria.
We have compared our universal unsupervised model with state-of-the-art unsupervised and supervised approaches. Experimental results show:
(1) that our algorithm outperforms two of the most commonly used unsupervised systems, (2) the universality of the model's compositional operations across different languages and (3) the usefulness of our approach on domain-transfer applications, especially with respect to supervised models.
As future work, we plan to design algorithms for the automatic extraction of compositional operations that capture the semantic relations between tree nodes. We would also like to collect corpora to extend our evaluation to more languages, since collections that are directly available on the web are scarcer than expected. Additionally, the concept of compositional operations is not limited to generic SA and could be adapted for other tasks such as universal aspect extraction. Finally, we plan to adapt the Poria et al. (2014) sentic patterns as compositional operations, so they can be handled universally.
Figure 1: Graphical representation of the proposed set of influence scopes S; each panel indicates the node that triggers an operation o and the nodes to which it is applied.
Figure 2: Skeleton for intensification compositional operations (2.a, 2.c) and one case without intensification (2.b), together with examples annotated with universal dependencies.
Figure 3: Skeleton for the 'but' compositional operation, illustrated with one example according to universal dependencies.
Figure 4: Skeleton for negation compositional operations, illustrated together with one example.
Table 2: Accuracy (%) on the Taboada and Grieve (2004) corpus. We only provide one column for SentiStrength since we are using the standard configuration for English (which already includes negation and intensification functionalities).
Rules             SentiStrength   SO-CAL   Our system
Baseline          N/A             68.05    67.77
+negation         N/A             70.10    71.85
+intensification  56.90           73.47    74.00
+irrealis         N/A             74.95    74.10

Table 4: Accuracy (%) on the Spanish Brooke et al. (2009) test set.
Corpora                                        Socher et al. (2013)   Our system
Origin corpus of the Socher et al. (2013) model
Pang and Lee (2005)                            85.40                  75.01
Other corpora
Taboada and Grieve (2004)                      62.00                  73.75
Pang and Lee (2004)                            63.80                  74.10

Table 5: Accuracy (%) on different corpora for Socher et al. (2013) and our system. On the Pang and Lee (2005) collection, our detailed results taking into account different compositional operations were: 73.75 (baseline), 74.13 (+negation), 74.68 (+intensification) and 75.01 (+irrealis).
1 http://grupolys.org/software/UUUSA/

2 Apart from universal dependencies and PoS tags, the only extra information used by our rules is a short list of negation words, intensifiers, adversative conjunctions and words introducing conditionals (like the English 'if' or 'would'). While this information is language-specific, it is standardly included in multilingual sentiment lexica which are available for many languages (§3.3), so it does not prevent our system from working on a wide set of languages without any adaptation.

3 http://grupolys.org/software/TAGGERS/universal-tag-sets/monolingual/

4 http://www.grupolys.org/software/PARSERS/universal-tag-sets/monolingual/

5 We used the default SentiStrength configuration, which already applies many optimizations. We set the length of the snippet between a negator and its scope to 3, based on empirical evaluation, and applied the configuration to compute sentiment on long reviews.

6 These criteria were selected empirically. Assigning the positive class in the case of a tie was also tested, as well as not doubling the very positive and very negative outputs, but these settings produced similar or worse results with the Socher et al. (2013) system.
Acknowledgments

This research is supported by the Ministerio de Economía y Competitividad (FFI2014-51978-C2). David Vilares is funded by the Ministerio de Educación, Cultura y Deporte (FPU13/01180). Carlos Gómez-Rodríguez is funded by an Oportunius program grant (Xunta de Galicia). We thank Roman Klinger for his help in obtaining the German data.
References

A. Aue and M. Gamon. 2005. Customizing sentiment classifiers to new domains: A case study. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP).

M. Ballesteros and J. Nivre. 2012. MaltOptimizer: an optimization tool for MaltParser. In Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 58-62. Association for Computational Linguistics.

J. Brooke, M. Tofiloski, and M. Taboada. 2009. Cross-Linguistic Sentiment Analysis: From English to Spanish. In Proceedings of RANLP 2009, Recent Advances in Natural Language Processing, pages 50-54, Borovets, Bulgaria, September.

Y. Chen and S. Skiena. 2014. Building Sentiment Lexicons for All Major Languages. In The 52nd Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, Volume 2: Short Papers (ACL 2014), pages 383-389, Baltimore, June. ACL.

M. De Marneffe and C. D. Manning. 2008. The Stanford typed dependencies representation. In Coling 2008: Proceedings of the Workshop on Cross-Framework and Cross-Domain Parser Evaluation, pages 1-8. Association for Computational Linguistics.

K. Gimpel, N. Schneider, B. O'Connor, D. Das, D. Mills, J. Eisenstein, M. Heilman, D. Yogatama, J. Flanigan, and N. A. Smith. 2011. Part-of-speech tagging for Twitter: Annotation, features, and experiments. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers - Volume 2, pages 42-47. Association for Computational Linguistics.

L. Jia, C. Yu, and W. Meng. 2009. The effect of negation on Sentiment Analysis and Retrieval Effectiveness. In CIKM '09: Proceedings of the 18th ACM Conference on Information and Knowledge Management, pages 1827-1830, Hong Kong, November. ACM Press.

M. Joshi and C. Penstein-Rosé. 2009. Generalizing dependency features for opinion mining. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, ACLShort '09, pages 313-316, Stroudsburg, PA, USA. Association for Computational Linguistics.

N. Kalchbrenner, E. Grefenstette, and P. Blunsom. 2014. A Convolutional Neural Network for Modelling Sentences. In The 52nd Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, Volume 1: Long Papers, pages 655-665, Baltimore, Maryland, USA. ACL.

S. Kiritchenko, X. Zhu, and S. M. Mohammad. 2014. Sentiment Analysis of Short Informal Texts. Journal of Artificial Intelligence Research, 50(1):723-762, May.

R. McDonald, J. Nivre, Y. Quirmbach-Brundage, Y. Goldberg, D. Das, K. Ganchev, K. Hall, S. Petrov, H. Zhang, O. Täckström, C. Bedini, N. Castelló, and J. Lee. 2013. Universal Dependency Annotation for Multilingual Parsing. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 92-97. Association for Computational Linguistics.

S. Momtazi. 2012. Fine-grained German sentiment analysis on social media. In LREC, pages 1215-1220.

J. Nivre, J. Hall, J. Nilsson, A. Chanev, G. Eryigit, S. Kübler, S. Marinov, and E. Marsi. 2007. MaltParser: A language-independent system for data-driven dependency parsing. Natural Language Engineering, 13(2):95-135.

B. Pang and L. Lee. 2004. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, pages 271-278. Association for Computational Linguistics.

B. Pang and L. Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 115-124. Association for Computational Linguistics.

B. Pang and L. Lee. 2008. Opinion Mining and Sentiment Analysis. now Publishers Inc., Hanover, MA, USA.

B. Pang, L. Lee, and S. Vaithyanathan. 2002. Thumbs up? Sentiment classification using machine learning techniques. In Proceedings of EMNLP, pages 79-86.

S. Petrov, D. Das, and R. McDonald. 2011. A universal part-of-speech tagset. arXiv preprint arXiv:1104.2086.

S. Poria, E. Cambria, G. Winterstein, and G. Huang. 2014. Sentic patterns: Dependency-based rules for concept-level sentiment analysis. Knowledge-Based Systems, 69:45-63, October.

S. Rosenthal, P. Nakov, S. Kiritchenko, S. M. Mohammad, A. Ritter, and V. Stoyanov. 2015. SemEval-2015 task 10: Sentiment analysis in Twitter. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015).

A. Severyn and A. Moschitti. 2015. UNITN: Training Deep Convolutional Neural Network for Twitter Sentiment Classification. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 464-469, Denver, Colorado. Association for Computational Linguistics.

R. Socher, B. Huval, C. D. Manning, and A. Y. Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1201-1211. Association for Computational Linguistics.

R. Socher, A. Perelygin, J. Wu, J. Chuang, C. D. Manning, A. Ng, and C. Potts. 2013. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. In EMNLP 2013: Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA, October. ACL.

M. Taboada and J. Grieve. 2004. Analyzing appraisal automatically. In Proceedings of the AAAI Spring Symposium on Exploring Attitude and Affect in Text (AAAI Technical Report SS-04-07), pages 158-161, Stanford University, CA. AAAI Press.

M. Taboada, J. Brooke, M. Tofiloski, K. Voll, and M. Stede. 2011. Lexicon-based methods for sentiment analysis. Computational Linguistics, 37(2):267-307.

M. Taulé, M. A. Martí, and M. Recasens. 2008. AnCora: Multilevel Annotated Corpora for Catalan and Spanish. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08), pages 96-101, Marrakech, Morocco.

M. Thelwall, K. Buckley, and G. Paltoglou. 2012. Sentiment strength detection for the social web. Journal of the American Society for Information Science and Technology, 63(1):163-173.

K. Toutanova and C. D. Manning. 2000. Enriching the knowledge sources used in a maximum entropy part-of-speech tagger. In Proceedings of the 2000 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, pages 63-70.

P. D. Turney. 2002. Thumbs up or thumbs down?: Semantic orientation applied to unsupervised classification of reviews. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, ACL '02, pages 417-424, Stroudsburg, PA, USA. ACL.

D. Vilares, M. A. Alonso, and C. Gómez-Rodríguez. 2015a. A syntactic approach for opinion mining on Spanish reviews. Natural Language Engineering, 21(1):139-163.

D. Vilares, M. Thelwall, and M. A. Alonso. 2015b. The megaphone of the people? Spanish SentiStrength for real-time analysis of political tweets. Journal of Information Science, 41(6):799-813.