Dataset schema (for string fields, Min/Max are string lengths; for int64 fields, Min/Max are values):

| Field      | Type   | Min   | Max   |
|------------|--------|-------|-------|
| `title`    | string | 5     | 342   |
| `author`   | string | 3     | 2.17k |
| `year`     | int64  | 1.95k | 2.02k |
| `abstract` | string | 0     | 12.7k |
| `pages`    | string | 1     | 702   |
| `queryID`  | string | 4     | 40    |
| `query`    | string | 1     | 300   |
| `paperID`  | string | 0     | 40    |
| `include`  | int64  | 0     | 1     |

Each record below repeats these nine fields in order, one field per line; `nan` marks a missing value (most often an absent abstract).
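A minimal sketch of how a dump with this schema can be loaded and summarized, assuming a CSV export named `citation_screening.csv` (the filename and format are assumptions, as is reading `include` as a binary relevance judgment for the paired `query`):

```python
import pandas as pd

# Load the dump; the CSV filename and export format are assumptions.
df = pd.read_csv("citation_screening.csv")

# Missing abstracts surface as NaN; normalize them to empty strings.
df["abstract"] = df["abstract"].fillna("")

# Treating `include` as a 0/1 relevance label, compute per-query statistics:
# how many candidates each query has and what fraction were included.
stats = df.groupby("queryID")["include"].agg(
    candidates="count", included="sum", rate="mean"
)
print(stats.sort_values("rate", ascending=False))

# Candidate papers judged relevant for a single query.
query_id = "4f929eb557a990cd3062c86c4be157909742245d"
relevant = df[(df["queryID"] == query_id) & (df["include"] == 1)]
print(relevant[["title", "year", "paperID"]])
```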
Information Extraction over Structured Data: Question Answering with {F}reebase
Yao, Xuchen and Van Durme, Benjamin
2014
nan
956--966
4f929eb557a990cd3062c86c4be157909742245d
Knowledge-Based Reasoning Network for Relation Detection
319e572fcddff77513eed8a25effbc7d9ff8ef85
1
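Each block of nine lines is one such record. As an illustration, here is the first record above rendered as a Python dataclass; the class name and the use of `None` for the `nan` abstract are illustrative choices, not part of the dataset:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScreeningRecord:
    title: str
    author: str
    year: int
    abstract: Optional[str]  # "nan" in the dump marks a missing abstract
    pages: str
    queryID: str
    query: str
    paperID: str
    include: int  # 0 or 1

record = ScreeningRecord(
    title="Information Extraction over Structured Data: "
          "Question Answering with {F}reebase",
    author="Yao, Xuchen and Van Durme, Benjamin",
    year=2014,
    abstract=None,
    pages="956--966",
    queryID="4f929eb557a990cd3062c86c4be157909742245d",
    query="Knowledge-Based Reasoning Network for Relation Detection",
    paperID="319e572fcddff77513eed8a25effbc7d9ff8ef85",
    include=1,
)
```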
Word storage does not guarantee accessibility (Stocker des Mots ne Garantit nullement leur Acc{\`e}s) [in {F}rench]
Zock, Michael and Schwab, Didier
2014
nan
312--320
4f929eb557a990cd3062c86c4be157909742245d
Knowledge-Based Reasoning Network for Relation Detection
17b9bfc84ed072cc6e0af5b9b8520b93718b8008
0
Large-scale Semantic Parsing via Schema Matching and Lexicon Extension
Cai, Qingqing and Yates, Alexander
2013
nan
423--433
4f929eb557a990cd3062c86c4be157909742245d
Knowledge-Based Reasoning Network for Relation Detection
80c2d8c691b09f8b4e53f512b9d2641b49fda935
1
Linguistic Models for Analyzing and Detecting Biased Language
Recasens, Marta and Danescu-Niculescu-Mizil, Cristian and Jurafsky, Dan
2013
nan
1650--1659
4f929eb557a990cd3062c86c4be157909742245d
Knowledge-Based Reasoning Network for Relation Detection
2a501b074261e81b9126e80a0a308cfa5e76f8c1
0
Exploiting Syntactic and Shallow Semantic Kernels for Question Answer Classification
Moschitti, Alessandro and Quarteroni, Silvia and Basili, Roberto and Manandhar, Suresh
2007
nan
776--783
4f929eb557a990cd3062c86c4be157909742245d
Knowledge-Based Reasoning Network for Relation Detection
e1163694d71b7372fdb5676f156806dfa6f38a58
1
Accented Pronouns and Unusual Antecedents: A Corpus Study
Kothari, Anubha
2007
nan
150--157
4f929eb557a990cd3062c86c4be157909742245d
Knowledge-Based Reasoning Network for Relation Detection
506d979e7c0d4f07a4e59693bc5b9c8fa6d9223d
0
Improving Semantic Parsing via Answer Type Inference
Yavuz, Semih and Gur, Izzeddin and Su, Yu and Srivatsa, Mudhakar and Yan, Xifeng
2016
nan
149--159
44f7242d96c5e33ecf4122edebe1c36a39c7ec43
Cross-Lingual Fine-Grained Entity Typing
f3594f9d60c98cac88f9033c69c2b666713ed6d6
1
Sentiment Analysis of Tweets in Three {I}ndian Languages
Phani, Shanta and Lahiri, Shibamouli and Biswas, Arindam
2016
In this paper, we describe the results of sentiment analysis on tweets in three Indian languages {--} Bengali, Hindi, and Tamil. We used the recently released SAIL dataset (Patra et al., 2015), and obtained state-of-the-art results in all three languages. Our features are simple, robust, scalable, and language-independent. Further, we show that these simple features provide better results than more complex and language-specific features, in two separate classification tasks. Detailed feature analysis and error analysis have been reported, along with learning curves for Hindi and Bengali.
93--102
44f7242d96c5e33ecf4122edebe1c36a39c7ec43
Cross-Lingual Fine-Grained Entity Typing
c548717c5e0206c65efe4d3814ff545bd59ea936
0
{E}-{BERT}: Efficient-Yet-Effective Entity Embeddings for {BERT}
Poerner, Nina and Waltinger, Ulli and Sch{\"u}tze, Hinrich
2020
We present a novel way of injecting factual knowledge about entities into the pretrained BERT model (Devlin et al., 2019): We align Wikipedia2Vec entity vectors (Yamada et al., 2016) with BERT{'}s native wordpiece vector space and use the aligned entity vectors as if they were wordpiece vectors. The resulting entity-enhanced version of BERT (called E-BERT) is similar in spirit to ERNIE (Zhang et al., 2019) and KnowBert (Peters et al., 2019), but it requires no expensive further pre-training of the BERT encoder. We evaluate E-BERT on unsupervised question answering (QA), supervised relation classification (RC) and entity linking (EL). On all three tasks, E-BERT outperforms BERT and other baselines. We also show quantitatively that the original BERT model is overly reliant on the surface form of entity names (e.g., guessing that someone with an Italian-sounding name speaks Italian), and that E-BERT mitigates this problem.
803--818
44f7242d96c5e33ecf4122edebe1c36a39c7ec43
Cross-Lingual Fine-Grained Entity Typing
2bd5b4aed18400bf1a1cc866d9b8d931aa047290
1
{F}acts2{S}tory: Controlling Text Generation by Key Facts
Orbach, Eyal and Goldberg, Yoav
2020
Recent advancements in self-attention neural network architectures have raised the bar for open-ended text generation. Yet, while current methods are capable of producing a coherent text which is several hundred words long, attaining control over the content that is being generated{---}as well as evaluating it{---}are still open questions. We propose a controlled generation task which is based on expanding a sequence of facts, expressed in natural language, into a longer narrative. We introduce human-based evaluation metrics for this task, as well as a method for deriving a large training dataset. We evaluate three methods on this task, based on fine-tuning pre-trained models. We show that while auto-regressive, unidirectional Language Models such as GPT2 produce better fluency, they struggle to adhere to the requested facts. We propose a plan-and-cloze model (using fine-tuned XLNet) which produces competitive fluency while adhering to the requested content.
2329--2345
44f7242d96c5e33ecf4122edebe1c36a39c7ec43
Cross-Lingual Fine-Grained Entity Typing
e5fe69ec9ac9c8fd48a6e532cd86d3903a375fb2
0
No Noun Phrase Left Behind: Detecting and Typing Unlinkable Entities
Lin, Thomas and {Mausam} and Etzioni, Oren
2012
nan
893--903
44f7242d96c5e33ecf4122edebe1c36a39c7ec43
Cross-Lingual Fine-Grained Entity Typing
cffb556ee3d1e188f4688b71a8608bbe1883bc49
1
La variation prosodique dialectale en fran{\c{c}}ais. Donn{\'e}es et hypoth{\`e}ses (Speech Prosody of Dialectal {F}rench: Data and Hypotheses) [in {F}rench]
Avanzi, Mathieu and Obin, Nicolas and Bordal, Guri and Bardiaux, Alice
2012
nan
457--464
44f7242d96c5e33ecf4122edebe1c36a39c7ec43
Cross-Lingual Fine-Grained Entity Typing
77a02067a2467f4cccba660ae42cd665f69721b8
0
Entities as Experts: Sparse Memory Access with Entity Supervision
F{\'e}vry, Thibault and Baldini Soares, Livio and FitzGerald, Nicholas and Choi, Eunsol and Kwiatkowski, Tom
2020
We focus on the problem of capturing declarative knowledge about entities in the learned parameters of a language model. We introduce a new model{---}Entities as Experts (EaE){---}that can access distinct memories of the entities mentioned in a piece of text. Unlike previous efforts to integrate entity knowledge into sequence models, EaE{'}s entity representations are learned directly from text. We show that EaE{'}s learned representations capture sufficient knowledge to answer TriviaQA questions such as {``}Which Dr. Who villain has been played by Roger Delgado, Anthony Ainley, Eric Roberts?{''}, outperforming an encoder-generator Transformer model with 10x the parameters on this task. According to the Lama knowledge probes, EaE contains more factual knowledge than a similar sized Bert, as well as previous approaches that integrate external sources of entity knowledge. Because EaE associates parameters with specific entities, it only needs to access a fraction of its parameters at inference time, and we show that the correct identification and representation of entities is essential to EaE{'}s performance.
4937--4951
44f7242d96c5e33ecf4122edebe1c36a39c7ec43
Cross-Lingual Fine-Grained Entity Typing
016368185723d0ec99aafa4b5927300590d0647f
1
Pratiques d{'}{\'e}valuation en {ASR} et biais de performance (Evaluation methodology in {ASR} and performance bias)
Garnerin, Mahault and Rossato, Solange and Besacier, Laurent
2020
We offer a reflection on evaluation practices for automatic speech recognition (ASR) systems. After defining the notion of discrimination from a legal standpoint and the notion of fairness in artificial intelligence systems, we examine current practices in large evaluation campaigns. We observe that the variability of speech, and in particular that of the individual, is not taken into account in current evaluation protocols, which makes it impossible to study potential biases in these systems.
1--9
44f7242d96c5e33ecf4122edebe1c36a39c7ec43
Cross-Lingual Fine-Grained Entity Typing
346f23afb93a67fe044f475dff61776f051dec6d
0
Entity Linking via Joint Encoding of Types, Descriptions, and Context
Gupta, Nitish and Singh, Sameer and Roth, Dan
2017
For accurate entity linking, we need to capture various information aspects of an entity, such as its description in a KB, contexts in which it is mentioned, and structured knowledge. Additionally, a linking system should work on texts from different domains without requiring domain-specific training data or hand-engineered features. In this work we present a neural, modular entity linking system that learns a unified dense representation for each entity using multiple sources of information, such as its description, contexts around its mentions, and its fine-grained types. We show that the resulting entity linking system is effective at combining these sources, and performs competitively, sometimes out-performing current state-of-the-art systems across datasets, without requiring any domain-specific training data or hand-engineered features. We also show that our model can effectively {``}embed{''} entities that are new to the KB, and is able to link its mentions accurately.
2681--2690
44f7242d96c5e33ecf4122edebe1c36a39c7ec43
Cross-Lingual Fine-Grained Entity Typing
2927dfc481446568fc9108795570eb4d416be021
1
Group Linguistic Bias Aware Neural Response Generation
Wang, Jianan and Wang, Xin and Li, Fang and Xu, Zhen and Wang, Zhuoran and Wang, Baoxun
2017
For practical chatbots, one of the essential factor for improving user experience is the capability of customizing the talking style of the agents, that is, to make chatbots provide responses meeting users{'} preference on language styles, topics, etc. To address this issue, this paper proposes to incorporate linguistic biases, which implicitly involved in the conversation corpora generated by human groups in the Social Network Services (SNS), into the encoder-decoder based response generator. By attaching a specially designed neural component to dynamically control the impact of linguistic biases in response generation, a Group Linguistic Bias Aware Neural Response Generation (GLBA-NRG) model is eventually presented. The experimental results on the dataset from the Chinese SNS show that the proposed architecture outperforms the current response generating models by producing both meaningful and vivid responses with customized styles.
1--10
44f7242d96c5e33ecf4122edebe1c36a39c7ec43
Cross-Lingual Fine-Grained Entity Typing
63ad6c788f36e38e615abaadf29279c8e42f0742
0
Improving Fine-grained Entity Typing with Entity Linking
Dai, Hongliang and Du, Donghong and Li, Xin and Song, Yangqiu
2019
Fine-grained entity typing is a challenging problem since it usually involves a relatively large tag set and may require to understand the context of the entity mention. In this paper, we use entity linking to help with the fine-grained entity type classification process. We propose a deep neural model that makes predictions based on both the context and the information obtained from entity linking results. Experimental results on two commonly used datasets demonstrates the effectiveness of our approach. On both datasets, it achieves more than 5{\%} absolute strict accuracy improvement over the state of the art.
6210--6215
44f7242d96c5e33ecf4122edebe1c36a39c7ec43
Cross-Lingual Fine-Grained Entity Typing
b74b272c7fe881614f3eb8c2504b037439571eec
1
An Unsupervised Query Rewriting Approach Using N-gram Co-occurrence Statistics to Find Similar Phrases in Large Text Corpora
Moen, Hans and Peltonen, Laura-Maria and Suhonen, Henry and Matinolli, Hanna-Maria and Mieronkoski, Riitta and Telen, Kirsi and Terho, Kirsi and Salakoski, Tapio and Salanter{\"a}, Sanna
2019
We present our work towards developing a system that should find, in a large text corpus, contiguous phrases expressing similar meaning as a query phrase of arbitrary length. Depending on the use case, this task can be seen as a form of (phrase-level) query rewriting. The suggested approach works in a generative manner, is unsupervised and uses a combination of a semantic word n-gram model, a statistical language model and a document search engine. A central component is a distributional semantic model containing word n-grams vectors (or embeddings) which models semantic similarities between n-grams of different order. As data we use a large corpus of PubMed abstracts. The presented experiment is based on manual evaluation of extracted phrases for arbitrary queries provided by a group of evaluators. The results indicate that the proposed approach is promising and that the use of distributional semantic models trained with uni-, bi- and trigrams seems to work better than a more traditional unigram model.
131--139
44f7242d96c5e33ecf4122edebe1c36a39c7ec43
Cross-Lingual Fine-Grained Entity Typing
1fd2612b598db75f51d8db2cc4cbff49b6bf311e
0
Multilingual {LAMA}: Investigating Knowledge in Multilingual Pretrained Language Models
Kassner, Nora and Dufter, Philipp and Sch{\"u}tze, Hinrich
2021
Recently, it has been found that monolingual English language models can be used as knowledge bases. Instead of structural knowledge base queries, masked sentences such as {``}Paris is the capital of [MASK]{''} are used as probes. We translate the established benchmarks TREx and GoogleRE into 53 languages. Working with mBERT, we investigate three questions. (i) Can mBERT be used as a multilingual knowledge base? Most prior work only considers English. Extending research to multiple languages is important for diversity and accessibility. (ii) Is mBERT{'}s performance as knowledge base language-independent or does it vary from language to language? (iii) A multilingual model is trained on more text, e.g., mBERT is trained on 104 Wikipedias. Can mBERT leverage this for better performance? We find that using mBERT as a knowledge base yields varying performance across languages and pooling predictions across languages improves performance. Conversely, mBERT exhibits a language bias; e.g., when queried in Italian, it tends to predict Italy as the country of origin.
3250--3258
44f7242d96c5e33ecf4122edebe1c36a39c7ec43
Cross-Lingual Fine-Grained Entity Typing
fcfc9648561a221750b8085790ad9ba1bebb1800
1
Non-Parametric Unsupervised Domain Adaptation for Neural Machine Translation
Zheng, Xin and Zhang, Zhirui and Huang, Shujian and Chen, Boxing and Xie, Jun and Luo, Weihua and Chen, Jiajun
2021
Recently, kNN-MT (Khandelwal et al., 2020) has shown the promising capability of directly incorporating the pre-trained neural machine translation (NMT) model with domain-specific token-level k-nearest-neighbor (kNN) retrieval to achieve domain adaptation without retraining. Despite being conceptually attractive, it heavily relies on high-quality in-domain parallel corpora, limiting its capability on unsupervised domain adaptation, where in-domain parallel corpora are scarce or nonexistent. In this paper, we propose a novel framework that directly uses in-domain monolingual sentences in the target language to construct an effective datastore for k-nearest-neighbor retrieval. To this end, we first introduce an autoencoder task based on the target language, and then insert lightweight adapters into the original NMT model to map the token-level representation of this task to the ideal representation of the translation task. Experiments on multi-domain datasets demonstrate that our proposed approach significantly improves the translation accuracy with target-side monolingual data, while achieving comparable performance with back-translation. Our implementation is open-sourced at \url{https://github.com/zhengxxn/UDA-KNN}.
4234--4241
44f7242d96c5e33ecf4122edebe1c36a39c7ec43
Cross-Lingual Fine-Grained Entity Typing
89d7a037646d37bc80bcd0a0b55d2a1550369c42
0
Neural Architectures for Fine-grained Entity Type Classification
Shimaoka, Sonse and Stenetorp, Pontus and Inui, Kentaro and Riedel, Sebastian
2017
In this work, we investigate several neural network architectures for fine-grained entity type classification and make three key contributions. Despite being a natural comparison and addition, previous work on attentive neural architectures have not considered hand-crafted features and we combine these with learnt features and establish that they complement each other. Additionally, through quantitative analysis we establish that the attention mechanism learns to attend over syntactic heads and the phrase containing the mention, both of which are known to be strong hand-crafted features for our task. We introduce parameter sharing between labels through a hierarchical encoding method, that in low-dimensional projections show clear clusters for each type hierarchy. Lastly, despite using the same evaluation dataset, the literature frequently compare models trained using different data. We demonstrate that the choice of training data has a drastic impact on performance, which decreases by as much as 9.85{\%} loose micro F1 score for a previously proposed method. Despite this discrepancy, our best model achieves state-of-the-art results with 75.36{\%} loose micro F1 score on the well-established Figer (GOLD) dataset and we report the best results for models trained using publicly available data for the OntoNotes dataset with 64.93{\%} loose micro F1 score.
1271--1280
44f7242d96c5e33ecf4122edebe1c36a39c7ec43
Cross-Lingual Fine-Grained Entity Typing
800dd1672789fe97513b84e65e75e370b10d6c13
1
Adaptation au domaine pour l{'}analyse morpho-syntaxique (Domain Adaptation for {P}o{S} tagging)
Bartenlian, {\'E}l{\'e}onor and Lacour, Margot and Labeau, Matthieu and Allauzen, Alexandre and Wisniewski, Guillaume and Yvon, Fran{\c{c}}ois
2017
This work seeks to understand why the performance of a part-of-speech tagger drops sharply when it is used on out-of-domain data. With a toy experiment, we show that this behavior can be due to lexicalized features being masked by non-lexicalized features. We propose several models that attempt to reduce this effect.
134--141
44f7242d96c5e33ecf4122edebe1c36a39c7ec43
Cross-Lingual Fine-Grained Entity Typing
3a2f45bd175f279cc219afcadc51131feb35b0d9
0
A Joint Model for Entity Analysis: Coreference, Typing, and Linking
Durrett, Greg and Klein, Dan
2014
We present a joint model of three core tasks in the entity analysis stack: coreference resolution (within-document clustering), named entity recognition (coarse semantic typing), and entity linking (matching to Wikipedia entities). Our model is formally a structured conditional random field. Unary factors encode local features from strong baselines for each task. We then add binary and ternary factors to capture cross-task interactions, such as the constraint that coreferent mentions have the same semantic type. On the ACE 2005 and OntoNotes datasets, we achieve state-of-the-art results for all three tasks. Moreover, joint modeling improves performance on each task over strong independent baselines.
477--490
44f7242d96c5e33ecf4122edebe1c36a39c7ec43
Cross-Lingual Fine-Grained Entity Typing
28eb033eee5f51c5e5389cbb6b777779203a6778
1
Developing a {S}ina {W}eibo Incident Monitor for Disasters
Robinson, Bella and Bai, Hua and Power, Robert and Lin, Xunguo
2014
nan
59--68
44f7242d96c5e33ecf4122edebe1c36a39c7ec43
Cross-Lingual Fine-Grained Entity Typing
38df094b353a3389c8dff948a788fbebd4ddbadd
0
A {C}hinese Corpus for Fine-grained Entity Typing
Lee, Chin and Dai, Hongliang and Song, Yangqiu and Li, Xin
2020
Fine-grained entity typing is a challenging task with wide applications. However, most existing datasets for this task are in English. In this paper, we introduce a corpus for Chinese fine-grained entity typing that contains 4,800 mentions manually labeled through crowdsourcing. Each mention is annotated with free-form entity types. To make our dataset useful in more possible scenarios, we also categorize all the fine-grained types into 10 general types. Finally, we conduct experiments with some neural models whose structures are typical in fine-grained entity typing and show how well they perform on our dataset. We also show the possibility of improving Chinese fine-grained entity typing through cross-lingual transfer learning.
4451--4457
44f7242d96c5e33ecf4122edebe1c36a39c7ec43
Cross-Lingual Fine-Grained Entity Typing
853986783fdc27c7cebb04ba638dd7fe48c5de23
1
Unsupervised Discovery of Firm-Level Variables in Earnings Call Transcript Embeddings
Edmiston, Daniel and Park, Ziho
2020
nan
34--39
44f7242d96c5e33ecf4122edebe1c36a39c7ec43
Cross-Lingual Fine-Grained Entity Typing
c47ed92327d8dd7c8780b3014de4d242fb3b9f5f
0
Interpretable Entity Representations through Large-Scale Typing
Onoe, Yasumasa and Durrett, Greg
2020
In standard methodology for natural language processing, entities in text are typically embedded in dense vector spaces with pre-trained models. The embeddings produced this way are effective when fed into downstream models, but they require end-task fine-tuning and are fundamentally difficult to interpret. In this paper, we present an approach to creating entity representations that are human readable and achieve high performance on entity-related tasks out of the box. Our representations are vectors whose values correspond to posterior probabilities over fine-grained entity types, indicating the confidence of a typing model{'}s decision that the entity belongs to the corresponding type. We obtain these representations using a fine-grained entity typing model, trained either on supervised ultra-fine entity typing data (Choi et al. 2018) or distantly-supervised examples from Wikipedia. On entity probing tasks involving recognizing entity identity, our embeddings used in parameter-free downstream models achieve competitive performance with ELMo- and BERT-based embeddings in trained models. We also show that it is possible to reduce the size of our type set in a learning-based way for particular domains. Finally, we show that these embeddings can be post-hoc modified through a small number of rules to incorporate domain knowledge and improve performance.
612--624
44f7242d96c5e33ecf4122edebe1c36a39c7ec43
Cross-Lingual Fine-Grained Entity Typing
782a50a48ba5d32839631254285d989bfadfd193
1
A Pointer Network Architecture for Joint Morphological Segmentation and Tagging
Seker, Amit and Tsarfaty, Reut
2020
Morphologically Rich Languages (MRLs) such as Arabic, Hebrew and Turkish often require Morphological Disambiguation (MD), i.e., the prediction of morphological decomposition of tokens into morphemes, early in the pipeline. Neural MD may be addressed as a simple pipeline, where segmentation is followed by sequence tagging, or as an end-to-end model, predicting morphemes from raw tokens. Both approaches are sub-optimal; the former is heavily prone to error propagation, and the latter does not enjoy explicit access to the basic processing units called morphemes. This paper offers MD architecture that combines the symbolic knowledge of morphemes with the learning capacity of neural end-to-end modeling. We propose a new, general and easy-to-implement Pointer Network model where the input is a morphological lattice and the output is a sequence of indices pointing at a single disambiguated path of morphemes. We demonstrate the efficacy of the model on segmentation and tagging, for Hebrew and Turkish texts, based on their respective Universal Dependencies (UD) treebanks. Our experiments show that with complete lattices, our model outperforms all shared-task results on segmenting and tagging these languages. On the SPMRL treebank, our model outperforms all previously reported results for Hebrew MD in realistic scenarios.
4368--4378
44f7242d96c5e33ecf4122edebe1c36a39c7ec43
Cross-Lingual Fine-Grained Entity Typing
bb0b0c9853f658db9d6a7f4aa90c24906ba4b1dc
0
{E}ntity {L}inking in 100 {L}anguages
Botha, Jan A. and Shan, Zifei and Gillick, Daniel
2020
We propose a new formulation for multilingual entity linking, where language-specific mentions resolve to a language-agnostic Knowledge Base. We train a dual encoder in this new setting, building on prior work with improved feature representation, negative mining, and an auxiliary entity-pairing task, to obtain a single entity retrieval model that covers 100+ languages and 20 million entities. The model outperforms state-of-the-art results from a far more limited cross-lingual linking task. Rare entities and low-resource languages pose challenges at this large-scale, so we advocate for an increased focus on zero- and few-shot evaluation. To this end, we provide Mewsli-9, a large new multilingual dataset matched to our setting, and show how frequency-based analysis provided key insights for our model and training enhancements.
7833--7845
44f7242d96c5e33ecf4122edebe1c36a39c7ec43
Cross-Lingual Fine-Grained Entity Typing
04a7d9f0388ded93c1ec16e36a6df3cd44cb95b0
1
{SubjQA}: {A} {D}ataset for {S}ubjectivity and {R}eview {C}omprehension
Bjerva, Johannes and Bhutani, Nikita and Golshan, Behzad and Tan, Wang-Chiew and Augenstein, Isabelle
2020
Subjectivity is the expression of internal opinions or beliefs which cannot be objectively observed or verified, and has been shown to be important for sentiment analysis and word-sense disambiguation. Furthermore, subjectivity is an important aspect of user-generated data. In spite of this, subjectivity has not been investigated in contexts where such data is widespread, such as in question answering (QA). We develop a new dataset which allows us to investigate this relationship. We find that subjectivity is an important feature in the case of QA, albeit with more intricate interactions between subjectivity and QA performance than found in previous work on sentiment analysis. For instance, a subjective question may or may not be associated with a subjective answer. We release an English QA dataset (SubjQA) based on customer reviews, containing subjectivity annotations for questions and answer spans across 6 domains.
5480--5494
44f7242d96c5e33ecf4122edebe1c36a39c7ec43
Cross-Lingual Fine-Grained Entity Typing
14ba0a8fdb8c6b4bfd6bc8c434cd277b6e7a4156
0
Ultra-Fine Entity Typing
Choi, Eunsol and Levy, Omer and Choi, Yejin and Zettlemoyer, Luke
2018
We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict ultra-fine types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state of the art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets.
87--96
44f7242d96c5e33ecf4122edebe1c36a39c7ec43
Cross-Lingual Fine-Grained Entity Typing
4157834ed2d2fea6b6f652a72a9d0487edbc9f57
1
Quantifying the Visual Concreteness of Words and Topics in Multimodal Datasets
Hessel, Jack and Mimno, David and Lee, Lillian
2018
Multimodal machine learning algorithms aim to learn visual-textual correspondences. Previous work suggests that concepts with concrete visual manifestations may be easier to learn than concepts with abstract ones. We give an algorithm for automatically computing the visual concreteness of words and topics within multimodal datasets. We apply the approach in four settings, ranging from image captions to images/text scraped from historical books. In addition to enabling explorations of concepts in multimodal datasets, our concreteness scores predict the capacity of machine learning algorithms to learn textual/visual relationships. We find that 1) concrete concepts are indeed easier to learn; 2) the large number of algorithms we consider have similar failure cases; 3) the precise positive relationship between concreteness and performance varies between datasets. We conclude with recommendations for using concreteness scores to facilitate future multimodal research.
2194--2205
44f7242d96c5e33ecf4122edebe1c36a39c7ec43
Cross-Lingual Fine-Grained Entity Typing
d02bf4082850a667bf0b7b6205df1cf9c1899233
0
Noise Mitigation for Neural Entity Typing and Relation Extraction
Yaghoobzadeh, Yadollah and Adel, Heike and Sch{\"u}tze, Hinrich
2017
In this paper, we address two different types of noise in information extraction models: noise from distant supervision and noise from pipeline input features. Our target tasks are entity typing and relation extraction. For the first noise type, we introduce multi-instance multi-label learning algorithms using neural network models, and apply them to fine-grained entity typing for the first time. Our model outperforms the state-of-the-art supervised approach which uses global embeddings of entities. For the second noise type, we propose ways to improve the integration of noisy entity type predictions into relation extraction. Our experiments show that probabilistic predictions are more robust than discrete predictions and that joint training of the two tasks performs best.
1183--1194
44f7242d96c5e33ecf4122edebe1c36a39c7ec43
Cross-Lingual Fine-Grained Entity Typing
b0b0c68c3457faa85ed3bbd3252ac65ba55da5c6
1
Greedy Transition-Based Dependency Parsing with Stack {LSTM}s
Ballesteros, Miguel and Dyer, Chris and Goldberg, Yoav and Smith, Noah A.
2017
We introduce a greedy transition-based parser that learns to represent parser states using recurrent neural networks. Our primary innovation that enables us to do this efficiently is a new control structure for sequential neural networks{---}the stack long short-term memory unit (LSTM). Like the conventional stack data structures used in transition-based parsers, elements can be pushed to or popped from the top of the stack in constant time, but, in addition, an LSTM maintains a continuous space embedding of the stack contents. Our model captures three facets of the parser{'}s state: (i) unbounded look-ahead into the buffer of incoming words, (ii) the complete history of transition actions taken by the parser, and (iii) the complete contents of the stack of partially built tree fragments, including their internal structures. In addition, we compare two different word representations: (i) standard word vectors based on look-up tables and (ii) character-based models of words. Although standard word embedding models work well in all languages, the character-based models improve the handling of out-of-vocabulary words, particularly in morphologically rich languages. Finally, we discuss the use of dynamic oracles in training the parser. During training, dynamic oracles alternate between sampling parser states from the training data and from the model as it is being learned, making the model more robust to the kinds of errors that will be made at test time. Training our model with dynamic oracles yields a linear-time greedy parser with very competitive performance.
311--347
44f7242d96c5e33ecf4122edebe1c36a39c7ec43
Cross-Lingual Fine-Grained Entity Typing
df80d1d53e8c8e17eb65174addeec668b8a59b71
0
Corpus-level Fine-grained Entity Typing Using Contextual Information
Yaghoobzadeh, Yadollah and Sch{\"u}tze, Hinrich
2015
nan
715--725
44f7242d96c5e33ecf4122edebe1c36a39c7ec43
Cross-Lingual Fine-Grained Entity Typing
8b298ce5f81c5ffd63f5c5ab3634dbfd350a92e4
1
{S}plusplus: A Feature-Rich Two-stage Classifier for Sentiment Analysis of Tweets
Dong, Li and Wei, Furu and Yin, Yichun and Zhou, Ming and Xu, Ke
2015
nan
515--519
44f7242d96c5e33ecf4122edebe1c36a39c7ec43
Cross-Lingual Fine-Grained Entity Typing
e167441f84d331257f13cc52a0054a5d9b8681c8
0
{O}nto{N}otes: The 90{\%} Solution
Hovy, Eduard and Marcus, Mitchell and Palmer, Martha and Ramshaw, Lance and Weischedel, Ralph
2006
nan
57--60
44f7242d96c5e33ecf4122edebe1c36a39c7ec43
Cross-Lingual Fine-Grained Entity Typing
e54d8b07ef659f9ee2671441c4355e414e408836
1
Estimation of Consistent Probabilistic Context-free Grammars
Nederhof, Mark-Jan and Satta, Giorgio
2006
nan
343--350
44f7242d96c5e33ecf4122edebe1c36a39c7ec43
Cross-Lingual Fine-Grained Entity Typing
080c719a3b22249d6e967f3f42200b1801cc622c
0
Improving Semantic Parsing via Answer Type Inference
Yavuz, Semih and Gur, Izzeddin and Su, Yu and Srivatsa, Mudhakar and Yan, Xifeng
2016
nan
149--159
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
f3594f9d60c98cac88f9033c69c2b666713ed6d6
1
{A}lternations: From Lexicon to Grammar And Back Again
Lopatkov{\'a}, Mark{\'e}ta and Kettnerov{\'a}, V{\'a}clava
2016
An excellent example of a phenomenon bridging a lexicon and a grammar is provided by grammaticalized alternations (e.g., passivization, reflexivity, and reciprocity): these alternations represent productive grammatical processes which are, however, lexically determined. While grammaticalized alternations keep lexical meaning of verbs unchanged, they are usually characterized by various changes in their morphosyntactic structure. In this contribution, we demonstrate on the example of reciprocity and its representation in the valency lexicon of Czech verbs, VALLEX how a linguistic description of complex (and still systemic) changes characteristic of grammaticalized alternations can benefit from an integration of grammatical rules into a valency lexicon. In contrast to other types of grammaticalized alternations, reciprocity in Czech has received relatively little attention although it closely interacts with various linguistic phenomena (e.g., with light verbs, diatheses, and reflexivity).
18--27
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
ea59a9c0cee02ae9b95fe0f74d1051b60a88f061
0
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations
nan
2021
nan
nan
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
a0650db3ae712b04fd2586fe5770b2c6b0536dcb
1
Multi Output Learning using Task Wise Attention for Predicting Binary Properties of Tweets : Shared-Task-On-Fighting the {COVID}-19 Infodemic
Suhane, Ayush and Kowshik, Shreyas
2021
In this paper, we describe our system for the shared task on Fighting the COVID-19 Infodemic in the English Language. Our proposed architecture consists of a multi-output classification model for the seven tasks, with a task-wise multi-head attention layer for inter-task information aggregation. This was built on top of the Bidirectional Encoder Representations obtained from the RoBERTa Transformer. We were able to achieve a mean F1 score of 0.891 on the test data, leading us to the second position on the test-set leaderboard.
110--114
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
637b0db85dcd196073865a55fccfa89d16916f5d
0
Knowledge Base Question Answering via Encoding of Complex Query Graphs
Luo, Kangqi and Lin, Fengli and Luo, Xusheng and Zhu, Kenny
2018
Answering complex questions that involve multiple entities and multiple relations using a standard knowledge base is an open and challenging task. Most existing KBQA approaches focus on simpler questions and do not work very well on complex questions because they were not able to simultaneously represent the question and the corresponding complex query structure. In this work, we encode such complex query structure into a uniform vector representation, and thus successfully capture the interactions between individual semantic components within a complex question. This approach consistently outperforms existing methods on complex questions while staying competitive on simple questions.
2185--2194
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
1bfbb6429e56286cc92dce4fc34f2f45f9f55ae7
1
Localizing Moments in Video with Temporal Language
Hendricks, Lisa Anne and Wang, Oliver and Shechtman, Eli and Sivic, Josef and Darrell, Trevor and Russell, Bryan
2018
Localizing moments in a longer video via natural language queries is a new, challenging task at the intersection of language and video understanding. Though moment localization with natural language is similar to other language and vision tasks like natural language object retrieval in images, moment localization offers an interesting opportunity to model temporal dependencies and reasoning in text. We propose a new model that explicitly reasons about different temporal segments in a video, and shows that temporal context is important for localizing phrases which include temporal language. To benchmark whether our model, and other recent video localization models, can effectively reason about temporal language, we collect the novel TEMPOral reasoning in video and language (TEMPO) dataset. Our dataset consists of two parts: a dataset with real videos and template sentences (TEMPO - Template Language) which allows for controlled studies on temporal language, and a human language dataset which consists of temporal sentences annotated by humans (TEMPO - Human Language).
1380--1390
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
0b1ff82db09672656157874718860bee942483cf
0
Sketch-Driven Regular Expression Generation from Natural Language and Examples
Ye, Xi and Chen, Qiaochu and Wang, Xinyu and Dillig, Isil and Durrett, Greg
2020
Recent systems for converting natural language descriptions into regular expressions (regexes) have achieved some success, but typically deal with short, formulaic text and can only produce simple regexes. Real-world regexes are complex, hard to describe with brief sentences, and sometimes require examples to fully convey the user{'}s intent. We present a framework for regex synthesis in this setting where both natural language (NL) and examples are available. First, a semantic parser (either grammar-based or neural) maps the natural language description into an intermediate sketch, which is an incomplete regex containing holes to denote missing components. Then a program synthesizer searches over the regex space defined by the sketch and finds a regex that is consistent with the given string examples. Our semantic parser can be trained purely from weak supervision based on correctness of the synthesized regex, or it can leverage heuristically derived sketches. We evaluate on two prior datasets (Kushman and Barzilay 2013; Locascio et al. 2016) and a real-world dataset from Stack Overflow. Our system achieves state-of-the-art performance on the prior datasets and solves 57{\%} of the real-world dataset, which existing neural systems completely fail on.
679--694
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
1bb27a0180f0c5fdd66cf34864752dfb1d6d94d2
1
Unsupervised Cross-lingual Representation Learning at Scale
Conneau, Alexis and Khandelwal, Kartikay and Goyal, Naman and Chaudhary, Vishrav and Wenzek, Guillaume and Guzm{\'a}n, Francisco and Grave, Edouard and Ott, Myle and Zettlemoyer, Luke and Stoyanov, Veselin
2020
This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +14.6{\%} average accuracy on XNLI, +13{\%} average F1 score on MLQA, and +2.4{\%} F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 15.7{\%} in XNLI accuracy for Swahili and 11.4{\%} for Urdu over previous XLM models. We also present a detailed empirical analysis of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make our code and models publicly available.
8440--8451
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
6fec3e579c7cd4f13bdabbee2b6ac2e8ff5941c6
0
Efficient One-Pass End-to-End Entity Linking for Questions
Li, Belinda Z. and Min, Sewon and Iyer, Srinivasan and Mehdad, Yashar and Yih, Wen-tau
2020
We present ELQ, a fast end-to-end entity linking model for questions, which uses a biencoder to jointly perform mention detection and linking in one pass. Evaluated on WebQSP and GraphQuestions with extended annotations that cover multiple entities per question, ELQ outperforms the previous state of the art by a large margin of +12.7{\%} and +19.6{\%} F1, respectively. With a very fast inference time (1.57 examples/s on a single CPU), ELQ can be useful for downstream question answering systems. In a proof-of-concept experiment, we demonstrate that using ELQ significantly improves the downstream QA performance of GraphRetriever.
6433--6441
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
bdf67f2655dd9b015bb0f34ebde87728117db6e3
1
Do Models of Mental Health Based on Social Media Data Generalize?
Harrigian, Keith and Aguirre, Carlos and Dredze, Mark
2020
Proxy-based methods for annotating mental health status in social media have grown popular in computational research due to their ability to gather large training samples. However, an emerging body of literature has raised new concerns regarding the validity of these types of methods for use in clinical applications. To further understand the robustness of distantly supervised mental health models, we explore the generalization ability of machine learning classifiers trained to detect depression in individuals across multiple social media platforms. Our experiments not only reveal that substantial loss occurs when transferring between platforms, but also that there exist several unreliable confounding factors that may enable researchers to overestimate classification performance. Based on these results, we enumerate recommendations for future mental health dataset construction.
3774--3788
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
91075e07f1ffcd7ee0b3ed927925cfca05f5e41e
0
Recovering Question Answering Errors via Query Revision
Yavuz, Semih and Gur, Izzeddin and Su, Yu and Yan, Xifeng
2017
The existing factoid QA systems often lack a post-inspection component that can help models recover from their own mistakes. In this work, we propose to crosscheck the corresponding KB relations behind the predicted answers and identify potential inconsistencies. Instead of developing a new model that accepts evidences collected from these relations, we choose to plug them back to the original questions directly and check if the revised question makes sense or not. A bidirectional LSTM is applied to encode revised questions. We develop a scoring mechanism over the revised question encodings to refine the predictions of a base QA system. This approach can improve the F1 score of STAGG (Yih et al., 2015), one of the leading QA systems, from 52.5{\%} to 53.9{\%} on WEBQUESTIONS data.
903--909
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
d03e4fa280ab64461027b3f0b477ac23171d7649
1
Neural Paraphrase Identification of Questions with Noisy Pretraining
Tomar, Gaurav Singh and Duque, Thyago and T{\"a}ckstr{\"o}m, Oscar and Uszkoreit, Jakob and Das, Dipanjan
2017
We present a solution to the problem of paraphrase identification of questions. We focus on a recent dataset of question pairs annotated with binary paraphrase labels and show that a variant of the decomposable attention model (replacing the word embeddings of the decomposable attention model of Parikh et al. 2016 with character n-gram representations) results in accurate performance on this task, while being far simpler than many competing neural architectures. Furthermore, when the model is pretrained on a noisy dataset of automatically collected question paraphrases, it obtains the best reported performance on the dataset.
142--147
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
f056fb8cca0c30f59c22513d0dcd444f53cfa151
0
Case-based Reasoning for Natural Language Queries over Knowledge Bases
Das, Rajarshi and Zaheer, Manzil and Thai, Dung and Godbole, Ameya and Perez, Ethan and Lee, Jay Yoon and Tan, Lizhen and Polymenakos, Lazaros and McCallum, Andrew
2021
It is often challenging to solve a complex problem from scratch, but much easier if we can access other similar problems with their solutions {---} a paradigm known as case-based reasoning (CBR). We propose a neuro-symbolic CBR approach (CBR-KBQA) for question answering over large knowledge bases. CBR-KBQA consists of a nonparametric memory that stores cases (question and logical forms) and a parametric model that can generate a logical form for a new question by retrieving cases that are relevant to it. On several KBQA datasets that contain complex questions, CBR-KBQA achieves competitive performance. For example, on the CWQ dataset, CBR-KBQA outperforms the current state of the art by 11{\%} on accuracy. Furthermore, we show that CBR-KBQA is capable of using new cases \textit{without} any further training: by incorporating a few human-labeled examples in the case memory, CBR-KBQA is able to successfully generate logical forms containing unseen KB entities as well as relations.
9594--9611
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
a07a94168608322600fd3cab54df1410b96852b6
1
Frozen Pretrained Transformers for Neural Sign Language Translation
De Coster, Mathieu and D{'}Oosterlinck, Karel and Pizurica, Marija and Rabaey, Paloma and Verlinden, Severine and Van Herreweghe, Mieke and Dambre, Joni
2021
One of the major challenges in sign language translation from a sign language to a spoken language is the lack of parallel corpora. Recent works have achieved promising results on the RWTH-PHOENIX-Weather 2014T dataset, which consists of over eight thousand parallel sentences between German sign language and German. However, from the perspective of neural machine translation, this is still a tiny dataset. To improve the performance of models trained on small datasets, transfer learning can be used. While this has been previously applied in sign language translation for feature extraction, to the best of our knowledge, pretrained language models have not yet been investigated. We use pretrained BERT-base and mBART-50 models to initialize our sign language video to spoken language text translation model. To mitigate overfitting, we apply the frozen pretrained transformer technique: we freeze the majority of parameters during training. Using a pretrained BERT model, we outperform a baseline trained from scratch by 1 to 2 BLEU-4. Our results show that pretrained language models can be used to improve sign language translation performance and that the self-attention patterns in BERT transfer in zero-shot to the encoder and decoder of sign language translation models.
88--97
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
7cc9d291d947c3321e74a909a0984cb233a9f6e2
0
{T}ransfer{N}et: An Effective and Transparent Framework for Multi-hop Question Answering over Relation Graph
Shi, Jiaxin and Cao, Shulin and Hou, Lei and Li, Juanzi and Zhang, Hanwang
2021
Multi-hop Question Answering (QA) is a challenging task because it requires precise reasoning with entity relations at every step towards the answer. The relations can be represented in terms of labels in knowledge graph (e.g., spouse) or text in text corpus (e.g., they have been married for 26 years). Existing models usually infer the answer by predicting the sequential relation path or aggregating the hidden graph features. The former is hard to optimize, and the latter lacks interpretability. In this paper, we propose TransferNet, an effective and transparent model for multi-hop QA, which supports both label and text relations in a unified framework. TransferNet jumps across entities at multiple steps. At each step, it attends to different parts of the question, computes activated scores for relations, and then transfer the previous entity scores along activated relations in a differentiable way. We carry out extensive experiments on three datasets and demonstrate that TransferNet surpasses the state-of-the-art models by a large margin. In particular, on MetaQA, it achieves 100{\%} accuracy in 2-hop and 3-hop questions. By qualitative analysis, we show that TransferNet has transparent and interpretable intermediate results.
4149--4158
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
53cffcd498d78b68c7c22f6fec5760be4c8a368c
1
Risk Minimization for Zero-shot Sequence Labeling
Hu, Zechuan and Jiang, Yong and Bach, Nguyen and Wang, Tao and Huang, Zhongqiang and Huang, Fei and Tu, Kewei
2021
Zero-shot sequence labeling aims to build a sequence labeler without human-annotated datasets. One straightforward approach is utilizing existing systems (source models) to generate pseudo-labeled datasets and train a target sequence labeler accordingly. However, due to the gap between the source and the target languages/domains, this approach may fail to recover the true labels. In this paper, we propose a novel unified framework for zero-shot sequence labeling with minimum risk training and design a new decomposable risk function that models the relations between the predicted labels from the source models and the true labels. By making the risk function trainable, we draw a connection between minimum risk training and latent variable model learning. We propose a unified learning algorithm based on the expectation maximization (EM) algorithm. We extensively evaluate our proposed approaches on cross-lingual/domain sequence labeling tasks over twenty-one datasets. The results show that our approaches outperform state-of-the-art baseline systems.
4909--4920
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
961a9f9529a37484f216d5c4086d277c2fffa89e
0
{R}e{T}ra{C}k: A Flexible and Efficient Framework for Knowledge Base Question Answering
Chen, Shuang and Liu, Qian and Yu, Zhiwei and Lin, Chin-Yew and Lou, Jian-Guang and Jiang, Feng
2021
We present Retriever-Transducer-Checker (ReTraCk), a neural semantic parsing framework for large scale knowledge base question answering (KBQA). ReTraCk is designed as a modular framework to maintain high flexibility. It includes a retriever to retrieve relevant KB items efficiently, a transducer to generate logical form with syntax correctness guarantees and a checker to improve transduction procedure. ReTraCk is ranked at top1 overall performance on the GrailQA leaderboard and obtains highly competitive performance on the typical WebQuestionsSP benchmark. Our system can interact with users timely, demonstrating the efficiency of the proposed framework.
325--336
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
ebc64974e9e0021984a0158b3c04b60327730a88
1
Data-driven Identification of Idioms in Song Lyrics
Amin, Miriam and Fankhauser, Peter and Kupietz, Marc and Schneider, Roman
2021
The automatic recognition of idioms poses a challenging problem for NLP applications. Whereas native speakers can intuitively handle multiword expressions whose compositional meanings are hard to trace back to individual word semantics, there is still ample scope for improvement regarding computational approaches. We assume that idiomatic constructions can be characterized by gradual intensities of semantic non-compositionality, formal fixedness, and unusual usage context, and introduce a number of measures for these characteristics, comprising count-based and predictive collocation measures together with measures of context (un)similarity. We evaluate our approach on a manually labelled gold standard, derived from a corpus of German pop lyrics. To this end, we apply a Random Forest classifier to analyze the individual contribution of features for automatically detecting idioms, and study the trade-off between recall and precision. Finally, we evaluate the classifier on an independent dataset of idioms extracted from a list of Wikipedia idioms, achieving state-of-the art accuracy.
13--22
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
e37e9a4e7274c9f7cadbec08fc58d86679009b23
0
Query Graph Generation for Answering Multi-hop Complex Questions from Knowledge Bases
Lan, Yunshi and Jiang, Jing
2020
Previous work on answering complex questions from knowledge bases usually separately addresses two types of complexity: questions with constraints and questions with multiple hops of relations. In this paper, we handle both types of complexity at the same time. Motivated by the observation that early incorporation of constraints into query graphs can more effectively prune the search space, we propose a modified staged query graph generation method with more flexible ways to generate query graphs. Our experiments clearly show that our method achieves the state of the art on three benchmark KBQA datasets.
969--974
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
065691db44b3b1ae48ffb64559e7a98e5b80306f
1
Correcting the Autocorrect: Context-Aware Typographical Error Correction via Training Data Augmentation
Shah, Kshitij and de Melo, Gerard
2020
In this paper, we explore the artificial generation of typographical errors based on real-world statistics. We first draw on a small set of annotated data to compute spelling error statistics. These are then invoked to introduce errors into substantially larger corpora. The generation methodology allows us to generate particularly challenging errors that require context-aware error detection. We use it to create a set of English language error detection and correction datasets. Finally, we examine the effectiveness of machine learning models for detecting and correcting errors based on this data.
6930--6936
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
ea099dc45506580de966803e4305e705f96e45e2
0
Benchmarking Multimodal Regex Synthesis with Complex Structures
Ye, Xi and Chen, Qiaochu and Dillig, Isil and Durrett, Greg
2020
Existing datasets for regular expression (regex) generation from natural language are limited in complexity; compared to regex tasks that users post on StackOverflow, the regexes in these datasets are simple, and the language used to describe them is not diverse. We introduce StructuredRegex, a new regex synthesis dataset differing from prior ones in three aspects. First, to obtain structurally complex and realistic regexes, we generate the regexes using a probabilistic grammar with pre-defined macros observed from real-world StackOverflow posts. Second, to obtain linguistically diverse natural language descriptions, we show crowdworkers abstract depictions of the underlying regex and ask them to describe the pattern they see, rather than having them paraphrase synthetic language. Third, we augment each regex example with a collection of strings that are and are not matched by the ground truth regex, similar to how real users give examples. Our quantitative and qualitative analysis demonstrates the advantages of StructuredRegex over prior datasets. Further experimental results using various multimodal synthesis techniques highlight the challenge presented by our dataset, including non-local constraints and multi-modal inputs.
6081--6094
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
9bb3a41d3f7d2e69a40c2ebaf050684a78a2f667
1
{D}o{QA} - Accessing Domain-Specific {FAQ}s via Conversational {QA}
Campos, Jon Ander and Otegi, Arantxa and Soroa, Aitor and Deriu, Jan and Cieliebak, Mark and Agirre, Eneko
2,020
The goal of this work is to build conversational Question Answering (QA) interfaces for the large body of domain-specific information available on FAQ sites. We present DoQA, a dataset with 2,437 dialogues and 10,917 QA pairs. The dialogues are collected from three Stack Exchange sites using the Wizard of Oz method with crowdsourcing. Compared to previous work, DoQA comprises well-defined information needs, leading to more coherent and natural conversations with fewer factoid questions, and is multi-domain. In addition, we introduce a more realistic information retrieval (IR) scenario where the system needs to find the answer in any of the FAQ documents. The results of an existing, strong system show that, thanks to transfer learning from a Wikipedia QA dataset and fine-tuning on a single FAQ domain, it is possible to build high-quality conversational QA systems for FAQs without in-domain training data. The good results carry over into the more challenging IR scenario. In both cases, there is still ample room for improvement, as indicated by the higher human upper bound.
7302--7314
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
661f3174766c8aeba31b3cc0d4fcdf7d44a3bad5
0
Complex Question Decomposition for Semantic Parsing
Zhang, Haoyu and Cai, Jingjing and Xu, Jianjun and Wang, Ji
2,019
In this work, we focus on complex question semantic parsing and propose a novel Hierarchical Semantic Parsing (HSP) method, which utilizes the decompositionality of complex questions for semantic parsing. Our model is designed within a three-stage parsing architecture based on the idea of decomposition-integration. In the first stage, we propose a question decomposer which decomposes a complex question into a sequence of sub-questions. In the second stage, we design an information extractor to derive the type and predicate information of these questions. In the last stage, we integrate the generated information from previous stages and generate a logical form for the complex question. We conduct experiments on COMPLEXWEBQUESTIONS, a large-scale complex question semantic parsing dataset; results show that our model achieves significant improvement compared to state-of-the-art methods.
4477--4486
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
6d5ae956f59071a9f3a0003f793e056791376752
1
{IITP} at {MEDIQA} 2019: Systems Report for Natural Language Inference, Question Entailment and Question Answering
Bandyopadhyay, Dibyanayan and Gain, Baban and Saikh, Tanik and Ekbal, Asif
2,019
This paper presents the experiments accomplished as part of our participation in the MEDIQA challenge, a shared task (Abacha et al., 2019). We participated in all three tasks defined in this shared task, viz. i. Natural Language Inference (NLI), ii. Recognizing Question Entailment (RQE), and iii. their application in medical Question Answering (QA). We submitted runs using multiple deep learning based systems for each of these three tasks: five system results in each of the NLI and RQE tasks, and four system results for the QA task. The systems yield encouraging results in all three tasks. The highest performance obtained in the NLI, RQE and QA tasks is 81.8{\%}, 53.2{\%}, and 71.7{\%}, respectively.
517--522
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
3704952698c50baee8cb07f9c6af4c05c2cfe194
0
{UH}op: An Unrestricted-Hop Relation Extraction Framework for Knowledge-Based Question Answering
Chen, Zi-Yuan and Chang, Chih-Hung and Chen, Yi-Pei and Nayak, Jijnasa and Ku, Lun-Wei
2,019
In relation extraction for knowledge-based question answering, searching from one entity to another entity via a single relation is called {``}one hop{''}. In related work, an exhaustive search over all one-hop relations, two-hop relations, and so on up to the max-hop relations in the knowledge graph is necessary but expensive. Therefore, the number of hops is generally restricted to two or three. In this paper, we propose UHop, an unrestricted-hop framework which relaxes this restriction by using a transition-based search framework in place of the relation-chain-based one. We conduct experiments on conventional 1- and 2-hop questions as well as lengthy questions, including datasets such as WebQSP, PathQuestion, and Grid World. Results show that the proposed framework enables the ability to halt, works well with state-of-the-art models, achieves competitive performance without exhaustive searches, and opens the performance gap for long relation paths.
345--356
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
4e4230418470efbb3a86d407d8eedf329f361f6d
1
Textual and Visual Characteristics of Mathematical Expressions in Scholar Documents
Daudaravicius, Vidas
2,019
Mathematical expressions (MEs) are widely used in scholarly documents. In this paper we analyze the textual and visual characteristics of MEs for the image-to-LaTeX translation task. While there are open datasets of LaTeX files with MEs included, it is very complicated to extract these MEs from a document and to compile the list of MEs. Therefore we release a corpus of open-access scholarly documents with parallel PDF and JATS-XML files. The MEs in these documents are LaTeX-encoded and document-independent. The data contains more than 1.2 million distinct annotated formulae and more than 80 million raw tokens of LaTeX MEs in more than 8 thousand documents. While the variety of textual lengths and visual sizes of MEs is not well defined, we found that the task of analyzing MEs in scholarly documents can be reduced to the subtask of a particular text length and image width and height bounds, and that display MEs can be processed as arrays of partial MEs.
72--81
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
feb89e01d80b5a52a694c1816c893837caf2f0e2
0
Neural Symbolic Machines: Learning Semantic Parsers on {F}reebase with Weak Supervision
Liang, Chen and Berant, Jonathan and Le, Quoc and Forbus, Kenneth D. and Lao, Ni
2,017
Harnessing the statistical power of neural networks to perform language understanding and symbolic reasoning is difficult when it requires executing efficient discrete operations against a large knowledge base. In this work, we introduce a Neural Symbolic Machine (NSM), which contains (a) a neural {``}programmer{''}, i.e., a sequence-to-sequence model that maps language utterances to programs and utilizes a key-variable memory to handle compositionality, and (b) a symbolic {``}computer{''}, i.e., a Lisp interpreter that performs program execution and helps find good programs by pruning the search space. We apply REINFORCE to directly optimize the task reward of this structured prediction problem. To train with weak supervision and improve the stability of REINFORCE, we augment it with an iterative maximum-likelihood training process. NSM outperforms the state of the art on the WebQuestionsSP dataset when trained from question-answer pairs only, without requiring any feature engineering or domain-specific knowledge.
23--33
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
4ca430d4640afa4a3838371a08f8f418284bdb7c
1
The {SUMMA} Platform Prototype
Liepins, Renars and Germann, Ulrich and Barzdins, Guntis and Birch, Alexandra and Renals, Steve and Weber, Susanne and van der Kreeft, Peggy and Bourlard, Herv{\'e} and Prieto, Jo{\~a}o and Klejch, Ond{\v{r}}ej and Bell, Peter and Lazaridis, Alexandros and Mendes, Alfonso and Riedel, Sebastian and Almeida, Mariana S. C. and Balage, Pedro and Cohen, Shay B. and Dwojak, Tomasz and Garner, Philip N. and Giefer, Andreas and Junczys-Dowmunt, Marcin and Imran, Hina and Nogueira, David and Ali, Ahmed and Miranda, Sebasti{\~a}o and Popescu-Belis, Andrei and Miculicich Werlen, Lesly and Papasarantopoulos, Nikos and Obamuyide, Abiola and Jones, Clive and Dalvi, Fahim and Vlachos, Andreas and Wang, Yang and Tong, Sibo and Sennrich, Rico and Pappas, Nikolaos and Narayan, Shashi and Damonte, Marco and Durrani, Nadir and Khurana, Sameer and Abdelali, Ahmed and Sajjad, Hassan and Vogel, Stephan and Sheppey, David and Hernon, Chris and Mitchell, Jeff
2,017
We present the first prototype of the SUMMA Platform: an integrated platform for multilingual media monitoring. The platform contains a rich suite of low-level and high-level natural language processing technologies: automatic speech recognition of broadcast media, machine translation, automated tagging and classification of named entities, semantic parsing to detect relationships between entities, and automatic construction / augmentation of factual knowledge bases. Implemented on the Docker platform, it can easily be deployed, customised, and scaled to large volumes of incoming media streams.
116--119
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
0f3daba80458baff1f8879d2989d65ff4469a29e
0
Abstract Syntax Networks for Code Generation and Semantic Parsing
Rabinovich, Maxim and Stern, Mitchell and Klein, Dan
2,017
Tasks like code generation and semantic parsing require mapping unstructured (or partially structured) inputs to well-formed, executable outputs. We introduce abstract syntax networks, a modeling framework for these problems. The outputs are represented as abstract syntax trees (ASTs) and constructed by a decoder with a dynamically-determined modular structure paralleling the structure of the output tree. On the benchmark Hearthstone dataset for code generation, our model obtains 79.2 BLEU and 22.7{\%} exact match accuracy, compared to previous state-of-the-art values of 67.1 and 6.1{\%}. Furthermore, we perform competitively on the Atis, Jobs, and Geo semantic parsing datasets with no task-specific engineering.
1139--1149
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
2c1e874c3b67510a3215e535f5646b362de5bc89
1
Modeling Target-Side Inflection in Neural Machine Translation
Tamchyna, Ale{\v{s}} and Weller-Di Marco, Marion and Fraser, Alexander
2,017
nan
32--42
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
1f080959faf9bf2a282c0aadbd8584d8b32f6e24
0
Universal Semantic Parsing
Reddy, Siva and T{\"a}ckstr{\"o}m, Oscar and Petrov, Slav and Steedman, Mark and Lapata, Mirella
2,017
Universal Dependencies (UD) offer a uniform cross-lingual syntactic representation, with the aim of advancing multilingual applications. Recent work shows that semantic parsing can be accomplished by transforming syntactic dependencies to logical forms. However, this work is limited to English, and cannot process dependency graphs, which allow handling complex phenomena such as control. In this work, we introduce UDepLambda, a semantic interface for UD, which maps natural language to logical forms in an almost language-independent fashion and can process dependency graphs. We perform experiments on question answering against Freebase and provide German and Spanish translations of the WebQuestions and GraphQuestions datasets to facilitate multilingual evaluation. Results show that UDepLambda outperforms strong baselines across languages and datasets. For English, it achieves a 4.9 F1 point improvement over the state-of-the-art on GraphQuestions.
89--101
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
faee0c81a1170402b149500f1b91c51ccaf24027
1
Modeling Context Words as Regions: An Ordinal Regression Approach to Word Embedding
Jameel, Shoaib and Schockaert, Steven
2,017
Vector representations of word meaning have found many applications in the field of natural language processing. Word vectors intuitively represent the average context in which a given word tends to occur, but they cannot explicitly model the diversity of these contexts. Although region representations of word meaning offer a natural alternative to word vectors, only a few methods have been proposed that can effectively learn word regions. In this paper, we propose a new word embedding model based on SVM regression. We show that the underlying ranking interpretation of word contexts is sufficient to match, and sometimes outperform, the performance of popular methods such as Skip-gram. Furthermore, we show that by using a quadratic kernel, we can effectively learn word regions, which outperform existing unsupervised models for the task of hypernym detection.
123--133
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
1a57b15f2e0b122028925b580c3b616980f5fd30
0
On Generating Characteristic-rich Question Sets for {QA} Evaluation
Su, Yu and Sun, Huan and Sadler, Brian and Srivatsa, Mudhakar and G{\"u}r, Izzeddin and Yan, Zenghui and Yan, Xifeng
2,016
nan
562--572
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
ce23476759263bd3f5e95fc758385eb62b3ab59a
1
{CAMR} at {S}em{E}val-2016 Task 8: An Extended Transition-based {AMR} Parser
Wang, Chuan and Pradhan, Sameer and Pan, Xiaoman and Ji, Heng and Xue, Nianwen
2,016
nan
1173--1178
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
f417e2860170f2fd3ac17bc8eaa9f4eda27b39bc
0
Large-scale Semantic Parsing via Schema Matching and Lexicon Extension
Cai, Qingqing and Yates, Alexander
2,013
nan
423--433
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
80c2d8c691b09f8b4e53f512b9d2641b49fda935
1
Finite State Morphology Tool for {L}atvian
Deksne, Daiga
2,013
nan
49--53
9b2473c1d88f2300358dc8b11e826dc5ba81bbbc
RNG-KBQA: Generation Augmented Iterative Ranking for Knowledge Base Question Answering
4f604c234875b54c6813c504f90114638cd926cc
0
Improving Semantic Parsing via Answer Type Inference
Yavuz, Semih and Gur, Izzeddin and Su, Yu and Srivatsa, Mudhakar and Yan, Xifeng
2,016
nan
149--159
7f30821267a11138497107d947ea39726e4b7fbd
Fine-grained Entity Typing via Label Reasoning
f3594f9d60c98cac88f9033c69c2b666713ed6d6
1
Deep Fusion {LSTM}s for Text Semantic Matching
Liu, Pengfei and Qiu, Xipeng and Chen, Jifan and Huang, Xuanjing
2,016
nan
1034--1043
7f30821267a11138497107d947ea39726e4b7fbd
Fine-grained Entity Typing via Label Reasoning
adc184fcb04107f95e35ea1b07ef9aad749da8d7
0
End-to-End Trainable Attentive Decoder for Hierarchical Entity Classification
Karn, Sanjeev and Waltinger, Ulli and Sch{\"u}tze, Hinrich
2,017
We address fine-grained entity classification and propose a novel attention-based recurrent neural network (RNN) encoder-decoder that generates paths in the type hierarchy and can be trained end-to-end. We show that our model performs better on fine-grained entity classification than prior work that relies on flat or local classifiers that do not directly model hierarchical structure.
752--758
7f30821267a11138497107d947ea39726e4b7fbd
Fine-grained Entity Typing via Label Reasoning
7615ed4f35f84cc086b9ae8e421891f3d33c68a6
1
Adversarial Generation of Natural Language
Subramanian, Sandeep and Rajeswar, Sai and Dutil, Francis and Pal, Chris and Courville, Aaron
2,017
Generative Adversarial Networks (GANs) have gathered a lot of attention from the computer vision community, yielding impressive results for image generation. Advances in the adversarial generation of natural language from noise however are not commensurate with the progress made in generating images, and still lag far behind likelihood based methods. In this paper, we take a step towards generating natural language with a GAN objective alone. We introduce a simple baseline that addresses the discrete output space problem without relying on gradient estimators and show that it is able to achieve state-of-the-art results on a Chinese poem generation dataset. We present quantitative results on generating sentences from context-free and probabilistic context-free grammars, and qualitative language modeling results. A conditional version is also described that can generate sequences conditioned on sentence characteristics.
241--251
7f30821267a11138497107d947ea39726e4b7fbd
Fine-grained Entity Typing via Label Reasoning
bad429f1fff54bff3d20cde79651fec2eb805a7c
0
Imposing Label-Relational Inductive Bias for Extremely Fine-Grained Entity Typing
Xiong, Wenhan and Wu, Jiawei and Lei, Deren and Yu, Mo and Chang, Shiyu and Guo, Xiaoxiao and Wang, William Yang
2,019
Existing entity typing systems usually exploit the type hierarchy provided by knowledge base (KB) schema to model label correlations and thus improve the overall performance. Such techniques, however, are not directly applicable to more open and practical scenarios where the type set is not restricted by KB schema and includes a vast number of free-form types. To model the underlying label correlations without access to manually annotated label structures, we introduce a novel label-relational inductive bias, represented by a graph propagation layer that effectively encodes both global label co-occurrence statistics and word-level similarities. On a large dataset with over 10,000 free-form types, the graph-enhanced model equipped with an attention-based matching module is able to achieve a much higher recall score while maintaining a high-level precision. Specifically, it achieves a 15.3{\%} relative F1 improvement and also less inconsistency in the outputs. We further show that a simple modification of our proposed graph layer can also improve the performance on a conventional and widely-tested dataset that only includes KB-schema types.
773--784
7f30821267a11138497107d947ea39726e4b7fbd
Fine-grained Entity Typing via Label Reasoning
a0713d945b2e5c2bdeeba68399c8ac6ea84e0ca6
1
From Insanely Jealous to Insanely Delicious: Computational Models for the Semantic Bleaching of {E}nglish Intensifiers
Luo, Yiwei and Jurafsky, Dan and Levin, Beth
2,019
We introduce novel computational models for modeling semantic bleaching, a widespread category of change in which words become more abstract or lose elements of meaning, like the development of {``}arrive{''} from its earlier meaning {`}become at shore.{'} We validate our methods on a widespread case of bleaching in English: de-adjectival adverbs that originate as manner adverbs (as in {``}awfully behaved{''}) and later become intensifying adverbs (as in {``}awfully nice{''}). Our methods formally quantify three reflexes of bleaching: decreasing similarity to the source meaning (e.g., {``}awful{''}), increasing similarity to a fully bleached prototype (e.g., {``}very{''}), and increasing productivity (e.g., the breadth of adjectives that an adverb modifies). We also test a new causal model and find evidence that bleaching is initially triggered in contexts such as {``}conspicuously evident{''} and {``}insanely jealous{''}, where an adverb premodifies a semantically similar adjective. These contexts provide a form of {``}bridging context{''} (Evans and Wilkins, 2000) that allow a manner adverb to be reinterpreted as an intensifying adverb similar to {``}very{''}.
1--13
7f30821267a11138497107d947ea39726e4b7fbd
Fine-grained Entity Typing via Label Reasoning
a66a9420c879f5db5aada2c0eacdce82cb8cf33b
0
Label Embedding for Zero-shot Fine-grained Named Entity Typing
Ma, Yukun and Cambria, Erik and Gao, Sa
2,016
Named entity typing is the task of detecting the types of a named entity in context. For instance, given {``}Eric is giving a presentation{''}, our goal is to infer that {`}Eric{'} is a speaker or a presenter and a person. Existing approaches to named entity typing cannot work with a growing type set and fail to recognize entity mentions of unseen types. In this paper, we present a label embedding method that incorporates prototypical and hierarchical information to learn pre-trained label embeddings. In addition, we adapt a zero-shot learning framework that can predict both seen and previously unseen entity types. We perform evaluation on three benchmark datasets with two settings: 1) few-shot recognition, where all types are covered by the training set; and 2) zero-shot recognition, where fine-grained types are assumed absent from the training set. Results show that prior knowledge encoded using our label embedding methods can significantly boost the performance of classification in both cases.
171--180
7f30821267a11138497107d947ea39726e4b7fbd
Fine-grained Entity Typing via Label Reasoning
0d12035f96d795fef0d6b4f70340934dd3dd98a1
1
Normalized Log-Linear Interpolation of Backoff Language Models is Efficient
Heafield, Kenneth and Geigle, Chase and Massung, Sean and Schwartz, Lane
2,016
nan
876--886
7f30821267a11138497107d947ea39726e4b7fbd
Fine-grained Entity Typing via Label Reasoning
05ebfb48f601dbb451d257d2ec1818a6b5ca1668
0
Hierarchical Losses and New Resources for Fine-grained Entity Typing and Linking
Murty, Shikhar and Verga, Patrick and Vilnis, Luke and Radovanovic, Irena and McCallum, Andrew
2,018
Extraction from raw text to a knowledge base of entities and fine-grained types is often cast as prediction into a flat set of entity and type labels, neglecting the rich hierarchies over types and entities contained in curated ontologies. Previous attempts to incorporate hierarchical structure have yielded little benefit and are restricted to shallow ontologies. This paper presents new methods using real and complex bilinear mappings for integrating hierarchical information, yielding substantial improvement over flat predictions in entity linking and fine-grained entity typing, and achieving new state-of-the-art results for end-to-end models on the benchmark FIGER dataset. We also present two new human-annotated datasets containing wide and deep hierarchies which we will release to the community to encourage further research in this direction: \textit{MedMentions}, a collection of PubMed abstracts in which 246k mentions have been mapped to the massive UMLS ontology; and \textit{TypeNet}, which aligns Freebase types with the WordNet hierarchy to obtain nearly 2k entity types. In experiments on all three datasets we show substantial gains from hierarchy-aware training.
97--109
7f30821267a11138497107d947ea39726e4b7fbd
Fine-grained Entity Typing via Label Reasoning
35112824817b78156a6b2bcd2a5622a26ee16600
1
Automatic Metric Validation for Grammatical Error Correction
Choshen, Leshem and Abend, Omri
2,018
Metric validation in Grammatical Error Correction (GEC) is currently done by observing the correlation between human and metric-induced rankings. However, such correlation studies are costly, methodologically troublesome, and suffer from low inter-rater agreement. We propose MAEGE, an automatic methodology for GEC metric validation that overcomes many of the difficulties in the existing methodology. Experiments with MAEGE shed new light on metric quality, showing for example that the standard $M^2$ metric fares poorly on corpus-level ranking. Moreover, we use MAEGE to perform a detailed analysis of metric behavior, showing that some types of valid edits are consistently penalized by existing metrics.
1372--1382
7f30821267a11138497107d947ea39726e4b7fbd
Fine-grained Entity Typing via Label Reasoning
1fe490f22516fcc94584cdd76fa0890560ae4398
0
{AFET}: Automatic Fine-Grained Entity Typing by Hierarchical Partial-Label Embedding
Ren, Xiang and He, Wenqi and Qu, Meng and Huang, Lifu and Ji, Heng and Han, Jiawei
2,016
nan
1369--1378
7f30821267a11138497107d947ea39726e4b7fbd
Fine-grained Entity Typing via Label Reasoning
ee42c6c3c5db2f0eb40faacf6e3b80035a645287
1
A Web Tool for Building Parallel Corpora of Spoken and Sign Languages
Becker, Alex and Kepler, Fabio and Candeias, Sara
2,016
In this paper we describe our work in building an online tool for manually annotating texts in any spoken language with SignWriting in any sign language. The existence of such a tool will allow the creation of parallel corpora between spoken and sign languages that can be used to bootstrap the creation of efficient tools for the Deaf community. As an example, a parallel corpus between English and American Sign Language could be used for training Machine Learning models for automatic translation between the two languages. Clearly, this kind of tool must be designed in a way that eases the task of human annotators, not only by being easy to use, but also by giving smart suggestions as the annotation progresses, in order to save time and effort. By building a collaborative, online, easy-to-use annotation tool for building parallel corpora between spoken and sign languages, we aim to help the development of proper resources for sign languages that can then be used in state-of-the-art models currently used in tools for spoken languages. There are several issues and difficulties in creating this kind of resource, and our tool already deals with some of them, like adequate text representation of a sign and many-to-many alignments between words and signs.
1438--1445
7f30821267a11138497107d947ea39726e4b7fbd
Fine-grained Entity Typing via Label Reasoning
797529a6f73638f2d7b6e9ecf4be56c9c79c95e0
0
Sequence-to-Nuggets: Nested Entity Mention Detection via Anchor-Region Networks
Lin, Hongyu and Lu, Yaojie and Han, Xianpei and Sun, Le
2,019
Sequential labeling-based NER approaches restrict each word to belonging to at most one entity mention, which poses a serious problem when recognizing nested entity mentions. In this paper, we propose to resolve this problem by modeling and leveraging the head-driven phrase structures of entity mentions, i.e., although a mention can nest other mentions, they will not share the same head word. Specifically, we propose Anchor-Region Networks (ARNs), a sequence-to-nuggets architecture for nested mention detection. ARNs first identify anchor words (i.e., possible head words) of all mentions, and then recognize the mention boundaries for each anchor word by exploiting regular phrase structures. Furthermore, we also design Bag Loss, an objective function which can train ARNs in an end-to-end manner without using any anchor word annotation. Experiments show that ARNs achieve state-of-the-art performance on three standard nested entity mention detection benchmarks.
5182--5192
7f30821267a11138497107d947ea39726e4b7fbd
Fine-grained Entity Typing via Label Reasoning
f9a5048c9cf069d8345eabe9afe2b5d3029f5f4d
1
Decomposing Generalization: Models of Generic, Habitual, and Episodic Statements
Govindarajan, Venkata and Van Durme, Benjamin and White, Aaron Steven
2,019
We present a novel semantic framework for modeling linguistic expressions of generalization{---} generic, habitual, and episodic statements{---}as combinations of simple, real-valued referential properties of predicates and their arguments. We use this framework to construct a dataset covering the entirety of the Universal Dependencies English Web Treebank. We use this dataset to probe the efficacy of type-level and token-level information{---}including hand-engineered features and static (GloVe) and contextual (ELMo) word embeddings{---}for predicting expressions of generalization.
501--517
7f30821267a11138497107d947ea39726e4b7fbd
Fine-grained Entity Typing via Label Reasoning
e71aae34bdbb193952b8d20fea0c7da98fa39ac6
0
Fine-Grained Entity Type Classification by Jointly Learning Representations and Label Embeddings
Abhishek, Abhishek and Anand, Ashish and Awekar, Amit
2,017
Fine-grained entity type classification (FETC) is the task of classifying an entity mention into a broad set of types. The distant supervision paradigm is extensively used to generate training data for this task. However, the generated training data assigns the same set of labels to every mention of an entity without considering its local context. Existing FETC systems have two major drawbacks: they assume the training data to be noise-free and they use hand-crafted features. Our work overcomes both drawbacks. We propose a neural network model that jointly learns entity mention and context representations to eliminate the use of hand-crafted features. Our model treats training data as noisy and uses a non-parametric variant of the hinge loss function. Experiments show that the proposed model outperforms previous state-of-the-art methods on two publicly available datasets, namely FIGER (GOLD) and BBN, with an average relative improvement of 2.69{\%} in micro-F1 score. Knowledge learnt by our model on one dataset can be transferred to other datasets while using the same model or other FETC systems. These approaches of transferring knowledge further improve the performance of the respective models.
797--807
7f30821267a11138497107d947ea39726e4b7fbd
Fine-grained Entity Typing via Label Reasoning
f4283dbf7883b1ab1a7fe01b58ebd627bcfdf008
1
Leveraging Linguistic Resources for Improving Neural Text Classification
Liu, Ming and Haffari, Gholamreza and Buntine, Wray and Ananda-Rajah, Michelle
2,017
nan
34--42
7f30821267a11138497107d947ea39726e4b7fbd
Fine-grained Entity Typing via Label Reasoning
bfdb24a50a5f5bdd302235726ff74247288c57e5
0
Improving Fine-grained Entity Typing with Entity Linking
Dai, Hongliang and Du, Donghong and Li, Xin and Song, Yangqiu
2,019
Fine-grained entity typing is a challenging problem since it usually involves a relatively large tag set and may require understanding the context of the entity mention. In this paper, we use entity linking to help with the fine-grained entity type classification process. We propose a deep neural model that makes predictions based on both the context and the information obtained from entity linking results. Experimental results on two commonly used datasets demonstrate the effectiveness of our approach. On both datasets, it achieves more than 5{\%} absolute strict accuracy improvement over the state of the art.
6210--6215
7f30821267a11138497107d947ea39726e4b7fbd
Fine-grained Entity Typing via Label Reasoning
b74b272c7fe881614f3eb8c2504b037439571eec
1
When and Why is Document-level Context Useful in Neural Machine Translation?
Kim, Yunsu and Tran, Duc Thanh and Ney, Hermann
2,019
Document-level context has received much attention for compensating neural machine translation (NMT) of isolated sentences. However, recent advances in document-level NMT focus on sophisticated integration of the context, explaining its improvement with only a few selected examples or targeted test sets. We extensively quantify the causes of improvements by a document-level model in general test sets, clarifying the limit of the usefulness of document-level context in NMT. We show that most of the improvements are not interpretable as utilizing the context. We also show that a minimal encoding is sufficient for the context modeling and that very long context is not helpful for NMT.
24--34
7f30821267a11138497107d947ea39726e4b7fbd
Fine-grained Entity Typing via Label Reasoning
fc18676ef52caf30004ba7a2ce884b84c2a5e552
0
Neural Architectures for Fine-grained Entity Type Classification
Shimaoka, Sonse and Stenetorp, Pontus and Inui, Kentaro and Riedel, Sebastian
2,017
In this work, we investigate several neural network architectures for fine-grained entity type classification and make three key contributions. Despite being a natural comparison and addition, previous work on attentive neural architectures has not considered hand-crafted features; we combine these with learnt features and establish that they complement each other. Additionally, through quantitative analysis we establish that the attention mechanism learns to attend over syntactic heads and the phrase containing the mention, both of which are known to be strong hand-crafted features for our task. We introduce parameter sharing between labels through a hierarchical encoding method that, in low-dimensional projections, shows clear clusters for each type hierarchy. Lastly, despite using the same evaluation dataset, the literature frequently compares models trained using different data. We demonstrate that the choice of training data has a drastic impact on performance, which decreases by as much as 9.85{\%} loose micro F1 score for a previously proposed method. Despite this discrepancy, our best model achieves state-of-the-art results with 75.36{\%} loose micro F1 score on the well-established FIGER (GOLD) dataset, and we report the best results for models trained using publicly available data on the OntoNotes dataset with 64.93{\%} loose micro F1 score.
1271--1280
7f30821267a11138497107d947ea39726e4b7fbd
Fine-grained Entity Typing via Label Reasoning
800dd1672789fe97513b84e65e75e370b10d6c13
1
Online Deception Detection Refueled by Real World Data Collection
Yao, Wenlin and Dai, Zeyu and Huang, Ruihong and Caverlee, James
2,017
The lack of large realistic datasets presents a bottleneck in online deception detection studies. In this paper, we apply a data collection method based on social network analysis to quickly identify high-quality deceptive and truthful online reviews from Amazon. The dataset contains more than 10,000 deceptive reviews and is diverse in product domains and reviewers. Using this dataset, we explore effective general features for online deception detection that perform well across domains. We demonstrate that with generalized features {--} advertising speak and writing complexity scores {--} deception detection performance can be further improved by adding additional deceptive reviews from assorted domains in training. Finally, reviewer-level evaluation gives an interesting insight into different deceptive reviewers{'} writing styles.
793--802
7f30821267a11138497107d947ea39726e4b7fbd
Fine-grained Entity Typing via Label Reasoning
fc0386b208fc85558ab36cac597e5e04395d7a6b
0
Neural Fine-Grained Entity Type Classification with Hierarchy-Aware Loss
Xu, Peng and Barbosa, Denilson
2,018
The task of Fine-grained Entity Type Classification (FETC) consists of assigning types from a hierarchy to entity mentions in text. Existing methods rely on distant supervision and are thus susceptible to noisy labels that can be out-of-context or overly specific for the training sentence. Previous methods that attempt to address these issues do so with heuristics or with the help of hand-crafted features. Instead, we propose an end-to-end solution with a neural network model that uses a variant of the cross-entropy loss function to handle out-of-context labels, and hierarchical loss normalization to cope with overly specific ones. Also, previous work solves FETC as multi-label classification followed by ad-hoc post-processing. In contrast, our solution is more elegant: we use public word embeddings to train a single-label model that jointly learns representations for entity mentions and their context. We show experimentally that our approach is robust against noise and consistently outperforms the state of the art on established benchmarks for the task.
16--25
7f30821267a11138497107d947ea39726e4b7fbd
Fine-grained Entity Typing via Label Reasoning
008405f7ee96677ac23cc38be360832af2d9f437
1
Interoperable Annotation of Events and Event Relations across Domains
Araki, Jun and Mulaffer, Lamana and Pandian, Arun and Yamakawa, Yukari and Oflazer, Kemal and Mitamura, Teruko
2,018
nan
10--20
7f30821267a11138497107d947ea39726e4b7fbd
Fine-grained Entity Typing via Label Reasoning
fb6e8bce5acd960161baa18a2c6c5573f4cd898b
0
Gazetteer-Enhanced Attentive Neural Networks for Named Entity Recognition
Lin, Hongyu and Lu, Yaojie and Han, Xianpei and Sun, Le and Dong, Bin and Jiang, Shanshan
2,019
Current region-based NER models rely only on fully-annotated training data to learn an effective region encoder, and thus often face a training data bottleneck. To alleviate this problem, this paper proposes Gazetteer-Enhanced Attentive Neural Networks, which can enhance region-based NER by learning name knowledge of entity mentions from easily obtainable gazetteers, rather than only from fully-annotated data. Specifically, we first propose an attentive neural network (ANN), which explicitly models the mention-context association and is therefore convenient for integrating externally learned knowledge. Then we design an auxiliary gazetteer network, which can effectively encode the name regularity of mentions using only gazetteers. Finally, the learned gazetteer network is incorporated into the ANN for better NER. Experiments show that our ANN can achieve state-of-the-art performance on the ACE2005 named entity recognition benchmark. Besides, incorporating the gazetteer network can further improve performance and significantly reduce the requirement for training data.
6232--6237
7f30821267a11138497107d947ea39726e4b7fbd
Fine-grained Entity Typing via Label Reasoning
27f7023c07cf5f0e37b61bda936cd3ad26c3c413
1
Synthetic Propaganda Embeddings To Train A Linear Projection
Ek, Adam and Ghanimifard, Mehdi
2,019
This paper presents a method of detecting fine-grained categories of propaganda in text. Given a sentence, our method aims to identify a span of words and predict the type of propaganda used. To detect propaganda, we explore a method for extracting features of propaganda from contextualized embeddings without fine-tuning the large parameters of the base model. We show that by generating synthetic embeddings we can train a linear function with ReLU activation to extract useful labeled embeddings from an embedding space generated by a general-purpose language model. We also introduce an inference technique to detect continuous spans in sequences of propaganda tokens in sentences. A result of the ensemble model is submitted to the first shared task in fine-grained propaganda detection at NLP4IF as Team Stalin. In this paper, we provide additional analysis regarding our method of detecting spans of propaganda with synthetically generated representations.
155--161
7f30821267a11138497107d947ea39726e4b7fbd
Fine-grained Entity Typing via Label Reasoning
ebd3c772fed12be35a8bf1f1f2022ca10c317176
0
An Attentive Fine-Grained Entity Typing Model with Latent Type Representation
Lin, Ying and Ji, Heng
2,019
We propose a fine-grained entity typing model with a novel attention mechanism and a hybrid type classifier. We advance existing methods in two aspects: feature extraction and type prediction. To capture richer contextual information, we adopt contextualized word representations instead of fixed word embeddings used in previous work. In addition, we propose a two-step mention-aware attention mechanism to enable the model to focus on important words in mentions and contexts. We also present a hybrid classification method beyond binary relevance to exploit type inter-dependency with latent type representation. Instead of independently predicting each type, we predict a low-dimensional vector that encodes latent type features and reconstruct the type vector from this latent representation. Experiment results on multiple data sets show that our model significantly advances the state-of-the-art on fine-grained entity typing, obtaining up to 6.1{\%} and 5.5{\%} absolute gains in macro averaged F-score and micro averaged F-score respectively.
6197--6202
7f30821267a11138497107d947ea39726e4b7fbd
Fine-grained Entity Typing via Label Reasoning
ed3a6ff80bd9892a5d8bf6490147fcd518ebc413
1
The World in My Mind: Visual Dialog with Adversarial Multi-modal Feature Encoding
Yao, Yiqun and Xu, Jiaming and Xu, Bo
2,019
Visual Dialog is a multi-modal task that requires a model to participate in a multi-turn human dialog grounded on an image, and generate correct, human-like responses. In this paper, we propose a novel Adversarial Multi-modal Feature Encoding (AMFE) framework for effective and robust auxiliary training of visual dialog systems. AMFE can force the language-encoding part of a model to generate hidden states in a distribution closely related to the distribution of real-world images, resulting in language features containing general knowledge from both modalities by nature, which can help generate both more correct and more general responses with reasonably low time cost. Experimental results show that AMFE can steadily bring performance gains to different models on different scales of data. Our method outperforms both the supervised learning baselines and other fine-tuning methods, achieving state-of-the-art results on most metrics of VisDial v0.5/v0.9 generative tasks.
2588--2598
7f30821267a11138497107d947ea39726e4b7fbd
Fine-grained Entity Typing via Label Reasoning
c880ad812f1195c1199a7e50fcedfca1c41a8e29
0