Dataset schema (one entry per column: name, type, and the min/max reported by the dataset viewer; "k" values are kept as the viewer's abbreviations):

- title: string, length 5 to 342
- author: string, length 3 to 2.17k
- year: int64, 1.95k to 2.02k
- abstract: string, length 0 to 12.7k
- pages: string, length 1 to 702
- queryID: string, length 4 to 40
- query: string, length 1 to 300
- paperID: string, length 0 to 40
- include: int64, 0 to 1
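Each record below lists these nine fields in order: title, author, year, abstract, pages, queryID, query, paperID, include. As a minimal sketch of how the rows might be consumed, assuming they have been exported to a JSON Lines file (the `records.jsonl` file name and the dataclass are illustrative assumptions, not part of the dataset release):

```python
import json
from dataclasses import dataclass

@dataclass
class Record:
    # One (query, candidate paper) pair; include is the 0/1 relevance label.
    title: str
    author: str
    year: int
    abstract: str
    pages: str
    queryID: str
    query: str
    paperID: str
    include: int

def load_records(path: str) -> list[Record]:
    # Assumes one JSON object per line with exactly the nine schema fields.
    with open(path, encoding="utf-8") as f:
        return [Record(**json.loads(line)) for line in f]

records = load_records("records.jsonl")  # hypothetical export file
relevant = [r for r in records if r.include == 1]  # papers judged relevant
```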
Fine-Grained Entity Typing via Hierarchical Multi Graph Convolutional Networks
Jin, Hailong and Hou, Lei and Li, Juanzi and Dong, Tiansi
2019
This paper addresses the problem of inferring the fine-grained type of an entity from a knowledge base. We convert this problem into the task of graph-based semi-supervised classification, and propose the Hierarchical Multi Graph Convolutional Network (HMGCN), a novel deep learning architecture to tackle it. We construct three kinds of connectivity matrices to capture different kinds of semantic correlations between entities. A recursive regularization is proposed to model the subClassOf relations between types in a given type hierarchy. Extensive experiments with two large-scale public datasets show that our proposed method significantly outperforms four state-of-the-art methods.
4969--4978
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
074e3497b03366caf2e17acd59fb1c52ccf8be55
1
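The rows repeat this nine-field pattern, with several candidate papers per queryID. A small follow-up sketch (reusing the hypothetical `records` list from above) that tallies how many candidates carry include == 1 under each query:

```python
from collections import Counter

# Count relevant candidates per query; assumes `records` from the sketch above.
relevant_per_query = Counter(r.queryID for r in records if r.include == 1)
for query_id, count in relevant_per_query.most_common():
    print(query_id, count)
```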
EUSP: An Easy-to-Use Semantic Parsing PlatForm
An, Bo and Bo, Chen and Han, Xianpei and Sun, Le
2019
Semantic parsing aims to map natural language utterances into structured meaning representations. We present a modular platform, EUSP (Easy-to-Use Semantic Parsing PlatForm), that helps developers build a semantic parser from scratch. Instead of requiring a large amount of training data or complex grammar knowledge, our platform lets developers build grammar-based or neural-based semantic parsers through configuration files that specify the modules and components composing the semantic parsing system. A high-quality grammar-based semantic parsing system requires only domain lexicons rather than costly training data. Furthermore, we provide a browser-based method to generate the semantic parsing system to minimize the difficulty of development. Experimental results show that the neural-based semantic parser achieves competitive performance on the semantic parsing task, and that grammar-based semantic parsers significantly improve the performance of a business search engine.
67--72
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
0aa3aa92f19aaaeeb02444a4ed7995de2ce643e3
0
Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference
Li, Bangzheng and Yin, Wenpeng and Chen, Muhao
2022
The task of ultra-fine entity typing (UFET) seeks to predict diverse and free-form words or phrases that describe the appropriate types of entities mentioned in sentences. A key challenge for this task lies in the large number of types and the scarcity of annotated data per type. Existing systems formulate the task as a multi-way classification problem and train directly or distantly supervised classifiers. This causes two issues: (i) the classifiers do not capture the type semantics because types are often converted into indices; (ii) systems developed in this way are limited to predicting within a pre-defined type set, and often fall short of generalizing to types that are rarely seen or unseen in training. This work presents LITE🍻, a new approach that formulates entity typing as a natural language inference (NLI) problem, making use of (i) the indirect supervision from NLI to infer type information meaningfully represented as textual hypotheses and alleviate the data scarcity issue, as well as (ii) a learning-to-rank objective to avoid pre-defining a type set. Experiments show that, with limited training data, LITE obtains state-of-the-art performance on the UFET task. In addition, LITE demonstrates its strong generalizability: it not only yields the best results on other fine-grained entity typing benchmarks but, more importantly, a pre-trained LITE system works well on new data containing unseen types.
607--622
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
ef25f1586cf6630f4a30d41ee5a2848b064dede3
1
AB/BA analysis: A framework for estimating keyword spotting recall improvement while maintaining audio privacy
Petegrosso, Raphael and Baderdinnni, VasistaKrishna and Senechal, Thibaud and Bullough, Benjamin
2022
Evaluation of keyword spotting (KWS) systems that detect keywords in speech is a challenging task under realistic privacy constraints. The KWS is designed to only collect data when the keyword is present, limiting the availability of hard samples that may contain false negatives, and preventing direct estimation of model recall from production data. Alternatively, complementary data collected from other sources may not be fully representative of the real application. In this work, we propose an evaluation technique which we call AB/BA analysis. Our framework evaluates a candidate KWS model B against a baseline model A, using cross-dataset offline decoding for relative recall estimation, without requiring negative examples. Moreover, we propose a formulation with assumptions that allow estimation of relative false positive rate between models with low variance even when the number of false positives is small. Finally, we propose to leverage machine-generated soft labels, in a technique we call Semi-Supervised AB/BA analysis, that improves the analysis time, privacy, and cost. Experiments with both simulation and real data show that AB/BA analysis is successful at measuring recall improvement in conjunction with the trade-off in relative false positive rate.
27--36
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
62348960bd30f562ef733261b1a47b6d1981f8cd
0
Neural Fine-Grained Entity Type Classification with Hierarchy-Aware Loss
Xu, Peng and Barbosa, Denilson
2018
The task of Fine-grained Entity Type Classification (FETC) consists of assigning types from a hierarchy to entity mentions in text. Existing methods rely on distant supervision and are thus susceptible to noisy labels that can be out-of-context or overly-specific for the training sentence. Previous methods that attempt to address these issues do so with heuristics or with the help of hand-crafted features. Instead, we propose an end-to-end solution with a neural network model that uses a variant of the cross-entropy loss function to handle out-of-context labels, and hierarchical loss normalization to cope with overly-specific ones. Also, previous work solves FETC as multi-label classification followed by ad-hoc post-processing. In contrast, our solution is more elegant: we use public word embeddings to train a single-label model that jointly learns representations for entity mentions and their context. We show experimentally that our approach is robust against noise and consistently outperforms the state-of-the-art on established benchmarks for the task.
16--25
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
008405f7ee96677ac23cc38be360832af2d9f437
1
Strategies and Challenges for Crowdsourcing Regional Dialect Perception Data for Swiss German and Swiss French
Goldman, Jean-Philippe and Clematide, Simon and Avanzi, Mathieu and Tandler, Raphael
2018
nan
nan
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
47829603b26c306c68242cdde6200fa6aa4d9083
0
Label Semantic Aware Pre-training for Few-shot Text Classification
Mueller, Aaron and Krone, Jason and Romeo, Salvatore and Mansour, Saab and Mansimov, Elman and Zhang, Yi and Roth, Dan
2022
In text classification tasks, useful information is encoded in the label names. Label semantic aware systems have leveraged this information for improved text classification performance during fine-tuning and prediction. However, use of label-semantics during pre-training has not been extensively explored. We therefore propose Label Semantic Aware Pre-training (LSAP) to improve the generalization and data efficiency of text classification systems. LSAP incorporates label semantics into pre-trained generative models (T5 in our case) by performing secondary pre-training on labeled sentences from a variety of domains. As domain-general pre-training requires large amounts of data, we develop a filtering and labeling pipeline to automatically create sentence-label pairs from unlabeled text. We perform experiments on intent (ATIS, Snips, TOPv2) and topic classification (AG News, Yahoo! Answers). LSAP obtains significant accuracy improvements over state-of-the-art models for few-shot text classification while maintaining performance comparable to state of the art in high-resource settings.
8318--8334
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
17ae9c4297e0feb23b2ef84a406d76dc7033c98c
1
ELRC Action: Covering Confidentiality, Correctness and Cross-linguality
Vanallemeersch, Tom and Defauw, Arne and Szoc, Sara and Kramchaninova, Alina and Van den Bogaert, Joachim and Lösch, Andrea
2022
We describe the language technology (LT) assessments carried out in the ELRC action (European Language Resource Coordination) of the European Commission, which aims to minimise language barriers across the EU. We zoom in on the two most extensive assessments. These LT specifications involve not only experiments with tools and techniques but also an extensive consultation round with stakeholders from public organisations, academia and industry, in order to gather insights into scenarios and best practices. The LT specifications concern (1) the field of automated anonymisation, which is motivated by the need of public and other organisations to be able to store and share data, and (2) the field of multilingual fake news processing, which is motivated by the increasingly pressing problem of disinformation and the limited language coverage of systems for automatically detecting misleading articles. For each specification, we set up a corresponding proof-of-concept software to demonstrate the opportunities and challenges involved in the field.
6240--6249
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
c35ea7cbf1571e6dc30afcaf4368dbb87df295ff
0
Fine-grained Entity Typing via Label Reasoning
Liu, Qing and Lin, Hongyu and Xiao, Xinyan and Han, Xianpei and Sun, Le and Wu, Hua
2021
Conventional entity typing approaches are based on independent classification paradigms, which makes it difficult for them to recognize inter-dependent, long-tailed and fine-grained entity types. In this paper, we argue that the implicitly entailed extrinsic and intrinsic dependencies between labels can provide critical knowledge to tackle the above challenges. To this end, we propose the Label Reasoning Network (LRN), which sequentially reasons fine-grained entity labels by discovering and exploiting label dependency knowledge entailed in the data. Specifically, LRN utilizes an auto-regressive network to conduct deductive reasoning and a bipartite attribute graph to conduct inductive reasoning between labels, which can effectively model, learn and reason over complex label dependencies in a sequence-to-set, end-to-end manner. Experiments show that LRN achieves state-of-the-art performance on standard ultra-fine-grained entity typing benchmarks, and can also resolve the long-tail label problem effectively.
4611--4622
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
7f30821267a11138497107d947ea39726e4b7fbd
1
COVID-Fact: Fact Extraction and Verification of Real-World Claims on COVID-19 Pandemic
Saakyan, Arkadiy and Chakrabarty, Tuhin and Muresan, Smaranda
2021
We introduce a FEVER-like dataset COVID-Fact of 4,086 claims concerning the COVID-19 pandemic. The dataset contains claims, evidence for the claims, and contradictory claims refuted by the evidence. Unlike previous approaches, we automatically detect true claims and their source articles and then generate counter-claims using automatic methods rather than employing human annotators. Along with our constructed resource, we formally present the task of identifying relevant evidence for the claims and verifying whether the evidence refutes or supports a given claim. In addition to scientific claims, our data contains simplified general claims from media sources, making it better suited for detecting general misinformation regarding COVID-19. Our experiments indicate that COVID-Fact will provide a challenging testbed for the development of new systems and our approach will reduce the costs of building domain-specific datasets for detecting misinformation.
2116--2129
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
c530bef97ee809c01ce59d04a7011d445fb1e147
0
Ultra-Fine Entity Typing with Weak Supervision from a Masked Language Model
Dai, Hongliang and Song, Yangqiu and Wang, Haixun
2021
Recently, there has been an effort to extend fine-grained entity typing by using a richer and ultra-fine set of types, and by labeling noun phrases including pronouns and nominal nouns instead of just named entity mentions. A key challenge for this ultra-fine entity typing task is that human annotated data are extremely scarce, and the annotation ability of existing distant or weak supervision approaches is very limited. To remedy this problem, in this paper, we propose to obtain training data for ultra-fine entity typing by using a BERT Masked Language Model (MLM). Given a mention in a sentence, our approach constructs an input for the BERT MLM so that it predicts context-dependent hypernyms of the mention, which can be used as type labels. Experimental results demonstrate that, with the help of these automatically generated labels, the performance of an ultra-fine entity typing model can be improved substantially. We also show that our approach can be applied to improve traditional fine-grained entity typing after performing simple type mapping.
1790--1799
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
70b49a024787d3ad374fb78dc87e3ba2b5e16566
1
A Fine-Grained Domain Adaption Model for Joint Word Segmentation and {POS} Tagging
Jiang, Peijie and Long, Dingkun and Sun, Yueheng and Zhang, Meishan and Xu, Guangwei and Xie, Pengjun
2021
Domain adaption for word segmentation and POS tagging is a challenging problem for Chinese lexical processing. Self-training is one promising solution for it, which struggles to construct a set of high-quality pseudo training instances for the target domain. Previous work usually assumes a universal source-to-target adaption to collect such a pseudo corpus, ignoring the different gaps from the target sentences to the source domain. In this work, we start from joint word segmentation and POS tagging, presenting a fine-grained domain adaption method to model the gaps accurately. We measure the gaps by one simple and intuitive metric, and adopt it to develop a pseudo target domain corpus based on fine-grained subdomains incrementally. A novel domain-mixed representation learning model is proposed accordingly to encode the multiple subdomains effectively. The whole process is performed progressively for both corpus construction and model training. Experimental results on a benchmark dataset show that our method can gain significant improvements over a variety of baselines. Extensive analyses are performed to show the advantages of our final domain adaption model as well.
3587--3598
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
ce29e0aa6e6569f137d1d248ec497a63c65235fe
0
Modeling Fine-Grained Entity Types with Box Embeddings
Onoe, Yasumasa and Boratko, Michael and McCallum, Andrew and Durrett, Greg
2021
Neural entity typing models typically represent fine-grained entity types as vectors in a high-dimensional space, but such spaces are not well-suited to modeling these types' complex interdependencies. We study the ability of box embeddings, which embed concepts as d-dimensional hyperrectangles, to capture hierarchies of types even when these relationships are not defined explicitly in the ontology. Our model represents both types and entity mentions as boxes. Each mention and its context are fed into a BERT-based model to embed that mention in our box space; essentially, this model leverages typological clues present in the surface text to hypothesize a type representation for the mention. Box containment can then be used to derive both the posterior probability of a mention exhibiting a given type and the conditional probability relations between types themselves. We compare our approach with a vector-based typing model and observe state-of-the-art performance on several entity typing benchmarks. In addition to competitive typing performance, our box-based model shows better performance in prediction consistency (predicting a supertype and a subtype together) and confidence (i.e., calibration), demonstrating that the box-based model captures the latent type hierarchies better than the vector-based model does.
2051--2064
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
176e3cbe3141c8b874df663711dca9b7470b8243
1
Which is Better for Deep Learning: Python or MATLAB? Answering Comparative Questions in Natural Language
Chekalina, Viktoriia and Bondarenko, Alexander and Biemann, Chris and Beloucif, Meriem and Logacheva, Varvara and Panchenko, Alexander
2021
We present a system for answering comparative questions (Is X better than Y with respect to Z?) in natural language. Answering such questions is important for assisting humans in making informed decisions. The key component of our system is a natural language interface for comparative QA that can be used in personal assistants, chatbots, and similar NLP devices. Comparative QA is a challenging NLP task, since it requires collecting support evidence from many different sources, and direct comparisons of rare objects may not be available even on the entire Web. We take the first step towards a solution for such a task, offering a testbed for comparative QA in natural language by probing several methods and making the three best ones available as an online demo.
302--311
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
45d9d0d9b74605135e3e0dfe3b84661952013760
0
Few-NERD: A Few-shot Named Entity Recognition Dataset
Ding, Ning and Xu, Guangwei and Chen, Yulin and Wang, Xiaobin and Han, Xu and Xie, Pengjun and Zheng, Haitao and Liu, Zhiyuan
2021
Recently, considerable literature has grown up around the theme of few-shot named entity recognition (NER), but little published benchmark data has specifically focused on this practical and challenging task. Current approaches collect existing supervised NER datasets and re-organize them into the few-shot setting for empirical study. These strategies conventionally aim to recognize coarse-grained entity types with few examples, while in practice, most unseen entity types are fine-grained. In this paper, we present Few-NERD, a large-scale human-annotated few-shot NER dataset with a hierarchy of 8 coarse-grained and 66 fine-grained entity types. Few-NERD consists of 188,238 sentences from Wikipedia; 4,601,160 words are included, and each is annotated as context or as part of a two-level entity type. To the best of our knowledge, this is the first few-shot NER dataset and the largest human-crafted NER dataset. We construct benchmark tasks with different emphases to comprehensively assess the generalization capability of models. Extensive empirical results and analysis show that Few-NERD is challenging and the problem requires further research. The Few-NERD dataset and the baselines will be publicly available to facilitate the research on this problem.
3198--3213
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
a293a01ddd639b25360cf4f23e2df8dd0d1caa8e
1
Weakly Supervised Pre-Training for Multi-Hop Retriever
Seonwoo, Yeon and Lee, Sang-Woo and Kim, Ji-Hoon and Ha, Jung-Woo and Oh, Alice
2021
nan
694--704
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
9651c3f83b9310829622305f5316443253861fba
0
A Fully Hyperbolic Neural Model for Hierarchical Multi-Class Classification
López, Federico and Strube, Michael
2020
Label inventories for fine-grained entity typing have grown in size and complexity. Nonetheless, they exhibit a hierarchical structure. Hyperbolic spaces offer a mathematically appealing approach for learning hierarchical representations of symbolic data. However, it is not clear how to integrate hyperbolic components into downstream tasks. This is the first work that proposes a fully hyperbolic model for multi-class multi-label classification, which performs all operations in hyperbolic space. We evaluate the proposed model on two challenging datasets and compare to different baselines that operate under Euclidean assumptions. Our hyperbolic model infers the latent hierarchy from the class distribution, captures implicit hyponymic relations in the inventory, and shows performance on par with state-of-the-art methods on fine-grained classification with remarkable reduction of the parameter size. A thorough analysis sheds light on the impact of each component in the final prediction and showcases its ease of integration with Euclidean layers.
460--475
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
9109814833731812513bb80f99c94277fc459625
1
Coupled Hierarchical Transformer for Stance-Aware Rumor Verification in Social Media Conversations
Yu, Jianfei and Jiang, Jing and Khoo, Ling Min Serena and Chieu, Hai Leong and Xia, Rui
2020
The prevalent use of social media enables rapid spread of rumors on a massive scale, which leads to the emerging need of automatic rumor verification (RV). A number of previous studies focus on leveraging stance classification to enhance RV with multi-task learning (MTL) methods. However, most of these methods failed to employ pre-trained contextualized embeddings such as BERT, and did not exploit inter-task dependencies by using predicted stance labels to improve the RV task. Therefore, in this paper, to extend BERT to obtain thread representations, we first propose a Hierarchical Transformer, which divides each long thread into shorter subthreads, and employs BERT to separately represent each subthread, followed by a global Transformer layer to encode all the subthreads. We further propose a Coupled Transformer Module to capture the inter-task interactions and a Post-Level Attention layer to use the predicted stance labels for RV, respectively. Experiments on two benchmark datasets show the superiority of our Coupled Hierarchical Transformer model over existing MTL approaches.
1392--1401
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
65bd80bc3498f7feef170da29ceb58fea28f652b
0
An Investigation of Potential Function Designs for Neural CRF
Hu, Zechuan and Jiang, Yong and Bach, Nguyen and Wang, Tao and Huang, Zhongqiang and Huang, Fei and Tu, Kewei
2020
The neural linear-chain CRF model is one of the most widely-used approaches to sequence labeling. In this paper, we investigate a series of increasingly expressive potential functions for neural CRF models, which not only integrate the emission and transition functions, but also explicitly take the representations of the contextual words as input. Our extensive experiments show that the decomposed quadrilinear potential function based on the vector representations of two neighboring labels and two neighboring words consistently achieves the best performance.
2600--2609
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
d587e3b402064be3c0321e4bf88cc598893e6439
1
Human-Paraphrased References Improve Neural Machine Translation
Freitag, Markus and Foster, George and Grangier, David and Cherry, Colin
2020
Automatic evaluation comparing candidate translations to human-generated paraphrases of reference translations has recently been proposed by Freitag et al. (2020). When used in place of original references, the paraphrased versions produce metric scores that correlate better with human judgment. This effect holds for a variety of different automatic metrics, and tends to favor natural formulations over more literal (translationese) ones. In this paper we compare the results of performing end-to-end system development using standard and paraphrased references. With state-of-the-art English-German NMT components, we show that tuning to paraphrased references produces a system that is significantly better according to human judgment, but 5 BLEU points worse when tested on standard references. Our work confirms the finding that paraphrased references yield metric scores that correlate better with human judgment, and demonstrates for the first time that using these scores for system development can lead to significant improvements.
1183--1192
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
d8fe41ef4f202e01aac9d78a589e22734cea8e07
0
Learning to Denoise Distantly-Labeled Data for Entity Typing
Onoe, Yasumasa and Durrett, Greg
2019
Distantly-labeled data can be used to scale up training of statistical models, but it is typically noisy and that noise can vary with the distant labeling technique. In this work, we propose a two-stage procedure for handling this type of data: denoise it with a learned model, then train our final model on clean and denoised distant data with standard supervised training. Our denoising approach consists of two parts. First, a filtering function discards examples from the distantly labeled data that are wholly unusable. Second, a relabeling function repairs noisy labels for the retained examples. Each of these components is a model trained on synthetically-noised examples generated from a small manually-labeled set. We investigate this approach on the ultra-fine entity typing task of Choi et al. (2018). Our baseline model is an extension of their model with pre-trained ELMo representations, which already achieves state-of-the-art performance. Adding distant data that has been denoised with our learned models gives further performance gains over this base model, outperforming models trained on raw distant data or heuristically-denoised distant data.
2407--2417
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
dc138300b87f5bfccec609644d5edc08c4d783e9
1
ProSeqo: Projection Sequence Networks for On-Device Text Classification
Kozareva, Zornitsa and Ravi, Sujith
2019
We propose a novel on-device sequence model for text classification using recurrent projections. Our model ProSeqo uses dynamic recurrent projections without the need to store or look up any pre-trained embeddings. This results in fast and compact neural networks that can perform on-device inference for complex short and long text classification tasks. We conducted exhaustive evaluation on multiple text classification tasks. Results show that ProSeqo outperformed state-of-the-art neural and on-device approaches for short text classification tasks such as dialog act and intent prediction. To the best of our knowledge, ProSeqo is the first on-device long text classification neural model. It achieved comparable results to previous neural approaches for news article, answers and product categorization, while preserving small memory footprint and maintaining high accuracy.
3894--3903
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
cc26c8dae566a7aed07179db77c1cc0d5ca427db
0
Ultra-Fine Entity Typing
Choi, Eunsol and Levy, Omer and Choi, Yejin and Zettlemoyer, Luke
2018
We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict ultra-fine types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state of the art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets.
87--96
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
4157834ed2d2fea6b6f652a72a9d0487edbc9f57
1
Semantic role labeling tools for biomedical question answering: a study of selected tools on the BioASQ datasets
Eckert, Fabian and Neves, Mariana
2018
Question answering (QA) systems usually rely on advanced natural language processing components to precisely understand the questions and extract the answers. Semantic role labeling (SRL) is known to boost performance for QA, but its use for biomedical texts has not yet been fully studied. We analyzed the performance of three SRL tools (BioKIT, BIOSMILE and PathLSTM) on 1776 questions from the BioASQ challenge. We compared the systems regarding the coverage of the questions and snippets, as well as based on pre-defined criteria, such as easiness of installation, supported formats and usability. Finally, we integrated two of the tools in a simple QA system to further evaluate their performance over the official BioASQ test sets.
11--21
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
e8d5e16e2302fccdda6730fa9f1600d8c1419431
0
OntoNotes: The 90% Solution
Hovy, Eduard and Marcus, Mitchell and Palmer, Martha and Ramshaw, Lance and Weischedel, Ralph
2006
nan
57--60
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
e54d8b07ef659f9ee2671441c4355e414e408836
1
Compiling a Lexicon of Cooking Actions for Animation Generation
Shirai, Kiyoaki and Ookawa, Hiroshi
2006
nan
771--778
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
8235935ce1d7e58d45fa63f114bdc98a91746ecb
0
Improving Semantic Parsing via Answer Type Inference
Yavuz, Semih and Gur, Izzeddin and Su, Yu and Srivatsa, Mudhakar and Yan, Xifeng
2016
nan
149--159
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
f3594f9d60c98cac88f9033c69c2b666713ed6d6
1
Potential impact of QT21
Cornelius, Eleanor
2016
nan
nan
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
90a4a2036ef89350722752c4ed657d55d83aa2ba
0
Imposing Label-Relational Inductive Bias for Extremely Fine-Grained Entity Typing
Xiong, Wenhan and Wu, Jiawei and Lei, Deren and Yu, Mo and Chang, Shiyu and Guo, Xiaoxiao and Wang, William Yang
2019
Existing entity typing systems usually exploit the type hierarchy provided by knowledge base (KB) schema to model label correlations and thus improve the overall performance. Such techniques, however, are not directly applicable to more open and practical scenarios where the type set is not restricted by KB schema and includes a vast number of free-form types. To model the underlying label correlations without access to manually annotated label structures, we introduce a novel label-relational inductive bias, represented by a graph propagation layer that effectively encodes both global label co-occurrence statistics and word-level similarities. On a large dataset with over 10,000 free-form types, the graph-enhanced model equipped with an attention-based matching module is able to achieve a much higher recall score while maintaining a high-level precision. Specifically, it achieves a 15.3% relative F1 improvement and also less inconsistency in the outputs. We further show that a simple modification of our proposed graph layer can also improve the performance on a conventional and widely-tested dataset that only includes KB-schema types.
773--784
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
a0713d945b2e5c2bdeeba68399c8ac6ea84e0ca6
1
A Qualitative Comparison of CoQA, SQuAD 2.0 and QuAC
Yatskar, Mark
2019
We compare three new datasets for question answering: SQuAD 2.0, QuAC, and CoQA, along several of their new features: (1) unanswerable questions, (2) multi-turn interactions, and (3) abstractive answers. We show that the datasets provide complementary coverage of the first two aspects, but weak coverage of the third. Because of the datasets' structural similarity, a single extractive model can be easily adapted to any of the datasets and we show improved baseline results on both SQuAD 2.0 and CoQA. Despite the similarity, models trained on one dataset are ineffective on another dataset, but we find moderate performance improvement through pretraining. To encourage cross-evaluation, we release code for conversion between datasets.
2318--2323
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
0a5606f0d56c618aa610cb1677e2788a3bd678fa
0
ZS-BERT: Towards Zero-Shot Relation Extraction with Attribute Representation Learning
Chen, Chih-Yao and Li, Cheng-Te
2021
While relation extraction is an essential task in knowledge acquisition and representation, and new-generated relations are common in the real world, less effort is made to predict unseen relations that cannot be observed at the training stage. In this paper, we formulate the zero-shot relation extraction problem by incorporating the text description of seen and unseen relations. We propose a novel multi-task learning model, Zero-Shot BERT (ZS-BERT), to directly predict unseen relations without hand-crafted attribute labeling and multiple pairwise classifications. Given training instances consisting of input sentences and the descriptions of their seen relations, ZS-BERT learns two functions that project sentences and relations into an embedding space by jointly minimizing the distances between them and classifying seen relations. By generating the embeddings of unseen relations and new-coming sentences based on such two functions, we use nearest neighbor search to obtain the prediction of unseen relations. Experiments conducted on two well-known datasets exhibit that ZS-BERT can outperform existing methods by at least 13.54% improvement on F1 score.
3470--3479
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
93df9dc530b1cf0af6d5eef90d017741a2aab5d8
1
Deep Cognitive Reasoning Network for Multi-hop Question Answering over Knowledge Graphs
Cai, Jianyu and Zhang, Zhanqiu and Wu, Feng and Wang, Jie
2021
nan
219--229
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
14ee04939eae5610d5d6141ad953021967ab2de5
0
Prompt-learning for Fine-grained Entity Typing
Ding, Ning and Chen, Yulin and Han, Xu and Xu, Guangwei and Wang, Xiaobin and Xie, Pengjun and Zheng, Haitao and Liu, Zhiyuan and Li, Juanzi and Kim, Hong-Gee
2022
As an effective approach to adapting pre-trained language models (PLMs) for specific tasks, prompt-learning has recently attracted much attention from researchers. By using cloze-style language prompts to stimulate the versatile knowledge of PLMs, prompt-learning can achieve promising results on a series of NLP tasks, such as natural language inference, sentiment classification, and knowledge probing. In this work, we investigate the application of prompt-learning to fine-grained entity typing in fully supervised, few-shot, and zero-shot scenarios. We first develop a simple and effective prompt-learning pipeline by constructing entity-oriented verbalizers and templates and conducting masked language modeling. Further, to tackle the zero-shot regime, we propose a self-supervised strategy that carries out distribution-level optimization in prompt-learning to automatically summarize the information of entity types. Extensive experiments on four fine-grained entity typing benchmarks under fully supervised, few-shot, and zero-shot settings show the effectiveness of the prompt-learning paradigm and establish it as a powerful alternative to vanilla fine-tuning.
6888--6901
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
bf722dc893ddaad5045fca5646212ec3badf3c5a
1
MemoSen: A Multimodal Dataset for Sentiment Analysis of Memes
Hossain, Eftekhar and Sharif, Omar and Hoque, Mohammed Moshiul
2022
Posting and sharing memes have become a powerful expedient for expressing opinions on social media in recent years. Analysis of sentiment from memes has gained much attention from researchers due to its substantial implications in various domains like finance and politics. Past studies on sentiment analysis of memes have primarily been conducted in English, where low-resource languages gain little or no attention. However, due to the proliferation of social media usage in recent years, sentiment analysis of memes is also a crucial research issue in low-resource languages. The scarcity of benchmark datasets is a significant barrier to performing multimodal sentiment analysis research in resource-constrained languages like Bengali. This paper presents a novel multimodal dataset (named MemoSen) for Bengali containing 4417 memes with three annotated labels: positive, negative, and neutral. A detailed annotation guideline is provided to facilitate further resource development in this domain. Additionally, a set of experiments is carried out on MemoSen by constructing twelve unimodal (i.e., visual, textual) and ten multimodal (image+text) models. The evaluation exhibits that the integration of multimodal information significantly improves (by about 1.2%) meme sentiment classification compared to the unimodal counterparts and thus elucidates the novel aspects of multimodality.
1542--1554
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
fc34892a8419d5378c456d70d54494920320bd55
0
Fine-Grained Entity Typing via Hierarchical Multi Graph Convolutional Networks
Jin, Hailong and Hou, Lei and Li, Juanzi and Dong, Tiansi
2019
This paper addresses the problem of inferring the fine-grained type of an entity from a knowledge base. We convert this problem into the task of graph-based semi-supervised classification, and propose the Hierarchical Multi Graph Convolutional Network (HMGCN), a novel deep learning architecture to tackle it. We construct three kinds of connectivity matrices to capture different kinds of semantic correlations between entities. A recursive regularization is proposed to model the subClassOf relations between types in a given type hierarchy. Extensive experiments with two large-scale public datasets show that our proposed method significantly outperforms four state-of-the-art methods.
4969--4978
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
074e3497b03366caf2e17acd59fb1c52ccf8be55
1
Automatic Data-Driven Approaches for Evaluating the Phonemic Verbal Fluency Task with Healthy Adults
Lindsay, Hali and Linz, Nicklas and Troeger, Johannes and Alexandersson, Jan
2019
nan
17--24
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
efd1a3ad8a0803e34d073380eefdf5381a2dfaf5
0
Learning from Context or Names? An Empirical Study on Neural Relation Extraction
Peng, Hao and Gao, Tianyu and Han, Xu and Lin, Yankai and Li, Peng and Liu, Zhiyuan and Sun, Maosong and Zhou, Jie
2020
Neural models have achieved remarkable success on relation extraction (RE) benchmarks. However, there is no clear understanding of what information in text affects existing RE models to make decisions and how to further improve the performance of these models. To this end, we empirically study the effect of two main information sources in text: textual context and entity mentions (names). We find that (i) while context is the main source to support the predictions, RE models also heavily rely on the information from entity mentions, most of which is type information, and (ii) existing datasets may leak shallow heuristics via entity mentions and thus contribute to the high performance on RE benchmarks. Based on the analyses, we propose an entity-masked contrastive pre-training framework for RE to gain a deeper understanding of both textual context and type information while avoiding rote memorization of entities or use of superficial cues in mentions. We carry out extensive experiments to support our views, and show that our framework can improve the effectiveness and robustness of neural models in different RE scenarios. All the code and datasets are released at https://github.com/thunlp/RE-Context-or-Names.
3661--3672
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
6a5608e6fee3ecc65361525906b0d092ad9952bb
1
Unknown Intent Detection Using Gaussian Mixture Model with an Application to Zero-shot Intent Classification
Yan, Guangfeng and Fan, Lu and Li, Qimai and Liu, Han and Zhang, Xiaotong and Wu, Xiao-Ming and Lam, Albert Y.S.
2020
User intent classification plays a vital role in dialogue systems. Since user intent may frequently change over time in many realistic scenarios, unknown (new) intent detection has become an essential problem, where the study has just begun. This paper proposes a semantic-enhanced Gaussian mixture model (SEG) for unknown intent detection. In particular, we model utterance embeddings with a Gaussian mixture distribution and inject dynamic class semantic information into Gaussian means, which enables learning more class-concentrated embeddings that help to facilitate downstream outlier detection. Coupled with a density-based outlier detection algorithm, SEG achieves competitive results on three real task-oriented dialogue datasets in two languages for unknown intent detection. On top of that, we propose to integrate SEG as an unknown intent identifier into existing generalized zero-shot intent classification models to improve their performance. A case study on a state-of-the-art method, ReCapsNet, shows that SEG can push the classification performance to a significantly higher level.
1050--1060
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
b387b19b8c02f3087bacd8514ea31e55e494ccf7
0
CLEVE: Contrastive Pre-training for Event Extraction
Wang, Ziqi and Wang, Xiaozhi and Han, Xu and Lin, Yankai and Hou, Lei and Liu, Zhiyuan and Li, Peng and Li, Juanzi and Zhou, Jie
2021
Event extraction (EE) has considerably benefited from pre-trained language models (PLMs) by fine-tuning. However, existing pre-training methods have not involved modeling event characteristics, so the resulting EE models cannot take full advantage of large-scale unsupervised data. To this end, we propose CLEVE, a contrastive pre-training framework for EE to better learn event knowledge from large unsupervised data and their semantic structures (e.g. AMR) obtained with automatic parsers. CLEVE contains a text encoder to learn event semantics and a graph encoder to learn event structures respectively. Specifically, the text encoder learns event semantic representations by self-supervised contrastive learning to represent the words of the same events closer than those unrelated words; the graph encoder learns event structure representations by graph contrastive pre-training on parsed event-related semantic structures. The two complementary representations then work together to improve both the conventional supervised EE and the unsupervised "liberal" EE, which requires jointly extracting events and discovering event schemata without any annotated data. Experiments on ACE 2005 and MAVEN datasets show that CLEVE achieves significant improvements, especially in the challenging unsupervised setting. The source code and pre-trained checkpoints can be obtained from https://github.com/THU-KEG/CLEVE.
6283--6297
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
2580aed3ac10d971f86d21f4c06db2de0cfb3c22
1
Epistemic Semantics in Guarded String Models
Campbell, Eric Hayden and Rooth, Mats
2021
nan
81--90
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
57666117528e99ae907bcfb67b080d700cf83ece
0
An Improved Baseline for Sentence-level Relation Extraction
Zhou, Wenxuan and Chen, Muhao
2022
Sentence-level relation extraction (RE) aims at identifying the relationship between two entities in a sentence. Many efforts have been devoted to this problem, while the best performing methods are still far from perfect. In this paper, we revisit two problems that affect the performance of existing RE models, namely entity representation and noisy or ill-defined labels. Our improved RE baseline, incorporating entity representations with typed markers, achieves an F1 of 74.6% on TACRED, significantly outperforming previous SOTA methods. Furthermore, the presented new baseline achieves an F1 of 91.1% on the refined Re-TACRED dataset, demonstrating that pretrained language models (PLMs) achieve high performance on this task. We release our code to the community for future research.
161--168
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
11baa9cc02d6158edd9cb1f299579dad7828e162
1
Raccoons at SemEval-2022 Task 11: Leveraging Concatenated Word Embeddings for Named Entity Recognition
Dogra, Atharvan and Kaur, Prabsimran and Kohli, Guneet and Bedi, Jatin
2022
Named Entity Recognition (NER) is an essential subtask in NLP that identifies text belonging to predefined semantic types such as person, location, organization, drug, time, clinical procedure, and biological protein. NER plays a vital role in various fields such as information extraction, question answering, and machine translation. This paper describes our participating system for the named entity recognition and classification shared task at SemEval-2022. The task is motivated towards detecting semantically ambiguous and complex entities in short and low-context settings. Our team focused on improving entity recognition by improving the word embeddings. We concatenated the word representations from state-of-the-art language models and passed them through a reinforcement trainer to find the best representation. Our results highlight the improvements achieved by various embedding concatenations.
1576--1582
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
6b8c47ca2fcd98e2e1a876cc02c60a5b2183b381
0
Improving Fine-grained Entity Typing with Entity Linking
Dai, Hongliang and Du, Donghong and Li, Xin and Song, Yangqiu
2019
Fine-grained entity typing is a challenging problem since it usually involves a relatively large tag set and may require understanding the context of the entity mention. In this paper, we use entity linking to help with the fine-grained entity type classification process. We propose a deep neural model that makes predictions based on both the context and the information obtained from entity linking results. Experimental results on two commonly used datasets demonstrate the effectiveness of our approach. On both datasets, it achieves more than 5% absolute strict accuracy improvement over the state of the art.
6210--6215
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
b74b272c7fe881614f3eb8c2504b037439571eec
1
Samvaadhana: A Telugu Dialogue System in Hospital Domain
Duggenpudi, Suma Reddy and Siva Subrahamanyam Varma, Kusampudi and Mamidi, Radhika
2019
In this paper, a dialogue system for the hospital domain in Telugu, which is a resource-poor Dravidian language, has been built. It handles various hospital- and doctor-related queries. The main aim of this paper is to present an approach for modelling a dialogue system in a resource-poor language by combining linguistic and domain knowledge. Focusing on the question answering aspect of the dialogue system, we identified Question Classification and Query Processing as the two most important parts of the dialogue system. Our method combines deep learning techniques for question classification and computational rule-based analysis for query processing. Human evaluation of the system has been performed, as there is no automated evaluation tool for dialogue systems in Telugu. Our system achieves a high overall rating along with a significantly accurate context-capturing method, as shown in the results.
234--242
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
a51ac7085dc35cc9c4c8fffab2f4821fadc7af9e
0
Event Detection with Multi-Order Graph Convolution and Aggregated Attention
Yan, Haoran and Jin, Xiaolong and Meng, Xiangbin and Guo, Jiafeng and Cheng, Xueqi
2019
Syntactic relations are broadly used in many NLP tasks. For event detection, syntactic relation representations based on dependency tree can better capture the interrelations between candidate trigger words and related entities than sentence representations. But, existing studies only use first-order syntactic relations (i.e., the arcs) in dependency trees to identify trigger words. For this reason, this paper proposes a new method for event detection, which uses a dependency tree based graph convolution network with aggregative attention to explicitly model and aggregate multi-order syntactic representations in sentences. Experimental comparison with state-of-the-art baselines shows the superiority of the proposed method.
5766--5770
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
d9213d53aeb3ee0a2a5db9024f8d75afd8c6f4d7
1
Two Discourse Tree - Based Approaches to Indexing Answers
Galitsky, Boris and Ilvovsky, Dmitry
2019
We explore the anatomy of answers with respect to which text fragments from an answer are worth matching with a question and which should not be matched. We apply Rhetorical Structure Theory to build a discourse tree of an answer and select elementary discourse units that are suitable for indexing. Manual rules for the selection of these discourse units, as well as automated classification based on web search engine mining, are evaluated with respect to improving search accuracy. We form two sets of question-answer pairs for FAQ and community QA search domains and use them for evaluation of the proposed indexing methodology, which delivers up to 16 percent improvement in search recall.
367--372
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
a0924f96924980ef1f414c169d00da1ecfb98b5b
0
Adversarial Training for Weakly Supervised Event Detection
Wang, Xiaozhi and Han, Xu and Liu, Zhiyuan and Sun, Maosong and Li, Peng
2019
Modern weakly supervised methods for event detection (ED) avoid time-consuming human annotation and achieve promising results by learning from auto-labeled data. However, these methods typically rely on sophisticated pre-defined rules as well as existing instances in knowledge bases for automatic annotation and thus suffer from low coverage, topic bias, and data noise. To address these issues, we build a large event-related candidate set with good coverage and then apply an adversarial training mechanism to iteratively identify those informative instances from the candidate set and filter out those noisy ones. The experiments on two real-world datasets show that our candidate selection and adversarial training can cooperate together to obtain more diverse and accurate training data for ED, and significantly outperform the state-of-the-art methods in various weakly supervised scenarios. The datasets and source code can be obtained from https://github.com/thunlp/Adv-ED.
998--1008
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
569829ab76a311b1f7f5b33d37ffff6a3fae6490
1
A Comparison of Sense-level Sentiment Scores
Bond, Francis and Janz, Arkadiusz and Piasecki, Maciej
2019
In this paper, we compare a variety of sense-tagged sentiment resources, including SentiWordNet, ML-Senticon, plWordNet emo and the NTU Multilingual Corpus. The goal is to investigate the quality of the resources and see how well the sentiment polarity annotation maps across languages.
363--372
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
eb44b5c7b75a32786a1bc025dc1f8304dd4d3444
0
Label Verbalization and Entailment for Effective Zero and Few-Shot Relation Extraction
Sainz, Oscar and Lopez de Lacalle, Oier and Labaka, Gorka and Barrena, Ander and Agirre, Eneko
2021
Relation extraction systems require large amounts of labeled examples which are costly to annotate. In this work we reformulate relation extraction as an entailment task, with simple, hand-made verbalizations of relations produced in less than 15 minutes per relation. The system relies on a pretrained textual entailment engine which is run as-is (no training examples, zero-shot) or further fine-tuned on labeled examples (few-shot or fully trained). In our experiments on TACRED we attain 63% F1 zero-shot, 69% with 16 examples per relation (17 points better than the best supervised system under the same conditions), and only 4 points short of the state-of-the-art (which uses 20 times more training data). We also show that the performance can be improved significantly with larger entailment models, up to 12 points in zero-shot, allowing us to report the best results to date on TACRED when fully trained. The analysis shows that our few-shot systems are especially effective when discriminating between relations, and that the performance difference in low data regimes comes mainly from identifying no-relation cases.
1199--1212
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
85061c524fdd5ec75f06a3329352621bb8d05f43
1
L'identification de langue, un outil au service du corse et de l'évaluation des ressources linguistiques [Language identification, a tool for Corsican and for the evaluation of linguistic resources]
Kevers, Laurent
2021
nan
13--37
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
0a8058b64a5f708fa2ec7d6f7d1ed26ae57cc331
0
Exploring the zero-shot limit of FewRel
Cetoli, Alberto
2020
This paper proposes a general purpose relation extractor that uses Wikidata descriptions to represent the relation's surface form. The results are tested on the FewRel 1.0 dataset, which provides an excellent framework for training and evaluating the proposed zero-shot learning system in English. This relation extractor architecture exploits the implicit knowledge of a language model through a question-answering approach.
1447--1451
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
c41fee04146a7893948b978676f0fc19fa622f94
1
Large Corpus of {C}zech Parliament Plenary Hearings
Kratochvil, Jon{\'a}{\v{s}} and Pol{\'a}k, Peter and Bojar, Ond{\v{r}}ej
2,020
We present a large corpus of Czech parliament plenary sessions. The corpus consists of approximately 1200 hours of speech data and corresponding text transcriptions. The whole corpus has been segmented into short audio segments, making it suitable for both training and evaluation of automatic speech recognition (ASR) systems. The source language of the corpus is Czech, which makes it a valuable resource for future research, as only a few public datasets are available in the Czech language. We complement the data release with experiments on two baseline ASR systems trained on the presented data: the more traditional approach implemented in the Kaldi ASR toolkit, which combines hidden Markov models and deep neural networks (NNs), and a modern ASR architecture implemented in the Jasper toolkit, which uses deep NNs in an end-to-end fashion.
6363--6367
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
c990a60c9f0004a74222aa2e84dcd2b2f238fa0d
0
Zero-shot User Intent Detection via Capsule Neural Networks
Xia, Congying and Zhang, Chenwei and Yan, Xiaohui and Chang, Yi and Yu, Philip
2,018
User intent detection plays a critical role in question-answering and dialog systems. Most previous works treat intent detection as a classification problem where utterances are labeled with predefined intents. However, it is labor-intensive and time-consuming to label users{'} utterances, as intents are diversely expressed and novel intents continually emerge. Instead, we study the zero-shot intent detection problem, which aims to detect emerging user intents for which no labeled utterances are currently available. We propose two capsule-based architectures: IntentCapsNet, which extracts semantic features from utterances and aggregates them to discriminate existing intents, and IntentCapsNet-ZSL, which gives IntentCapsNet the zero-shot learning ability to discriminate emerging intents via knowledge transfer from existing intents. Experiments on two real-world datasets show that our model not only better discriminates diversely expressed existing intents, but is also able to discriminate emerging intents when no labeled utterances are available.
3090--3099
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
17e61004345661deef6ee9b749c54b6a5a8c76ac
1
Exploring Conversational Language Generation for Rich Content about Hotels
Walker, Marilyn and Smither, Albry and Oraby, Shereen and Harrison, Vrindavan and Shemtov, Hadar
2,018
nan
nan
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
1a4d8740decca98ac41b7ec7de97172de8bcff77
0
Hierarchy-aware Label Semantics Matching Network for Hierarchical Text Classification
Chen, Haibin and Ma, Qianli and Lin, Zhenxi and Yan, Jiangyue
2,021
Hierarchical text classification is an important yet challenging task due to the complex structure of the label hierarchy. Existing methods ignore the semantic relationship between text and labels, so they cannot make full use of the hierarchical information. To this end, we formulate the text-label semantics relationship as a semantic matching problem and thus propose a hierarchy-aware label semantics matching network (HiMatch). First, we project text semantics and label semantics into a joint embedding space. We then introduce a joint embedding loss and a matching learning loss to model the matching relationship between the text semantics and the label semantics. Our model captures the text-label semantics matching relationship among coarse-grained labels and fine-grained labels in a hierarchy-aware manner. The experimental results on various benchmark datasets verify that our model achieves state-of-the-art results.
4370--4379
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
57516ba4a5356154b81a9332010544dce24ee494
1
{SNACS} Annotation of Case Markers and Adpositions in {H}indi
Arora, Aryaman and Venkateswaran, Nitin and Schneider, Nathan
2,021
nan
454--458
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
764dc286a4183adc19f49614386177cb999f0144
0
{T}axo{C}lass: Hierarchical Multi-Label Text Classification Using Only Class Names
Shen, Jiaming and Qiu, Wenda and Meng, Yu and Shang, Jingbo and Ren, Xiang and Han, Jiawei
2,021
Hierarchical multi-label text classification (HMTC) aims to tag each document with a set of classes from a taxonomic class hierarchy. Most existing HMTC methods train classifiers using massive human-labeled documents, which are often too costly to obtain in real-world applications. In this paper, we explore conducting HMTC using only class surface names as supervision signals. We observe that to perform HMTC, human experts typically first pinpoint a few most essential classes for the document as its {``}core classes{''}, and then check core classes{'} ancestor classes to ensure coverage. To mimic human experts, we propose a novel HMTC framework, named TaxoClass. Specifically, TaxoClass (1) calculates document-class similarities using a textual entailment model, (2) identifies a document{'}s core classes and utilizes confident core classes to train a taxonomy-enhanced classifier, and (3) generalizes the classifier via multi-label self-training. Our experiments on two challenging datasets show TaxoClass can achieve around 0.71 Example-F1 using only class names, outperforming the best previous method by 25{\%}.
4239--4249
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
15e100120f080b9ef4230b4cbb8e107b76e2b839
1
Mind Your Outliers! Investigating the Negative Impact of Outliers on Active Learning for Visual Question Answering
Karamcheti, Siddharth and Krishna, Ranjay and Fei-Fei, Li and Manning, Christopher
2,021
Active learning promises to alleviate the massive data needs of supervised machine learning: it has successfully improved sample efficiency by an order of magnitude on traditional tasks like topic classification and object recognition. However, we uncover a striking contrast to this promise: across 5 models and 4 datasets on the task of visual question answering, a wide variety of active learning approaches fail to outperform random selection. To understand this discrepancy, we profile 8 active learning methods on a per-example basis, and identify the problem as collective outliers {--} groups of examples that active learning methods prefer to acquire but models fail to learn (e.g., questions that ask about text in images or require external knowledge). Through systematic ablation experiments and qualitative visualizations, we verify that collective outliers are a general phenomenon responsible for degrading pool-based active learning. Notably, we show that active learning sample efficiency increases significantly as the number of collective outliers in the active learning pool decreases. We conclude with a discussion and prescriptive recommendations for mitigating the effects of these outliers in future work.
7265--7281
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
5441598e2b690a15198b7a38359e5936e4a46114
0
Relation Classification with Entity Type Restriction
Lyu, Shengfei and Chen, Huanhuan
2,021
nan
390--395
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
97cbc8a78ad588931d7adfe319b4c68f3d167461
1
Retrieval, Re-ranking and Multi-task Learning for Knowledge-Base Question Answering
Wang, Zhiguo and Ng, Patrick and Nallapati, Ramesh and Xiang, Bing
2,021
Question answering over knowledge bases (KBQA) usually involves three sub-tasks, namely topic entity detection, entity linking and relation detection. Due to the large number of entities and relations inside knowledge bases (KB), previous work usually utilized sophisticated rules to narrow down the search space and managed only a subset of KBs in memory. In this work, we leverage a \textit{retrieve-and-rerank} framework to access KBs via a traditional information retrieval (IR) method, and re-rank retrieved candidates with more powerful neural networks such as the pre-trained BERT model. Considering the fact that directly assigning a different BERT model for each sub-task may incur prohibitive costs, we propose to share a BERT encoder across all three sub-tasks and define task-specific layers on top of the shared layer. The unified model is then trained under a multi-task learning framework. Experiments show that: (1) our IR-based retrieval method is able to collect high-quality candidates efficiently, thus enabling our method to adapt easily to large-scale KBs; (2) the BERT model improves the accuracy across all three sub-tasks; and (3) benefiting from multi-task learning, the unified model obtains further improvements with only 1/3 of the original parameters. Our final model achieves competitive results on the SimpleQuestions dataset and superior performance on the FreebaseQA dataset.
347--357
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
489efd419d5690bdf8a255a9de8458e320f306c2
0
{M}ap{RE}: An Effective Semantic Mapping Approach for Low-resource Relation Extraction
Dong, Manqing and Pan, Chunguang and Luo, Zhipeng
2,021
Neural relation extraction models have shown promising results in recent years; however, model performance drops dramatically given only a few training samples. Recent works try to leverage advances in few-shot learning to solve the low-resource problem, where they train label-agnostic models to directly compare the semantic similarities among context sentences in the embedding space. However, the label-aware information, i.e., the relation label that contains the semantic knowledge of the relation itself, is often neglected for prediction. In this work, we propose a framework considering both label-agnostic and label-aware semantic mapping information for low-resource relation extraction. We show that incorporating the above two types of mapping information in both pretraining and fine-tuning can significantly improve model performance on low-resource relation extraction tasks.
2694--2704
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
35f26a3c3f9013e419b47c928d92c333a0e09aa3
1
Auditing Keyword Queries Over Text Documents
Apparreddy, Bharath Kumar Reddy and Rajanala, Sailaja and Singh, Manish
2,021
Data security and privacy is an issue of growing importance in the healthcare domain. In this paper, we present an auditing system to detect privacy violations for unstructured text documents such as healthcare records. Given a sensitive document, we present an anomaly detection algorithm that can find the top-k suspicious keyword queries that may have accessed the sensitive document. Since unstructured healthcare data, such as medical reports and query logs, are not easily available for public research, in this paper, we show how one can use the publicly available DBLP data to create an equivalent healthcare data and query log, which can then be used for experimental evaluation.
378--387
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
046abb93061df136f6aace440ebd13e22d8a272c
0
Fine-grained Entity Typing via Label Reasoning
Liu, Qing and Lin, Hongyu and Xiao, Xinyan and Han, Xianpei and Sun, Le and Wu, Hua
2,021
Conventional entity typing approaches are based on independent classification paradigms, which make them difficult to recognize inter-dependent, long-tailed and fine-grained entity types. In this paper, we argue that the implicitly entailed extrinsic and intrinsic dependencies between labels can provide critical knowledge to tackle the above challenges. To this end, we propose Label Reasoning Network (LRN), which sequentially reasons fine-grained entity labels by discovering and exploiting label dependency knowledge entailed in the data. Specifically, LRN utilizes an auto-regressive network to conduct deductive reasoning and a bipartite attribute graph to conduct inductive reasoning between labels, which can effectively model, learn and reason complex label dependencies in a sequence-to-set, end-to-end manner. Experiments show that LRN achieves the state-of-the-art performance on standard ultra fine-grained entity typing benchmarks, and can also resolve the long-tail label problem effectively.
4611--4622
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
7f30821267a11138497107d947ea39726e4b7fbd
1
{M}i{SS}@{WMT}21: Contrastive Learning-reinforced Domain Adaptation in Neural Machine Translation
Li, Zuchao and Utiyama, Masao and Sumita, Eiichiro and Zhao, Hai
2,021
In this paper, we describe our MiSS system that participated in the WMT21 news translation task. We mainly participated in the evaluation of the three translation directions of the English-Chinese and Japanese-English translation tasks. In the submitted systems, we primarily considered wider networks, deeper networks, relative positional encoding, and dynamic convolutional networks in terms of model structure, while in terms of training, we investigated contrastive learning-reinforced domain adaptation, self-supervised training, and optimization objective switching training methods. According to the final evaluation results, a deeper, wider, and stronger network can improve translation performance in general, yet our domain adaptation method can improve performance even more. In addition, we found that switching to our proposed objective during the fine-tuning phase, using relatively small amounts of domain-related data, can effectively improve the stability of the model{'}s convergence and achieve a better optimum.
154--161
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
79159305945229a16f1c02bde93e8015ebd7dc55
0
Learning from Noisy Labels for Entity-Centric Information Extraction
Zhou, Wenxuan and Chen, Muhao
2,021
Recent information extraction approaches have relied on training deep neural models. However, such models can easily overfit noisy labels and suffer from performance degradation. While it is very costly to filter noisy labels in large learning resources, recent studies show that such labels take more training steps to be memorized and are more frequently forgotten than clean labels, therefore are identifiable in training. Motivated by such properties, we propose a simple co-regularization framework for entity-centric information extraction, which consists of several neural models with identical structures but different parameter initialization. These models are jointly optimized with the task-specific losses and are regularized to generate similar predictions based on an agreement loss, which prevents overfitting on noisy labels. Extensive experiments on two widely used but noisy benchmarks for information extraction, TACRED and CoNLL03, demonstrate the effectiveness of our framework. We release our code to the community for future research.
5381--5392
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
dbfc17833434243e07c4629e58f3d8ed7112dbfe
1
{SHAPELURN}: An Interactive Language Learning Game with Logical Inference
Stein, Katharina and Harter, Leonie and Geiger, Luisa
2,021
We investigate if a model can learn natural language with minimal linguistic input through interaction. Addressing this question, we design and implement an interactive language learning game that learns logical semantic representations compositionally. Our game allows us to explore the benefits of logical inference for natural language learning. Evaluation shows that the model can accurately narrow down potential logical representations for words over the course of the game, suggesting that our model is able to learn lexical mappings from scratch successfully.
16--24
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
67f3d4addcfb9066ec436934f8d48ac58fa2b479
0
Ultra-Fine Entity Typing with Weak Supervision from a Masked Language Model
Dai, Hongliang and Song, Yangqiu and Wang, Haixun
2,021
Recently, there has been an effort to extend fine-grained entity typing by using a richer and ultra-fine set of types, and labeling noun phrases, including pronouns and nominal nouns, instead of just named entity mentions. A key challenge for this ultra-fine entity typing task is that human-annotated data are extremely scarce, and the annotation ability of existing distant or weak supervision approaches is very limited. To remedy this problem, in this paper, we propose to obtain training data for ultra-fine entity typing by using a BERT Masked Language Model (MLM). Given a mention in a sentence, our approach constructs an input for the BERT MLM so that it predicts context-dependent hypernyms of the mention, which can be used as type labels. Experimental results demonstrate that, with the help of these automatically generated labels, the performance of an ultra-fine entity typing model can be improved substantially. We also show that our approach can be applied to improve traditional fine-grained entity typing after performing simple type mapping.
1790--1799
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
70b49a024787d3ad374fb78dc87e3ba2b5e16566
1
Euphemistic Phrase Detection by Masked Language Model
Zhu, Wanzheng and Bhat, Suma
2,021
It is a well-known approach for fringe groups and organizations to use euphemisms{---}ordinary-sounding and innocent-looking words with a secret meaning{---}to conceal what they are discussing. For instance, drug dealers often use {``}pot{''} for marijuana and {``}avocado{''} for heroin. From a social media content moderation perspective, though recent advances in NLP have enabled the automatic detection of such single-word euphemisms, no existing work is capable of automatically detecting multi-word euphemisms, such as {``}blue dream{''} (marijuana) and {``}black tar{''} (heroin). Our paper tackles the problem of euphemistic phrase detection without human effort for the first time, as far as we are aware. We first perform phrase mining on a raw text corpus (e.g., social media posts) to extract quality phrases. Then, we utilize word embedding similarities to select a set of euphemistic phrase candidates. Finally, we rank those candidates by a masked language model{---}SpanBERT. Compared to strong baselines, we report 20-50{\%} higher detection accuracies using our algorithm for detecting euphemistic phrases.
163--168
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
8bcb8dd3fadb35320fad382abda725e49454be6f
0
Modeling Fine-Grained Entity Types with Box Embeddings
Onoe, Yasumasa and Boratko, Michael and McCallum, Andrew and Durrett, Greg
2,021
Neural entity typing models typically represent fine-grained entity types as vectors in a high-dimensional space, but such spaces are not well-suited to modeling these types{'} complex interdependencies. We study the ability of box embeddings, which embed concepts as d-dimensional hyperrectangles, to capture hierarchies of types even when these relationships are not defined explicitly in the ontology. Our model represents both types and entity mentions as boxes. Each mention and its context are fed into a BERT-based model to embed that mention in our box space; essentially, this model leverages typological clues present in the surface text to hypothesize a type representation for the mention. Box containment can then be used to derive both the posterior probability of a mention exhibiting a given type and the conditional probability relations between types themselves. We compare our approach with a vector-based typing model and observe state-of-the-art performance on several entity typing benchmarks. In addition to competitive typing performance, our box-based model shows better performance in prediction consistency (predicting a supertype and a subtype together) and confidence (i.e., calibration), demonstrating that the box-based model captures the latent type hierarchies better than the vector-based model does.
2051--2064
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
176e3cbe3141c8b874df663711dca9b7470b8243
1
Interpretable Entity Representations through Large-Scale Typing
Onoe, Yasumasa and Durrett, Greg
2,020
In standard methodology for natural language processing, entities in text are typically embedded in dense vector spaces with pre-trained models. The embeddings produced this way are effective when fed into downstream models, but they require end-task fine-tuning and are fundamentally difficult to interpret. In this paper, we present an approach to creating entity representations that are human readable and achieve high performance on entity-related tasks out of the box. Our representations are vectors whose values correspond to posterior probabilities over fine-grained entity types, indicating the confidence of a typing model{'}s decision that the entity belongs to the corresponding type. We obtain these representations using a fine-grained entity typing model, trained either on supervised ultra-fine entity typing data (Choi et al. 2018) or distantly-supervised examples from Wikipedia. On entity probing tasks involving recognizing entity identity, our embeddings used in parameter-free downstream models achieve competitive performance with ELMo- and BERT-based embeddings in trained models. We also show that it is possible to reduce the size of our type set in a learning-based way for particular domains. Finally, we show that these embeddings can be post-hoc modified through a small number of rules to incorporate domain knowledge and improve performance.
612--624
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
782a50a48ba5d32839631254285d989bfadfd193
1
Understanding Pure Character-Based Neural Machine Translation: The Case of Translating {F}innish into {E}nglish
Tang, Gongbo and Sennrich, Rico and Nivre, Joakim
2,020
Recent work has shown that deeper character-based neural machine translation (NMT) models can outperform subword-based models. However, it is still unclear what makes deeper character-based models successful. In this paper, we conduct an investigation into pure character-based models in the case of translating Finnish into English, including exploring the ability to learn word senses and morphological inflections and the attention mechanism. We demonstrate that word-level information is distributed over the entire character sequence rather than over a single character, and characters at different positions play different roles in learning linguistic knowledge. In addition, character-based models need more layers to encode word senses, which explains why only deeper models outperform subword-based models. The attention distribution pattern shows that separators attract a lot of attention, and we explore a sparse word-level attention to enforce character hidden states to capture the full word-level information. Experimental results show that word-level attention with a single head results in a drop of 1.2 BLEU points.
4251--4262
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
48df86003af463a518ebee931cbc6114fe45651a
0
{MAVEN}: {A} {M}assive {G}eneral {D}omain {E}vent {D}etection {D}ataset
Wang, Xiaozhi and Wang, Ziqi and Han, Xu and Jiang, Wangyi and Han, Rong and Liu, Zhiyuan and Li, Juanzi and Li, Peng and Lin, Yankai and Zhou, Jie
2,020
Event detection (ED), which means identifying event trigger words and classifying event types, is the first and most fundamental step for extracting event knowledge from plain text. Most existing datasets exhibit the following issues that limit further development of ED: (1) Data scarcity. Existing small-scale datasets are not sufficient for training and stably benchmarking increasingly sophisticated modern neural methods. (2) Low coverage. The limited event types of existing datasets cannot cover general-domain events well, which restricts the applications of ED models. To alleviate these problems, we present a MAssive eVENt detection dataset (MAVEN), which contains 4,480 Wikipedia documents, 118,732 event mention instances, and 168 event types. MAVEN alleviates the data scarcity problem and covers many more general event types. We reproduce the recent state-of-the-art ED models and conduct a thorough evaluation on MAVEN. The experimental results show that existing ED methods cannot achieve results on MAVEN as promising as those on the small datasets, which suggests that ED in the real world remains a challenging task and requires further research efforts. We also discuss further directions for general-domain ED with empirical analyses. The source code and dataset can be obtained from \url{https://github.com/THU-KEG/MAVEN-dataset}.
1652--1671
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
46e84b0a7b3761d1a3c1577c66225453ab2cbc1c
1
Interpreting Neural {CWI} Classifiers{'} Weights as Vocabulary Size
Ehara, Yo
2,020
Complex Word Identification (CWI) is a task for the identification of words that are challenging for second-language learners to read. Even though the use of neural classifiers is now common in CWI, the interpretation of their parameters remains difficult. This paper analyzes neural CWI classifiers and shows that some of their parameters can be interpreted as vocabulary size. We present a novel formalization of vocabulary size measurement methods that are practiced in the applied linguistics field as a kind of neural classifier. We also contribute to building a novel dataset for validating vocabulary testing and readability via crowdsourcing.
171--176
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
8cba5bf43132b3f7785e2bdde80aef26e38fa9d4
0
Improving {AMR} Parsing with Sequence-to-Sequence Pre-training
Xu, Dongqin and Li, Junhui and Zhu, Muhua and Zhang, Min and Zhou, Guodong
2,020
In the literature, research on abstract meaning representation (AMR) parsing is much restricted by the size of the human-curated datasets that are critical to building an AMR parser with good performance. To alleviate this data size restriction, pre-trained models have been drawing more and more attention in AMR parsing. However, previous pre-trained models, like BERT, are implemented for general purposes and may not work as expected for the specific task of AMR parsing. In this paper, we focus on sequence-to-sequence (seq2seq) AMR parsing and propose a seq2seq pre-training approach to build pre-trained models, both singly and jointly, on three relevant tasks, i.e., machine translation, syntactic parsing, and AMR parsing itself. Moreover, we extend the vanilla fine-tuning method to a multi-task learning fine-tuning method that optimizes for the performance of AMR parsing while endeavoring to preserve the response of pre-trained models. Extensive experimental results on two English benchmark datasets show that both the single and joint pre-trained models significantly improve the performance (e.g., from 71.5 to 80.2 on AMR 2.0), which reaches the state of the art. The result is very encouraging since we achieve this with seq2seq models rather than complex models. We make our code and model available at \url{https://github.com/xdqkid/S2S-AMR-Parser}.
2501--2511
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
12b28c2d1b58234daa0f06ab43353c401eda1958
1
{F}lex{E}val, cr{\'e}ation de sites web l{\'e}gers pour des campagnes de tests perceptifs multim{\'e}dias ({F}lex{E}val, creation of light websites for multimedia perceptual test campaigns)
Fayet, C{\'e}dric and Blond, Alexis and Coulombel, Gr{\'e}goire and Simon, Claude and Lolive, Damien and Lecorv{\'e}, Gw{\'e}nol{\'e} and Chevelu, Jonathan and Le Maguer, S{\'e}bastien
2,020
We present FlexEval, a tool for designing and deploying multimedia perceptual tests in the form of a lightweight website. Built on standard and open web technologies, notably the Flask framework, FlexEval offers great design flexibility, guarantees of longevity, and the support of active user communities. The application is available as open source via the Git repository \url{https://gitlab.inria.fr/expression/tools/flexeval}.
22--25
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
29bbf9d73cfc35ec9e6e602968eb0f76bba0fc91
0
What Are You Trying to Do? Semantic Typing of Event Processes
Chen, Muhao and Zhang, Hongming and Wang, Haoyu and Roth, Dan
2,020
This paper studies a new cognitively motivated semantic typing task, multi-axis event process typing, that, given an event process, attempts to infer free-form type labels describing (i) the type of action made by the process and (ii) the type of object the process seeks to affect. This task is inspired by computational and cognitive studies of event understanding, which suggest that understanding processes of events is often directed by recognizing the goals, plans or intentions of the protagonist(s). We develop a large dataset containing over 60k event processes, featuring ultra fine-grained typing on both the action and object type axes with very large (10{\^{}}3{--}10{\^{}}4) label vocabularies. We then propose a hybrid learning framework, P2GT, which addresses the challenging typing problem with indirect supervision from glosses and a joint learning-to-rank framework. As our experiments indicate, P2GT supports identifying the intent of processes, as well as the fine semantic type of the affected object. It also demonstrates the capability of handling few-shot cases, and strong generalizability on out-of-domain processes.
531--542
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
7a2206c883165864e545f25cf00259e29eec058f
1
Modularized Syntactic Neural Networks for Sentence Classification
Wu, Haiyan and Liu, Ying and Shi, Shaoyun
2,020
This paper focuses on tree-based modeling for the sentence classification task. In existing works, aggregating on a syntax tree usually considers local information of sub-trees. In contrast, in addition to the local information, our proposed Modularized Syntactic Neural Network (MSNN) utilizes the syntax category labels and takes advantage of the global context while modeling sub-trees. In MSNN, each node of a syntax tree is modeled by a label-related syntax module. Each syntax module aggregates the outputs of lower-level modules, and finally, the root module provides the sentence representation. We design a tree-parallel mini-batch strategy for efficient training and predicting. Experimental results on four benchmark datasets show that our MSNN significantly outperforms previous state-of-the-art tree-based methods on the sentence classification task.
2786--2792
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
9f609d4ccebe4d651515375d3481bbcd5fe963f9
0
Entity-Relation Extraction as Multi-Turn Question Answering
Li, Xiaoya and Yin, Fan and Sun, Zijun and Li, Xiayu and Yuan, Arianna and Chai, Duo and Zhou, Mingxin and Li, Jiwei
2,019
In this paper, we propose a new paradigm for the task of entity-relation extraction. We cast the task as a multi-turn question answering problem, i.e., the extraction of entities and relations is transformed into the task of identifying answer spans from the context. This multi-turn QA formalization comes with several key advantages: firstly, the question query encodes important information for the entity/relation class we want to identify; secondly, QA provides a natural way of jointly modeling entities and relations; and thirdly, it allows us to exploit well-developed machine reading comprehension (MRC) models. Experiments on the ACE and the CoNLL04 corpora demonstrate that the proposed paradigm significantly outperforms previous best models. We are able to obtain state-of-the-art results on all of the ACE04, ACE05 and CoNLL04 datasets, increasing the SOTA results on the three datasets to 49.6 (+1.2), 60.3 (+0.7) and 69.2 (+1.4), respectively. Additionally, we construct and will release a newly developed dataset, RESUME, which requires multi-step reasoning to construct entity dependencies, as opposed to the single-step dependency extraction in the triplet extraction in previous datasets. The proposed multi-turn QA model also achieves the best performance on the RESUME dataset.
1340--1350
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
2c5ec74fb56fbfbceaa4cd5c8312ada4e2e19503
1
Identifying Grammar Rules for Language Education with Dependency Parsing in {G}erman
Metheniti, Eleni and Park, Pomi and Kolesova, Kristina and Neumann, G{\"u}nter
2,019
nan
100--111
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
dea084fbd77e6a8dd14f03edba06eca633ca0964
0
Learning to Denoise Distantly-Labeled Data for Entity Typing
Onoe, Yasumasa and Durrett, Greg
2,019
Distantly-labeled data can be used to scale up training of statistical models, but it is typically noisy and that noise can vary with the distant labeling technique. In this work, we propose a two-stage procedure for handling this type of data: denoise it with a learned model, then train our final model on clean and denoised distant data with standard supervised training. Our denoising approach consists of two parts. First, a filtering function discards examples from the distantly labeled data that are wholly unusable. Second, a relabeling function repairs noisy labels for the retained examples. Each of these components is a model trained on synthetically-noised examples generated from a small manually-labeled set. We investigate this approach on the ultra-fine entity typing task of Choi et al. (2018). Our baseline model is an extension of their model with pre-trained ELMo representations, which already achieves state-of-the-art performance. Adding distant data that has been denoised with our learned models gives further performance gains over this base model, outperforming models trained on raw distant data or heuristically-denoised distant data.
2407--2417
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
dc138300b87f5bfccec609644d5edc08c4d783e9
1
{HABL}ex: Human Annotated Bilingual Lexicons for Experiments in Machine Translation
Thompson, Brian and Knowles, Rebecca and Zhang, Xuan and Khayrallah, Huda and Duh, Kevin and Koehn, Philipp
2,019
Bilingual lexicons are valuable resources used by professional human translators. While these resources can be easily incorporated in statistical machine translation, it is unclear how to best do so in the neural framework. In this work, we present the HABLex dataset, designed to test methods for bilingual lexicon integration into neural machine translation. Our data consists of human generated alignments of words and phrases in machine translation test sets in three language pairs (Russian-English, Chinese-English, and Korean-English), resulting in clean bilingual lexicons which are well matched to the reference. We also present two simple baselines - constrained decoding and continued training - and an improvement to continued training to address overfitting.
1382--1387
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
5d9bb7e6fa899ec8e1de66389cfeb5639044c56b
0
Ultra-Fine Entity Typing
Choi, Eunsol and Levy, Omer and Choi, Yejin and Zettlemoyer, Luke
2,018
We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict ultra-fine types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state of the art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets.
87--96
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
4157834ed2d2fea6b6f652a72a9d0487edbc9f57
1
Training for Diversity in Image Paragraph Captioning
Melas-Kyriazi, Luke and Rush, Alexander and Han, George
2,018
Image paragraph captioning models aim to produce detailed descriptions of a source image. These models use similar techniques as standard image captioning models, but they have encountered issues in text generation, notably a lack of diversity between sentences, that have limited their effectiveness. In this work, we consider applying sequence-level training for this task. We find that standard self-critical training produces poor results, but when combined with an integrated penalty on trigram repetition produces much more diverse paragraphs. This simple training approach improves on the best result on the Visual Genome paragraph captioning dataset from 16.9 to 30.6 CIDEr, with gains on METEOR and BLEU as well, without requiring any architectural changes.
757--761
8ba4a5f890b13b1cee77cdc976a712245cc6e9c0
Unified Semantic Typing with Meaningful Label Inference
b8298cf0056af5afa3185181ddd5f6bb03181696
0
Improving Semantic Parsing via Answer Type Inference
Yavuz, Semih and Gur, Izzeddin and Su, Yu and Srivatsa, Mudhakar and Yan, Xifeng
2,016
nan
149--159
0ce715758e4a7d62bfb1c4cebcca8afa520694f3
Schema and content aware classification for predicting the sources containing an answer over corpus and knowledge graphs
f3594f9d60c98cac88f9033c69c2b666713ed6d6
1
What{'}s the Issue Here?: Task-based Evaluation of Reader Comment Summarization Systems
Barker, Emma and Paramita, Monica and Funk, Adam and Kurtic, Emina and Aker, Ahmet and Foster, Jonathan and Hepple, Mark and Gaizauskas, Robert
2,016
Automatic summarization of reader comments in on-line news is an extremely challenging task and a capability for which there is a clear need. Work to date has focussed on producing extractive summaries using well-known techniques imported from other areas of language processing. But are extractive summaries of comments what users really want? Do they support users in performing the sorts of tasks they are likely to want to perform with reader comments? In this paper we address these questions by doing three things. First, we offer a specification of one possible summary type for reader comment, based on an analysis of reader comment in terms of issues and viewpoints. Second, we define a task-based evaluation framework for reader comment summarization that allows summarization systems to be assessed in terms of how well they support users in a time-limited task of identifying issues and characterising opinion on issues in comments. Third, we describe a pilot evaluation in which we used the task-based evaluation framework to evaluate a prototype reader comment clustering and summarization system, demonstrating the viability of the evaluation framework and illustrating the sorts of insight such an evaluation affords.
3094--3101
0ce715758e4a7d62bfb1c4cebcca8afa520694f3
Schema and content aware classification for predicting the sources containing an answer over corpus and knowledge graphs
e7c7d867a4729953f500db6f8dbd5266b04af9b9
0
Question Answering on Knowledge Bases and Text using Universal Schema and Memory Networks
Das, Rajarshi and Zaheer, Manzil and Reddy, Siva and McCallum, Andrew
2,017
Existing question answering methods infer answers either from a knowledge base or from raw text. While knowledge base (KB) methods are good at answering compositional questions, their performance is often affected by the incompleteness of the KB. In contrast, web text contains millions of facts that are absent from the KB, albeit in an unstructured form. Universal schema can support reasoning on the union of both structured KBs and unstructured text by aligning them in a common embedded space. In this paper we extend universal schema to natural language question answering, employing Memory networks to attend to the large body of facts in the combination of text and KB. Our models can be trained in an end-to-end fashion on question-answer pairs. Evaluation results on the Spades fill-in-the-blank question answering dataset show that exploiting universal schema for question answering is better than using either a KB or text alone. This model also outperforms the current state-of-the-art by 8.5 F1 points.
358--365
0ce715758e4a7d62bfb1c4cebcca8afa520694f3
Schema and content aware classification for predicting the sources containing an answer over corpus and knowledge graphs
2b2090eab4abe27e6e5e4ca94afaf82e511b63bd
1
Personalized Machine Translation: Preserving Original Author Traits
Rabinovich, Ella and Patel, Raj Nath and Mirkin, Shachar and Specia, Lucia and Wintner, Shuly
2,017
The language that we produce reflects our personality, and various personal and demographic characteristics can be detected in natural language texts. We focus on one particular personal trait of the author, gender, and study how it is manifested in original texts and in translations. We show that the author{'}s gender has a powerful, clear signal in original texts, but this signal is obfuscated in human and machine translation. We then propose simple domain-adaptation techniques that help retain the original gender traits in the translation, without harming the quality of the translation, thereby creating more personalized machine translation systems.
1074--1084
0ce715758e4a7d62bfb1c4cebcca8afa520694f3
Schema and content aware classification for predicting the sources containing an answer over corpus and knowledge graphs
07fab9a1a5d8e65ce50965f514f2d0e6022a6b94
0
Mapping Text to Knowledge Graph Entities using Multi-Sense {LSTM}s
Kartsaklis, Dimitri and Pilehvar, Mohammad Taher and Collier, Nigel
2,018
This paper addresses the problem of mapping natural language text to knowledge base entities. The mapping process is approached as a composition of a phrase or a sentence into a point in a multi-dimensional entity space obtained from a knowledge graph. The compositional model is an LSTM equipped with a dynamic disambiguation mechanism on the input word embeddings (a Multi-Sense LSTM), addressing polysemy issues. Further, the knowledge base space is prepared by collecting random walks from a graph enhanced with textual features, which act as a set of semantic bridges between text and knowledge base entities. The ideas of this work are demonstrated on large-scale text-to-entity mapping and entity classification tasks, with state of the art results.
1959--1970
0ce715758e4a7d62bfb1c4cebcca8afa520694f3
Schema and content aware classification for predicting the sources containing an answer over corpus and knowledge graphs
33b60f5493e1a1cb303dd33569925e0ed0c7e189
1
Multi-Source Multi-Class Fake News Detection
Karimi, Hamid and Roy, Proteek and Saba-Sadiya, Sari and Tang, Jiliang
2,018
Fake news spreading through media outlets poses a real threat to the trustworthiness of information and detecting fake news has attracted increasing attention in recent years. Fake news is typically written intentionally to mislead readers, which determines that fake news detection merely based on news content is tremendously challenging. Meanwhile, fake news could contain true evidence to mock true news and presents different degrees of fakeness, which further exacerbates the detection difficulty. On the other hand, the spread of fake news produces various types of data from different perspectives. These multiple sources provide rich contextual information about fake news and offer unprecedented opportunities for advanced fake news detection. In this paper, we study fake news detection with different degrees of fakeness by integrating multiple sources. In particular, we introduce approaches to combine information from multiple sources and to discriminate between different degrees of fakeness, and propose a Multi-source Multi-class Fake news Detection framework MMFD, which combines automated feature extraction, multi-source fusion and automated degrees of fakeness detection into a coherent and interpretable model. Experimental results on the real-world data demonstrate the effectiveness of the proposed framework and extensive experiments are further conducted to understand the working of the proposed framework.
1546--1557
0ce715758e4a7d62bfb1c4cebcca8afa520694f3
Schema and content aware classification for predicting the sources containing an answer over corpus and knowledge graphs
03aa69b71705890cb1555effddbd91ade9aa234c
0
Bidirectional {LSTM}-{CRF} for Named Entity Recognition
Panchendrarajan, Rrubaa and Amaresan, Aravindh
2,018
nan
nan
0ce715758e4a7d62bfb1c4cebcca8afa520694f3
Schema and content aware classification for predicting the sources containing an answer over corpus and knowledge graphs
574453271bf9dbce7df005e9e1c2e0bb77eb1c6d
1
{IPSL}: A Database of Iconicity Patterns in Sign Languages. Creation and Use
Kimmelman, Vadim and Klezovich, Anna and Moroz, George
2,018
nan
nan
0ce715758e4a7d62bfb1c4cebcca8afa520694f3
Schema and content aware classification for predicting the sources containing an answer over corpus and knowledge graphs
9950b5aaeb9b73554568b0630ef490f5457d110e
0
Progressively Pretrained Dense Corpus Index for Open-Domain Question Answering
Xiong, Wenhan and Wang, Hong and Wang, William Yang
2,021
Commonly used information retrieval methods such as TF-IDF in open-domain question answering (QA) systems are insufficient to capture deep semantic matching that goes beyond lexical overlaps. Some recent studies consider the retrieval process as maximum inner product search (MIPS) using dense question and paragraph representations, achieving promising results on several information-seeking QA datasets. However, the pretraining of the dense vector representations is highly resource-demanding, \textit{e.g.}, requires a very large batch size and lots of training steps. In this work, we propose a sample-efficient method to pretrain the paragraph encoder. First, instead of using heuristically created pseudo question-paragraph pairs for pretraining, we use an existing pretrained sequence-to-sequence model to build a strong question generator that creates high-quality pretraining data. Second, we propose a simple progressive pretraining algorithm to ensure the existence of effective negative samples in each batch. Across three open-domain QA datasets, our method consistently outperforms a strong dense retrieval baseline that uses 6 times more computation for training. On two of the datasets, our method achieves more than 4-point absolute improvement in terms of answer exact match.
2803--2815
0ce715758e4a7d62bfb1c4cebcca8afa520694f3
Schema and content aware classification for predicting the sources containing an answer over corpus and knowledge graphs
469d92f195aebfa09e9b411ad92b3c879bcd1eba
1
Coreference-Aware Dialogue Summarization
Liu, Zhengyuan and Shi, Ke and Chen, Nancy
2,021
Summarizing conversations via neural approaches has been gaining research traction lately, yet it is still challenging to obtain practical solutions. Examples of such challenges include unstructured information exchange in dialogues, informal interactions between speakers, and dynamic role changes of speakers as the dialogue evolves. Many of such challenges result in complex coreference links. Therefore, in this work, we investigate different approaches to explicitly incorporate coreference information in neural abstractive dialogue summarization models to tackle the aforementioned challenges. Experimental results show that the proposed approaches achieve state-of-the-art performance, implying it is useful to utilize coreference information in dialogue summarization. Evaluation results on factual correctness suggest such coreference-aware models are better at tracing the information flow among interlocutors and associating accurate status/actions with the corresponding interlocutors and person mentions.
509--519
0ce715758e4a7d62bfb1c4cebcca8afa520694f3
Schema and content aware classification for predicting the sources containing an answer over corpus and knowledge graphs
4696d6dfaf78ce2a65c3111550a50eff9423b896
0
Proceedings of the Workshop Computational Semantics Beyond Events and Roles
nan
2,017
nan
nan
0ce715758e4a7d62bfb1c4cebcca8afa520694f3
Schema and content aware classification for predicting the sources containing an answer over corpus and knowledge graphs
04482d34aacdd6d0170d0935855ee5b403b84aa9
1
Morphology-based Entity and Relational Entity Extraction Framework for {A}rabic
Jaber, Amin and Zaraket, Fadi A.
2,017
nan
97--121
0ce715758e4a7d62bfb1c4cebcca8afa520694f3
Schema and content aware classification for predicting the sources containing an answer over corpus and knowledge graphs
46a6d8578f3f61965992039df8a4c8aabdff275f
0
Question Answering on {F}reebase via Relation Extraction and Textual Evidence
Xu, Kun and Reddy, Siva and Feng, Yansong and Huang, Songfang and Zhao, Dongyan
2,016
nan
2326--2336
0ce715758e4a7d62bfb1c4cebcca8afa520694f3
Schema and content aware classification for predicting the sources containing an answer over corpus and knowledge graphs
e3919e94c811fd85f5038926fa354619861674f9
1
{DTS}im at {S}em{E}val-2016 Task 1: Semantic Similarity Model Including Multi-Level Alignment and Vector-Based Compositional Semantics
Banjade, Rajendra and Maharjan, Nabin and Gautam, Dipesh and Rus, Vasile
2,016
nan
640--644
0ce715758e4a7d62bfb1c4cebcca8afa520694f3
Schema and content aware classification for predicting the sources containing an answer over corpus and knowledge graphs
60ace2c8f672f56702257296918a26e1c99c3862
0
Constraint-Based Question Answering with Knowledge Graph
Bao, Junwei and Duan, Nan and Yan, Zhao and Zhou, Ming and Zhao, Tiejun
2,016
WebQuestions and SimpleQuestions are two benchmark data-sets commonly used in recent knowledge-based question answering (KBQA) work. Most questions in them are {`}simple{'} questions which can be answered based on a single relation in the knowledge base. Such data-sets lack the capability of evaluating KBQA systems on complicated questions. Motivated by this issue, we release a new data-set, namely ComplexQuestions, aiming to measure the quality of KBQA systems on {`}multi-constraint{'} questions which require multiple knowledge base relations to get the answer. Besides, we propose a novel systematic KBQA approach to solve multi-constraint questions. Compared to state-of-the-art methods, our approach not only obtains comparable results on the two existing benchmark data-sets, but also achieves significant improvements on ComplexQuestions.
2503--2514
0ce715758e4a7d62bfb1c4cebcca8afa520694f3
Schema and content aware classification for predicting the sources containing an answer over corpus and knowledge graphs
3689102f44116a46304ec512594478a1c615ae02
1
Improving Statistical Machine Translation Performance by Oracle-{BLEU} Model Re-estimation
Dakwale, Praveen and Monz, Christof
2,016
nan
38--44
0ce715758e4a7d62bfb1c4cebcca8afa520694f3
Schema and content aware classification for predicting the sources containing an answer over corpus and knowledge graphs
fc452d9d926e14bca793e44c3ee8f8760521852e
0