Each row pairs a candidate paper (`title`, `author`, `year`, `abstract`, `pages`, `paperID`) with a systematic-review query (`queryID`, `query`) and a binary screening label (`include`, 1 = included, 0 = excluded).

| Column | Type | Range |
|---|---|---|
| `title` | string | length 5–342 |
| `author` | string | length 3–2.17k |
| `year` | int64 | 1.95k–2.02k |
| `abstract` | string | length 0–12.7k |
| `pages` | string | length 1–702 |
| `queryID` | string | length 4–40 |
| `query` | string | length 1–300 |
| `paperID` | string | length 0–40 |
| `include` | int64 | 0–1 |
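With one row per (paper, query) pair, the data can be screened programmatically. Below is a minimal sketch, assuming the records are exported as a JSON Lines file; the file name `screening.jsonl` is a placeholder and not part of this card.

```python
# Minimal sketch of working with records that follow the schema above.
# Assumption: the rows are available as JSON Lines; adjust the path/format
# to whatever this dataset actually ships with.
import pandas as pd

df = pd.read_json("screening.jsonl", lines=True)

# Papers marked as included (include == 1) across all review queries.
included = df[df["include"] == 1]
print(f"{len(included)} of {len(df)} candidate papers were included")

# Candidate vs. included counts per systematic-review query.
summary = (
    df.groupby("query")["include"]
      .agg(candidates="count", included="sum")
      .reset_index()
)
print(summary)
```

The raw records below follow the column order of the schema (title, author, year, abstract, pages, queryID, query, paperID, include); missing values are shown as nan.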
Embedding Methods for Fine Grained Entity Type Classification
Yogatama, Dani and Gillick, Daniel and Lazic, Nevena
2,015
nan
291--296
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
cd51e6faf377104269ba1e905ce430650677155c
1
Feature-Rich Part-Of-Speech Tagging Using Deep Syntactic and Semantic Analysis
Jackov, Luchezar
2,015
nan
224--231
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
6c27573c00e04f9956c6ccc38fac8fc753267161
0
Improving Entity Linking through Semantic Reinforced Entity Embeddings
Hou, Feng and Wang, Ruili and He, Jun and Zhou, Yi
2,020
Entity embeddings, which represent different aspects of each entity with a single vector like word embeddings, are a key component of neural entity linking models. Existing entity embeddings are learned from canonical Wikipedia articles and local contexts surrounding target entities. Such entity embeddings are effective, but too distinctive for linking models to learn contextual commonality. We propose a simple yet effective method, FGS2EE, to inject fine-grained semantic information into entity embeddings to reduce the distinctiveness and facilitate the learning of contextual commonality. FGS2EE first uses the embeddings of semantic type words to generate semantic embeddings, and then combines them with existing entity embeddings through linear aggregation. Extensive experiments show the effectiveness of such embeddings. Based on our entity embeddings, we achieved new state-of-the-art performance on entity linking.
6843--6848
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
10108878e053d28d72f059d7ec9e4a15281dad96
1
Marking Trustworthiness with Near Synonyms: A Corpus-based Study of “Renwei” and “Yiwei” in Chinese
Li, Bei and Huang, Chu-Ren and Chen, Si
2,020
nan
453--461
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
ef018aa1e8f465ab76e192d41c32c6c237cfeb31
0
FINET: Context-Aware Fine-Grained Named Entity Typing
Del Corro, Luciano and Abujabal, Abdalghani and Gemulla, Rainer and Weikum, Gerhard
2,015
nan
868--878
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
729698ea49c505771038cc84756ad4569f35e816
1
WSD-games: a Game-Theoretic Algorithm for Unsupervised Word Sense Disambiguation
Tripodi, Rocco and Pelillo, Marcello
2,015
nan
329--334
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
8b25ca2fcceb3ad47e3a552d122e25c841088676
0
MZET: Memory Augmented Zero-Shot Fine-grained Named Entity Typing
Zhang, Tao and Xia, Congying and Lu, Chun-Ta and Yu, Philip
2,020
Named entity typing (NET) is a classification task of assigning an entity mention in the context with given semantic types. However, with the growing size and granularity of the entity types, few previous studies have addressed newly emerging entity types. In this paper, we propose MZET, a novel memory augmented FNET (Fine-grained NET) model, to tackle the unseen types in a zero-shot manner. MZET incorporates character-level, word-level, and contextual-level information to learn the entity mention representation. In addition, MZET models the semantic meaning and the hierarchical structure in the entity type representation. Finally, through the memory component which models the relationship between the entity mention and the entity type, MZET transfers the knowledge from seen entity types to the zero-shot ones. Extensive experiments on three public datasets show the superior performance obtained by MZET, which surpasses the state-of-the-art FNET neural network models with up to 8% gain in Micro-F1 and Macro-F1 score.
77--87
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
564693e8f95ea1046f567f73715a838900289c3f
1
Incremental Neural Lexical Coherence Modeling
Jeon, Sungho and Strube, Michael
2,020
Pretrained language models, neural models pretrained on massive amounts of data, have established the state of the art in a range of NLP tasks. They are based on a modern machine-learning technique, the Transformer, which relates all items simultaneously to capture semantic relations in sequences. However, this differs from what humans do. Humans read sentences one-by-one, incrementally. Can neural models benefit by interpreting texts incrementally as humans do? We investigate this question in coherence modeling. We propose a coherence model which interprets sentences incrementally to capture lexical relations between them. We compare the state of the art in each task, simple neural models relying on a pretrained language model, and our model in two downstream tasks. Our findings suggest that interpreting texts incrementally as humans do could be useful for designing more advanced models.
6752--6758
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
8f70089de702d5da30e600ae53d35bc1580381cb
0
HYENA: Hierarchical Type Classification for Entity Names
Yosef, Mohamed Amir and Bauer, Sandro and Hoffart, Johannes and Spaniol, Marc and Weikum, Gerhard
2,012
nan
1361--1370
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
bcaef36e362c84c5b492425880e85f1ac781c661
1
Employing Compositional Semantics and Discourse Consistency in Chinese Event Extraction
Li, Peifeng and Zhou, Guodong and Zhu, Qiaoming and Hou, Libin
2,012
nan
1006--1016
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
f00b7db20c7b65292c4536cc82ad6bdb8e4afd04
0
Extended Named Entity Ontology with Attribute Information
Sekine, Satoshi
2,008
Named Entities (NE) are regarded as an important type of semantic knowledge in many natural language processing (NLP) applications. Originally, a limited number of NE categories were proposed. In MUC, it was 7 categories - people, organization, location, time, date, money and percentage expressions. However, it was noticed that such a limited number of NE categories is too small for many applications. The author has proposed Extended Named Entity (ENE), which has about 200 categories (Sekine and Nobata 04). During the development of ENE, we noticed that many ENE categories have specific attributes, and those provide very important information for the entities. For example, “rivers” have attributes like “source location”, “outflow”, and “length”. Some such information is essential to “knowing about” the river, while the name is only a label which can be used to refer to the river. Also, such attributes are important information for many NLP applications. In this paper, we report on the design of a set of attributes for ENE categories. We used a bottom up approach to creating the knowledge using a Japanese encyclopedia, which contains abundant descriptions of ENE instances.
nan
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
24424f4050700dfa940851385d2e1ab7ba5d0cdc
1
Latent Morpho-Semantic Analysis: Multilingual Information Retrieval with Character N-Grams and Mutual Information
Chew, Peter A. and Bader, Brett W. and Abdelali, Ahmed
2,008
nan
129--136
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
89677da2c13fc1647ed1ade5aecaa8a40d9002b2
0
Ultra-Fine Entity Typing with Weak Supervision from a Masked Language Model
Dai, Hongliang and Song, Yangqiu and Wang, Haixun
2,021
Recently, there is an effort to extend fine-grained entity typing by using a richer and ultra-fine set of types, and labeling noun phrases including pronouns and nominal nouns instead of just named entity mentions. A key challenge for this ultra-fine entity typing task is that human annotated data are extremely scarce, and the annotation ability of existing distant or weak supervision approaches is very limited. To remedy this problem, in this paper, we propose to obtain training data for ultra-fine entity typing by using a BERT Masked Language Model (MLM). Given a mention in a sentence, our approach constructs an input for the BERT MLM so that it predicts context dependent hypernyms of the mention, which can be used as type labels. Experimental results demonstrate that, with the help of these automatically generated labels, the performance of an ultra-fine entity typing model can be improved substantially. We also show that our approach can be applied to improve traditional fine-grained entity typing after performing simple type mapping.
1790--1799
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
70b49a024787d3ad374fb78dc87e3ba2b5e16566
1
Optimizing NLU Reranking Using Entity Resolution Signals in Multi-domain Dialog Systems
Wang, Tong and Chen, Jiangning and Malmir, Mohsen and Dong, Shuyan and He, Xin and Wang, Han and Su, Chengwei and Liu, Yue and Liu, Yang
2,021
In dialog systems, the Natural Language Understanding (NLU) component typically makes the interpretation decision (including domain, intent and slots) for an utterance before the mentioned entities are resolved. This may result in intent classification and slot tagging errors. In this work, we propose to leverage Entity Resolution (ER) features in NLU reranking and introduce a novel loss term based on ER signals to better learn model weights in the reranking framework. In addition, for a multi-domain dialog scenario, we propose a score distribution matching method to ensure scores generated by the NLU reranking models for different domains are properly calibrated. In offline experiments, we demonstrate our proposed approach significantly outperforms the baseline model on both single-domain and cross-domain evaluations.
19--25
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
32e501a0cd9a4ebcaa5989657690be38b8340340
0
Modeling Fine-Grained Entity Types with Box Embeddings
Onoe, Yasumasa and Boratko, Michael and McCallum, Andrew and Durrett, Greg
2,021
Neural entity typing models typically represent fine-grained entity types as vectors in a high-dimensional space, but such spaces are not well-suited to modeling these types{'} complex interdependencies. We study the ability of box embeddings, which embed concepts as d-dimensional hyperrectangles, to capture hierarchies of types even when these relationships are not defined explicitly in the ontology. Our model represents both types and entity mentions as boxes. Each mention and its context are fed into a BERT-based model to embed that mention in our box space; essentially, this model leverages typological clues present in the surface text to hypothesize a type representation for the mention. Box containment can then be used to derive both the posterior probability of a mention exhibiting a given type and the conditional probability relations between types themselves. We compare our approach with a vector-based typing model and observe state-of-the-art performance on several entity typing benchmarks. In addition to competitive typing performance, our box-based model shows better performance in prediction consistency (predicting a supertype and a subtype together) and confidence (i.e., calibration), demonstrating that the box-based model captures the latent type hierarchies better than the vector-based model does.
2051--2064
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
176e3cbe3141c8b874df663711dca9b7470b8243
1
LGESQL: Line Graph Enhanced Text-to-SQL Model with Mixed Local and Non-Local Relations
Cao, Ruisheng and Chen, Lu and Chen, Zhi and Zhao, Yanbin and Zhu, Su and Yu, Kai
2,021
This work aims to tackle the challenging heterogeneous graph encoding problem in the text-to-SQL task. Previous methods are typically node-centric and merely utilize different weight matrices to parameterize edge types, which 1) ignore the rich semantics embedded in the topological structure of edges, and 2) fail to distinguish local and non-local relations for each node. To this end, we propose a Line Graph Enhanced Text-to-SQL (LGESQL) model to mine the underlying relational features without constructing meta-paths. By virtue of the line graph, messages propagate more efficiently through not only connections between nodes, but also the topology of directed edges. Furthermore, both local and non-local relations are integrated distinctively during the graph iteration. We also design an auxiliary task called graph pruning to improve the discriminative capability of the encoder. Our framework achieves state-of-the-art results (62.8{\%} with Glove, 72.0{\%} with Electra) on the cross-domain text-to-SQL benchmark Spider at the time of writing.
2541--2555
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
50db74aa7e662b640ccbf37788af62cd8af3e930
0
A Chinese Corpus for Fine-grained Entity Typing
Lee, Chin and Dai, Hongliang and Song, Yangqiu and Li, Xin
2,020
Fine-grained entity typing is a challenging task with wide applications. However, most existing datasets for this task are in English. In this paper, we introduce a corpus for Chinese fine-grained entity typing that contains 4,800 mentions manually labeled through crowdsourcing. Each mention is annotated with free-form entity types. To make our dataset useful in more possible scenarios, we also categorize all the fine-grained types into 10 general types. Finally, we conduct experiments with some neural models whose structures are typical in fine-grained entity typing and show how well they perform on our dataset. We also show the possibility of improving Chinese fine-grained entity typing through cross-lingual transfer learning.
4451--4457
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
853986783fdc27c7cebb04ba638dd7fe48c5de23
1
“What Do You Mean by That?” A Parser-Independent Interactive Approach for Enhancing Text-to-SQL
Li, Yuntao and Chen, Bei and Liu, Qian and Gao, Yan and Lou, Jian-Guang and Zhang, Yan and Zhang, Dongmei
2,020
In Natural Language Interfaces to Databases systems, the text-to-SQL technique allows users to query databases by using natural language questions. Though significant progress in this area has been made recently, most parsers may fall short when they are deployed in real systems. One main reason stems from the difficulty of fully understanding the users{'} natural language questions. In this paper, we include human in the loop and present a novel parser-independent interactive approach (PIIA) that interacts with users using multi-choice questions and can easily work with arbitrary parsers. Experiments were conducted on two cross-domain datasets, the WikiSQL and the more complex Spider, with five state-of-the-art parsers. These demonstrated that PIIA is capable of enhancing the text-to-SQL performance with limited interaction turns by using both simulation and human evaluation.
6913--6922
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
bc247abf8180f583a42de392e4f7d2b2a41ad72d
0
Fine-grained Named Entity Annotations for German Biographic Interviews
Ruppenhofer, Josef and Rehbein, Ines and Flinz, Carolina
2,020
We present a fine-grained NER annotation scheme with 30 labels and apply it to German data. Building on the OntoNotes 5.0 NER inventory, our scheme is adapted for a corpus of transcripts of biographic interviews by adding categories for AGE and LAN(guage) and also features extended numeric and temporal categories. Applying the scheme to the spoken data as well as a collection of teaser tweets from newspaper sites, we can confirm its generality for both domains, also achieving good inter-annotator agreement. We also show empirically how our inventory relates to the well-established 4-category NER inventory by re-annotating a subset of the GermEval 2014 NER coarse-grained dataset with our fine label inventory. Finally, we use a BERT-based system to establish some baseline models for NER tagging on our two new datasets. Global results in in-domain testing are quite high on the two datasets, near what was achieved for the coarse inventory on the CoNLL-2003 data. Cross-domain testing produces much lower results due to the severe domain differences.
4605--4614
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
8122242e40d95b288cfbe14024988f41fd17ab6b
1
Collocations in Russian Lexicography and Russian Collocations Database
Khokhlova, Maria
2,020
The paper presents the issue of collocability and collocations in Russian and gives a survey of a wide range of dictionaries both printed and online ones that describe collocations. Our project deals with building a database that will include dictionary and statistical collocations. The former can be described in various lexicographic resources whereas the latter can be extracted automatically from corpora. Dictionaries differ among themselves, the information is given in various ways, making it hard for language learners and researchers to acquire data. A number of dictionaries were analyzed and processed to retrieve verified collocations, however the overlap between the lists of collocations extracted from them is still rather small. This fact indicates there is a need to create a unified resource which takes into account collocability and more examples. The proposed resource will also be useful for linguists and for studying Russian as a foreign language. The obtained results can be important for machine learning and for other NLP tasks, for instance, automatic clustering of word combinations and disambiguation.
3198--3206
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
9780480a952edddef523c98c2ba0f500a572ad46
0
ENTYFI: A System for Fine-grained Entity Typing in Fictional Texts
Chu, Cuong Xuan and Razniewski, Simon and Weikum, Gerhard
2,020
Fiction and fantasy are archetypes of long-tail domains that lack suitable NLP methodologies and tools. We present ENTYFI, a web-based system for fine-grained typing of entity mentions in fictional texts. It builds on 205 automatically induced high-quality type systems for popular fictional domains, and provides recommendations towards reference type systems for given input texts. Users can exploit the richness and diversity of these reference type systems for fine-grained supervised typing; in addition, they can choose among and combine four other typing modules: pre-trained real-world models, unsupervised dependency-based typing, knowledge base lookups, and constraint-based candidate consolidation. The demonstrator is available at: https://d5demos.mpi-inf.mpg.de/entyfi
100--106
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
2d2eaf2a13c50f49bc3a1842581a1b9dc8c1ffc3
1
SegBo: A Database of Borrowed Sounds in the World's Languages
Grossman, Eitan and Eisen, Elad and Nikolaev, Dmitry and Moran, Steven
2,020
Phonological segment borrowing is a process through which languages acquire new contrastive speech sounds as the result of borrowing new words from other languages. Despite the fact that phonological segment borrowing is documented in many of the world{'}s languages, to date there has been no large-scale quantitative study of the phenomenon. In this paper, we present SegBo, a novel cross-linguistic database of borrowed phonological segments. We describe our data aggregation pipeline and the resulting language sample. We also present two short case studies based on the database. The first deals with the impact of large colonial languages on the sound systems of the world{'}s languages; the second deals with universals of borrowing in the domain of rhotic consonants.
5316--5322
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
82b05cfaab8691236c88fa388b3477d06f108819
0
Description-Based Zero-shot Fine-Grained Entity Typing
Obeidat, Rasha and Fern, Xiaoli and Shahbazi, Hamed and Tadepalli, Prasad
2,019
Fine-grained Entity typing (FGET) is the task of assigning a fine-grained type from a hierarchy to entity mentions in the text. As the taxonomy of types evolves continuously, it is desirable for an entity typing system to be able to recognize novel types without additional training. This work proposes a zero-shot entity typing approach that utilizes the type description available from Wikipedia to build a distributed semantic representation of the types. During training, our system learns to align the entity mentions and their corresponding type representations on the known types. At test time, any new type can be incorporated into the system given its Wikipedia descriptions. We evaluate our approach on FIGER, a public benchmark entity typing dataset. Because the existing test set of FIGER covers only a small portion of the fine-grained types, we create a new test set by manually annotating a portion of the noisy training data. Our experiments demonstrate the effectiveness of the proposed method in recognizing novel types that are not present in the training data.
807--814
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
51b958dd76a6aefcd521ec0f503c3e334f711362
1
Continuous Quality Control and Advanced Text Segment Annotation with WAT-SL 2.0
Lohr, Christina and Kiesel, Johannes and Luther, Stephanie and Hellrich, Johannes and Kolditz, Tobias and Stein, Benno and Hahn, Udo
2,019
Today's widely used annotation tools were designed for annotating typically short textual mentions of entities or relations, making their interface cumbersome to use for long(er) stretches of text, e.g., sentences running over several lines in a document. They also lack systematic support for hierarchically structured labels, i.e., one label being conceptually more general than another (e.g., anamnesis in relation to family anamnesis). Moreover, as a more fundamental shortcoming of today's tools, they provide no continuous quality control mechanisms for the annotation process, an essential feature to intrinsically support iterative cycles in the development of annotation guidelines. We alleviated these problems by developing WAT-SL 2.0, an open-source web-based annotation tool for long-segment labeling, hierarchically structured label sets and built-ins for quality control.
215--219
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
1c404bcaf18e749a450578daf322f79f82a4e949
0
Fine-grained Entity Typing through Increased Discourse Context and Adaptive Classification Thresholds
Zhang, Sheng and Duh, Kevin and Van Durme, Benjamin
2,018
Fine-grained entity typing is the task of assigning fine-grained semantic types to entity mentions. We propose a neural architecture which learns a distributional semantic representation that leverages a greater amount of semantic context – both document and sentence level information – than prior work. We find that additional context improves performance, with further improvements gained by utilizing adaptive classification thresholds. Experiments show that our approach without reliance on hand-crafted features achieves the state-of-the-art results on three benchmark datasets.
173--179
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
87abde0432f4377aed50ade6fb49299d4bd018bb
1
AMR dependency parsing with a typed semantic algebra
Groschwitz, Jonas and Lindemann, Matthias and Fowlie, Meaghan and Johnson, Mark and Koller, Alexander
2,018
We present a semantic parser for Abstract Meaning Representations which learns to parse strings into tree representations of the compositional structure of an AMR graph. This allows us to use standard neural techniques for supertagging and dependency tree parsing, constrained by a linguistically principled type system. We present two approximative decoding algorithms, which achieve state-of-the-art accuracy and outperform strong baselines.
1831--1841
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
25109699b098c786832c906e4b36fa76fb2b66a0
0
Ultra-Fine Entity Typing
Choi, Eunsol and Levy, Omer and Choi, Yejin and Zettlemoyer, Luke
2,018
We introduce a new entity typing task: given a sentence with an entity mention, the goal is to predict a set of free-form phrases (e.g. skyscraper, songwriter, or criminal) that describe appropriate types for the target entity. This formulation allows us to use a new type of distant supervision at large scale: head words, which indicate the type of the noun phrases they appear in. We show that these ultra-fine types can be crowd-sourced, and introduce new evaluation sets that are much more diverse and fine-grained than existing benchmarks. We present a model that can predict ultra-fine types, and is trained using a multitask objective that pools our new head-word supervision with prior supervision from entity linking. Experimental results demonstrate that our model is effective in predicting entity types at varying granularity; it achieves state of the art performance on an existing fine-grained entity typing benchmark, and sets baselines for our newly-introduced datasets.
87--96
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
4157834ed2d2fea6b6f652a72a9d0487edbc9f57
1
Aggression Identification Using Deep Learning and Data Augmentation
Risch, Julian and Krestel, Ralf
2,018
Social media platforms allow users to share and discuss their opinions online. However, a minority of user posts is aggressive, thereby hindering respectful discussion, and – at an extreme level – is liable to prosecution. The automatic identification of such harmful posts is important, because it can support the costly manual moderation of online discussions. Further, the automation allows unprecedented analyses of discussion datasets that contain millions of posts. This system description paper presents our submission to the First Shared Task on Aggression Identification. We propose to augment the provided dataset to increase the number of labeled comments from 15,000 to 60,000. Thereby, we introduce linguistic variety into the dataset. As a consequence of the larger amount of training data, we are able to train a special deep neural net, which generalizes especially well to unseen data. To further boost the performance, we combine this neural net with three logistic regression classifiers trained on character and word n-grams, and hand-picked syntactic features. This ensemble is more robust than the individual single models. Our team named “Julian” achieves an F1-score of 60% on both English datasets, 63% on the Hindi Facebook dataset, and 38% on the Hindi Twitter dataset.
150--158
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
f2a9d16e852e6008b11244df899672231efb7a12
0
Improving Entity Linking by Modeling Latent Relations between Mentions
Le, Phong and Titov, Ivan
2,018
Entity linking involves aligning textual mentions of named entities to their corresponding entries in a knowledge base. Entity linking systems often exploit relations between textual mentions in a document (e.g., coreference) to decide if the linking decisions are compatible. Unlike previous approaches, which relied on supervised systems or heuristics to predict these relations, we treat relations as latent variables in our neural entity-linking model. We induce the relations without any supervision while optimizing the entity-linking system in an end-to-end fashion. Our multi-relational model achieves the best reported scores on the standard benchmark (AIDA-CoNLL) and substantially outperforms its relation-agnostic version. Its training also converges much faster, suggesting that the injected structural bias helps to explain regularities in the training data.
1595--1604
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
44b18e054bc0ef6e9afe04732807a1f38d002179
1
A Hybrid Approach to Automatic Corpus Generation for Chinese Spelling Check
Wang, Dingmin and Song, Yan and Li, Jing and Han, Jialong and Zhang, Haisong
2,018
Chinese spelling check (CSC) is a challenging yet meaningful task, which not only serves as a preprocessing step in many natural language processing (NLP) applications, but also facilitates reading and understanding of running texts in people's daily lives. However, to utilize data-driven approaches for CSC, there is one major limitation: annotated corpora are not sufficient for applying algorithms and building models. In this paper, we propose a novel approach of constructing a CSC corpus with automatically generated spelling errors, which are either visually or phonologically resembled characters, corresponding to the OCR- and ASR-based methods, respectively. Upon the constructed corpus, different models are trained and evaluated for CSC with respect to three standard test sets. Experimental results demonstrate the effectiveness of the corpus, and therefore confirm the validity of our approach.
2517--2527
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
c12e270f347334ced34614e110b9319888522da8
0
Building Language Models for Text with Named Entities
Parvez, Md Rizwan and Chakraborty, Saikat and Ray, Baishakhi and Chang, Kai-Wei
2,018
Text in many domains involves a significant amount of named entities. Predicting the entity names is often challenging for a language model as they appear less frequent on the training corpus. In this paper, we propose a novel and effective approach to building a language model which can learn the entity names by leveraging their entity type information. We also introduce two benchmark datasets based on recipes and Java programming codes, on which we evaluate the proposed model. Experimental results show that our model achieves 52.2{\%} better perplexity in recipe generation and 22.06{\%} on code generation than state-of-the-art language models.
2373--2383
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
6e618c1be08cecd8d71fe65512ad44814c650ffc
1
Analogical Reasoning on Chinese Morphological and Semantic Relations
Li, Shen and Zhao, Zhe and Hu, Renfen and Li, Wensi and Liu, Tao and Du, Xiaoyong
2,018
Analogical reasoning is effective in capturing linguistic regularities. This paper proposes an analogical reasoning task on Chinese. After delving into Chinese lexical knowledge, we sketch 68 implicit morphological relations and 28 explicit semantic relations. A big and balanced dataset CA8 is then built for this task, including 17813 questions. Furthermore, we systematically explore the influences of vector representations, context features, and corpora on analogical reasoning. With the experiments, CA8 is proved to be a reliable benchmark for evaluating Chinese word embeddings.
138--143
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
1f8c70ce22fc5b34ee725d79d4a061b3062f6fc5
0
Zero-Shot Open Entity Typing as Type-Compatible Grounding
Zhou, Ben and Khashabi, Daniel and Tsai, Chen-Tse and Roth, Dan
2,018
The problem of entity typing has been studied predominantly as a supervised learning problem, mostly with task-specific annotations (for coarse types) and sometimes with distant supervision (for fine types). While such approaches have strong performance within datasets, they often lack the flexibility to transfer across text genres and to generalize to new type taxonomies. In this work we propose a zero-shot entity typing approach that requires no annotated data and can flexibly identify newly defined types. Given a type taxonomy, the entries of which we define as Boolean functions of Freebase “types,” we ground a given mention to a set of type-compatible Wikipedia entries, and then infer the target mention's type using an inference algorithm that makes use of the types of these entries. We evaluate our system on a broad range of datasets, including standard fine-grained and coarse-grained entity typing datasets, and on a dataset in the biological domain. Our system is shown to be competitive with state-of-the-art supervised NER systems, and to outperform them on out-of-training datasets. We also show that our system significantly outperforms other zero-shot fine typing systems.
2065--2076
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
8456a5ed15b465e82bba3b974ff4e25c3b652826
1
Quantifying Qualitative Data for Understanding Controversial Issues
Wojatzki, Michael and Mohammad, Saif and Zesch, Torsten and Kiritchenko, Svetlana
2,018
nan
nan
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
7df3bca7de01f2e017feb46eb59d7232e2494439
0
An Empirical Study on Fine-Grained Named Entity Recognition
Mai, Khai and Pham, Thai-Hoang and Nguyen, Minh Trung and Nguyen, Tuan Duc and Bollegala, Danushka and Sasano, Ryohei and Sekine, Satoshi
2,018
Named entity recognition (NER) has attracted a substantial amount of research. Recently, several neural network-based models have been proposed and achieved high performance. However, there is little research on fine-grained NER (FG-NER), in which hundreds of named entity categories must be recognized, especially for non-English languages. It is still an open question whether there is a model that is robust across various settings or the proper model varies depending on the language, the number of named entity categories, and the size of training datasets. This paper first presents an empirical comparison of FG-NER models for English and Japanese and demonstrates that LSTM+CNN+CRF (Ma and Hovy, 2016), one of the state-of-the-art methods for English NER, also works well for English FG-NER but does not work well for Japanese, a language that has a large number of character types. To tackle this problem, we propose a method to improve the neural network-based Japanese FG-NER performance by removing the CNN layer and utilizing dictionary and category embeddings. Experiment results show that the proposed method improves Japanese FG-NER F-score from 66.76{\%} to 75.18{\%}.
711--722
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
f0c39dd1715d0050168467a5afa22855d6d2fe2c
1
A Fast and Flexible Webinterface for Dialect Research in the Low Countries
van Hout, Roeland and van der Sijs, Nicoline and Komen, Erwin and van den Heuvel, Henk
2,018
nan
nan
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
db91c269785a12b21c7b187112f2233a3897384e
0
Fine-Grained Entity Typing with High-Multiplicity Assignments
Rabinovich, Maxim and Klein, Dan
2,017
As entity type systems become richer and more fine-grained, we expect the number of types assigned to a given entity to increase. However, most fine-grained typing work has focused on datasets that exhibit a low degree of type multiplicity. In this paper, we consider the high-multiplicity regime inherent in data sources such as Wikipedia that have semi-open type systems. We introduce a set-prediction approach to this problem and show that our model outperforms unstructured baselines on a new Wikipedia-based fine-grained typing corpus.
330--334
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
1908e93bfa8ee6f1707a2513095e48945823727a
1
ECNU at SemEval-2017 Task 4: Evaluating Effective Features on Machine Learning Methods for Twitter Message Polarity Classification
Zhou, Yunxiao and Lan, Man and Wu, Yuanbin
2,017
This paper reports our submission to subtask A of task 4 (Sentiment Analysis in Twitter, SAT) in SemEval 2017, i.e., Message Polarity Classification. We investigated several traditional Natural Language Processing (NLP) features, domain specific features and word embedding features together with supervised machine learning methods to address this task. Officially released results showed that our system ranked above average.
812--816
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
1b0cf0cededba48d2fea32cdcf407906c61cf14f
0
Multi-level Representations for Fine-Grained Typing of Knowledge Base Entities
Yaghoobzadeh, Yadollah and Schütze, Hinrich
2,017
Entities are essential elements of natural language. In this paper, we present methods for learning multi-level representations of entities on three complementary levels: character (character patterns in entity names extracted, e.g., by neural networks), word (embeddings of words in entity names) and entity (entity embeddings). We investigate state-of-the-art learning methods on each level and find large differences, e.g., for deep learning models, traditional ngram features and the subword model of fasttext (Bojanowski et al., 2016) on the character level; for word2vec (Mikolov et al., 2013) on the word level; and for the order-aware model wang2vec (Ling et al., 2015a) on the entity level. We confirm experimentally that each level of representation contributes complementary information and a joint representation of all three levels improves the existing embedding based baseline for fine-grained entity typing by a large margin. Additionally, we show that adding information from entity descriptions further improves multi-level representations of entities.
578--589
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
bdeb6ff1a9607468af50609ccde1f55ce64b0ad4
1
Automatic classification of doctor-patient questions for a virtual patient record query task
Campillos Llanos, Leonardo and Rosset, Sophie and Zweigenbaum, Pierre
2,017
We present the work-in-progress of automating the classification of doctor-patient questions in the context of a simulated consultation with a virtual patient. We classify questions according to the computational strategy (rule-based or other) needed for looking up data in the clinical record. We compare ‘traditional’ machine learning methods (Gaussian and Multinomial Naive Bayes, and Support Vector Machines) and a neural network classifier (FastText). We obtained the best results with the SVM using semantic annotations, whereas the neural classifier achieved promising results without it.
333--341
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
17e36e9193f8154a8fd2e5c6ac44b2c4ad22a6ed
0
Noise Mitigation for Neural Entity Typing and Relation Extraction
Yaghoobzadeh, Yadollah and Adel, Heike and Schütze, Hinrich
2,017
In this paper, we address two different types of noise in information extraction models: noise from distant supervision and noise from pipeline input features. Our target tasks are entity typing and relation extraction. For the first noise type, we introduce multi-instance multi-label learning algorithms using neural network models, and apply them to fine-grained entity typing for the first time. Our model outperforms the state-of-the-art supervised approach which uses global embeddings of entities. For the second noise type, we propose ways to improve the integration of noisy entity type predictions into relation extraction. Our experiments show that probabilistic predictions are more robust than discrete predictions and that joint training of the two tasks performs best.
1183--1194
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
b0b0c68c3457faa85ed3bbd3252ac65ba55da5c6
1
LIPN-IIMAS at SemEval-2017 Task 1: Subword Embeddings, Attention Recurrent Neural Networks and Cross Word Alignment for Semantic Textual Similarity
Arroyo-Fernández, Ignacio and Meza Ruiz, Ivan Vladimir
2,017
In this paper we report our attempt to use, on the one hand, state-of-the-art neural approaches that are proposed to measure Semantic Textual Similarity (STS). On the other hand, we propose an unsupervised cross-word alignment approach, which is linguistically motivated. The neural approaches proposed herein are divided into two main stages. The first stage deals with constructing neural word embeddings, the components of sentence embeddings. The second stage deals with constructing a semantic similarity function relating pairs of sentence embeddings. Unfortunately our competition results were poor in all tracks, therefore we concentrated our research to improve them for Track 5 (EN-EN).
208--212
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
a640fb4a11fc767f4bf801f7a7320b92efc807d3
0
Deep Joint Entity Disambiguation with Local Neural Attention
Ganea, Octavian-Eugen and Hofmann, Thomas
2,017
We propose a novel deep learning model for joint document-level entity disambiguation, which leverages learned neural representations. Key components are entity embeddings, a neural attention mechanism over local context windows, and a differentiable joint inference stage for disambiguation. Our approach thereby combines benefits of deep learning with more traditional approaches such as graphical models and probabilistic mention-entity maps. Extensive experiments show that we are able to obtain competitive or state-of-the-art accuracy at moderate computational costs.
2619--2629
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
eead15f6cd00df5e1bd7733108695778c8d43240
1
Temporal Orientation of Tweets for Predicting Income of Users
Hasanuzzaman, Mohammed and Kamila, Sabyasachi and Kaur, Mandeep and Saha, Sriparna and Ekbal, Asif
2,017
Automatically estimating a user{'}s socio-economic profile from their language use in social media can significantly help social science research and various downstream applications ranging from business to politics. The current paper presents the first study where user cognitive structure is used to build a predictive model of income. In particular, we first develop a classifier using a weakly supervised learning framework to automatically time-tag tweets as past, present, or future. We quantify a user{'}s overall temporal orientation based on their distribution of tweets, and use it to build a predictive model of income. Our analysis uncovers a correlation between future temporal orientation and income. Finally, we measure the predictive power of future temporal orientation on income by performing regression.
659--665
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
9bc68cf51f15af853694f63cbf01dd7051685cc2
0
Inferring Missing Entity Type Instances for Knowledge Base Completion: New Dataset and Methods
Neelakantan, Arvind and Chang, Ming-Wei
2,015
nan
515--525
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
4e278a0fe9fbfeceb29acde435706aa790aeda56
1
CUNI in WMT15: Chimera Strikes Again
Bojar, Ondřej and Tamchyna, Aleš
2,015
nan
79--83
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
b95c8e996b37d3dc81e29e44b2adde23bfb4d951
0
Corpus-level Fine-grained Entity Typing Using Contextual Information
Yaghoobzadeh, Yadollah and Schütze, Hinrich
2,015
nan
715--725
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
8b298ce5f81c5ffd63f5c5ab3634dbfd350a92e4
1
Lost in Discussion? Tracking Opinion Groups in Complex Political Discussions by the Example of the FOMC Meeting Transcriptions
Zirn, Cäcilia and Meusel, Robert and Stuckenschmidt, Heiner
2,015
nan
747--753
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
8a5b7bba4fa1ce57009fadacd77f9b8656b35bab
0
Incremental Joint Extraction of Entity Mentions and Relations
Li, Qi and Ji, Heng
2,014
nan
402--412
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
8b156bdce947783b8c7071f02557b414ab7b5276
1
HBB4ALL: media accessibility for HBB TV
nan
2,014
nan
127
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
88a0a281e5b95b608d75ab0b786006fc9ed8575f
0
A Convolutional Neural Network for Modelling Sentences
Kalchbrenner, Nal and Grefenstette, Edward and Blunsom, Phil
2,014
nan
655--665
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
27725a2d2a8cee9bf9fffc6c2167017103aba0fa
1
Exploring Compositional Architectures and Word Vector Representations for Prepositional Phrase Attachment
Belinkov, Yonatan and Lei, Tao and Barzilay, Regina and Globerson, Amir
2,014
Prepositional phrase (PP) attachment disambiguation is a known challenge in syntactic parsing. The lexical sparsity associated with PP attachments motivates research in word representations that can capture pertinent syntactic and semantic features of the word. One promising solution is to use word vectors induced from large amounts of raw text. However, state-of-the-art systems that employ such representations yield modest gains in PP attachment accuracy. In this paper, we show that word vector representations can yield significant PP attachment performance gains. This is achieved via a non-linear architecture that is discriminatively trained to maximize PP attachment accuracy. The architecture is initialized with word vectors trained from unlabeled data, and relearns those to maximize attachment accuracy. We obtain additional performance gains with alternative representations such as dependency-based word vectors. When tested on both English and Arabic datasets, our method outperforms both a strong SVM classifier and state-of-the-art parsers. For instance, we achieve 82.6{\%} PP attachment accuracy on Arabic, while the Turbo and Charniak self-trained parsers obtain 76.7{\%} and 80.8{\%} respectively.
561--572
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
18f648bb494c87f9cf9fe7db744aa233de9313c1
0
Fine-grained Semantic Typing of Emerging Entities
Nakashole, Ndapandula and Tylenda, Tomasz and Weikum, Gerhard
2,013
nan
1488--1497
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
6629785cb5c9c96921f97e7a8c56dbe63f80d9ef
1
A User Study: Technology to Increase Teachers' Linguistic Awareness to Improve Instructional Language Support for English Language Learners
Burstein, Jill and Sabatini, John and Shore, Jane and Moulder, Brad and Lentini, Jennifer
2,013
nan
1--10
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
77727365299413c51d85a0a7848bbcbbcce824d4
0
Multi-instance Multi-label Learning for Relation Extraction
Surdeanu, Mihai and Tibshirani, Julie and Nallapati, Ramesh and Manning, Christopher D.
2,012
nan
455--465
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
fbe358ce706371b93c10c4395cab9a78ad3aef67
1
Classification of Interviews - A Case Study on Cancer Patients
Patra, Braja Gopal and Kundu, Amitava and Das, Dipankar and Bandyopadhyay, Sivaji
2,012
nan
27--36
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
eed4d404a91f803a8f408b22a5ddf338b59ba7bc
0
PATTY: A Taxonomy of Relational Patterns with Semantic Types
Nakashole, Ndapandula and Weikum, Gerhard and Suchanek, Fabian
2,012
nan
1135--1145
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
b162c99873c929447bb7ff48d454867aa83f375c
1
Code-Switch Language Model with Inversion Constraints for Mixed Language Speech Recognition
Li, Ying and Fung, Pascale
2,012
nan
1671--1680
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
97e0304db883c30393534adc5dea2c891b50280c
0
Class Label Enhancement via Related Instances
Kozareva, Zornitsa and Voevodski, Konstantin and Teng, Shanghua
2,011
nan
118--128
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
791031c4af681f032175a35b935194fe0ac26534
1
The Semi-Automatic Construction of Part-Of-Speech Taggers for Specific Languages by Statistical Methods
Yamasaki, Tomohiro and Wakaki, Hiromi and Suzuki, Masaru
2,011
nan
23--29
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
4212340339fff0148d774caae05221c686b4d1ea
0
Robust Disambiguation of Named Entities in Text
Hoffart, Johannes and Yosef, Mohamed Amir and Bordino, Ilaria and Fürstenau, Hagen and Pinkal, Manfred and Spaniol, Marc and Taneva, Bilyana and Thater, Stefan and Weikum, Gerhard
2,011
nan
782--792
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
d95738f38d97a030d98508357e4d5c78a4a208ba
1
Using a Wikipedia-based Semantic Relatedness Measure for Document Clustering
Yazdani, Majid and Popescu-Belis, Andrei
2,011
nan
29--36
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
da1e1ee70d3be350ec1ceb70fc1de34048dc0c33
0
Identifying Relations for Open Information Extraction
Fader, Anthony and Soderland, Stephen and Etzioni, Oren
2,011
nan
1535--1545
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
d4b651d6a904f69f8fa1dcad4ebe972296af3a9a
1
Query Weighting for Ranking Model Adaptation
Cai, Peng and Gao, Wei and Zhou, Aoying and Wong, Kam-Fai
2,011
nan
112--122
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
281c587dddbda1ad32f7566d44d18c5f771e5cb2
0
Inducing Fine-Grained Semantic Classes via Hierarchical and Collective Classification
Rahman, Altaf and Ng, Vincent
2,010
nan
931--939
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
184b5d6fd0ec7b94b815ca18227fa00d9a6b58b1
1
Streaming First Story Detection with application to Twitter
Petrović, Saša and Osborne, Miles and Lavrenko, Victor
2,010
nan
181--189
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
8854ca5546396ef225112ec828094882a71fd01e
0
WikiSense: Supersense Tagging of Wikipedia Named Entities Based WordNet
Chang, Joseph and Tsai, Richard Tzong-Han and Chang, Jason S.
2,009
nan
72--81
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
559e2679ccb23f722b262410c32bab131214bbae
1
The Construction of a Chinese-English Patent Parallel Corpus
Lu, Bin and Tsou, Benjamin K. and Zhu, Jingbo and Jiang, Tao and Kwong, Oi Yee
2,009
nan
nan
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
ab6c0ef09337c398aa12eaf93805b706b0fb2ed9
0
Web-Scale Distributional Similarity and Entity Set Expansion
Pantel, Patrick and Crestan, Eric and Borkovsky, Arkady and Popescu, Ana-Maria and Vyas, Vishnu
2,009
nan
938--947
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
00fce98c3fda59bcb84b6d0626fb3137d2fbb984
1
k-Nearest Neighbor Monte-Carlo Control Algorithm for POMDP-Based Dialogue Systems
Lefèvre, Fabrice and Gašić, Milica and Jurčíček, Filip and Keizer, Simon and Mairesse, François and Thomson, Blaise and Yu, Kai and Young, Steve
2,009
nan
272--275
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
5367ae4fd4dbb8c21b8c7f083d434a7f69d0577e
0
Distributed Word Clustering for Large Scale Class-Based Language Modeling in Machine Translation
Uszkoreit, Jakob and Brants, Thorsten
2,008
nan
755--762
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
00ae51ba9340abc30d36804f9b51ab83b81cec23
1
Revisiting the Impact of Different Annotation Schemes on PCFG Parsing: A Grammatical Dependency Evaluation
Boyd, Adriane and Meurers, Detmar
2,008
nan
24--32
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
8d616d33bddd764960280936e40ceb0cbbd0e60c
0
Weakly-Supervised Acquisition of Labeled Class Instances using Graph Random Walks
Talukdar, Partha Pratim and Reisinger, Joseph and Paşca, Marius and Ravichandran, Deepak and Bhagat, Rahul and Pereira, Fernando
2,008
nan
582--590
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
eca6dfe0a741b52db388e04febf71f542353a63c
1
Semantic Frame Annotation on the French MEDIA corpus
Meurs, Marie-Jean and Duvert, Frédéric and Béchet, Frédéric and Lefèvre, Fabrice and de Mori, Renato
2,008
This paper introduces a knowledge representation formalism used for annotation of the French MEDIA dialogue corpus in terms of high level semantic structures. The semantic annotation, worked out according to the Berkeley FrameNet paradigm, is incremental and partially automated. We describe an automatic interpretation process for composing semantic structures from basic semantic constituents using patterns involving words and constituents. This process contains procedures which provide semantic compositions and generating frame hypotheses by inference. The MEDIA corpus is a French dialogue corpus recorded using a Wizard of Oz system simulating a telephone server for tourist information and hotel booking. It had been manually transcribed and annotated at the word and semantic constituent levels. These levels support the automatic interpretation process which provides a high level semantic frame annotation. The Frame based Knowledge Source we composed contains Frame definitions and composition rules. We finally provide some results obtained on the automatically-derived annotation.
nan
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
175b20b24dc4f7980c756fd24541ffb5e2a1533b
0
Question Classification using Head Words and their Hypernyms
Huang, Zhiheng and Thint, Marcus and Qin, Zengchang
2,008
nan
927--936
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
94a9af119df61f501980cf095700f35c2a7762a3
1
Entailment-based Question Answering for Structured Data
Sacaleanu, Bogdan and Orasan, Constantin and Spurk, Christian and Ou, Shiyan and Ferrandez, Oscar and Kouylekov, Milen and Negri, Matteo
2,008
nan
173--176
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
866e10618d9e05595dc685a73e1a8965d3aaa391
0
Definition, Dictionaries and Tagger for Extended Named Entity Hierarchy
Sekine, Satoshi and Nobata, Chikashi
2,004
nan
nan
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
b2434644b7178a01f97235a75bddd87b614313af
1
Benchmarking Ontology Tools. A Case Study for the WebODE Platform.
Corcho, Oscar and García-Castro, Raúl and Gómez-Pérez, Asunción
2,004
nan
nan
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
4c461cbac24e23e1160ca153bd604dc4fad75285
0
Extended Named Entity Hierarchy
Sekine, Satoshi and Sudo, Kiyoshi and Nobata, Chikashi
2002
nan
nan
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
f664c4a6aee50411f1db79999fd5e7c88a35b926
1
Handling Noisy Training and Testing Data
Blaheta, Don
2002
nan
111--116
566bd3f672357b8e35343ab6bda4cc25a3071922
Fine-Grained Entity Typing With a Type Taxonomy: A Systematic Review
c5ecf3a9de15699b86456e64ae4d3dea5c83934a
0
Improving Semantic Parsing via Answer Type Inference
Yavuz, Semih and Gur, Izzeddin and Su, Yu and Srivatsa, Mudhakar and Yan, Xifeng
2016
nan
149--159
82bf873a702e005c9e6e2f83d7c4af3fb649e743
Extreme Classification for Answer Type Prediction in Question Answering
f3594f9d60c98cac88f9033c69c2b666713ed6d6
1
Verbal fields in {H}ungarian simple sentences and infinitival clausal complements
Balogh, Kata
2016
nan
58--66
82bf873a702e005c9e6e2f83d7c4af3fb649e743
Extreme Classification for Answer Type Prediction in Question Answering
a42e3bcd05df952558c7d4bac258a02191c83b0d
0
A Rule-based Question Answering System for Reading Comprehension Tests
Riloff, Ellen and Thelen, Michael
2000
nan
nan
82bf873a702e005c9e6e2f83d7c4af3fb649e743
Extreme Classification for Answer Type Prediction in Question Answering
445406b0d88ae965fa587cf5c167374ff1bbc09a
1
Dialogue Helpsystem based on Flexible Matching of User Query with Natural Language Knowledge Base
Kurohashi, Sadao and Higasa, Wataru
2000
nan
141--149
82bf873a702e005c9e6e2f83d7c4af3fb649e743
Extreme Classification for Answer Type Prediction in Question Answering
23a99a851485b3d6419e2d98de9ea4e9ea1a34d8
0
The {TREC}-8 Question Answering Track
Voorhees, Ellen M. and Tice, Dawn M.
2000
nan
nan
82bf873a702e005c9e6e2f83d7c4af3fb649e743
Extreme Classification for Answer Type Prediction in Question Answering
74e03acd5532fbad4c770e9293d2a788b11364f7
1
Thistle and Interarbora
Calder, Jo
2000
nan
nan
82bf873a702e005c9e6e2f83d7c4af3fb649e743
Extreme Classification for Answer Type Prediction in Question Answering
8548b03340130f0e5d8a7880d1f78fa192518e75
0
Multi-Task Learning for Conversational Question Answering over a Large-Scale Knowledge Base
Shen, Tao and Geng, Xiubo and Qin, Tao and Guo, Daya and Tang, Duyu and Duan, Nan and Long, Guodong and Jiang, Daxin
2019
We consider the problem of conversational question answering over a large-scale knowledge base. To handle the huge entity vocabulary of a large-scale knowledge base, recent neural semantic parsing based approaches usually decompose the task into several subtasks and then solve them sequentially, which leads to the following issues: 1) errors in earlier subtasks will be propagated and negatively affect downstream ones; and 2) each subtask cannot naturally share supervision signals with others. To tackle these issues, we propose an innovative multi-task learning framework where a pointer-equipped semantic parsing model is designed to resolve coreference in conversations, and naturally empower joint learning with a novel type-aware entity detection model. The proposed framework thus enables shared supervisions and alleviates the effect of error propagation. Experiments on a large-scale conversational question answering dataset containing 1.6M question answering pairs over 12.8M entities show that the proposed framework improves overall F1 score from 67{\%} to 79{\%} compared with previous state-of-the-art work.
2442--2451
82bf873a702e005c9e6e2f83d7c4af3fb649e743
Extreme Classification for Answer Type Prediction in Question Answering
788d28e234fc69fb07b4a4da7fb1bcf05e5160b5
1
Sentence-Level Agreement for Neural Machine Translation
Yang, Mingming and Wang, Rui and Chen, Kehai and Utiyama, Masao and Sumita, Eiichiro and Zhang, Min and Zhao, Tiejun
2019
The training objective of neural machine translation (NMT) is to minimize the loss between the words in the translated sentences and those in the references. In NMT, there is a natural correspondence between the source sentence and the target sentence. However, this relationship has only been represented using the entire neural network and the training objective is computed at the word level. In this paper, we propose a sentence-level agreement module to directly minimize the difference between the representations of the source and target sentences. The proposed agreement module can be integrated into NMT as an additional training objective function and can also be used to enhance the representation of the source sentences. Empirical results on the NIST Chinese-to-English and WMT English-to-German tasks show the proposed agreement module can significantly improve the NMT performance.
3076--3082
82bf873a702e005c9e6e2f83d7c4af3fb649e743
Extreme Classification for Answer Type Prediction in Question Answering
dfac457f4f688e9759a6e12acf96ef4b20e18c3d
0
Question Classification using Head Words and their Hypernyms
Huang, Zhiheng and Thint, Marcus and Qin, Zengchang
2008
nan
927--936
82bf873a702e005c9e6e2f83d7c4af3fb649e743
Extreme Classification for Answer Type Prediction in Question Answering
94a9af119df61f501980cf095700f35c2a7762a3
1
15 Years of Language Resource Creation and Sharing: a Progress Report on {LDC} Activities
Cieri, Christopher and Liberman, Mark
2008
This paper, the fifth in a series of biennial progress reports, reviews the activities of the Linguistic Data Consortium with particular emphasis on general trends in the language resource landscape and on changes that distinguish the two years since LDC’s last report at LREC from the preceding 8 years. After providing a perspective on the current landscape of language resources, the paper goes on to describe our vision of the role of LDC within the research communities it serves before briefly sketching specific publications and resource creation projects that have been the focus of our attention since the last report.
nan
82bf873a702e005c9e6e2f83d7c4af3fb649e743
Extreme Classification for Answer Type Prediction in Question Answering
754580728c0166755db0d6c6f91db2f6a9a53ed7
0
Performance Issues and Error Analysis in an Open-Domain Question Answering System
Moldovan, Dan and Pasca, Marius and Harabagiu, Sanda and Surdeanu, Mihai
2002
nan
33--40
82bf873a702e005c9e6e2f83d7c4af3fb649e743
Extreme Classification for Answer Type Prediction in Question Answering
9d0776666d8c7da0f6c40950563687f8ba5b6f7f
1
Getting the message in: a global company{'}s experience with the new generation of low-cost, high-performance machine translation systems
Morland, Vernon
2002
Most large companies are very good at {``}getting the message out{''} {--} publishing reams of announcements and documentation to their employees and customers. More challenging by far is {``}getting the message in{''} {--} ensuring that these messages are read, understood, and acted upon by the recipients. This paper describes NCR Corporation{'}s experience with the selection and implementation of a machine translation (MT) system in the Global Learning division of Human Resources. The author summarizes NCR{'}s vision for the use of MT, the competitive {``}fly-off{''} evaluation process he conducted in the spring of 2000, the current MT production environment, and the reactions of the MT users. Although the vision is not yet fulfilled, progress is being made. The author describes NCR{'}s plans to extend its current MT architecture to provide real-time translation of web pages and other intranet resources.
195--206
82bf873a702e005c9e6e2f83d7c4af3fb649e743
Extreme Classification for Answer Type Prediction in Question Answering
56271b943f90914fb1bbed737748589efa4b655a
0
Improving Semantic Parsing via Answer Type Inference
Yavuz, Semih and Gur, Izzeddin and Su, Yu and Srivatsa, Mudhakar and Yan, Xifeng
2016
nan
149--159
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
f3594f9d60c98cac88f9033c69c2b666713ed6d6
1
{VR}ep at {S}em{E}val-2016 Task 1 and Task 2: A System for Interpretable Semantic Similarity
Henry, Sam and Sands, Allison
2016
nan
577--583
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
ca27c3503740b30224115c054bace15bf3e88ab1
0
Imposing Label-Relational Inductive Bias for Extremely Fine-Grained Entity Typing
Xiong, Wenhan and Wu, Jiawei and Lei, Deren and Yu, Mo and Chang, Shiyu and Guo, Xiaoxiao and Wang, William Yang
2019
Existing entity typing systems usually exploit the type hierarchy provided by knowledge base (KB) schema to model label correlations and thus improve the overall performance. Such techniques, however, are not directly applicable to more open and practical scenarios where the type set is not restricted by KB schema and includes a vast number of free-form types. To model the underlying label correlations without access to manually annotated label structures, we introduce a novel label-relational inductive bias, represented by a graph propagation layer that effectively encodes both global label co-occurrence statistics and word-level similarities. On a large dataset with over 10,000 free-form types, the graph-enhanced model equipped with an attention-based matching module is able to achieve a much higher recall score while maintaining a high-level precision. Specifically, it achieves a 15.3{\%} relative F1 improvement and also less inconsistency in the outputs. We further show that a simple modification of our proposed graph layer can also improve the performance on a conventional and widely-tested dataset that only includes KB-schema types.
773--784
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
a0713d945b2e5c2bdeeba68399c8ac6ea84e0ca6
1
{CASA}-{NLU}: Context-Aware Self-Attentive Natural Language Understanding for Task-Oriented Chatbots
Gupta, Arshit and Zhang, Peng and Lalwani, Garima and Diab, Mona
2019
Natural Language Understanding (NLU) is a core component of dialog systems. It typically involves two tasks - Intent Classification (IC) and Slot Labeling (SL), which are then followed by a dialogue management (DM) component. Such NLU systems cater to utterances in isolation, thus pushing the problem of context management to DM. However, contextual information is critical to the correct prediction of intents in a conversation. Prior work on contextual NLU has been limited in terms of the types of contextual signals used and the understanding of their impact on the model. In this work, we propose a context-aware self-attentive NLU (CASA-NLU) model that uses multiple signals over a variable context window, such as previous intents, slots, dialog acts and utterances, in addition to the current user utterance. CASA-NLU outperforms a recurrent contextual NLU baseline on two conversational datasets, yielding a gain of up to 7{\%} on the IC task. Moreover, a non-contextual variant of CASA-NLU achieves state-of-the-art performance on standard public datasets - SNIPS and ATIS.
1285--1290
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
4a0a5f2ac98e8b1ed453265d96f777d2ebc7b679
0
Learning from Sibling Mentions with Scalable Graph Inference in Fine-Grained Entity Typing
Chen, Yi and Cheng, Jiayang and Jiang, Haiyun and Liu, Lemao and Zhang, Haisong and Shi, Shuming and Xu, Ruifeng
2022
In this paper, we firstly empirically find that existing models struggle to handle hard mentions due to their insufficient contexts, which consequently limits their overall typing performance. To this end, we propose to exploit sibling mentions for enhancing the mention representations. Specifically, we present two different metrics for sibling selection and employ an attentive graph neural network to aggregate information from sibling mentions. The proposed graph model is scalable in that unseen test mentions are allowed to be added as new nodes for inference. Exhaustive experiments demonstrate the effectiveness of our sibling learning strategy, where our model outperforms ten strong baselines. Moreover, our experiments indeed prove the superiority of sibling mentions in helping clarify the types for hard mentions.
2076--2087
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
5a09cd029ffa71cac553405c7fbe927a8ebe9fe7
1
Delivering Fairness in Human Resources {AI}: Mutual Information to the Rescue
Hemamou, Leo and Coleman, William
2022
Automatic language processing is used frequently in the Human Resources (HR) sector for automated candidate sourcing and evaluation of resumes. These models often use pre-trained language models where it is difficult to know if possible biases exist. Recently, Mutual Information (MI) methods have demonstrated notable performance in obtaining representations agnostic to sensitive variables such as gender or ethnicity. However, accessing these variables can sometimes be challenging, and their use is prohibited in some jurisdictions. These factors can make detecting and mitigating biases challenging. In this context, we propose to minimize the MI between a candidate{'}s name and a latent representation of their CV or short biography. This method may mitigate bias from sensitive variables without requiring the collection of these variables. We evaluate this methodology by first projecting the name representation into a smaller space to prevent potential MI minimization problems in high dimensions.
867--882
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
cdec75f901a93c75ee5386a98abbe44746286e80
0
Prompt-learning for Fine-grained Entity Typing
Ding, Ning and Chen, Yulin and Han, Xu and Xu, Guangwei and Wang, Xiaobin and Xie, Pengjun and Zheng, Haitao and Liu, Zhiyuan and Li, Juanzi and Kim, Hong-Gee
2022
As an effective approach to adapting pre-trained language models (PLMs) for specific tasks, prompt-learning has recently attracted much attention from researchers. By using cloze-style language prompts to stimulate the versatile knowledge of PLMs, prompt-learning can achieve promising results on a series of NLP tasks, such as natural language inference, sentiment classification, and knowledge probing. In this work, we investigate the application of prompt-learning on fine-grained entity typing in fully supervised, few-shot, and zero-shot scenarios. We first develop a simple and effective prompt-learning pipeline by constructing entity-oriented verbalizers and templates and conducting masked language modeling. Further, to tackle the zero-shot regime, we propose a self-supervised strategy that carries out distribution-level optimization in prompt-learning to automatically summarize the information of entity types. Extensive experiments on four fine-grained entity typing benchmarks under fully supervised, few-shot, and zero-shot settings show the effectiveness of the prompt-learning paradigm and further make a powerful alternative to vanilla fine-tuning.
6888--6901
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
bf722dc893ddaad5045fca5646212ec3badf3c5a
1
{DPTDR}: Deep Prompt Tuning for Dense Passage Retrieval
Tang, Zhengyang and Wang, Benyou and Yao, Ting
2022
Deep prompt tuning (DPT) has gained great success in most natural language processing (NLP) tasks. However, it is not well-investigated in dense retrieval where fine-tuning (FT) still dominates. When deploying multiple retrieval tasks using the same backbone model (e.g., RoBERTa), FT-based methods are unfriendly in terms of deployment cost: each new retrieval model needs to repeatedly deploy the backbone model without reuse. To reduce the deployment cost in such a scenario, this work investigates applying DPT in dense retrieval. The challenge is that directly applying DPT in dense retrieval largely underperforms FT methods. To compensate for the performance drop, we propose two model-agnostic and task-agnostic strategies for DPT-based retrievers, namely retrieval-oriented intermediate pretraining and unified negative mining, as a general approach that could be compatible with any pre-trained language model and retrieval task. The experimental results show that the proposed method (called DPTDR) outperforms previous state-of-the-art models on both MS-MARCO and Natural Questions. We also conduct ablation studies to examine the effectiveness of each strategy in DPTDR. We believe this work facilitates the industry, as it saves enormous efforts and costs of deployment and increases the utility of computing resources. Our code is available at \url{https://github.com/tangzhy/DPTDR}.
1193--1202
1e87aefc92004a0e4000bb0fa2f5351c3644e8e7
Modeling Label Correlations for Ultra-Fine Entity Typing with Neural Pairwise Conditional Random Field
94b34ad657bcfc9f1a8ed1ab1c3144aae9980901
0