arXiv:2404.09145

ToNER: Type-oriented Named Entity Recognition with Generative Language Model

Published on Apr 14, 2024

Abstract

In recent years, fine-tuned generative models have proven more powerful than previous tagging-based or span-based models on the named entity recognition (NER) task. It has also been found that information related to entities, such as entity types, can prompt a model to perform NER better. However, it is not easy to determine in advance which entity types actually exist in a given sentence, and inputting too many potential entity types inevitably distracts the model. To exploit the merit of entity types in promoting the NER task, in this paper we propose a novel NER framework based on a generative model, namely ToNER. In ToNER, a type matching model is first proposed to identify the entity types most likely to appear in the sentence. Then, we append a multiple binary classification task to fine-tune the generative model's encoder, so as to generate a refined representation of the input sentence. Moreover, we add an auxiliary task of discovering entity types, which further fine-tunes the model to output more accurate results. Our extensive experiments on several NER benchmarks verify the effectiveness of the strategies proposed in ToNER, which are oriented towards exploiting entity types.
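
The pipeline the abstract describes (match likely entity types first, then condition a generative model on them) can be pictured with a short sketch. This is only an illustration of the idea, not the authors' code: the bi-encoder type matcher, the candidate type list, and the prompt format are all assumptions, and an off-the-shelf t5-base would still need NER fine-tuning to produce sensible output.

```python
# Hedged sketch of ToNER's type-oriented idea (NOT the authors' implementation):
# 1) score candidate entity types against the sentence with a matching model,
# 2) keep the top-scoring types, 3) prepend them to the generative model's input.
import torch
from transformers import AutoModel, AutoTokenizer, T5ForConditionalGeneration

ENTITY_TYPES = ["person", "organization", "location", "miscellaneous"]  # assumed schema

matcher_tok = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
matcher = AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")

def embed(texts):
    # Mean-pooled sentence embeddings from the matching encoder.
    batch = matcher_tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = matcher(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1).float()
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

def match_types(sentence, top_k=2):
    # Rank candidate entity types by cosine similarity to the sentence.
    scores = torch.nn.functional.cosine_similarity(embed([sentence]), embed(ENTITY_TYPES))
    return [ENTITY_TYPES[i] for i in scores.topk(top_k).indices.tolist()]

# A T5-style backbone stands in for the paper's generative model; without
# NER fine-tuning, its output here is only illustrative.
gen_tok = AutoTokenizer.from_pretrained("t5-base")
gen = T5ForConditionalGeneration.from_pretrained("t5-base")

sentence = "Barack Obama visited Berlin in 2013."
prompt = f"recognize entities of types {', '.join(match_types(sentence))}: {sentence}"
ids = gen_tok(prompt, return_tensors="pt").input_ids
print(gen_tok.decode(gen.generate(ids, max_new_tokens=64)[0], skip_special_tokens=True))
```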

Community

We conducted our experiments on eight NVIDIA Tesla A100 GPUs, each with 80GB of GPU memory.

Hm, I don't know if this setup is really required, but if so, it's insane. One could just use a strong encoder-only model as the LM backbone; fine-tuning then runs in under 24GB of GPU RAM with on-par performance on CoNLL!
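
For instance, a plain token-classification fine-tune of roberta-large fits comfortably on a single 24GB card with fp16 and a small batch size. A minimal sketch (my own setup, not from the paper; the dataset path and hyperparameters are assumptions):

```python
# Minimal sketch of an encoder-only NER fine-tune that fits on a 24GB GPU.
# My own setup, not from the paper; dataset path and hyperparameters are assumptions.
from datasets import load_dataset
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          DataCollatorForTokenClassification, Trainer,
                          TrainingArguments)

dataset = load_dataset("conll2003")
label_list = dataset["train"].features["ner_tags"].feature.names

tokenizer = AutoTokenizer.from_pretrained("roberta-large", add_prefix_space=True)
model = AutoModelForTokenClassification.from_pretrained(
    "roberta-large", num_labels=len(label_list))

def tokenize_and_align(examples):
    # Align word-level NER tags with RoBERTa sub-tokens.
    tokenized = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)
    all_labels = []
    for i, tags in enumerate(examples["ner_tags"]):
        prev, labels = None, []
        for wid in tokenized.word_ids(batch_index=i):
            if wid is None:
                labels.append(-100)        # special tokens: ignored by the loss
            elif wid != prev:
                labels.append(tags[wid])   # label only the first sub-token of a word
            else:
                labels.append(-100)
            prev = wid
        all_labels.append(labels)
    tokenized["labels"] = all_labels
    return tokenized

tokenized = dataset.map(tokenize_and_align, batched=True)

# fp16 + batch size 8 keeps roberta-large well under 24GB of GPU memory.
args = TrainingArguments("roberta-large-conll03", per_device_train_batch_size=8,
                         learning_rate=2e-5, num_train_epochs=3, fp16=True)
trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"],
                  eval_dataset=tokenized["validation"],
                  data_collator=DataCollatorForTokenClassification(tokenizer))
trainer.train()
```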

Anyway, LUKE with RoBERTa Large (https://huggingface.co/papers/2010.01057) achieves a strong F1 of 94.3 on CoNLL-2003 and was not even mentioned in the paper for comparison.
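
For reference, the released LUKE checkpoint is easy to try via transformers; the snippet below follows the documented LukeForEntitySpanClassification usage, where candidate spans are enumerated over word boundaries:

```python
from transformers import LukeForEntitySpanClassification, LukeTokenizer

ckpt = "studio-ousia/luke-large-finetuned-conll-2003"
tokenizer = LukeTokenizer.from_pretrained(ckpt)
model = LukeForEntitySpanClassification.from_pretrained(ckpt)

text = "Beyoncé lives in Los Angeles"
# Enumerate all candidate entity spans (character offsets of word boundaries).
word_starts = [0, 8, 14, 17, 21]
word_ends = [7, 13, 16, 20, 28]
entity_spans = [(s, e) for i, s in enumerate(word_starts) for e in word_ends[i:]]

inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
logits = model(**inputs).logits
predictions = logits.argmax(-1).squeeze().tolist()
for span, cls in zip(entity_spans, predictions):
    if cls != 0:  # class 0 means "not an entity"
        print(text[span[0]:span[1]], model.config.id2label[cls])
```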
