
UniNER-7B-type

Description: A UniNER-7B model fine-tuned from LLaMA-7B on the Pile-NER-type data, without any human-labeled data. The data was collected by prompting gpt-3.5-turbo-0301 to label entities from passages and provide entity tags. The data collection prompt is as follows:

Instruction:
Given a passage, your task is to extract all entities and identify their entity types. The output should be in a list of tuples of the following format: [("entity 1", "type of entity 1"), ... ].
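For illustration only, here is a hypothetical passage and the kind of output the prompt requests (this example is invented, not drawn from the Pile-NER-type data):

```python
# Hypothetical labeling for the passage "Barack Obama was born in Hawaii in 1961."
[("Barack Obama", "person"), ("Hawaii", "location"), ("1961", "year")]
```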

See our paper for more information, and our repo for instructions on how to use the model.

Comparison with UniNER-7B-definition

The UniNER-7B-type model excels when entity types are given as short tags. It performs better on the Universal NER benchmark, which comprises 43 academic datasets across 9 domains. In contrast, UniNER-7B-definition is better at handling entity types defined in short sentences and is more robust to type paraphrasing.

Inference

The template for inference instances is as follows:

Prompting template:
A virtual assistant answers questions from a user based on the provided text.
USER: Text: {Fill the input text here}
ASSISTANT: I’ve read this text.
USER: What describes {Fill the entity type here} in the text?
ASSISTANT: (model's predictions in JSON format)

Note: Inference handles one entity type at a time. For multiple entity types, create a separate instance for each type (see the sketch below).
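Below is a minimal inference sketch using Hugging Face transformers that fills in the template above. It is an approximation, not the official recipe: the exact chat formatting and generation settings used in our repo may differ, and the example text and entity types are hypothetical.

```python
# Minimal sketch: query UniNER-7B-type for one entity type at a time.
# The exact formatting and generation settings in the official repo may differ.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Universal-NER/UniNER-7B-type"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

def extract_entities(text, entity_type):
    # Fill the prompting template from this card with the input text and entity type.
    prompt = (
        "A virtual assistant answers questions from a user based on the provided text.\n"
        f"USER: Text: {text}\n"
        "ASSISTANT: I've read this text.\n"
        f"USER: What describes {entity_type} in the text?\n"
        "ASSISTANT:"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    # Keep only the newly generated tokens (the model's JSON-formatted answer).
    return tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )

# One inference per entity type; loop over types for multi-type extraction.
text = "Barack Obama was born in Hawaii in 1961."  # hypothetical example
for entity_type in ["person", "location"]:
    print(entity_type, "->", extract_entities(text, entity_type))
```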

License

This model and its associated data are released under the CC BY-NC 4.0 license. They are intended primarily for research purposes.

Citation

@article{zhou2023universalner,
      title={UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition}, 
      author={Wenxuan Zhou and Sheng Zhang and Yu Gu and Muhao Chen and Hoifung Poon},
      year={2023},
      eprint={2308.03279},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
