
numind/NuNER-v1.0 fine-tuned on FewNERD-coarse-supervised

This is a NuNER model fine-tuned on the FewNERD dataset for Named Entity Recognition. NuNER uses RoBERTa-base as its backbone encoder and was trained on the NuNER dataset, a large and diverse collection of 1M sentences synthetically labeled by gpt-3.5-turbo-0301. This further pre-training phase produced high-quality token embeddings, making the model a good starting point for fine-tuning on more specialized datasets.

Model Details

The model was fine-tuned as a regular BERT-based model for the NER task using the Hugging Face Trainer class.

Model labels

Label        | Examples
art          | "The Seven Year Itch", "Time", "Imelda de ' Lambertazzi"
building     | "Boston Garden", "Henry Ford Museum", "Sheremetyevo International Airport"
event        | "Russian Revolution", "Iranian Constitutional Revolution", "French Revolution"
location     | "Croatian", "the Republic of Croatia", "Mediterranean Basin"
organization | "IAEA", "Church 's Chicken", "Texas Chicken"
other        | "BAR", "Amphiphysin", "N-terminal lipid"
person       | "Ellaline Terriss", "Edmund Payne", "Hicks"
product      | "Phantom", "100EX", "Corvettes - GT1 C6R"

Uses

Direct Use for Inference

>>> from transformers import pipeline

>>> text = """Foreign governments may be spying on your smartphone notifications, senator says. Washington (CNN) — Foreign governments have reportedly attempted to spy on iPhone and Android users through the mobile app notifications they receive on their smartphones - and the US government has forced Apple and Google to keep quiet about it, according to a top US senator. Through legal demands sent to the tech giants, governments have allegedly tried to force Apple and Google to turn over sensitive information that could include the contents of a notification - such as previews of a text message displayed on a lock screen, or an update about app activity, Oregon Democratic Sen. Ron Wyden said in a new report. Wyden's report reflects the latest example of long-running tensions between tech companies and governments over law enforcement demands, which have stretched on for more than a decade. Governments around the world have particularly battled with tech companies over encryption, which provides critical protections to users and businesses while in some cases preventing law enforcement from pursuing investigations into messages sent over the internet."""

>>> classifier = pipeline(
    "ner",
    model="guishe/nuner-v1_fewnerd_coarse_super",
    aggregation_strategy="simple",
)
>>> classifier(text)

[{'entity_group': 'location',
  'score': 0.94416517,
  'word': ' Washington',
  'start': 82,
  'end': 92},
 {'entity_group': 'organization',
  'score': 0.9091303,
  'word': 'CNN',
  'start': 94,
  'end': 97},
 {'entity_group': 'product',
  'score': 0.89087117,
  'word': ' iPhone',
  'start': 157,
  'end': 163},
 {'entity_group': 'product',
  'score': 0.90455395,
  'word': ' Android',
  'start': 168,
  'end': 175},
 {'entity_group': 'location',
  'score': 0.6770658,
  'word': ' US',
  'start': 263,
  'end': 265},
 {'entity_group': 'organization',
  'score': 0.9525505,
  'word': ' Apple',
  'start': 288,
  'end': 293},
 {'entity_group': 'organization',
  'score': 0.9560463,
  'word': ' Google',
  'start': 298,
  'end': 304},
 {'entity_group': 'location',
  'score': 0.8810142,
  'word': ' US',
  'start': 348,
  'end': 350},
 {'entity_group': 'organization',
  'score': 0.9553381,
  'word': ' Apple',
  'start': 449,
  'end': 454},
 {'entity_group': 'organization',
  'score': 0.9610842,
  'word': ' Google',
  'start': 459,
  'end': 465},
 {'entity_group': 'location',
  'score': 0.58132076,
  'word': ' Oregon',
  'start': 649,
  'end': 655},
 {'entity_group': 'organization',
  'score': 0.8128647,
  'word': ' Democratic',
  'start': 656,
  'end': 666},
 {'entity_group': 'person',
  'score': 0.9909808,
  'word': ' Ron Wyden',
  'start': 672,
  'end': 681},
 {'entity_group': 'person',
  'score': 0.9864806,
  'word': ' Wyden',
  'start': 704,
  'end': 709}]
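The aggregation_strategy="simple" argument is what merges word-level predictions into the entity spans shown above. A minimal, self-contained sketch of that grouping logic follows; it is an illustration only, not the actual transformers implementation, which additionally handles subword tokens and offset mappings:

```python
# Simplified sketch of aggregation_strategy="simple": merge runs of
# consecutive predictions that share an entity label into one span,
# averaging their scores. Not the real transformers code path.

def aggregate_simple(preds):
    groups = []
    for p in preds:
        last = groups[-1] if groups else None
        if last is not None and last["entity_group"] == p["entity"]:
            # Extend the current span with this prediction.
            last["word"] += p["word"]
            last["scores"].append(p["score"])
            last["end"] = p["end"]
        else:
            # Start a new entity span.
            groups.append({
                "entity_group": p["entity"],
                "word": p["word"],
                "scores": [p["score"]],
                "start": p["start"],
                "end": p["end"],
            })
    # Replace each span's score list with its mean.
    return [
        {
            "entity_group": g["entity_group"],
            "score": sum(g["scores"]) / len(g["scores"]),
            "word": g["word"],
            "start": g["start"],
            "end": g["end"],
        }
        for g in groups
    ]
```

Note that this sketch would also merge two distinct adjacent entities of the same label; the real pipeline uses token offsets to avoid that.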

Training Details

Training Set Metrics

Training set          | Min | Median  | Max
Sentence length       | 1   | 24.4945 | 267
Entities per sentence | 0   | 2.5832  | 88
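Statistics like these can be recomputed from any token-labeled split. A minimal sketch over two hypothetical toy examples (the numbers above come from the full FewNERD training set, and entity counting here assumes a BIO tag scheme):

```python
from statistics import median

# Two hypothetical toy examples in token/tag form; the table above was
# computed over the full training split, not this sample.
examples = [
    {"tokens": ["Apple", "opened", "a", "store", "in", "Boston", "."],
     "ner_tags": ["B-organization", "O", "O", "O", "O", "B-location", "O"]},
    {"tokens": ["Hello", "there", "."],
     "ner_tags": ["O", "O", "O"]},
]

sentence_lengths = [len(ex["tokens"]) for ex in examples]
# Count entities as the number of B- (begin) tags per sentence.
entities_per_sentence = [
    sum(tag.startswith("B-") for tag in ex["ner_tags"]) for ex in examples
]

stats = {
    "sentence_length": (
        min(sentence_lengths), median(sentence_lengths), max(sentence_lengths)
    ),
    "entities_per_sentence": (
        min(entities_per_sentence),
        median(entities_per_sentence),
        max(entities_per_sentence),
    ),
}
```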

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-05
  • train_batch_size: 32
  • eval_batch_size: 16
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 3
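The settings above correspond to transformers TrainingArguments fields. A small sketch collecting them as a plain dict and checking the derived effective batch size (total_train_batch_size is not set directly; it follows from train_batch_size times gradient_accumulation_steps):

```python
# Hyperparameters from the list above, as a plain dict (key names mirror
# transformers.TrainingArguments fields).
hparams = {
    "learning_rate": 3e-05,
    "per_device_train_batch_size": 32,
    "per_device_eval_batch_size": 16,
    "seed": 42,
    "gradient_accumulation_steps": 2,
    "lr_scheduler_type": "linear",
    "warmup_ratio": 0.1,
    "num_train_epochs": 3,
}

# Effective (total) train batch size: 32 * 2 = 64.
total_train_batch_size = (
    hparams["per_device_train_batch_size"] * hparams["gradient_accumulation_steps"]
)
```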

Training results

Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy
0.1498        | 1.0   | 2059 | 0.1477          | 0.7710    | 0.8013 | 0.7859 | 0.9522
0.1368        | 2.0   | 4118 | 0.1422          | 0.7797    | 0.8101 | 0.7946 | 0.9540
0.1139        | 3.0   | 6177 | 0.1433          | 0.7813    | 0.8145 | 0.7976 | 0.9547
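As a quick sanity check on the table, the F1 column is the harmonic mean of the precision and recall columns:

```python
def f1(precision, recall):
    # F1 = harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

# Per-epoch (precision, recall, reported F1) from the table above.
rows = [
    (0.7710, 0.8013, 0.7859),
    (0.7797, 0.8101, 0.7946),
    (0.7813, 0.8145, 0.7976),
]
```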

Framework versions

  • Transformers 4.36.0
  • Pytorch 2.0.0+cu117
  • Datasets 2.18.0
  • Tokenizers 0.15.2

Citation

BibTeX

@misc{bogdanov2024nuner,
      title={NuNER: Entity Recognition Encoder Pre-training via LLM-Annotated Data}, 
      author={Sergei Bogdanov and Alexandre Constantin and Timothée Bernard and Benoit Crabbé and Etienne Bernard},
      year={2024},
      eprint={2402.15343},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}