SpanMarker with bert-base-cased on DFKI-SLT/few-nerd

This is a SpanMarker model trained on the DFKI-SLT/few-nerd dataset for Named Entity Recognition. It uses bert-base-cased as the underlying encoder.

Model Details

Model Description

  • Model Type: SpanMarker
  • Encoder: bert-base-cased
  • Maximum Sequence Length: 256 tokens
  • Maximum Entity Length: 8 words
  • Training Dataset: DFKI-SLT/few-nerd
  • Language: en
  • License: cc-by-sa-4.0
  • Model Size: ~108M parameters (F32)

Model Labels

Label Examples
art "Time", "The Seven Year Itch", "Imelda de' Lambertazzi"
building "Henry Ford Museum", "Boston Garden", "Sheremetyevo International Airport"
event "French Revolution", "Iranian Constitutional Revolution", "Russian Revolution"
location "Croatian", "the Republic of Croatia", "Mediterranean Basin"
organization "Church's Chicken", "IAEA", "Texas Chicken"
other "Amphiphysin", "BAR", "N-terminal lipid"
person "Hicks", "Ellaline Terriss", "Edmund Payne"
product "Phantom", "Corvettes - GT1 C6R", "100EX"

Evaluation

Metrics

Label Precision Recall F1
all 0.7685 0.7674 0.7679
art 0.7749 0.6884 0.7291
building 0.6045 0.6612 0.6316
event 0.6437 0.5161 0.5729
location 0.8066 0.8425 0.8241
organization 0.7127 0.6836 0.6978
other 0.6802 0.6775 0.6789
person 0.8900 0.9135 0.9016
product 0.6596 0.6305 0.6447
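The F1 column above is the harmonic mean of the precision and recall columns. A minimal sketch (the `f1_score` helper is hypothetical, not part of the SpanMarker API) checking the "all" row:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall: F1 = 2PR / (P + R)."""
    return 2 * precision * recall / (precision + recall)

# Check against the "all" row of the table above
p, r = 0.7685, 0.7674
print(round(f1_score(p, r), 4))  # → 0.7679
```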

Uses

Direct Use for Inference

from span_marker import SpanMarkerModel

# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("MinhMinh09/span-marker-bert-base-fewnerd-coarse-super")
# Run inference; returns a list of entity dicts (span text, label, confidence score)
entities = model.predict("Caretaker manager George Goss led them on a run in the FA Cup, defeating Liverpool in round 4, to reach the semi-final at Stamford Bridge, where they were defeated 2–0 by Sheffield United on 28 March 1925.")

Downstream Use

You can finetune this model on your own dataset.

from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer

# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("MinhMinh09/span-marker-bert-base-fewnerd-coarse-super")

# Specify a Dataset with "tokens" and "ner_tags" columns
dataset = load_dataset("conll2003")  # For example CoNLL2003

# Initialize a Trainer using the pretrained model & dataset
trainer = Trainer(
    model=model,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
trainer.save_model("span-marker-bert-base-fewnerd-coarse-super-finetuned")

Training Details

Training Set Metrics

Training set Min Median Max
Sentence length 1 24.4956 163
Entities per sentence 0 2.5439 35
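Statistics like those in the table above can be reproduced with a short helper. A minimal sketch over hypothetical toy lengths (the real values come from the few-nerd training split; `length_stats` is an illustrative helper, not library code):

```python
from statistics import median

def length_stats(values):
    """Min / median / max, as in the Training Set Metrics table."""
    return min(values), median(values), max(values)

# Toy example with made-up sentence lengths
sentence_lengths = [1, 12, 24, 30, 163]
print(length_stats(sentence_lengths))  # → (1, 24, 163)
```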

Training Hyperparameters

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 8
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 1
  • mixed_precision_training: Native AMP

Training Results

Epoch Step Validation Loss Validation Precision Validation Recall Validation F1 Validation Accuracy
0.1629 200 0.0323 0.7242 0.5919 0.6514 0.8980
0.3259 400 0.0232 0.7537 0.7149 0.7337 0.9252
0.4888 600 0.0212 0.7767 0.7301 0.7527 0.9301
0.6517 800 0.0209 0.7605 0.7615 0.7610 0.9353
0.8147 1000 0.0194 0.7815 0.7604 0.7708 0.9383
0.9776 1200 0.0195 0.7681 0.7833 0.7756 0.9403

Framework Versions

  • Python: 3.10.12
  • SpanMarker: 1.5.0
  • Transformers: 4.35.2
  • PyTorch: 2.1.0+cu121
  • Datasets: 2.16.0
  • Tokenizers: 0.15.0

Citation

BibTeX

@software{Aarsen_SpanMarker,
    author = {Aarsen, Tom},
    license = {Apache-2.0},
    title = {{SpanMarker for Named Entity Recognition}},
    url = {https://github.com/tomaarsen/SpanMarkerNER}
}