---
library_name: span-marker
tags:
  - span-marker
  - token-classification
  - ner
  - named-entity-recognition
  - generated_from_span_marker_trainer
datasets:
  - DFKI-SLT/few-nerd
metrics:
  - precision
  - recall
  - f1
widget:
  - text: >-
      The Hebrew Union College libraries in Cincinnati and Los Angeles, the
      Library of Congress in Washington, D.C ., the Jewish Theological Seminary
      in New York City, and the Harvard University Library (which received
      donations of Deinard's texts from Lucius Nathan Littauer, housed in
      Widener and Houghton libraries) also have large collections of Deinard
      works.
  - text: >-
      Abu Abd Allah Muhammad al-Idrisi (1099–1165 or 1166), the Moroccan Muslim
      geographer, cartographer, Egyptologist and traveller who lived in Sicily
      at the court of King Roger II, mentioned this island, naming it جزيرة
      مليطمة ("jazīrat Malīṭma", "the island of Malitma ") on page 583 of his
      book "Nuzhat al-mushtaq fi ihtiraq ghal afaq", otherwise known as The Book
      of Roger, considered a geographic encyclopaedia of the medieval world.
  - text: >-
      The font is also used in the logo of the American rock band Greta Van
      Fleet, in the logo for Netflix show "Stranger Things ", and in the album
      art for rapper Logic's album "Supermarket ".
  - text: >-
      Caretaker manager George Goss led them on a run in the FA Cup, defeating
      Liverpool in round 4, to reach the semi-final at Stamford Bridge, where
      they were defeated 2–0 by Sheffield United on 28 March 1925.
  - text: >-
      In 1991, the National Science Foundation (NSF), which manages the U.S .
      Antarctic Program (US AP), honoured his memory by dedicating a
      state-of-the-art laboratory complex in his name, the Albert P. Crary
      Science and Engineering Center (CSEC) located in McMurdo Station.
pipeline_tag: token-classification
model-index:
  - name: SpanMarker
    results:
      - task:
          type: token-classification
          name: Named Entity Recognition
        dataset:
          name: Unknown
          type: DFKI-SLT/few-nerd
          split: test
        metrics:
          - type: f1
            value: 0.7710703953712633
            name: F1
          - type: precision
            value: 0.778881745567894
            name: Precision
          - type: recall
            value: 0.7634141684170327
            name: Recall
---

# SpanMarker

This is a SpanMarker model trained on the DFKI-SLT/few-nerd dataset that can be used for Named Entity Recognition.

## Model Details

### Model Description

- **Model Type:** SpanMarker
- **Maximum Sequence Length:** 256 tokens
- **Maximum Entity Length:** 8 words
- **Training Dataset:** DFKI-SLT/few-nerd

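The sequence and entity length limits above are SpanMarker configuration values. As a rough sketch (not the author's training script; the base encoder name is a placeholder and only the label set is taken from this card), a model with the same settings could be initialized as follows:

```python
from span_marker import SpanMarkerModel

# Hedged sketch: initializing a SpanMarker model with the limits listed above.
# "bert-base-cased" is a placeholder for the (unspecified) base encoder;
# the label list comes from the "Model Labels" section of this card.
model = SpanMarkerModel.from_pretrained(
    "bert-base-cased",
    labels=["art", "building", "event", "location",
            "organization", "other", "person", "product"],
    model_max_length=256,  # Maximum Sequence Length: 256 tokens
    entity_max_length=8,   # Maximum Entity Length: 8 words
)
```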
### Model Sources

- **Repository:** [SpanMarker on GitHub](https://github.com/tomaarsen/SpanMarkerNER)

### Model Labels

| Label        | Examples                                                                        |
|:-------------|:--------------------------------------------------------------------------------|
| art          | "The Seven Year Itch", "Time", "Imelda de ' Lambertazzi"                          |
| building     | "Henry Ford Museum", "Sheremetyevo International Airport", "Boston Garden"        |
| event        | "French Revolution", "Iranian Constitutional Revolution", "Russian Revolution"    |
| location     | "Croatian", "the Republic of Croatia", "Mediterranean Basin"                      |
| organization | "IAEA", "Church 's Chicken", "Texas Chicken"                                      |
| other        | "Amphiphysin", "N-terminal lipid", "BAR"                                          |
| person       | "Edmund Payne", "Ellaline Terriss", "Hicks"                                       |
| product      | "100EX", "Phantom", "Corvettes - GT1 C6R"                                         |

## Evaluation

### Metrics

| Label        | Precision | Recall | F1     |
|:-------------|:----------|:-------|:-------|
| **all**      | 0.7789    | 0.7634 | 0.7711 |
| art          | 0.7610    | 0.7256 | 0.7429 |
| building     | 0.6316    | 0.6857 | 0.6575 |
| event        | 0.6304    | 0.5346 | 0.5786 |
| location     | 0.8114    | 0.8554 | 0.8328 |
| organization | 0.7370    | 0.6800 | 0.7074 |
| other        | 0.7407    | 0.6085 | 0.6682 |
| person       | 0.8611    | 0.9035 | 0.8818 |
| product      | 0.7040    | 0.5966 | 0.6459 |
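
If you want to re-run a comparable evaluation yourself, one option is the `span_marker` Trainer's `evaluate()` method. The sketch below is an assumption rather than the author's exact setup: it presumes the `supervised` configuration of DFKI-SLT/few-nerd with its coarse `ner_tags` column, and uses the same placeholder model id as the inference example.

```python
from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer

# Hedged sketch: evaluating on the few-nerd test split.
# Assumptions: the "supervised" dataset configuration, the coarse "ner_tags"
# column, and "span_marker_model_id" as a placeholder for the real model id.
model = SpanMarkerModel.from_pretrained("span_marker_model_id")
test_dataset = load_dataset("DFKI-SLT/few-nerd", "supervised", split="test")

trainer = Trainer(model=model, eval_dataset=test_dataset)
print(trainer.evaluate())  # overall precision / recall / F1 / accuracy
```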

## Uses

### Direct Use for Inference

```python
from span_marker import SpanMarkerModel

# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("span_marker_model_id")
# Run inference
entities = model.predict("Caretaker manager George Goss led them on a run in the FA Cup, defeating Liverpool in round 4, to reach the semi-final at Stamford Bridge, where they were defeated 2–0 by Sheffield United on 28 March 1925.")
```
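
`model.predict` returns one dictionary per detected entity. The exact keys below are assumed from the `span_marker` documentation rather than stated on this card, but typically include the span text, its label, a confidence score, and character offsets:

```python
# Hedged sketch: inspecting the predictions. The dictionary keys
# ("span", "label", "score", "char_start_index", "char_end_index")
# are an assumption based on the span_marker documentation.
for entity in entities:
    print(entity["span"], "->", entity["label"], f'({entity["score"]:.2f})')
```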

### Downstream Use

You can finetune this model on your own dataset.

```python
from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer

# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("span_marker_model_id")

# Specify a Dataset with "tokens" and "ner_tags" columns
dataset = load_dataset("conll2003")  # For example CoNLL2003

# Initialize a Trainer using the pretrained model & dataset
trainer = Trainer(
    model=model,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
trainer.save_model("span_marker_model_id-finetuned")
```

## Training Details

### Training Set Metrics

| Training set          | Min | Median  | Max |
|:----------------------|:----|:--------|:----|
| Sentence length       | 1   | 24.4956 | 163 |
| Entities per sentence | 0   | 2.5439  | 35  |
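
As a rough, hedged sketch of where these numbers come from (assuming the `supervised` configuration of DFKI-SLT/few-nerd and treating an entity as a maximal run of identical non-zero coarse tags, which may not match the trainer's internal accounting exactly), similar statistics can be recomputed like this:

```python
from statistics import median
from datasets import load_dataset

# Hedged sketch: recomputing the training-set statistics above.
# The "supervised" configuration and the run-based entity counting
# are assumptions, not taken from this card.
train = load_dataset("DFKI-SLT/few-nerd", "supervised", split="train")

lengths = [len(tokens) for tokens in train["tokens"]]
print("Sentence length:", min(lengths), median(lengths), max(lengths))

def count_entities(tags):
    """Count maximal runs of identical non-zero coarse tags."""
    count, prev = 0, 0
    for tag in tags:
        if tag != 0 and tag != prev:
            count += 1
        prev = tag
    return count

entities_per_sentence = [count_entities(tags) for tags in train["ner_tags"]]
print("Entities per sentence:", min(entities_per_sentence),
      median(entities_per_sentence), max(entities_per_sentence))
```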

### Training Hyperparameters

- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1

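These hyperparameters map fairly directly onto `transformers.TrainingArguments`, which the `span_marker` Trainer accepts through its `args` parameter. A minimal, hedged sketch (the output directory is a placeholder and this is not the author's original training script):

```python
from transformers import TrainingArguments

# Hedged sketch: expressing the hyperparameters above as TrainingArguments.
# "models/span-marker-output" is a placeholder output directory; the Adam
# betas/epsilon listed above are the optimizer defaults.
args = TrainingArguments(
    output_dir="models/span-marker-output",
    learning_rate=5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,  # total train batch size: 4 * 2 = 8
    num_train_epochs=1,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
)
# Pass to the Trainer as Trainer(model=model, args=args, ...) together with
# the train/eval datasets, as in the fine-tuning example above.
```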
### Training Results

| Epoch  | Step | Validation Loss | Validation Precision | Validation Recall | Validation F1 | Validation Accuracy |
|:-------|:-----|:----------------|:---------------------|:------------------|:--------------|:--------------------|
| 0.1629 | 200  | 0.0335          | 0.6884               | 0.6223            | 0.6537        | 0.9062              |
| 0.3259 | 400  | 0.0238          | 0.7412               | 0.7193            | 0.7301        | 0.9242              |
| 0.4888 | 600  | 0.0220          | 0.7628               | 0.7378            | 0.7501        | 0.9325              |
| 0.6517 | 800  | 0.0211          | 0.7614               | 0.7677            | 0.7645        | 0.9376              |
| 0.8147 | 1000 | 0.0197          | 0.7839               | 0.7596            | 0.7716        | 0.9384              |
| 0.9776 | 1200 | 0.0194          | 0.7803               | 0.7633            | 0.7717        | 0.9393              |

### Framework Versions

- Python: 3.10.12
- SpanMarker: 1.5.0
- Transformers: 4.37.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.17.1
- Tokenizers: 0.15.2

## Citation

### BibTeX

```bibtex
@software{Aarsen_SpanMarker,
    author = {Aarsen, Tom},
    license = {Apache-2.0},
    title = {{SpanMarker for Named Entity Recognition}},
    url = {https://github.com/tomaarsen/SpanMarkerNER}
}
```