---
language:
  - en
library_name: span-marker
tags:
  - span-marker
  - token-classification
  - ner
  - named-entity-recognition
  - generated_from_span_marker_trainer
datasets:
  - tomaarsen/ner-orgs
metrics:
  - precision
  - recall
  - f1
widget:
  - text: >-
      Hallacas are also commonly consumed in eastern Cuba parts of Colombia,
      Ecuador, Aruba, and Curaçao.
  - text: >-
      The co-production of Yvon Michel's GYM and Jean Bédard's Interbox
      promotions and televised via HBO, has trumped a proposed HBO -televised
      rematch between Jean Pascal and RING and WBC 175-pound champion Chad
      Dawson that was slated for the same date at Bell Centre in Montreal.
  - text: >-
      The synoptic conditions see a low over southern Norway, bringing warm
      south and southwesterly flows of air up from the inner continental areas
      of Russia and Belarus.
  - text: >-
      The RCIS recommended amongst other things that the Australian Security
      Intelligence Organisation (ASIO) areas of investigation be widened to
      include terrorism.
  - text: >-
      The large network had multiple campuses in Minnesota, Wisconsin, and South
      Dakota.
pipeline_tag: token-classification
co2_eq_emissions:
  emissions: 532.6472478623315
  source: codecarbon
  training_type: fine-tuning
  on_cloud: false
  cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
  ram_total_size: 31.777088165283203
  hours_used: 3.696
  hardware_used: 1 x NVIDIA GeForce RTX 3090
base_model: bert-base-cased
model-index:
  - name: >-
      SpanMarker with bert-base-cased on FewNERD, CoNLL2003, OntoNotes v5, and
      MultiNERD
    results:
      - task:
          type: token-classification
          name: Named Entity Recognition
        dataset:
          name: FewNERD, CoNLL2003, OntoNotes v5, and MultiNERD
          type: tomaarsen/ner-orgs
          split: test
        metrics:
          - type: f1
            value: 0.8311343653918766
            name: F1
          - type: precision
            value: 0.8334090564894745
            name: Precision
          - type: recall
            value: 0.8288720574945131
            name: Recall
---

SpanMarker with bert-base-cased on FewNERD, CoNLL2003, OntoNotes v5, and MultiNERD

This is a SpanMarker model trained on a combination of the FewNERD, CoNLL2003, OntoNotes v5, and MultiNERD datasets that can be used for Named Entity Recognition. The model uses bert-base-cased as the underlying encoder.

Model Details

Model Description

  • Model Type: SpanMarker
  • Encoder: bert-base-cased
  • Training Dataset: FewNERD, CoNLL2003, OntoNotes v5, and MultiNERD (tomaarsen/ner-orgs)
  • Language: English (en)

Model Sources

  • Repository: https://github.com/tomaarsen/SpanMarkerNER

Model Labels

Label | Examples
ORG   | "IAEA", "Church 's Chicken", "Texas Chicken"

Evaluation

Metrics

Label | Precision | Recall | F1
all   | 0.8334    | 0.8289 | 0.8311
ORG   | 0.8334    | 0.8289 | 0.8311
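
These figures were computed on the test split of tomaarsen/ner-orgs. Below is a minimal sketch of running such an evaluation yourself; it assumes the Trainer.evaluate API that SpanMarker's Trainer inherits from 🤗 Transformers and that the dataset exposes a "test" split.

from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer

# Load the trained model and the combined NER dataset
model = SpanMarkerModel.from_pretrained("tomaarsen/span-marker-bert-base-orgs")
dataset = load_dataset("tomaarsen/ner-orgs")

# Evaluate on the test split; the returned dict includes precision, recall and F1
trainer = Trainer(model=model, eval_dataset=dataset["test"])
metrics = trainer.evaluate()
print(metrics)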

Uses

Direct Use for Inference

from span_marker import SpanMarkerModel

# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("tomaarsen/span-marker-bert-base-orgs")
# Run inference
entities = model.predict("The large network had multiple campuses in Minnesota, Wisconsin, and South Dakota.")
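
Each predicted entity is a plain Python dict. Here is a short continuation of the snippet above for inspecting the output; the key names ("span", "label", "score") follow the SpanMarker documentation and may differ slightly between versions.

# Inspect the predictions returned above (key names assumed from the SpanMarker docs)
for entity in entities:
    print(entity["span"], "->", entity["label"], f"({entity['score']:.2f})")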

Downstream Use

You can finetune this model on your own dataset.

from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer

# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("tomaarsen/span-marker-bert-base-orgs")

# Specify a Dataset with "tokens" and "ner_tags" columns
dataset = load_dataset("conll2003")  # For example CoNLL2003

# Initialize a Trainer using the pretrained model & dataset
trainer = Trainer(
    model=model,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
trainer.save_model("tomaarsen/span-marker-bert-base-orgs-finetuned")

Training Details

Training Set Metrics

Training set          | Min | Median  | Max
Sentence length       | 1   | 22.1911 | 267
Entities per sentence | 0   | 0.8144  | 39
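
These statistics summarise the training split: tokens per sentence and gold entities per sentence. The following is a rough, hypothetical sketch of recomputing them from tomaarsen/ner-orgs; it assumes IOB2 tags stored either as strings or as ClassLabel integers.

from datasets import load_dataset
import statistics

dataset = load_dataset("tomaarsen/ner-orgs", split="train")

# Map ClassLabel integers to tag names if needed (assumption: IOB2 scheme, "B-" starts an entity)
tag_feature = dataset.features["ner_tags"].feature
def tag_names(tags):
    return [tag_feature.int2str(t) for t in tags] if hasattr(tag_feature, "int2str") else list(tags)

lengths = [len(tokens) for tokens in dataset["tokens"]]
entity_counts = [sum(name.startswith("B-") for name in tag_names(tags)) for tags in dataset["ner_tags"]]

print(min(lengths), statistics.median(lengths), max(lengths))                  # sentence length
print(min(entity_counts), statistics.mean(entity_counts), max(entity_counts))  # entities per sentence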

Training Hyperparameters

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 3
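
These values map onto a standard 🤗 Transformers TrainingArguments object passed to the SpanMarker Trainer. The sketch below is a hedged reconstruction, not the original training script; the output directory, the ORG-only label list, and the validation split name are assumptions.

from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer
from transformers import TrainingArguments

# Initialize a fresh SpanMarker model from the bert-base-cased encoder
# (label list assumed from the ORG-only label set of this model)
model = SpanMarkerModel.from_pretrained("bert-base-cased", labels=["O", "B-ORG", "I-ORG"])

dataset = load_dataset("tomaarsen/ner-orgs")

args = TrainingArguments(
    output_dir="models/span-marker-bert-base-orgs",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
)
# The Adam betas (0.9, 0.999) and epsilon 1e-08 listed above are the transformers optimizer defaults.

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()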

Training Results

Epoch  | Step  | Validation Loss
0.3273 | 3000  | 0.0052
0.6546 | 6000  | 0.0047
0.9819 | 9000  | 0.0045
1.3092 | 12000 | 0.0047
1.6365 | 15000 | 0.0045
1.9638 | 18000 | 0.0046
2.2911 | 21000 | 0.0054
2.6184 | 24000 | 0.0053
2.9457 | 27000 | 0.0052

Environmental Impact

Carbon emissions were measured using CodeCarbon.

  • Carbon Emitted: 0.533 kg of CO2
  • Hours Used: 3.696 hours
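
A minimal sketch of how such a measurement is typically taken with CodeCarbon's EmissionsTracker follows; this is generic CodeCarbon usage, not necessarily the exact integration used for this training run.

from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
# ... run trainer.train() or any other workload to be measured here ...
emissions_kg = tracker.stop()  # estimated emissions in kg of CO2
print(emissions_kg)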

Training Hardware

  • On Cloud: No
  • GPU Model: 1 x NVIDIA GeForce RTX 3090
  • CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K
  • RAM Size: 31.78 GB

Framework Versions

  • Python: 3.9.16
  • SpanMarker: 1.5.1.dev
  • Transformers: 4.30.0
  • PyTorch: 2.0.1+cu118
  • Datasets: 2.14.0
  • Tokenizers: 0.13.3

Citation

BibTeX

@software{Aarsen_SpanMarker,
    author = {Aarsen, Tom},
    license = {Apache-2.0},
    title = {{SpanMarker for Named Entity Recognition}},
    url = {https://github.com/tomaarsen/SpanMarkerNER}
}