---
language: en
license: cc-by-sa-4.0
library_name: span-marker
tags:
  - span-marker
  - token-classification
  - ner
  - named-entity-recognition
  - generated_from_span_marker_trainer
metrics:
  - precision
  - recall
  - f1
widget:
  - text: >-
      Inductively Coupled Plasma - Mass Spectrometry ( ICP - MS ) analysis of
      Longcliffe SP52 limestone was undertaken to identify other impurities
      present , and the effect of sorbent mass and SO2 concentration on
      elemental partitioning in the carbonator between solid sorbent and gaseous
      phase was investigated , using a bubbler sampling system .
  - text: >-
      We extensively evaluate our work against benchmark and competitive
      protocols across a range of metrics over three real connectivity and GPS
      traces such as Sassy [ 44 ] , San Francisco Cabs [ 45 ] and Infocom 2006 [
      33 ] .
  - text: >-
      In this research , we developed a robust two - layer classifier that can
      accurately classify normal hearing ( NH ) from hearing impaired ( HI )
      infants with congenital sensori - neural hearing loss ( SNHL ) based on
      their Magnetic Resonance ( MR ) images .
  - text: >-
      In situ Peak Force Tapping AFM was employed for determining morphology and
      nano - mechanical properties of the surface layer .
  - text: >-
      By means of a criterion of Gilmer for polynomially dense subsets of the
      ring of integers of a number field , we show that , if h∈K[X ] maps every
      element of OK of degree n to an algebraic integer , then h(X ) is integral
      - valued over OK , that is , h(OK)⊂OK .
pipeline_tag: token-classification
base_model: roberta-large
model-index:
  - name: SpanMarker with roberta-large on my-data
    results:
      - task:
          type: token-classification
          name: Named Entity Recognition
        dataset:
          name: my-data
          type: unknown
          split: test
        metrics:
          - type: f1
            value: 0.7147595356550579
            name: F1
          - type: precision
            value: 0.7292724196277496
            name: Precision
          - type: recall
            value: 0.7008130081300813
            name: Recall
---

SpanMarker with roberta-large on my-data

This is a SpanMarker model that can be used for Named Entity Recognition. It uses roberta-large as the underlying encoder.

Model Details

Model Description

  • Model Type: SpanMarker
  • Encoder: roberta-large
  • Maximum Sequence Length: 256 tokens
  • Maximum Entity Length: 8 words
  • Language: en
  • License: cc-by-sa-4.0
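
The configuration above can be reproduced when initializing a fresh SpanMarker model from the encoder. A minimal sketch (not the author's exact training script), assuming the label set shown in the Model Labels table below plus the "O" (no entity) label:

from span_marker import SpanMarkerModel

# Initialize a new (untrained) SpanMarker model from the roberta-large encoder
# with the settings listed above. The label list is an assumption based on the
# Model Labels table below.
model = SpanMarkerModel.from_pretrained(
    "roberta-large",
    labels=["O", "Data", "Material", "Method", "Process"],
    model_max_length=256,  # Maximum Sequence Length
    entity_max_length=8,   # Maximum Entity Length (in words)
)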

Model Sources

  • Repository: https://github.com/tomaarsen/SpanMarkerNER

Model Labels

| Label    | Examples |
|----------|----------|
| Data     | "Depth time - series", "an overall mitochondrial", "defect" |
| Material | "cross - shore measurement locations", "the subject 's fibroblasts", "COXI , COXII and COXIII subunits" |
| Method   | "an approximation", "in vitro", "EFSA" |
| Process  | "intake", "translation", "a significant reduction of synthesis" |

Evaluation

Metrics

| Label    | Precision | Recall | F1     |
|----------|-----------|--------|--------|
| all      | 0.7293    | 0.7008 | 0.7148 |
| Data     | 0.6583    | 0.6931 | 0.6753 |
| Material | 0.8141    | 0.8060 | 0.8100 |
| Method   | 0.5556    | 0.5000 | 0.5263 |
| Process  | 0.7314    | 0.6244 | 0.6737 |
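
These figures come from the card's own evaluation run. A hedged sketch of how comparable metrics could be recomputed with the SpanMarker Trainer on a labelled test split (the dataset id below is a placeholder, since the "my-data" test set is not public):

from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer

model = SpanMarkerModel.from_pretrained("span_marker_model_id")
# Hypothetical dataset id; any dataset with "tokens" and "ner_tags" columns
# using the same label scheme would work here.
test_dataset = load_dataset("my-data-id", split="test")

trainer = Trainer(model=model, eval_dataset=test_dataset)
metrics = trainer.evaluate()  # expected to include precision, recall and F1
print(metrics)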

Uses

Direct Use for Inference

from span_marker import SpanMarkerModel

# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("span_marker_model_id")
# Run inference
entities = model.predict("In situ Peak Force Tapping AFM was employed for determining morphology and nano - mechanical properties of the surface layer .")
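
The predict call returns one dictionary per detected entity. The short sketch below (key names taken from the SpanMarker documentation) shows how the spans, labels, and confidence scores can be read out:

# Inspect the predictions; each entry holds the span text, its label and a
# confidence score (among other fields).
for entity in entities:
    print(entity["span"], entity["label"], round(entity["score"], 3))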

Downstream Use

You can finetune this model on your own dataset.

from datasets import load_dataset
from span_marker import SpanMarkerModel, Trainer

# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("span_marker_model_id")

# Specify a Dataset with "tokens" and "ner_tags" columns
dataset = load_dataset("conll2003")  # For example CoNLL2003

# Initialize a Trainer using the pretrained model & dataset
trainer = Trainer(
    model=model,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)
trainer.train()
trainer.save_model("span_marker_model_id-finetuned")
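
After training finishes, the saved checkpoint can be reloaded like any other SpanMarker model and, optionally, pushed to the Hub. A brief sketch (the repository id is a placeholder):

from span_marker import SpanMarkerModel

# Reload the checkpoint written by trainer.save_model(...) above
finetuned_model = SpanMarkerModel.from_pretrained("span_marker_model_id-finetuned")

# Optionally share it on the Hugging Face Hub (hypothetical repository id)
finetuned_model.push_to_hub("your-username/span_marker_model_id-finetuned")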

Training Details

Training Set Metrics

| Training set          | Min | Median  | Max |
|-----------------------|-----|---------|-----|
| Sentence length       | 3   | 25.6049 | 106 |
| Entities per sentence | 0   | 5.2439  | 22  |
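
A hedged sketch of how statistics like these can be computed for any dataset with "tokens" and "ner_tags" columns (CoNLL2003 stands in here, since the original "my-data" training set is not public):

import statistics
from datasets import load_dataset

# Stand-in dataset; replace with the actual training set if available.
dataset = load_dataset("conll2003", split="train")
tag_names = dataset.features["ner_tags"].feature.names

sentence_lengths = [len(tokens) for tokens in dataset["tokens"]]
entities_per_sentence = [
    sum(tag_names[tag].startswith("B-") for tag in tags)
    for tags in dataset["ner_tags"]
]

for name, values in [("Sentence length", sentence_lengths),
                     ("Entities per sentence", entities_per_sentence)]:
    print(f"{name}: min={min(values)}, "
          f"median={statistics.median(values)}, max={max(values)}")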

Training Hyperparameters

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 10
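
The hyperparameters above map directly onto a transformers TrainingArguments object, which the SpanMarker Trainer accepts via its args parameter. A minimal sketch (the output directory name is a placeholder):

from transformers import TrainingArguments

# Mirror the hyperparameters listed above; pass this to Trainer(args=args, ...).
args = TrainingArguments(
    output_dir="models/span_marker_roberta_large_my_data",  # placeholder path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
)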

Training Results

| Epoch  | Step | Validation Loss | Validation Precision | Validation Recall | Validation F1 | Validation Accuracy |
|--------|------|-----------------|----------------------|-------------------|---------------|---------------------|
| 2.0134 | 300  | 0.0544          | 0.6819               | 0.6260            | 0.6527        | 0.8016              |
| 4.0268 | 600  | 0.0525          | 0.7217               | 0.7176            | 0.7196        | 0.8387              |
| 6.0403 | 900  | 0.0688          | 0.7652               | 0.7214            | 0.7426        | 0.8459              |
| 8.0537 | 1200 | 0.0703          | 0.7636               | 0.7214            | 0.7419        | 0.8349              |

Framework Versions

  • Python: 3.10.12
  • SpanMarker: 1.5.0
  • Transformers: 4.36.2
  • PyTorch: 2.0.1+cu118
  • Datasets: 2.16.1
  • Tokenizers: 0.15.0

Citation

BibTeX

@software{Aarsen_SpanMarker,
    author = {Aarsen, Tom},
    license = {Apache-2.0},
    title = {{SpanMarker for Named Entity Recognition}},
    url = {https://github.com/tomaarsen/SpanMarkerNER}
}