---
license: apache-2.0
library_name: span-marker
tags:
  - span-marker
  - token-classification
  - ner
  - named-entity-recognition
pipeline_tag: token-classification
widget:
  - text: >-
      here, da = direct assessment, rr = relative ranking, ds = discrete scale
      and cs = continuous scale.
    example_title: Uncased 1
  - text: >-
      modifying or replacing the erasable programmable read only memory (eprom)
      in a phone would allow the configuration of any esn and min via software
      for cellular devices.
    example_title: Uncased 2
  - text: >-
      we propose a technique called aggressive stochastic weight averaging
      (aswa) and an extension called norm-filtered aggressive stochastic weight
      averaging (naswa) which improves the stability of models over random seeds.
    example_title: Uncased 3
  - text: >-
      the choice of the encoder and decoder modules of dnpg can be quite
      flexible, for instance long-short term memory networks (lstm) or
      convolutional neural network (cnn).
    example_title: Uncased 4
model-index:
  - name: SpanMarker w. bert-base-uncased on Acronym Identification by Tom Aarsen
    results:
      - task:
          type: token-classification
          name: Named Entity Recognition
        dataset:
          type: acronym_identification
          name: Acronym Identification
          split: validation
          revision: c3c245a18bbd57b1682b099e14460eebf154cbdf
        metrics:
          - type: f1
            value: 0.9198
            name: F1
          - type: precision
            value: 0.9252
            name: Precision
          - type: recall
            value: 0.9145
            name: Recall
datasets:
  - acronym_identification
language:
  - en
metrics:
  - f1
  - recall
  - precision
---

# SpanMarker for uncased Acronyms Named Entity Recognition

This is a SpanMarker model that can be used for Named Entity Recognition. In particular, it uses bert-base-uncased as the underlying encoder. See train.py for the training script.

Is your data always capitalized correctly? Then consider using the cased variant of this model instead for better performance: tomaarsen/span-marker-bert-base-acronyms.

## Metrics

It achieves the following results on the validation set:

  • Overall Precision: 0.9252
  • Overall Recall: 0.9145
  • Overall F1: 0.9198
  • Overall Accuracy: 0.9797
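
As a quick consistency check, the overall F1 above is the harmonic mean of the reported precision and recall (a minimal sketch; the figures are taken directly from the list above):

```python
# F1 is the harmonic mean of precision and recall; recomputing it from
# the reported values should reproduce the reported Overall F1.
precision = 0.9252
recall = 0.9145
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.9198, matching the reported Overall F1
```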

## Labels

| Label | Examples |
|-------|----------|
| SHORT | "nlp", "coqa", "soda", "sca" |
| LONG | "natural language processing", "conversational question answering", "symposium on discrete algorithms", "successive convex approximation" |

## Usage

To use this model for inference, first install the span_marker library:

```bash
pip install span_marker
```

You can then run inference with this model like so:

```python
from span_marker import SpanMarkerModel

# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("tomaarsen/span-marker-bert-base-uncased-acronyms")
# Run inference
entities = model.predict("compression algorithms like principal component analysis (pca) can reduce noise and complexity.")
```

See the SpanMarker repository for documentation and additional information on this library.
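
The prediction is a list of entity dictionaries, and a common follow-up step is to pair each detected acronym (SHORT) with its expansion (LONG). Below is a minimal sketch of that pairing, assuming a list-of-dicts output shape with span, label, and character-offset keys; the hand-written `entities` list is illustrative and not actual model output:

```python
# Illustrative post-processing: map each SHORT span (acronym) to the
# nearest LONG span (expansion) that ends before it in the text.
# The entity dicts mimic an assumed predict() output shape.
entities = [
    {"span": "principal component analysis", "label": "LONG",
     "char_start_index": 26, "char_end_index": 54},
    {"span": "pca", "label": "SHORT",
     "char_start_index": 56, "char_end_index": 59},
]

def pair_acronyms(entities):
    """Pair each SHORT entity with the closest preceding LONG entity."""
    pairs = {}
    longs = [e for e in entities if e["label"] == "LONG"]
    for short in (e for e in entities if e["label"] == "SHORT"):
        preceding = [l for l in longs
                     if l["char_end_index"] <= short["char_start_index"]]
        if preceding:
            best = max(preceding, key=lambda l: l["char_end_index"])
            pairs[short["span"]] = best["span"]
    return pairs

print(pair_acronyms(entities))  # {'pca': 'principal component analysis'}
```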

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 2
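
The linear schedule with warmup can be sketched as below. This is an assumption modeled on what `lr_scheduler_type: linear` with `lr_scheduler_warmup_ratio: 0.1` does in transformers, not a call into the library itself; the 1290 total steps come from the training results table, giving 0.1 × 1290 = 129 warmup steps:

```python
# Linear ramp-up over the warmup steps, then linear decay to zero.
def linear_lr(step, base_lr=5e-5, total_steps=1290, warmup_steps=129):
    """Learning rate at a given optimizer step under linear warmup + decay."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

print(linear_lr(0))     # 0.0: training starts from zero
print(linear_lr(129))   # 5e-05: peak rate at the end of warmup
print(linear_lr(1290))  # 0.0: decayed to zero by the final step
```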

### Training results

| Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|--------------|-------|------|-----------------|-------------------|----------------|------------|------------------|
| 0.013 | 0.31 | 200 | 0.0101 | 0.8998 | 0.8514 | 0.8749 | 0.9696 |
| 0.0088 | 0.62 | 400 | 0.0082 | 0.8997 | 0.9142 | 0.9069 | 0.9764 |
| 0.0082 | 0.94 | 600 | 0.0071 | 0.9173 | 0.8955 | 0.9063 | 0.9765 |
| 0.0063 | 1.25 | 800 | 0.0066 | 0.9210 | 0.9187 | 0.9198 | 0.9802 |
| 0.0066 | 1.56 | 1000 | 0.0066 | 0.9302 | 0.8941 | 0.9118 | 0.9783 |
| 0.0064 | 1.87 | 1200 | 0.0063 | 0.9304 | 0.9042 | 0.9171 | 0.9792 |
| 0.0063 | 2.00 | 1290 | 0.0063 | 0.9252 | 0.9145 | 0.9198 | 0.9797 |

### Framework versions

  • SpanMarker 1.2.4
  • Transformers 4.31.0
  • Pytorch 1.13.1+cu117
  • Datasets 2.14.3
  • Tokenizers 0.13.2