---
license: apache-2.0
library_name: span-marker
tags:
  - span-marker
  - token-classification
  - ner
  - named-entity-recognition
pipeline_tag: token-classification
widget:
  - text: >-
      Here, DA = direct assessment, RR = relative ranking, DS = discrete scale
      and CS = continuous scale.
    example_title: Example 1
  - text: >-
      Modifying or replacing the Erasable Programmable Read Only Memory (EPROM)
      in a phone would allow the configuration of any ESN and MIN via software
      for cellular devices.
    example_title: Example 2
  - text: >-
      We propose a technique called Aggressive Stochastic Weight Averaging
      (ASWA) and an extension called Norm-filtered Aggressive Stochastic Weight
      Averaging (NASWA) which improves the stability of models over random
      seeds.
    example_title: Example 3
  - text: >-
      The choice of the encoder and decoder modules of DNPG can be quite
      flexible, for instance long-short term memory networks (LSTM) or
      convolutional neural network (CNN).
    example_title: Example 4
model-index:
  - name: SpanMarker w. bert-base-cased on Acronym Identification by Tom Aarsen
    results:
      - task:
          type: token-classification
          name: Named Entity Recognition
        dataset:
          type: acronym_identification
          name: Acronym Identification
          split: validation
          revision: c3c245a18bbd57b1682b099e14460eebf154cbdf
        metrics:
          - type: f1
            value: 0.931
            name: F1
          - type: precision
            value: 0.9423
            name: Precision
          - type: recall
            value: 0.9199
            name: Recall
datasets:
  - acronym_identification
language:
  - en
metrics:
  - f1
  - recall
  - precision
---

# SpanMarker for Acronyms Named Entity Recognition

This is a SpanMarker model trained on the `acronym_identification` dataset. In particular, this SpanMarker model uses `bert-base-cased` as the underlying encoder. See `train.py` for the training script.

## Metrics

It achieves the following results on the validation set:

- Overall Precision: 0.9423
- Overall Recall: 0.9199
- Overall F1: 0.9310
- Overall Accuracy: 0.9830
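As a quick sanity check, the reported overall F1 is (up to rounding) the harmonic mean of the overall precision and recall, which can be reproduced in plain Python:

```python
# F1 is the harmonic mean of precision and recall.
precision = 0.9423
recall = 0.9199

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.931
```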

## Labels

| Label | Examples |
|:------|:---------|
| SHORT | "NLP", "CoQA", "SODA", "SCA" |
| LONG  | "Natural Language Processing", "Conversational Question Answering", "Symposium on Discrete Algorithms", "successive convex approximation" |

## Usage

To use this model for inference, first install the `span_marker` library:

```bash
pip install span_marker
```

You can then run inference with this model like so:

```python
from span_marker import SpanMarkerModel

# Download from the 🤗 Hub
model = SpanMarkerModel.from_pretrained("tomaarsen/span_marker_bert_base_acronyms")
# Run inference
entities = model.predict("Compression algorithms like Principal Component Analysis (PCA) can reduce noise and complexity.")
```

See the SpanMarker repository for documentation and additional information on this library.
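`predict` returns one dictionary per detected entity. The sketch below shows typical post-processing (filtering by confidence and grouping spans by label); the entity values here are illustrative, not actual model output, and the exact keys of the returned dictionaries should be checked against the span_marker documentation:

```python
# Illustrative entities in the shape returned by model.predict()
# (assumed keys: "span", "label", "score").
entities = [
    {"span": "Principal Component Analysis", "label": "long", "score": 0.99},
    {"span": "PCA", "label": "short", "score": 0.98},
]

# Keep only confident predictions, then group the spans by label.
confident = [e for e in entities if e["score"] >= 0.5]
by_label = {}
for entity in confident:
    by_label.setdefault(entity["label"], []).append(entity["span"])

print(by_label)  # {'long': ['Principal Component Analysis'], 'short': ['PCA']}
```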

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
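For intuition: with warmup_ratio 0.1 over the 1290 total steps shown in the results table, the linear scheduler ramps the learning rate from 0 to 5e-05 during the first 129 steps, then decays it linearly back to 0. A minimal sketch of that schedule, assuming the usual linear-warmup-plus-linear-decay semantics (as in transformers' `get_linear_schedule_with_warmup`):

```python
def linear_schedule_lr(step, base_lr=5e-05, total_steps=1290, warmup_ratio=0.1):
    """Learning rate at a given step under linear warmup + linear decay."""
    warmup_steps = int(total_steps * warmup_ratio)  # 129 steps here
    if step < warmup_steps:
        return base_lr * step / warmup_steps  # ramp up from 0 to base_lr
    # Decay linearly from base_lr at the end of warmup to 0 at total_steps.
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

print(linear_schedule_lr(129))   # 5e-05 (peak, end of warmup)
print(linear_schedule_lr(1290))  # 0.0 (end of training)
```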

### Training results

| Training Loss | Epoch | Step | Validation Loss | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.0109 | 0.31 | 200  | 0.0079 | 0.9202 | 0.8962 | 0.9080 | 0.9765 |
| 0.0075 | 0.62 | 400  | 0.0070 | 0.9358 | 0.8724 | 0.9030 | 0.9765 |
| 0.0068 | 0.93 | 600  | 0.0059 | 0.9363 | 0.9203 | 0.9282 | 0.9821 |
| 0.0057 | 1.24 | 800  | 0.0056 | 0.9372 | 0.9187 | 0.9278 | 0.9824 |
| 0.0051 | 1.55 | 1000 | 0.0054 | 0.9381 | 0.9170 | 0.9274 | 0.9824 |
| 0.0054 | 1.86 | 1200 | 0.0053 | 0.9424 | 0.9218 | 0.9320 | 0.9834 |
| 0.0054 | 2.00 | 1290 | 0.0054 | 0.9423 | 0.9199 | 0.9310 | 0.9830 |

### Framework versions

- SpanMarker 1.2.4
- Transformers 4.31.0
- Pytorch 1.13.1+cu117
- Datasets 2.14.3
- Tokenizers 0.13.2