metadata
language:
  - de
license: apache-2.0
tags:
  - generated_from_trainer
datasets:
  - germeval_14
metrics:
  - precision
  - recall
  - f1
  - accuracy
widget:
  - text: Mein Name ist Wolfgang und ich lebe in Berlin
    example_title: Example 1
  - text: Mein Name ist Sarah und ich lebe in London
    example_title: Example 2
  - text: Mein Name ist Clara und ich lebe in Berkeley, California.
    example_title: Example 3
base_model: bert-base-uncased
model-index:
  - name: bert-base-uncased-de-ner
    results:
      - task:
          type: token-classification
          name: Token Classification
        dataset:
          name: germeval_14
          type: germeval_14
          config: germeval_14
          split: test
          args: germeval_14
        metrics:
          - type: precision
            value: 0.8109431552054502
            name: Precision
          - type: recall
            value: 0.771990271584921
            name: Recall
          - type: f1
            value: 0.7909874364032811
            name: F1
          - type: accuracy
            value: 0.9786213727432309
            name: Accuracy

bert-base-uncased-de-ner

This model is a fine-tuned version of bert-base-uncased on the germeval_14 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1374
  • Precision: 0.8109
  • Recall: 0.7720
  • F1: 0.7910
  • Accuracy: 0.9786
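
A minimal inference sketch using the Transformers token-classification pipeline; the model identifier below is a placeholder for wherever this checkpoint is stored (local directory or Hub repository id), and the printed output is illustrative only:

```python
from transformers import pipeline

# Placeholder path/repository id for this fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="path/to/bert-base-uncased-de-ner",
    aggregation_strategy="simple",  # merge B-/I- word pieces into whole entities
)

print(ner("Mein Name ist Wolfgang und ich lebe in Berlin"))
# Illustrative output shape (scores and offsets omitted):
# [{'entity_group': 'PER', 'word': 'wolfgang', ...},
#  {'entity_group': 'LOC', 'word': 'berlin', ...}]
```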

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

The model was trained on data that follows the IOB convention. Full tagset with indices:

{'O': 0, 'B-PER': 1, 'I-PER': 2, 'B-ORG': 3, 'I-ORG': 4, 'B-LOC': 5, 'I-LOC': 6}
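
For reference, a minimal sketch (assuming the Hugging Face Transformers API) of how this tagset corresponds to the id2label/label2id mapping carried by a token-classification head; the fine-tuned checkpoint already stores this mapping in its config:

```python
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Tagset from this card: IOB scheme with PER, ORG and LOC entity types.
label2id = {"O": 0, "B-PER": 1, "I-PER": 2, "B-ORG": 3, "I-ORG": 4, "B-LOC": 5, "I-LOC": 6}
id2label = {i: label for label, i in label2id.items()}

# How a fresh token-classification head on the base model would be
# initialised with this mapping (the released checkpoint already
# contains these labels in its config).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(label2id),
    id2label=id2label,
    label2id=label2id,
)
```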

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 0
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 6
  • mixed_precision_training: Native AMP
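
A minimal sketch of how these settings map onto Hugging Face TrainingArguments, assuming the standard Trainer was used; the output directory name is a placeholder, and the Adam betas and epsilon listed above are the library defaults:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-uncased-de-ner",  # placeholder output directory
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=0,
    lr_scheduler_type="linear",
    num_train_epochs=6,
    fp16=True,  # native AMP mixed-precision training
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default optimizer.
)
```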

Training results

| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|---------------|-------|-------|-----------------|-----------|--------|--------|----------|
| 0.104         | 1.0   | 3000  | 0.0973          | 0.7027    | 0.7323 | 0.7172 | 0.9712   |
| 0.0597        | 2.0   | 6000  | 0.0942          | 0.8135    | 0.7172 | 0.7623 | 0.9766   |
| 0.0345        | 3.0   | 9000  | 0.1051          | 0.7924    | 0.7569 | 0.7742 | 0.9773   |
| 0.0172        | 4.0   | 12000 | 0.1170          | 0.8074    | 0.7628 | 0.7844 | 0.9779   |
| 0.0092        | 5.0   | 15000 | 0.1264          | 0.8068    | 0.7803 | 0.7933 | 0.9788   |
| 0.0035        | 6.0   | 18000 | 0.1374          | 0.8109    | 0.7720 | 0.7910 | 0.9786   |

Framework versions

  • Transformers 4.27.4
  • Pytorch 1.13.1+cu116
  • Datasets 2.11.0
  • Tokenizers 0.13.2