---
tags:
  - generated_from_trainer
datasets:
  - peoples_daily_ner
metrics:
  - precision
  - recall
  - f1
  - accuracy
model-index:
  - name: bert-finetuned-ner-chinese-people-daily
    results:
      - task:
          name: Token Classification
          type: token-classification
        dataset:
          name: peoples_daily_ner
          type: peoples_daily_ner
          config: peoples_daily_ner
          split: validation
          args: peoples_daily_ner
        metrics:
          - name: Precision
            type: precision
            value: 0.8608247422680413
          - name: Recall
            type: recall
            value: 0.8608247422680413
          - name: F1
            type: f1
            value: 0.8608247422680413
          - name: Accuracy
            type: accuracy
            value: 0.9852778800147222
---

# bert-finetuned-ner-chinese-people-daily

This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the peoples_daily_ner dataset. It achieves the following results on the evaluation set:

- Loss: 0.0604
- Precision: 0.8608
- Recall: 0.8608
- F1: 0.8608
- Accuracy: 0.9853
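
The model can be loaded for inference with the `transformers` token-classification pipeline. The sketch below is a minimal example; the repo id is a placeholder assumption, since this card does not state the Hub namespace the model is published under.

```python
# Minimal inference sketch. The repo id is a placeholder assumption --
# substitute the actual Hub namespace this model is published under.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="<namespace>/bert-finetuned-ner-chinese-people-daily",  # hypothetical repo id
    aggregation_strategy="simple",  # merge B-/I- pieces into whole entities
)

print(ner("人民日报社位于北京。"))
# Returns a list of dicts with keys like entity_group, score, word, start, end.
```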

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
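
Expressed as `transformers.TrainingArguments`, these settings look roughly like the sketch below; the output directory and the per-epoch evaluation strategy are assumptions, not values stated in this card.

```python
# Sketch of TrainingArguments mirroring the hyperparameters listed above.
# output_dir and evaluation_strategy are assumptions; the rest come from the list.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-finetuned-ner-chinese-people-daily",  # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    adam_beta1=0.9,      # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # assumed: the results table reports one row per epoch
)
```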

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 131  | 0.0753          | 0.6955    | 0.7887 | 0.7391 | 0.9764   |
| No log        | 2.0   | 262  | 0.0588          | 0.7971    | 0.8505 | 0.8229 | 0.9840   |
| No log        | 3.0   | 393  | 0.0604          | 0.8608    | 0.8608 | 0.8608 | 0.9853   |

### Framework versions

- Transformers 4.29.2
- PyTorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3