---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-multilingual-cased-WNUT-ner
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: wnut_17
      type: wnut_17
      config: wnut_17
      split: test
      args: wnut_17
    metrics:
    - name: Precision
      type: precision
      value: 0.5913669064748202
    - name: Recall
      type: recall
      value: 0.3809082483781279
    - name: F1
      type: f1
      value: 0.463359639233371
    - name: Accuracy
      type: accuracy
      value: 0.9500726682055228
---

# bert-base-multilingual-cased-WNUT-ner

This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3832
- Precision: 0.5914
- Recall: 0.3809
- F1: 0.4634
- Accuracy: 0.9501

## Model description

A token classification (named entity recognition) model: [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) fine-tuned on WNUT 17, the Emerging and Rare Entities shared-task dataset.

## Intended uses & limitations

Intended for named entity recognition on noisy, user-generated English text of the kind found in WNUT 17 (tweets and similar social-media posts). Performance on other domains or languages has not been evaluated here.

## Training and evaluation data

Fine-tuned on the train split of the `wnut_17` dataset from the Hugging Face Hub; the metrics above are reported on the test split (see the model-index metadata).

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch using these settings appears at the end of this card):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 213  | 0.2791          | 0.6008    | 0.2817 | 0.3836 | 0.9427   |
| No log        | 2.0   | 426  | 0.2697          | 0.6520    | 0.3299 | 0.4382 | 0.9479   |
| 0.148         | 3.0   | 639  | 0.2846          | 0.5783    | 0.3661 | 0.4484 | 0.9492   |
| 0.148         | 4.0   | 852  | 0.3032          | 0.6248    | 0.3642 | 0.4602 | 0.9500   |
| 0.0413        | 5.0   | 1065 | 0.3355          | 0.5729    | 0.3568 | 0.4397 | 0.9495   |
| 0.0413        | 6.0   | 1278 | 0.3343          | 0.5714    | 0.3892 | 0.4631 | 0.9501   |
| 0.0413        | 7.0   | 1491 | 0.3522          | 0.5877    | 0.3818 | 0.4629 | 0.9500   |
| 0.0182        | 8.0   | 1704 | 0.3844          | 0.6120    | 0.3698 | 0.4610 | 0.9499   |
| 0.0182        | 9.0   | 1917 | 0.3847          | 0.5986    | 0.3828 | 0.4669 | 0.9504   |
| 0.008         | 10.0  | 2130 | 0.3832          | 0.5914    | 0.3809 | 0.4634 | 0.9501   |

### Framework versions

- Transformers 4.26.0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2
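
## How to use

The card does not include an inference example, so the snippet below is a minimal sketch using the standard `transformers` token-classification pipeline. The repo id `your-username/bert-base-multilingual-cased-WNUT-ner` is a placeholder, not a confirmed Hub path, and the grouped-entity output assumes the checkpoint's config carries the WNUT label names in `id2label`.

```python
from transformers import pipeline

# Placeholder repo id: point this at wherever the fine-tuned checkpoint is hosted.
ner = pipeline(
    "token-classification",
    model="your-username/bert-base-multilingual-cased-WNUT-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

print(ner("so excited to visit the Empire State Building with @sarah next week"))
```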
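
## Reproducing the training setup

The training script is not part of this card, so the sketch below only mirrors the hyperparameters listed above using the standard `Trainer` token-classification recipe with seqeval metrics; the tokenization, label alignment, and per-epoch evaluation split are assumptions rather than the author's confirmed code.

```python
# pip install transformers datasets evaluate seqeval
import numpy as np
import evaluate
from datasets import load_dataset
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("wnut_17")
label_list = dataset["train"].features["ner_tags"].feature.names

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-cased",
    num_labels=len(label_list),
    id2label=dict(enumerate(label_list)),
    label2id={l: i for i, l in enumerate(label_list)},
)

def tokenize_and_align_labels(examples):
    # Tokenize pre-split words and label only the first sub-token of each word;
    # the remaining sub-tokens get -100 so the loss ignores them.
    tokenized = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)
    all_labels = []
    for i, tags in enumerate(examples["ner_tags"]):
        word_ids = tokenized.word_ids(batch_index=i)
        previous, label_ids = None, []
        for word_id in word_ids:
            if word_id is None or word_id == previous:
                label_ids.append(-100)
            else:
                label_ids.append(tags[word_id])
            previous = word_id
        all_labels.append(label_ids)
    tokenized["labels"] = all_labels
    return tokenized

tokenized = dataset.map(tokenize_and_align_labels, batched=True)
seqeval = evaluate.load("seqeval")

def compute_metrics(eval_pred):
    # Entity-level precision/recall/F1 plus token accuracy, as in the table above.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    true_predictions = [
        [label_list[p] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    results = seqeval.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }

# Matches the hyperparameters listed above; Trainer's default AdamW uses
# betas=(0.9, 0.999) and epsilon=1e-08, and the default schedule is linear.
args = TrainingArguments(
    output_dir="bert-base-multilingual-cased-WNUT-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=10,
    seed=42,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],  # the model-index reports metrics on the test split
    tokenizer=tokenizer,
    data_collator=DataCollatorForTokenClassification(tokenizer),
    compute_metrics=compute_metrics,
)
trainer.train()
```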