---
license: apache-2.0
base_model: bert-base-multilingual-uncased
tags:
- generated_from_trainer
datasets:
- id_nergrit_corpus
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-multilingual-uncased-ner-silvanus
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: id_nergrit_corpus
      type: id_nergrit_corpus
      config: ner
      split: validation
      args: ner
    metrics:
    - name: Precision
      type: precision
      value: 0.9022118742724098
    - name: Recall
      type: recall
      value: 0.9189723320158103
    - name: F1
      type: f1
      value: 0.9105149794399845
    - name: Accuracy
      type: accuracy
      value: 0.983813651582688
widget:
- text: >-
    Kebakaran hutan dan lahan terus terjadi dan semakin meluas di Kota
    Palangkaraya, Kalimantan Tengah (Kalteng) pada hari Rabu, 15 Nopember 2023
    20.00 WIB. Bahkan kobaran api mulai membakar pondok warga dan mendekati
    permukiman. BZK #RCTINews #SeputariNews #News #Karhutla #KebakaranHutan
    #HutanKalimantan #SILVANUS_Italian_Pilot_Testing
  example_title: Indonesia
- text: >-
    Wildfire rages for a second day in Evia destroying a Natura 2000 protected
    pine forest. - 5:51 PM Aug 14, 2019
  example_title: English
- text: >-
    3 nov 2023 21:57 - Incendio forestal obliga a la evacuación de hasta 850
    personas cerca del pueblo de Montichelvo en Valencia.
  example_title: Spanish
- text: >-
    Incendi boschivi nell'est del Paese: 2 morti e oltre 50 case distrutte
    nello stato del Queensland.
  example_title: Italian
- text: >-
    Lesné požiare na Sicílii si vyžiadali dva ľudské životy a evakuáciu hotela
    http://dlvr.it/SwW3sC - 23. septembra 2023 20:57
  example_title: Slovak
---

# bert-base-multilingual-uncased-ner-silvanus

This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the id_nergrit_corpus dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0662
- Precision: 0.9022
- Recall: 0.9190
- F1: 0.9105
- Accuracy: 0.9838

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1429        | 1.0   | 827  | 0.0587          | 0.8885    | 0.9075 | 0.8979 | 0.9829   |
| 0.0464        | 2.0   | 1654 | 0.0609          | 0.9081    | 0.9103 | 0.9092 | 0.9846   |
| 0.0288        | 3.0   | 2481 | 0.0662          | 0.9022    | 0.9190 | 0.9105 | 0.9838   |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
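
For quick experimentation, the checkpoint can be loaded through the `transformers` token-classification pipeline. The snippet below is a minimal sketch, not taken from the original repository: the model path is a placeholder for wherever this checkpoint is hosted, and the input text is the Indonesian widget example from the metadata above.

```python
from transformers import pipeline

# Placeholder repository path; replace with the actual location of this checkpoint.
model_id = "your-username/bert-base-multilingual-uncased-ner-silvanus"

# "simple" aggregation merges word-piece tokens back into whole-word entity spans.
ner = pipeline("token-classification", model=model_id, aggregation_strategy="simple")

text = (
    "Kebakaran hutan dan lahan terus terjadi dan semakin meluas di Kota "
    "Palangkaraya, Kalimantan Tengah (Kalteng) pada hari Rabu, 15 Nopember 2023 20.00 WIB."
)

for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```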
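
The hyperparameters listed under "Training hyperparameters" map directly onto `TrainingArguments`. The sketch below shows one way such a fine-tuning run could be set up with the `Trainer` API; it assumes the `ner` configuration of `id_nergrit_corpus` exposes `tokens` and `ner_tags` columns as on its public dataset card, and the label-alignment helper is a standard recipe rather than the original training script.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

# "ner" matches the dataset config reported in the model-index metadata above.
dataset = load_dataset("id_nergrit_corpus", "ner")
label_names = dataset["train"].features["ner_tags"].feature.names

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-multilingual-uncased",
    num_labels=len(label_names),
    id2label=dict(enumerate(label_names)),
    label2id={label: i for i, label in enumerate(label_names)},
)


def tokenize_and_align_labels(batch):
    """Tokenize pre-split words and align NER tags with the resulting word pieces."""
    tokenized = tokenizer(batch["tokens"], truncation=True, is_split_into_words=True)
    all_labels = []
    for i, tags in enumerate(batch["ner_tags"]):
        word_ids = tokenized.word_ids(batch_index=i)
        labels, previous = [], None
        for word_id in word_ids:
            if word_id is None:
                labels.append(-100)           # special tokens are ignored by the loss
            elif word_id != previous:
                labels.append(tags[word_id])  # label only the first sub-token of a word
            else:
                labels.append(-100)
            previous = word_id
        all_labels.append(labels)
    tokenized["labels"] = all_labels
    return tokenized


tokenized_ds = dataset.map(
    tokenize_and_align_labels,
    batched=True,
    remove_columns=dataset["train"].column_names,
)

# Values taken from the "Training hyperparameters" list; Adam betas/epsilon and the
# linear schedule are the TrainingArguments defaults.
args = TrainingArguments(
    output_dir="bert-base-multilingual-uncased-ner-silvanus",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",
)

# A seqeval-based compute_metrics function would be needed to reproduce the
# precision/recall/F1 numbers reported above; it is omitted here for brevity.
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized_ds["train"],
    eval_dataset=tokenized_ds["validation"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
    tokenizer=tokenizer,
)
trainer.train()
```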