---
license: apache-2.0
base_model: google-bert/bert-base-multilingual-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- roc_auc
- f1
model-index:
- name: results
  results: []
datasets:
- alecmontero/dataset_tweetsmx_areasCPC
language:
- es
library_name: transformers
---

# results

This model is a fine-tuned version of [google-bert/bert-base-multilingual-cased](https://huggingface.co/google-bert/bert-base-multilingual-cased) on the [alecmontero/dataset_tweetsmx_areasCPC](https://huggingface.co/datasets/alecmontero/dataset_tweetsmx_areasCPC) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1516
- Roc Auc: 0.8130
- Hamming Loss: 0.0509
- F1 Score: 0.6969
- Accuracy: 0.4418
- Precision: 0.8279
- Recall: 0.6583

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Roc Auc | Hamming Loss | F1 Score | Accuracy | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------------:|:--------:|:--------:|:---------:|:------:|
| No log        | 1.0   | 374  | 0.2285          | 0.6386  | 0.0822       | 0.3390   | 0.2731   | 0.8932    | 0.3080 |
| 0.2678        | 2.0   | 748  | 0.1870          | 0.7175  | 0.0679       | 0.5123   | 0.3481   | 0.7842    | 0.4679 |
| 0.1722        | 3.0   | 1122 | 0.1727          | 0.7839  | 0.0607       | 0.6116   | 0.3949   | 0.7611    | 0.6096 |
| 0.1722        | 4.0   | 1496 | 0.1577          | 0.7865  | 0.0545       | 0.6408   | 0.4137   | 0.8178    | 0.6096 |
| 0.1236        | 5.0   | 1870 | 0.1537          | 0.8055  | 0.0523       | 0.6798   | 0.4230   | 0.8250    | 0.6423 |
| 0.0847        | 6.0   | 2244 | 0.1570          | 0.8069  | 0.0541       | 0.6695   | 0.4297   | 0.7839    | 0.6503 |
| 0.063         | 7.0   | 2618 | 0.1516          | 0.8130  | 0.0509       | 0.6969   | 0.4418   | 0.8279    | 0.6583 |
| 0.063         | 8.0   | 2992 | 0.1531          | 0.8147  | 0.0512       | 0.6856   | 0.4458   | 0.7982    | 0.6622 |
| 0.0465        | 9.0   | 3366 | 0.1526          | 0.8427  | 0.0489       | 0.7544   | 0.4565   | 0.8190    | 0.7174 |
| 0.0349        | 10.0  | 3740 | 0.1534          | 0.8349  | 0.0498       | 0.7414   | 0.4431   | 0.8212    | 0.7023 |

### Framework versions

- Transformers 4.42.4
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
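
## How to use

The card does not state the published Hub path of this checkpoint, so the snippet below uses a placeholder `model_id`, and the sigmoid-plus-threshold step assumes a multi-label classification head (which the Hamming Loss and Roc Auc metrics above suggest). Treat it as a minimal usage sketch under those assumptions, not the authors' reference code.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder repo id -- replace with the actual Hub path of this fine-tuned checkpoint.
model_id = "your-username/results"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Example Spanish-language tweet (the linked dataset is Spanish, per the card metadata).
text = "Ejemplo de tuit sobre políticas públicas en México."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Assumed multi-label head: score each label independently with a sigmoid
# and keep those above an (assumed) 0.5 decision threshold.
probs = torch.sigmoid(logits)[0].tolist()
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```

The 0.5 threshold is only a starting point; for a multi-label setup it can be tuned per label against the validation metrics reported above.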