---
base_model: csebuetnlp/banglabert
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: banglabert-MLTC-BB
  results: []
---

# banglabert-MLTC-BB

This model is a fine-tuned version of [csebuetnlp/banglabert](https://huggingface.co/csebuetnlp/banglabert) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3626
- F1: 0.8666
- Roc Auc: 0.8624
- Accuracy: 0.5861
- Hamming Loss: 0.1375
- Jaccard Score: 0.7646
- Zero One Loss: 0.4139

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     | Roc Auc | Accuracy | Hamming Loss | Jaccard Score | Zero One Loss |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|:------------:|:-------------:|:-------------:|
| 0.4962        | 1.0   | 73   | 0.4868          | 0.8020 | 0.7962  | 0.4730   | 0.2037       | 0.6694        | 0.5270        |
| 0.3992        | 2.0   | 146  | 0.3993          | 0.8420 | 0.8386  | 0.5656   | 0.1613       | 0.7272        | 0.4344        |
| 0.3163        | 3.0   | 219  | 0.3647          | 0.8616 | 0.8586  | 0.5810   | 0.1414       | 0.7569        | 0.4190        |
| 0.2545        | 4.0   | 292  | 0.3626          | 0.8666 | 0.8624  | 0.5861   | 0.1375       | 0.7646        | 0.4139        |
| 0.2464        | 5.0   | 365  | 0.3537          | 0.8626 | 0.8612  | 0.5835   | 0.1388       | 0.7584        | 0.4165        |
| 0.2534        | 6.0   | 438  | 0.3591          | 0.8600 | 0.8566  | 0.5707   | 0.1433       | 0.7544        | 0.4293        |
| 0.194         | 7.0   | 511  | 0.3525          | 0.8644 | 0.8624  | 0.5938   | 0.1375       | 0.7612        | 0.4062        |

### Framework versions

- Transformers 4.41.1
- Pytorch 2.1.2
- Datasets 2.19.1
- Tokenizers 0.19.1
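
## Example usage

The card does not include a usage snippet, so the following is a minimal inference sketch. It assumes the checkpoint is loadable under the repository id `banglabert-MLTC-BB`, that the label names are stored in `model.config.id2label`, and that predictions are made by applying a per-label sigmoid with a 0.5 threshold (the threshold used for the reported metrics is not stated in this card).

```python
# Minimal multi-label inference sketch.
# Assumptions: "banglabert-MLTC-BB" is the published checkpoint id, label names live in
# model.config.id2label, and a 0.5 sigmoid threshold is used (not confirmed by the card).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "banglabert-MLTC-BB"  # hypothetical repo id; replace with the actual checkpoint path
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, problem_type="multi_label_classification"
)
model.eval()

text = "এখানে একটি বাংলা বাক্য লিখুন"  # placeholder Bangla input
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label classification: score each label independently with a sigmoid,
# then keep every label whose probability clears the threshold.
probs = torch.sigmoid(logits).squeeze(0)
threshold = 0.5  # assumed decision threshold
predicted_labels = [
    model.config.id2label[i] for i, p in enumerate(probs) if p >= threshold
]
print(predicted_labels)
```

Because this is multi-label classification, the sigmoid-and-threshold step above can return zero, one, or several labels per input, which is consistent with the subset-accuracy, Hamming-loss, and Jaccard metrics reported for this model.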