---
library_name: transformers
license: mit
base_model: ai4bharat/indic-bert
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: indic-bert-profanity-mr
  results: []
---

# indic-bert-profanity-mr

This model is a fine-tuned version of [ai4bharat/indic-bert](https://huggingface.co/ai4bharat/indic-bert) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3187
- Accuracy: 0.9035
- Precision: 0.4517
- Recall: 0.5
- F1: 0.4746

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.3272        | 0.9836 | 30   | 0.3721          | 0.8819   | 0.4410    | 0.5    | 0.4686 |
| 0.3332        | 2.0    | 61   | 0.3677          | 0.8819   | 0.4410    | 0.5    | 0.4686 |
| 0.3293        | 2.9836 | 91   | 0.3768          | 0.8819   | 0.4410    | 0.5    | 0.4686 |
| 0.3275        | 4.0    | 122  | 0.3612          | 0.8819   | 0.4410    | 0.5    | 0.4686 |
| 0.2919        | 4.9836 | 152  | 0.3752          | 0.8819   | 0.4410    | 0.5    | 0.4686 |
| 0.291         | 6.0    | 183  | 0.3618          | 0.8819   | 0.4410    | 0.5    | 0.4686 |
| 0.281         | 6.9836 | 213  | 0.3793          | 0.8819   | 0.4410    | 0.5    | 0.4686 |
| 0.2399        | 8.0    | 244  | 0.3854          | 0.8819   | 0.4410    | 0.5    | 0.4686 |
| 0.1822        | 8.9836 | 274  | 0.4216          | 0.8819   | 0.4410    | 0.5    | 0.4686 |
| 0.1354        | 9.8361 | 300  | 0.4200          | 0.8819   | 0.6938    | 0.5265 | 0.5229 |

### Framework versions

- Transformers 4.45.1
- PyTorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
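
## How to use

The card does not yet document inference, so the snippet below is a minimal usage sketch. It assumes the checkpoint was produced with `AutoModelForSequenceClassification` and saved together with its tokenizer; the repository id, the example sentence, and the label names are assumptions, not details taken from this card.

```python
# Minimal inference sketch. Assumption: the fine-tuned model and tokenizer are
# available under a Hub repo id or local directory; the path below is hypothetical.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "indic-bert-profanity-mr"  # hypothetical; replace with the actual Hub path or local folder

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

text = "तुमचा मजकूर इथे लिहा"  # example Marathi text to screen for profanity
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

pred_id = logits.argmax(dim=-1).item()
# id2label is read from the saved training config; the actual label names are
# not documented in this card.
print(model.config.id2label[pred_id])
```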