# MARBERT Sarcasm Detector
This model is a fine-tuned version of UBC-NLP/MARBERTv2, trained on the ArSarcasT corpus. It achieves the following results on the evaluation sets:
| Eval Dataset | Accuracy | F1 | Precision | Recall |
|---|---|---|---|---|
| ArSarcasT | 0.844 | 0.735 | 0.754 | 0.718 |
| iSarcasmEVAL | 0.892 | 0.633 | 0.616 | 0.650 |
| ArSarcasmV2 | 0.771 | 0.561 | 0.590 | 0.534 |
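As a sanity check, the reported F1 scores are consistent with the precision and recall columns (F1 is their harmonic mean). A quick verification in Python:

```python
# Verify that each reported F1 equals the harmonic mean of precision and recall,
# to within rounding of the published three-decimal figures.
def f1(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

rows = {
    "ArSarcasT":    (0.754, 0.718, 0.735),
    "iSarcasmEVAL": (0.616, 0.650, 0.633),
    "ArSarcasmV2":  (0.590, 0.534, 0.561),
}
for name, (p, r, reported) in rows.items():
    assert abs(f1(p, r) - reported) < 2e-3, name
```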
## Model description
MARBERTv2 fine-tuned on a dataset of sarcastic tweets for sarcasm detection, framed as binary text classification.
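A minimal inference sketch with the `transformers` text-classification pipeline. The Hub id of this fine-tuned checkpoint is not stated in the card, so the base model id `UBC-NLP/MARBERTv2` is used as a stand-in, and the label names are assumptions:

```python
# Sketch only: substitute model_id with this card's actual fine-tuned checkpoint.
from transformers import pipeline

model_id = "UBC-NLP/MARBERTv2"  # assumption: base model used as a placeholder
classifier = pipeline("text-classification", model=model_id)

# "An example of a sarcastic tweet" (Arabic input, as the model expects)
result = classifier("مثال على تغريدة ساخرة")
print(result)  # a list of {"label": ..., "score": ...} dicts
```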
## Intended uses & limitations
More information needed
## Training and evaluation data
- Training dataset: ArSarcasT development split.
- Evaluation datasets:
  - ArSarcasm-v2 test split.
  - iSarcasmEVAL test split.
  - ArSarcasT test split.
## Training procedure
Fine-tuning for 3 epochs.
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Tokenizers 0.13.3