Tags: Text Classification · Transformers · Safetensors · English · Indonesian · xlm-roberta · Inference Endpoints

Model Description

Fine-tuned xlm-roberta-base for sentiment analysis in English and Indonesian (Bahasa Indonesia).
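A minimal inference sketch using the `transformers` text-classification pipeline. The exact label names returned depend on this checkpoint's config (an assumption here, not stated in the card), so inspect the output rather than relying on specific label strings.

```python
# Sketch: load the model via the Hugging Face pipeline API.
# Assumption: the checkpoint's config maps class ids to sentiment labels.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="carant-ai/xlm-roberta-sentiment-base",
)

# The model handles both English and Indonesian input.
print(classifier("This product is great!"))
print(classifier("Produk ini sangat bagus!"))
```

Each call returns a list of dicts with `label` and `score` keys, one per input string.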

Training results

Trained on a TPU VM v4-8 for ~3 hours.

| epoch | step | train_accuracy | train_loss | val_accuracy | val_loss |
| --- | --- | --- | --- | --- | --- |
| 0 | 5391 | 0.955597997 | 0.118527733 | 0.963498533 | 0.098501749 |
| 1 | 10783 | 0.965486944 | 0.092906699 | 0.964689374 | 0.094814248 |
| 2 | 16175 | 0.968293846 | 0.085916176 | 0.965770006 | 0.093040377 |

Training procedure

To replicate the training, see the GitHub page.

Special Thanks

  1. Google’s TPU Research Cloud (TRC) for providing the Cloud TPU VM.
  2. carlesoctav for writing the TPU VM training script.
  3. thonyyy for gathering the sentiment dataset.
Model size: 278M parameters · Tensor type: F32 · Format: Safetensors

Datasets used to train carant-ai/xlm-roberta-sentiment-base