
Fine-tuned XLM-R Model for Marathi Sentiment Analysis

This is a fine-tuned XLM-R model for sentiment analysis in Marathi.

Model Details

  • Model Name: XLM-R Sentiment Analysis
  • Language: Marathi
  • Fine-tuning Dataset: DGurgurov/marathi_sa

Training Details

  • Epochs: 20
  • Batch Size: 32 (train), 64 (eval)
  • Optimizer: AdamW
  • Learning Rate: 5e-5

Performance Metrics

  • Accuracy: 0.90200
  • Macro F1: 0.90198
  • Micro F1: 0.90200

Usage

To use this model, you can load it with the Hugging Face Transformers library:

from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("DGurgurov/xlm-r_marathi_sentiment")
model = AutoModelForSequenceClassification.from_pretrained("DGurgurov/xlm-r_marathi_sentiment")
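Once loaded, the model can classify text end to end. The following is a minimal inference sketch; the label mapping below is an assumption (the card does not list the label ids), so check model.config.id2label on the actual checkpoint before relying on it.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_ID = "DGurgurov/xlm-r_marathi_sentiment"
# Assumed mapping; verify against model.config.id2label for this checkpoint.
ID2LABEL = {0: "negative", 1: "positive"}

def predict_sentiment(text: str) -> str:
    """Return the predicted sentiment label for a Marathi sentence."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
    # Tokenize and run a forward pass without tracking gradients.
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Pick the highest-scoring class and map it to a label string.
    return ID2LABEL[int(logits.argmax(dim=-1))]
```

For example, predict_sentiment("हा चित्रपट खूप छान आहे!") would return one of the labels above for a positive Marathi movie review.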

License

MIT

Model size: 278M parameters (safetensors, F32)