
Fine-tuned XLM-R Model for Amharic Sentiment Analysis

This is a fine-tuned XLM-R model for sentiment analysis in Amharic.

Model Details

  • Model Name: XLM-R Sentiment Analysis
  • Language: Amharic
  • Fine-tuning Dataset: DGurgurov/amharic_sa

Training Details

  • Epochs: 20
  • Batch Size: 32 (train), 64 (eval)
  • Optimizer: AdamW
  • Learning Rate: 5e-5
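
The hyperparameters above map onto Hugging Face `TrainingArguments` roughly as follows. This is a sketch of a plausible configuration, not the exact training script that was used; the `output_dir` name is an assumption:

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the training configuration listed above.
training_args = TrainingArguments(
    output_dir="xlm-r_amharic_sentiment",  # assumed output directory
    num_train_epochs=20,                   # Epochs: 20
    per_device_train_batch_size=32,        # Batch Size (train): 32
    per_device_eval_batch_size=64,         # Batch Size (eval): 64
    learning_rate=5e-5,                    # Learning Rate: 5e-5
    # AdamW is the default optimizer used by the Hugging Face Trainer.
)
```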

Performance Metrics

  • Accuracy: 0.86842
  • Macro F1: 0.86833
  • Micro F1: 0.86842

Usage

To use this model, you can load it with the Hugging Face Transformers library:

from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("DGurgurov/xlm-r_amharic_sentiment")
model = AutoModelForSequenceClassification.from_pretrained("DGurgurov/xlm-r_amharic_sentiment")

License

MIT
