---
base_model: cardiffnlp/twitter-xlm-roberta-base-sentiment
metrics:
- accuracy
model-index:
- name: result
results: []
language:
- ar
- en
library_name: transformers
pipeline_tag: text-classification
---
# SentimentArEng
This model is a fine-tuned version of [cardiffnlp/twitter-xlm-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment) for sentiment classification of Arabic and English text.
It achieves the following results on the evaluation set:
- Loss: 0.502831
- Accuracy: 0.798512
## Inference with the pipeline API
```python
from transformers import pipeline

model_path = "Noor0/SentimentArEng"
sentiment_task = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)

# Arabic example: "The staff's service was below expectations"
sentiment_task("تعامل الموظفين كان أقل من المتوقع")
```
- Output: `[{'label': 'negative', 'score': 0.9905518293380737}]`
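The pipeline call wraps tokenization, the forward pass, and softmax scoring. For batch scoring or finer control you can also load the model directly; the snippet below is a minimal sketch using the standard `transformers` sequence-classification API (the texts and the printed format are illustrative, and the label names come from the model's own config).

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_path = "Noor0/SentimentArEng"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)

texts = [
    "تعامل الموظفين كان أقل من المتوقع",  # "The staff's service was below expectations"
    "The food was great and the staff were friendly",
]

# Arabic and English inputs can be tokenized together in one batch
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.softmax(logits, dim=-1)
for text, p in zip(texts, probs):
    label_id = int(p.argmax())
    print(text, "->", model.config.id2label[label_id], round(float(p[label_id]), 4))
```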
## Training and evaluation data
- Training set: 114,885 records
- Evaluation set: 12,765 records
## Training procedure
| Training Loss | Epoch |Validation Loss | Accuracy |
|:-------------:|:-----:|:---------------:|:--------:|
| 0.4511 | 2.0 |0.502831 | 0.7985 |
| 0.3655 | 3.0 |0.576118 | 0.7954 |
| 0.3019 | 4.0 |0.625391 | 0.7985 |
| 0.2466 | 5.0 |0.835689 | 0.7979 |
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-5
- num_train_epochs: 20
- weight_decay: 0.01
- batch_size: 16
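As a rough illustration, these hyperparameters map onto the standard `TrainingArguments`/`Trainer` setup as sketched below. This is an assumption-laden sketch, not the exact training script: the dataset preprocessing and any early-stopping configuration are not documented in this card, so `train_dataset` and `eval_dataset` are placeholders you would have to supply, and per-epoch evaluation is assumed to match the results table above.

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base = "cardiffnlp/twitter-xlm-roberta-base-sentiment"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base)

training_args = TrainingArguments(
    output_dir="result",
    learning_rate=2e-5,
    num_train_epochs=20,
    weight_decay=0.01,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    evaluation_strategy="epoch",  # assumption: evaluate once per epoch, as in the table above
    save_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,  # placeholder: tokenized training split (114,885 records)
    eval_dataset=eval_dataset,    # placeholder: tokenized evaluation split (12,765 records)
    tokenizer=tokenizer,
)
trainer.train()
```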
### Framework versions
- Transformers 4.35.0
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.14.1