|
--- |
|
tags: |
|
- text-classification |
|
- emotion-analysis |
|
language: |
|
- it |
|
widget: |
|
- text: I love AutoTrain 🤗 |
|
datasets: |
|
- tradicio/autotrain-data-it-emotion-analysis |
|
- dair-ai/emotion |
|
co2_eq_emissions: |
|
emissions: 0.4489187526120041 |
|
license: cc-by-sa-4.0 |
|
metrics: |
|
- accuracy |
|
- f1 |
|
- recall |
|
pipeline_tag: text-classification |
|
--- |
|
# IT-EMOTION-ANALYZER |
|
|
|
This is a model for emotion analysis of Italian sentences, trained on a dataset translated to Italian with [Google Translator](https://pypi.org/project/deep-translator/). It classifies sentences and paragraphs into one of six emotions:
|
|
|
- 0: sadness |
|
- 1: joy |
|
- 2: love |
|
- 3: anger |
|
- 4: fear |
|
- 5: surprise |
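
For convenience, the mapping above can be kept as a plain Python dictionary. A minimal sketch (it simply restates the list above; the checkpoint itself returns the numeric ids as string labels, as the example output further below shows):

```python
# Label ids emitted by the model, mapped to emotion names
# (restates the list above; not read from the model config)
ID2EMOTION = {
    0: "sadness",
    1: "joy",
    2: "love",
    3: "anger",
    4: "fear",
    5: "surprise",
}
```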
|
|
|
|
|
|
## Model in action |
|
|
|
Using this model is easy once you have [transformers](https://github.com/huggingface/transformers) installed:
|
|
|
```bash
|
pip install -U transformers |
|
``` |
|
|
|
Then you can use the model like this: |
|
|
|
```python |
|
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline

sentences = ["Questa è una frase triste", "Questa è una frase felice", "Questa è una frase di stupore"]

# Load the fine-tuned tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("aiknowyou/it-emotion-analyzer")
model = AutoModelForSequenceClassification.from_pretrained("aiknowyou/it-emotion-analyzer")

# "sentiment-analysis" is an alias for the text-classification pipeline
emotion_analysis = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
emotion_analysis(sentences)
|
``` |
|
This produces the following result:
|
```python |
|
[{'label': '0', 'score': 0.9481984972953796}, |
|
{'label': '1', 'score': 0.9299975037574768}, |
|
{'label': '5', 'score': 0.9543816447257996}] |
|
``` |
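
The pipeline only returns the top label per sentence. If you want the full probability distribution over all six emotions, you can run the model directly and apply a softmax over the logits. A minimal sketch using the standard transformers and PyTorch APIs (`id2emotion` simply restates the label list above):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("aiknowyou/it-emotion-analyzer")
model = AutoModelForSequenceClassification.from_pretrained("aiknowyou/it-emotion-analyzer")

id2emotion = {0: "sadness", 1: "joy", 2: "love", 3: "anger", 4: "fear", 5: "surprise"}

inputs = tokenizer("Questa è una frase triste", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, 6)

# Softmax turns the six logits into a probability distribution
probs = torch.softmax(logits, dim=-1).squeeze(0)
for idx, p in enumerate(probs.tolist()):
    print(f"{id2emotion[idx]}: {p:.3f}")
```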
|
|
|
# Model Trained Using AutoTrain |
|
|
|
- Problem type: Multi-class Classification |
|
- Model ID: 43095109829 |
|
- CO2 Emissions (in grams): 0.4489 |
|
|
|
## Validation Metrics |
|
|
|
- Loss: 0.566 |
|
- Accuracy: 0.828 |
|
- Macro F1: 0.828 |
|
- Micro F1: 0.828 |
|
- Weighted F1: 0.828 |
|
- Macro Precision: 0.828 |
|
- Micro Precision: 0.828 |
|
- Weighted Precision: 0.828 |
|
- Macro Recall: 0.828 |
|
- Micro Recall: 0.828 |
|
- Weighted Recall: 0.828 |
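
The macro, micro, and weighted variants above differ only in how per-class scores are aggregated: macro is the unweighted mean over classes, micro pools all predictions (and therefore equals accuracy for single-label classification, which is why Micro F1 matches Accuracy here), and weighted averages per-class scores by class support. A toy scikit-learn sketch with made-up labels:

```python
from sklearn.metrics import f1_score

y_true = [0, 1, 5, 3, 1, 0]  # hypothetical ground-truth label ids
y_pred = [0, 1, 5, 3, 4, 1]  # hypothetical predictions

print(f1_score(y_true, y_pred, average="macro"))     # unweighted mean of per-class F1
print(f1_score(y_true, y_pred, average="micro"))     # pooled counts; equals accuracy here
print(f1_score(y_true, y_pred, average="weighted"))  # per-class F1 weighted by support
```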