---
tags:
- text
- stance
- classification
language:
- en
model-index:
- name: BEtMan-Tw
  results:
  - task:
      type: stance-classification
      name: Text Classification
    dataset:
      type: stance
      name: stance
    metrics:
    - type: f1
      value: 75.8
    - type: accuracy
      value: 76.2
---
# BEtMan-Tw
This model is a fine-tuned version of [j-hartmann/sentiment-roberta-large-english-3-classes](https://huggingface.co/j-hartmann/sentiment-roberta-large-english-3-classes) that predicts three stance categories: attack, neutral, and support.
```python
# Model usage
from transformers import pipeline

model_path = "eevvgg/BEtMan-Tw"
# Pass device=0 to run on the first GPU.
cls_task = pipeline(task="text-classification", model=model_path, tokenizer=model_path)

sequence = ["his rambling has no clear ideas behind it",
            "That has nothing to do with medical care",
            "Turns around and shows how qualified she is because of her political career.",
            "She has very little to gain by speaking too much"]

result = cls_task(sequence)
labels = [i["label"] for i in result]
labels  # ['attack', 'neutral', 'support', 'attack']
```
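Each element of `result` is a dict with a predicted `label` and a confidence `score`; to get scores for all three classes at once, pass `top_k=None` (or `return_all_scores=True` on older transformers versions) when calling the pipeline.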
## Intended uses & limitations
The model is intended for stance classification of short texts, up to 200 tokens (`maxlen`).
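For inputs that may exceed that limit, truncation can be enforced explicitly. Below is a minimal sketch using the lower-level `AutoTokenizer`/`AutoModelForSequenceClassification` API; the `max_length=200` value mirrors the stated limit, and truncating longer inputs is an assumption rather than documented behaviour.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_path = "eevvgg/BEtMan-Tw"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForSequenceClassification.from_pretrained(model_path)

texts = ["his rambling has no clear ideas behind it"]
# Truncate anything past the 200-token limit stated above.
inputs = tokenizer(texts, truncation=True, max_length=200,
                   padding=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predictions = [model.config.id2label[i] for i in logits.argmax(dim=-1).tolist()]
```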
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 4e-5, 'decay': 0.01}
- epochs: 3
- mini-batch size: 8
- loss: 0.719
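The optimizer entry above is in Keras's serialized format, so training presumably used the TensorFlow/Keras API. The following is a hedged sketch of a setup matching these hyperparameters; the reconstruction is an assumption, and only the base checkpoint, optimizer settings, epoch count, and batch size come from this card.

```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

# Base checkpoint with a 3-class head, per the card.
model = TFAutoModelForSequenceClassification.from_pretrained(
    "j-hartmann/sentiment-roberta-large-english-3-classes", num_labels=3)

# `decay` matches the serialized optimizer config; it is the legacy Keras
# learning-rate decay argument (use tf.keras.optimizers.Adam on TF < 2.11).
optimizer = tf.keras.optimizers.legacy.Adam(learning_rate=4e-5, decay=0.01)
model.compile(optimizer=optimizer,
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

# Hypothetical call; the actual training data is not part of this card.
# model.fit(train_dataset, epochs=3)  # mini-batch size of 8
```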
## Evaluation results
It achieves the following results on the evaluation set:
- macro f1-score: 0.758
- weighted f1-score: 0.762
- accuracy: 0.762
Per-class results:

| label | precision | recall | f1-score | support |
|-------|-----------|--------|----------|---------|
| 0     | 0.762     | 0.770  | 0.766    | 200     |
| 1     | 0.759     | 0.775  | 0.767    | 191     |
| 2     | 0.769     | 0.714  | 0.741    | 84      |