---
license: apache-2.0
---
## Model description

Cased, fine-tuned BERT model for English. The training data consists of manually annotated Hungarian parliamentary speeches scraped from parlament.hu and translated to English with the Google Translate API.
## Training

A fine-tuned version of the original bert-base-cased model, trained on the HunEmPoli corpus translated with the Google Translate API.
The model can be used like any other (cased) BERT model. It has been tested on sentence-level emotion recognition in (parliamentary) pre-agenda speeches, using the following label mapping (a usage sketch follows the list):
- `Label_0`: Neutral
- `Label_1`: Fear
- `Label_2`: Sadness
- `Label_3`: Anger
- `Label_4`: Disgust
- `Label_5`: Success
- `Label_6`: Joy
- `Label_7`: Trust
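Below is a minimal usage sketch with the Hugging Face `transformers` library. The repository id `poltextlab/EngEmBERT8` and the `LABEL_*` output names are assumptions inferred from this page and the list above; adjust them to match the actual model configuration.

```python
# Minimal sketch: sentence-level emotion classification with this model.
# Assumes the model is published as "poltextlab/EngEmBERT8" and that the
# classification head emits generic "LABEL_N" names; adapt if the config
# already maps ids to emotion names.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="poltextlab/EngEmBERT8",
)

# Map generic label names to the emotions listed above
id2emotion = {
    "LABEL_0": "Neutral",
    "LABEL_1": "Fear",
    "LABEL_2": "Sadness",
    "LABEL_3": "Anger",
    "LABEL_4": "Disgust",
    "LABEL_5": "Success",
    "LABEL_6": "Joy",
    "LABEL_7": "Trust",
}

sentence = "We are proud of the results this committee has achieved."
prediction = classifier(sentence)[0]
print(id2emotion.get(prediction["label"], prediction["label"]), prediction["score"])
```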
## Eval results

| Label | Precision | Recall | F1-score | Support |
|---|---|---|---|---|
| 0 (Neutral) | 1.00 | 0.50 | 0.67 | 46 |
| 1 (Fear) | 0.00 | 0.00 | 0.00 | 4 |
| 2 (Sadness) | 0.70 | 0.85 | 0.76 | 188 |
| 3 (Anger) | 0.50 | 0.09 | 0.15 | 11 |
| 4 (Disgust) | 0.85 | 0.75 | 0.80 | 375 |
| 5 (Success) | 0.78 | 0.93 | 0.84 | 335 |
| 6 (Joy) | 0.67 | 0.28 | 0.39 | 36 |
| 7 (Trust) | 0.00 | 0.00 | 0.00 | 4 |
| accuracy | | | 0.79 | 999 |
| macro avg | 0.56 | 0.42 | 0.45 | 999 |
| weighted avg | 0.79 | 0.79 | 0.77 | 999 |