---
license: mit
language:
- en
library_name: transformers
widget:
- text: "--"
---
|
This is a BERTweet-base model that has been further pre-trained for 100k steps on about 6.3M Vent posts, with preferential masking of emotion words.

This model is meant to be fine-tuned on labeled data or used as a feature extractor for downstream tasks.
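As a sketch of the feature-extraction use case, the snippet below loads the model with the `transformers` library and mean-pools the last hidden states into one embedding per post. The model id `"bertweet-vent-placeholder"` is a placeholder, not the actual hub id; substitute the real repository name when using it.

```python
# Hedged sketch: using this model as a feature extractor.
# The model id below is a placeholder -- replace it with the actual hub id.
import torch
from transformers import AutoTokenizer, AutoModel

def extract_features(texts, model_id="bertweet-vent-placeholder"):
    """Return one mean-pooled embedding per input text."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModel.from_pretrained(model_id)
    model.eval()
    inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (batch, seq_len, dim)
    # Mean-pool over non-padding tokens only.
    mask = inputs["attention_mask"].unsqueeze(-1).float()
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

if __name__ == "__main__":
    feats = extract_features(["I am so happy today!", "This is awful."])
    print(feats.shape)  # one embedding vector per input text
```

The pooled embeddings can then be fed to any downstream classifier (e.g. logistic regression) or the model can be fine-tuned end to end with a classification head instead.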
|
|
|
## Citation |
|
Please cite the following paper if you find the model useful for your work: |
|
```bibtex
@article{aroyehun2023leia,
  title={LEIA: Linguistic Embeddings for the Identification of Affect},
  author={Aroyehun, Segun Taofeek and Malik, Lukas and Metzler, Hannah and Haimerl, Nikolas and Di Natale, Anna and Garcia, David},
  journal={EPJ Data Science},
  volume={12},
  year={2023},
  publisher={Springer}
}
```