cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-single-2020
This model is a fine-tuned version of cardiffnlp/twitter-roberta-base-dec2021 on the tweet_topic_single dataset. It is fine-tuned on the train_2020 split and validated on the test_2021 split.
The fine-tuning script can be found here. The model achieves the following results on the test_2021 set:
- F1 (micro): 0.8777318369757826
- F1 (macro): 0.7461188384572722
- Accuracy: 0.8777318369757826
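Note that the micro F1 equals the accuracy here: for single-label classification, every example contributes exactly one prediction and one gold label, so pooled precision, recall, and accuracy coincide, while macro F1 averages per-class scores and is pulled down by rare topics. A minimal sketch (toy labels, not the tweet_topic_single data) illustrating the two averages:

```python
from collections import defaultdict

def f1_scores(gold, pred):
    """Compute micro- and macro-averaged F1 for single-label predictions."""
    labels = set(gold) | set(pred)
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for g, p in zip(gold, pred):
        if g == p:
            tp[p] += 1
        else:
            fp[p] += 1  # predicted p, but gold was g
            fn[g] += 1  # missed the gold label g
    # Micro: pool counts over all labels before computing F1.
    TP, FP, FN = sum(tp.values()), sum(fp.values()), sum(fn.values())
    micro = 2 * TP / (2 * TP + FP + FN) if TP else 0.0
    # Macro: compute F1 per label, then average (rare labels weigh equally).
    per_label = []
    for l in labels:
        denom = 2 * tp[l] + fp[l] + fn[l]
        per_label.append(2 * tp[l] / denom if denom else 0.0)
    macro = sum(per_label) / len(per_label)
    return micro, macro

gold = ["sports", "music", "sports", "news"]
pred = ["sports", "sports", "sports", "news"]
micro, macro = f1_scores(gold, pred)
print(micro, macro)  # micro (0.75) equals accuracy (3/4 correct); macro is 0.6
```

Because the model predicts exactly one of 19 topics per tweet, the micro F1 reported above is the same number as the accuracy, and the lower macro F1 reflects weaker performance on infrequent topics.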
Usage
```python
from transformers import pipeline

pipe = pipeline("text-classification", "cardiffnlp/twitter-roberta-base-dec2021-tweet-topic-single-2020")
topic = pipe("Love to take night time bike rides at the jersey shore. Seaside Heights boardwalk. Beautiful weather. Wishing everyone a safe Labor Day weekend in the US.")
print(topic)
```
Reference
@inproceedings{dimosthenis-etal-2022-twitter,
title = "{T}witter {T}opic {C}lassification",
author = "Antypas, Dimosthenis and
Ushio, Asahi and
Camacho-Collados, Jose and
Neves, Leonardo and
Silva, Vitor and
Barbieri, Francesco",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics"
}