
cardiffnlp/roberta-large-tweet-topic-single-all

This model is a fine-tuned version of roberta-large on the tweet_topic_single dataset. It is fine-tuned on the train_all split and validated on the test_2021 split of tweet_topic_single. The fine-tuning script can be found here. The model achieves the following results on the test_2021 set (a reproduction sketch follows the list):

  • F1 (micro): 0.896042528056704
  • F1 (macro): 0.8000614127334341
  • Accuracy: 0.896042528056704
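
A minimal sketch of how these scores could be recomputed with the released pipeline. The dataset id ("cardiffnlp/tweet_topic_single") and its "text"/"label" field names are assumptions, not taken from this card, and the pipeline's string labels are mapped back to integer ids via the model config:

from datasets import load_dataset
from sklearn.metrics import accuracy_score, f1_score
from transformers import pipeline

pipe = pipeline("text-classification", "cardiffnlp/roberta-large-tweet-topic-single-all")
data = load_dataset("cardiffnlp/tweet_topic_single", split="test_2021")  # assumed dataset id

# Map the label names returned by the pipeline back to integer ids
label2id = pipe.model.config.label2id
preds = [label2id[out["label"]] for out in pipe(data["text"])]

print("F1 (micro):", f1_score(data["label"], preds, average="micro"))
print("F1 (macro):", f1_score(data["label"], preds, average="macro"))
print("Accuracy:", accuracy_score(data["label"], preds))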

Usage

from transformers import pipeline

# Load the fine-tuned single-label topic classifier
pipe = pipeline("text-classification", model="cardiffnlp/roberta-large-tweet-topic-single-all")
topic = pipe("Love to take night time bike rides at the jersey shore. Seaside Heights boardwalk. Beautiful weather. Wishing everyone a safe Labor Day weekend in the US.")
print(topic)
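
The pipeline returns a list of dictionaries with label and score fields. For lower-level control, the checkpoint can also be loaded directly; the sketch below is one way to do this, with the exact label names coming from model.config.id2label:

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "cardiffnlp/roberta-large-tweet-topic-single-all"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("I just watched the championship game last night!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Single-label classification: take the highest-scoring class
pred_id = logits.argmax(dim=-1).item()
print(model.config.id2label[pred_id])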

Reference


@inproceedings{dimosthenis-etal-2022-twitter,
    title = "{T}witter {T}opic {C}lassification",
    author = "Antypas, Dimosthenis  and
    Ushio, Asahi  and
    Camacho-Collados, Jose  and
    Neves, Leonardo  and
    Silva, Vitor  and
    Barbieri, Francesco",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics"
}