
cardiffnlp/bert-base-multilingual-cased-sentiment-multilingual

This model is a fine-tuned version of bert-base-multilingual-cased on the cardiffnlp/tweet_sentiment_multilingual (all) dataset via tweetnlp. Fine-tuning was performed on the train split, and hyperparameters were tuned on the validation split.

The following metrics are achieved on the test split:

  • F1 (micro): 0.6169540229885058
  • F1 (macro): 0.6168385894019698
  • Accuracy: 0.6169540229885058
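
Micro-averaged F1 equals accuracy in single-label multi-class classification, which is why the two values above coincide. For reference, these metrics can be computed with scikit-learn; the sketch below uses made-up labels and predictions rather than the actual test outputs.

from sklearn.metrics import accuracy_score, f1_score

# Hypothetical gold labels and predictions for a three-class task (illustration only).
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 1, 2, 1, 1, 0]

print(f1_score(y_true, y_pred, average="micro"))  # micro F1
print(f1_score(y_true, y_pred, average="macro"))  # macro F1
print(accuracy_score(y_true, y_pred))             # accuracy (equals micro F1 for single-label tasks)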

Usage

Install tweetnlp via pip.

pip install tweetnlp

Load the model in Python.

import tweetnlp
# Load the fine-tuned sentiment classifier from the Hugging Face Hub.
model = tweetnlp.Classifier("cardiffnlp/bert-base-multilingual-cased-sentiment-multilingual", max_length=128)
# Run sentiment prediction on an example tweet (note the {@...@} and {{URL}} placeholders).
model.predict('Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}')
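
If you prefer plain transformers over tweetnlp, the same checkpoint can presumably be loaded with the standard text-classification pipeline; the following is a minimal sketch under that assumption, not part of the original card.

from transformers import pipeline

# Load the fine-tuned checkpoint and its tokenizer directly from the Hugging Face Hub.
classifier = pipeline(
    "text-classification",
    model="cardiffnlp/bert-base-multilingual-cased-sentiment-multilingual",
)

print(classifier('Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}'))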

Reference

@inproceedings{dimosthenis-etal-2022-twitter,
    title = "{T}witter {T}opic {C}lassification",
    author = "Antypas, Dimosthenis  and
    Ushio, Asahi  and
    Camacho-Collados, Jose  and
    Neves, Leonardo  and
    Silva, Vitor  and
    Barbieri, Francesco",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics"
}