---
language: en
datasets:
- tweet_eval
widget:
- text: Covid cases are increasing fast!
model-index:
- name: cardiffnlp/twitter-roberta-base-sentiment-latest
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: tweet_eval
      type: tweet_eval
      config: sentiment
      split: validation
    metrics:
    - type: accuracy
      value: 0.7715
      name: Accuracy
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzEyZDhlYTA4ZTYwYTg0ZWIwZDlmYzIyYWQ3YjY4NGQ1ZjVjYzJjODk2Mjc4YWRiNjU2NzhmMmJmNDUzNTIxMiIsInZlcnNpb24iOjF9.75SSI8U0ZlfehMx7Zh6LotmSB_Zp9taCnKCi23SVVHghX--eM0jy6OWtqf4IWxkEwb6yoTNxcyoOw_Av6UNTCg
    - type: f1
      value: 0.7606415252231301
      name: F1 Macro
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMWIxYzJhNjgxNTM5ZTRjNDJmNTU2NGFlOWE2ZjViODk3NWJkMzA0YTMyYmUzNjdhM2RjNzhkYTViMDRjNDcyZiIsInZlcnNpb24iOjF9.wIjAJNlzzk-M8tsigytlLRYy0uQDGo3Qy1F7afmk5b1XrGnAy1E4Mw-JHDtbZ2uYZiPx0grbOOxL-yT_4DCSCg
    - type: f1
      value: 0.7715000000000001
      name: F1 Micro
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTVhZDA4ZWM3YzgzYjVlOTk2ODkxNzQzNjBjMjBlMmZiM2QwN2QyYTVjMDUxNWU3ZTQ2MGZhNGIxYTY3NGI0ZSIsInZlcnNpb24iOjF9.-VYy5OLXwpaoiD4HR7wBjmV5izt2yTXvRbp93cs7jPvPEij7rkidjd-HpVaHMvIOLoTjxnKozFf0pmNQF06WBg
    - type: f1
      value: 0.7732314418938615
      name: F1 Weighted
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNmRmNjUyMjU3ZTljYmViMWRiNDMzODE4YTU3ZjU2YzQ3MDQyZGRhYjBmYzU0Yjk0Yjk3MzVmYjNjM2U5YzFjZCIsInZlcnNpb24iOjF9.BguI5gGX0H4P8LNTAayaBxv7rUYqvepCyKo9rOIsEXsTVN9N-J9IfjUGjptpKJBpOXEi_MGFLV6H7IJUyhdbDA
    - type: precision
      value: 0.7508336175429541
      name: Precision Macro
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzM2NDNlZGE2ZDNmYmMyZTU0OWYzOGYwYzM1NWE1YzIzNjBkZjA0NzkxYjY1ZWI4OTM5NWVkZTZkNjgzZTQ1MSIsInZlcnNpb24iOjF9.3YBiMV0HMcEtr4lFDe4BFhTkyfL0EL6Xk3V9ICNOtOMdNgDChRMnphsYh6WaUILJNA0qlmHzh7h_RpciLwMDBw
    - type: precision
      value: 0.7715
      name: Precision Micro
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDk5NDk0OGFlNTI1NDhkMTY3NWZmYmYwODBiY2M2YmI0YjkxOWJmYWZiZTViNWQ3ZDk2Mjk3OTNiMDMxMmEwMiIsInZlcnNpb24iOjF9._Zk6Kwarj5Jv_rLX9fp-Np6qwUZwyQ7dD-ylnCJtXEm-ZkarYemTLZqjq_1nWATD3vQcYoHlXD0RFOzYQxSaCw
    - type: precision
      value: 0.7782372190165424
      name: Precision Weighted
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjI5YzIwODcwYjUwMTY1MDVjNThlMGUxMWUzMTQ5MGE5Nzk5ZmZlNTM1ZTQzYjJhNTFkYzkyYzQzOTUwZGRkNiIsInZlcnNpb24iOjF9.OoGtZoogQHq49Vh_MZMO4yASGembVB1xDE216tT_JQGV3zh0_IRdJ9eztxXOn3Hx8qxrQwSEwzKZKp3gj4l3Dw
    - type: recall
      value: 0.7762803886221606
      name: Recall Macro
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGI5YTI4OWI5ZGVkOWUyYzc3NzI1M2I1MWUzN2JmOGQ3ODlmN2MwMDI0MmI0ZjkxZWZjNDZjOTNkODg4ZmFlNCIsInZlcnNpb24iOjF9.fkdes7mIwaxI_8AVJuahiZoRq0MZzzMsjDddn8trtxi37fHCMEX86hf__Kmbs5AxrgtkJA3fd4H5iKcEaq1MBA
    - type: recall
      value: 0.7715
      name: Recall Micro
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDkwZTQzNzhiNzgyODU1YTVjYzFiMTg4ZGZiYjg5ZTBlYTNkNWM5MWMyZTFkMjQyZDA0OGU3ODUwNzQ0MzNiNiIsInZlcnNpb24iOjF9.JtK5c3OOO9ryDKsddzAykHcj8nF-LvA96oF3MPTqB8FtyWuWQEBJAMhID-xhCgGTfEtD-n_LggDBeww1AZQOBg
    - type: recall
      value: 0.7715
      name: Recall Weighted
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzc5OTY0NWYxNDM5ZDEyMDM2ZDdlYjQ0YWIwMzU2YTQ0YTBjMmE3NGEzOGIzNmY5ODEwNzQ3M2YyOWY3NDVkMCIsInZlcnNpb24iOjF9.gpw2NXq5Z6zj4JXXBDkETnY6dQxKDBLyQP3nGaKeRhTA_sQ7zud0xDiKKSJa8dckE4tSS6fjW-9xoAyvlxFxAw
    - type: loss
      value: 0.525364875793457
      name: loss
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYmVhNTE5MThiNTMxMzZlOThiNWFhOGYzYjBkZjUzZjUwYWM5NGIxZjc1ZjIzMGRjZmIzZmVhNDAxZjVjNGUyZSIsInZlcnNpb24iOjF9.W3vo0Hdh0tL8kfWDGUjtYj6AUJCt8xYaW6WEiICUPhLVeRaUab_rwSGLiEQ5Sy1ccnOC38gEzZvrPlxs0VDlDg
---

# Twitter-roBERTa-base for Sentiment Analysis - UPDATED (2022)

This is a RoBERTa-base model trained on ~124M tweets from January 2018 to December 2021, and fine-tuned for sentiment analysis with the TweetEval benchmark.
The original Twitter-based RoBERTa model can be found [here](https://huggingface.co/cardiffnlp/twitter-roberta-base-2021-124m) and the original reference paper is [TweetEval](https://github.com/cardiffnlp/tweeteval). This model is suitable for English.

- Reference Paper: [TimeLMs paper](https://arxiv.org/abs/2202.03829).
- Git Repo: [TimeLMs official repository](https://github.com/cardiffnlp/timelms).

Labels:
- 0 -> Negative
- 1 -> Neutral
- 2 -> Positive

This sentiment analysis model has been integrated into [TweetNLP](https://github.com/cardiffnlp/tweetnlp). You can access the demo [here](https://tweetnlp.org).

## Example Pipeline

```python
from transformers import pipeline

model_path = "cardiffnlp/twitter-roberta-base-sentiment-latest"
sentiment_task = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path)
sentiment_task("Covid cases are increasing fast!")
```
```
[{'label': 'Negative', 'score': 0.7236}]
```

## Full classification example

```python
from transformers import AutoModelForSequenceClassification
from transformers import TFAutoModelForSequenceClassification
from transformers import AutoTokenizer, AutoConfig
import numpy as np
from scipy.special import softmax

# Preprocess text (username and link placeholders)
def preprocess(text):
    new_text = []
    for t in text.split(" "):
        t = '@user' if t.startswith('@') and len(t) > 1 else t
        t = 'http' if t.startswith('http') else t
        new_text.append(t)
    return " ".join(new_text)

MODEL = "cardiffnlp/twitter-roberta-base-sentiment-latest"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
config = AutoConfig.from_pretrained(MODEL)

# PT
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
#model.save_pretrained(MODEL)

text = "Covid cases are increasing fast!"
text = preprocess(text)
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
scores = output[0][0].detach().numpy()
scores = softmax(scores)  # convert logits to probabilities

# # TF
# model = TFAutoModelForSequenceClassification.from_pretrained(MODEL)
# model.save_pretrained(MODEL)
# text = "Covid cases are increasing fast!"
# encoded_input = tokenizer(text, return_tensors='tf')
# output = model(encoded_input)
# scores = output[0][0].numpy()
# scores = softmax(scores)

# Print labels and scores, ranked from most to least likely
ranking = np.argsort(scores)
ranking = ranking[::-1]
for i in range(scores.shape[0]):
    l = config.id2label[ranking[i]]
    s = scores[ranking[i]]
    print(f"{i+1}) {l} {np.round(float(s), 4)}")
```

Output:

```
1) Negative 0.7236
2) Neutral 0.2287
3) Positive 0.0477
```
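
## Pipeline with all scores

The per-class probabilities can also be read directly from the pipeline instead of applying the softmax manually. The snippet below is a minimal sketch: it assumes a `transformers` version whose text-classification pipeline accepts `top_k=None` at call time to return every class score (older versions expose this as `return_all_scores=True`), and the second tweet is a made-up example for illustration.

```python
from transformers import pipeline

MODEL = "cardiffnlp/twitter-roberta-base-sentiment-latest"
sentiment_task = pipeline("sentiment-analysis", model=MODEL, tokenizer=MODEL)

# Same username/link placeholders as in the full classification example above
def preprocess(text):
    new_text = []
    for t in text.split(" "):
        t = '@user' if t.startswith('@') and len(t) > 1 else t
        t = 'http' if t.startswith('http') else t
        new_text.append(t)
    return " ".join(new_text)

tweets = [
    "Covid cases are increasing fast!",
    "@user thanks for sharing, this made my day!",  # hypothetical example tweet
]

# top_k=None asks the pipeline for all three class scores rather than only the top label
# (assumption: recent transformers; older releases use return_all_scores=True instead)
results = sentiment_task([preprocess(t) for t in tweets], top_k=None)
for tweet, scores in zip(tweets, results):
    print(tweet, scores)
```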