Contributed by shrugging-grace (jme-p)
How to use this model directly from the 🤗/transformers library:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("shrugging-grace/tweetclassifier")
model = AutoModelForSequenceClassification.from_pretrained("shrugging-grace/tweetclassifier")
```
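Building on the loading snippet above, a minimal inference sketch might look like the following. This is an illustration, not part of the model card: `classify_tweet` is a hypothetical helper name, and the returned index corresponds to the LABEL_0/LABEL_1 scheme described under "How to use".

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "shrugging-grace/tweetclassifier"

def classify_tweet(text: str) -> int:
    """Return the predicted class index for one tweet (0 = LABEL_0, 1 = LABEL_1)."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
    # Tokenize a single tweet and run a forward pass without tracking gradients.
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return int(logits.argmax(dim=-1).item())
```

In practice you would load the tokenizer and model once and reuse them across tweets rather than reloading inside the function.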


Model description

This model classifies tweets as either relating to the Covid-19 pandemic or not.

Intended uses & limitations

It is intended to be used on tweets commenting on UK politics, in particular those tagged with the #PMQs hashtag, which refers to the weekly Prime Minister's Questions session.

How to use

LABEL_0 means that the tweet relates to Covid-19

LABEL_1 means that the tweet does not relate to Covid-19
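As a sketch, the two labels above can be turned into human-readable predictions from the model's raw logits. The `ID2LABEL` mapping and the helper below are illustrative assumptions (a plain softmax-and-argmax), not code from the model card:

```python
import math

# Assumed mapping from class index to meaning, per the labels above.
ID2LABEL = {0: "relates to Covid-19", 1: "does not relate to Covid-19"}

def interpret(logits):
    """Softmax over the raw logits, then return (label, confidence)."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    idx = max(range(len(probs)), key=probs.__getitem__)
    return ID2LABEL[idx], probs[idx]
```

For example, `interpret([2.0, -1.0])` favours class 0, i.e. the tweet is classified as relating to Covid-19.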

Training data

The model was trained on 1,000 tweets containing the "#PMQs" hashtag, which were manually labeled by the author. The tweets were collected between May and July 2020.

BibTeX entry and citation info

The model is based on a pretrained version of BERT.

```bibtex
@article{devlin2018bert,
  title={Bert: Pre-training of deep bidirectional transformers for language understanding},
  author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
  journal={arXiv preprint arXiv:1810.04805},
  year={2018}
}
```