
Model Card for COVID-19-CT-tweets-classification

Model Description

This is a DeBERTa-v3-base-tasksource-nli model with an adapter trained on webimmunization/COVID-19-conspiracy-theories-tweets, a dataset of 6,590 tweet–conspiracy-theory pairs labeled support, deny, or neutral. The model was fine-tuned for text classification to predict whether a tweet supports a given conspiracy theory. It was trained on tweets related to six common COVID-19 conspiracy theories:

  1. CT1: Vaccines are unsafe. The coronavirus vaccine is either unsafe or part of a larger plot to control people or reduce the population.

  2. CT2: Governments and politicians spread misinformation. Politicians or government agencies are intentionally spreading false information, or they have some other motive for the way they are responding to the coronavirus.

  3. CT3: The Chinese intentionally spread the virus. The Chinese government intentionally created or spread the coronavirus to harm other countries.

  4. CT4: Deliberate strategy to create economic instability or benefit large corporations. The coronavirus or the government's response to it is a deliberate strategy to create economic instability or to benefit large corporations over small businesses.

  5. CT5: Public was intentionally misled about the true nature of the virus and prevention. The public is being intentionally misled about the true nature of the Coronavirus, its risks, or the efficacy of certain treatments or prevention methods.

  6. CT6: Human-made and bioweapon. The coronavirus was created intentionally by humans, possibly as a bioweapon.

This model is suitable for English only.
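For convenience, the six theory statements above (labeled CT1–CT6) can be kept as a small mapping to pair with tweets at inference time; the dictionary below is an illustrative convenience, not part of the published model:

```python
# The six COVID-19 conspiracy-theory statements (CT1-CT6) from the model card,
# keyed by label so each can be paired with a tweet for classification.
CONSPIRACY_THEORIES = {
    "CT1": "The coronavirus vaccine is either unsafe or part of a larger plot to control people or reduce the population.",
    "CT2": "Politicians or government agencies are intentionally spreading false information, or they have some other motive for the way they are responding to the coronavirus.",
    "CT3": "The Chinese government intentionally created or spread the coronavirus to harm other countries.",
    "CT4": "The coronavirus or the government's response to it is a deliberate strategy to create economic instability or to benefit large corporations over small businesses.",
    "CT5": "The public is being intentionally misled about the true nature of the coronavirus, its risks, or the efficacy of certain treatments or prevention methods.",
    "CT6": "The coronavirus was created intentionally by humans, possibly as a bioweapon.",
}
```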

Uses

The model classifies a pair of short texts: a tweet and a conspiracy theory. It returns a floating-point score representing the likelihood that the tweet supports the given conspiracy theory.
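Under the hood, the classifier produces logits over the three classes (support, deny, neutral); the support likelihood is the softmax probability of the support class. A minimal sketch, assuming index 0 is the support class (consistent with the demo code below):

```python
import numpy as np

def support_likelihood(logits):
    """Convert 3-class logits (support, deny, neutral) into the support probability."""
    logits = np.asarray(logits, dtype=float)
    exp = np.exp(logits - logits.max())  # subtract max for numerical stability
    probs = exp / exp.sum()
    return float(probs[0])  # index 0 assumed to be the 'support' class
```

With equal logits the score is 1/3; a large logit on the support class pushes the score toward 1.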

Out-of-Scope Use

Spreading or generating tweets that support conspiracy theories:

This model is designed to classify and understand tweets related to COVID-19 conspiracy theories, in particular to determine whether a tweet supports or denies a specific conspiracy theory. It is not intended for, and should not be used for, generating or propagating tweets that endorse any conspiracy theory. Such use is unethical and contrary to the intended use case.

Amplifying echo chambers of social subnetworks susceptible to conspiracy theories:

While the model can help identify tweets that are related to conspiracy theories, it is important to note that it should not be used to target or amplify echo chambers or social subnetworks that are susceptible to believing in conspiracy theories. Ethical use of this model involves promoting responsible and unbiased information dissemination and discourages actions that may contribute to the spread of misinformation or polarization. Users should be cautious about using this model in ways that may further divide communities or promote harmful narratives.

Bias, Risks, and Limitations

Results may be distorted for conspiracy theories out of the training dataset:

This model has been specifically fine-tuned to classify tweets related to a predefined set of COVID-19 conspiracy theories. As a result, its performance may not be as reliable when applied to conspiracy theories or topics that were not included in the training data. Users should exercise caution and consider the potential for distorted results when applying this model to subjects beyond its training scope. The model may not perform well in categorizing or understanding content that falls outside the designated conspiracy theories.

Unintentional stifling of legitimate public discourse:

The model's primary purpose is to identify tweets related to COVID-19 conspiracy theories, and it is not intended to stifle legitimate public discourse or eliminate discussions that merely resemble conspiracy theories. There is a risk that using this model inappropriately may lead to the suppression of valid conversations and the removal of content that is not explicitly conspiratorial but might be flagged due to similarities in language or topic. Users should be aware of this limitation and use the model judiciously, ensuring that it does not impede the free exchange of ideas and discussions.

Bias in decision making:

Like many machine learning models, this model may exhibit bias in its decision making. Factors such as text style, which can reflect the socio-economic status of individuals, may inadvertently affect the model's classifications. The model's outputs may not always be entirely free from bias, so its predictions should be treated as supplementary information rather than definitive judgments.

How to Get Started with the Model (Demo)

Use the code below to get started with the model.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("webimmunization/COVID-19-CT-tweets-classification")
model = AutoModelForSequenceClassification.from_pretrained("webimmunization/COVID-19-CT-tweets-classification")

tweet = "bill gates has been talking about population control openly for years - now we have a coronavirus vaccine! coincidence? i think not!"
conspiracy_theory = "the coronavirus vaccine is either unsafe or part of a larger plot to control people or reduce the population."

# Encode the (tweet, conspiracy theory) pair and score it without tracking gradients.
encoded_input = tokenizer(tweet, conspiracy_theory, return_tensors="pt")
with torch.no_grad():
    logits = model(**encoded_input).logits
support_likelihood = logits.softmax(dim=1)[0, 0].item()  # 0.93198

Downloading and loading the model should take no more than ten minutes, depending on your Internet connection.

Training Details

Training Data

The model was fine-tuned on the webimmunization/COVID-19-conspiracy-theories-tweets and MNLI datasets.

Training Procedure

The adapter was trained for 5 epochs with a batch size of 16.

System requirements

We used Python 3.10, PyTorch 2.0.1, and transformers 4.27.0.

Preprocessing

The training data was cleaned before training: all URLs, Twitter user mentions, and non-ASCII characters were removed.
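The cleaning step described above can be sketched as follows; the exact patterns used in training are not published, so these regexes are assumptions:

```python
import re

def clean_tweet(text: str) -> str:
    """Approximate the described preprocessing: strip URLs, @mentions, non-ASCII."""
    text = re.sub(r"https?://\S+", "", text)               # remove URLs
    text = re.sub(r"@\w+", "", text)                       # remove Twitter user mentions
    text = text.encode("ascii", errors="ignore").decode()  # drop non-ASCII characters
    return re.sub(r"\s+", " ", text).strip()               # collapse leftover whitespace
```

Note that dropping non-ASCII characters also removes accented letters and emoji, so cleaned tweets may contain truncated words.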

Evaluation

The model was evaluated on a sample of tweets collected during the COVID-19 pandemic. Each tweet was rated against each of the six theories by five annotators, who used sliding scales to rate each tweet's endorsement likelihood for the respective conspiracy theory from 0% to 100%. The consensus among raters was substantial for every conspiracy theory, and comparisons with human evaluations revealed substantial correlations. The model significantly surpasses the performance of the pre-trained model without the fine-tuned adapter (see table below).

| Conspiracy theory | Correlation between human raters | Correlation between human ratings and model without adapter | Correlation between human ratings and model with fine-tuned adapter |
|---|---|---|---|
| Vaccines are unsafe. | 0.658 | 0.371 | 0.590 |
| Governments and politicians spread misinformation. | 0.580 | 0.306 | 0.648 |
| The Chinese intentionally spread the virus. | 0.623 | 0.530 | 0.648 |
| Deliberate strategy to create economic instability or benefit large corporations. | 0.562 | 0.336 | 0.508 |
| Public was intentionally misled about the true nature of the virus and prevention. | 0.668 | 0.157 | 0.717 |
| Human made and bioweapon. | 0.784 | 0.293 | 0.735 |
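The agreement figures in the table can be reproduced with a correlation coefficient between two rating vectors; the card does not name the coefficient, so Pearson's r is assumed here:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two equal-length rating vectors."""
    return float(np.corrcoef(x, y)[0, 1])
```

For example, `pearson_r(human_ratings, model_scores)` over the evaluation sample would yield one cell of the table.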

Environmental Impact

Carbon emissions are estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

Model Card Authors

@ikrysinska

Model Card Contact

izabela.krysinska@doctorate.put.poznan.pl

tomi.wojtowicz@doctorate.put.poznan.pl

mikolaj.morzy@put.poznan.pl

The research leading to these results has received funding from the EEA Financial Mechanism 2014–2021. Project registration number: 2019/35/J/HS6/03498

