RuleFollower – Parsed Datasets

This repository contains parsed datasets used in the RuleFollower project.

Datasets

  • GWSD
  • Misinfo
  • Misinfo Cancer
  • Bureaucracies
  • Tweets23
  • Tweets Congress
  • Tweets News
  • Tweets
  • Rumoureval2019
  • Hatecot
  • Implicit Hate
  • Yelp Reviews

Each folder contains a data.csv file with the parsed annotations. Each dataset is capped at 5,000 samples; the full dataset is used if it has fewer than 5,000.

Each data file includes standardized columns (see the loading sketch after this list):

  • Text: The text input for annotation
  • id: Unique identifier for the row
  • source: Name of the original dataset or paper
  • ground_truth: (Optional) Gold label, if available in the original dataset
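
A minimal loading sketch, assuming the layout above; the folder name GWSD is taken from the dataset list, and any other folder works the same way:

```python
import pandas as pd

# Load one parsed dataset; swap the folder name for any dataset above.
df = pd.read_csv("GWSD/data.csv")

# All parsed files share the standardized schema described above.
missing = {"Text", "id", "source"} - set(df.columns)
assert not missing, f"missing columns: {missing}"

# ground_truth is optional, so probe for it rather than assume it.
if "ground_truth" in df.columns:
    print(df["ground_truth"].value_counts())

# Files are capped at 5,000 rows; smaller datasets are kept whole.
print(f"{len(df)} rows")
```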

RumourEval 2019

  • Text content: Replies to source tweets about rumored events
  • Source: Gorrell et al. (2019)
  • Dataset: Hugging Face
  • Annotation goal: Classify reply stance as support, deny, query, or comment
  • Ground truth: Provided (stance label)
  • Note: Only reply texts are retained for classification.

HateCoT

  • Text content: Social media posts from 8 hate/offensive speech datasets
  • Source: Nghiem and Daumé III (2024)
  • Dataset: GitHub
  • Annotation goal: Classify post as benign, offensive, or hateful
  • Ground truth: Provided
  • Note: The original HateCoT dataset combines samples from 8 hate speech datasets with diverse annotation schemes. We standardized its many fine-grained labels into 3 unified classes (a mapping sketch follows this list):
    • 0 = Benign (e.g., "Not Hate", "Normal", "Neutral"),
    • 1 = Offensive (e.g., "Toxic", "Offensive"),
    • 2 = Hateful (e.g., "Hate", "Dehumanization", "Directed Abuse").
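
A minimal sketch of this unification in code; it covers only the example labels listed above, and the full label inventory handled during parsing is larger:

```python
# Collapse fine-grained HateCoT labels into the 3 unified classes.
# Only the example labels from the list above are included; the real
# mapping covers many more label variants (assumption).
LABEL_MAP = {
    "Not Hate": 0, "Normal": 0, "Neutral": 0,             # 0 = Benign
    "Toxic": 1, "Offensive": 1,                           # 1 = Offensive
    "Hate": 2, "Dehumanization": 2, "Directed Abuse": 2,  # 2 = Hateful
}

def unify(label: str) -> int:
    """Map one fine-grained label onto the unified 3-class scheme."""
    return LABEL_MAP[label]
```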

Tweets (2023) / Tweets News (2017) / Tweets (2020-2021)

  • Text content: Tweets from different public Twitter samples spanning 2017 to 2023, focused on content moderation and related debates. Sample sizes vary across years and sources.
  • Source: Gilardi et al. (2023)
  • Dataset: Harvard Dataverse
  • Ground truth: Not provided
  • Supported tasks (from the annotation codebook, excluding political content tasks):
| Task ID | Task Name | Task Description Summary | Labels |
|---------|-----------|---------------------------|--------|
| T1 | Content Moderation Relevance | Is the tweet about content moderation? | relevant (1), irrelevant (0) |
| T3 | Problem/Solution Frame | Does the tweet portray content moderation as a problem, a solution, or neither? | problem, solution, neutral |
| T4 | Policy Frame (Moderation) | What policy dimension frames the content moderation issue? | e.g., morality, fairness, security, equality, health, ... |
| T6 | Stance on Section 230 | Does the tweet support, oppose, or remain neutral about Section 230 of U.S. law? | positive, negative, neutral |
| T7 | Topic Classification | What is the topic in relation to content moderation? | section 230, trump ban, complaints, platform policies, ... |
  • Note:
    • Each of these tasks has a dedicated .txt task description under prompt/task_descriptions/.
    • The same dataset can be reused across tasks; task choice is controlled by the selected prompt (see the sketch after this list).
    • Tweets by U.S. Congress members (used for political content) are handled separately (see below).
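
A sketch of how prompt-driven task selection could look; the prompt/task_descriptions/ directory comes from the note above, while the per-task file names (T1.txt, ...) are an assumption:

```python
from pathlib import Path

# Directory from the note above; the "<task id>.txt" naming is an
# assumption about how the task description files are organized.
TASK_DIR = Path("prompt/task_descriptions")

def load_task_prompt(task_id: str) -> str:
    """Read the annotation instructions for one codebook task."""
    return (TASK_DIR / f"{task_id}.txt").read_text()

# The same tweet data serves every task; only the prompt changes.
prompt = load_task_prompt("T1")  # T1 = content moderation relevance
```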

Tweets Congress

  • Text content: Tweets by U.S. Congress members (2017–2022)
  • Source: Gilardi et al. (2023)
  • Dataset: Harvard Dataverse
  • Annotation goal: Binary classification of whether the tweet is political
  • Ground truth: Not provided

Misinfo / Misinfo Cancer

  • Text content: News headlines (the Misinfo Reaction Frames corpus is a dataset of 25k news headlines from articles that have been fact-checked; the articles cover COVID-19, cancer, and climate change)
  • Source: Gabriel et al. (2022)
  • Dataset: GitHub
  • Annotation goal: Classify headline as misinformation or not
  • Ground truth: Provided

Implicit Hate

  • Text content: English-language tweets annotated to capture both explicit and implicit hate speech. Implicit hate includes stereotypical, sarcastic, or coded language that may not be overtly hateful. The data was originally collected from Twitter and the Social Bias Inference Corpus.
  • Source: ElSherief et al. (2021)
  • Dataset: GitHub
  • Annotation goal: Classify post as explicit_hate, implicit_hate, or not_hate
  • Ground truth: Provided
  • We use Stage 1 annotations for parsing (see the check below):
    • explicit_hate: overt hate speech
    • implicit_hate: indirect hate speech (e.g., stereotypes, sarcasm)
    • not_hate: no hateful content
  • Note:
    • The dataset originally included multiple annotation stages (e.g., fine-grained subtypes, implied statements).
    • Additional parsing for Stage 2 (fine-grained) or Stage 3 (target/implied meaning) will be added later.
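
A quick sanity check of the Stage 1 labels in the parsed file; the folder name is an assumption based on the dataset list:

```python
import pandas as pd

# Folder name taken from the dataset list above (assumption).
df = pd.read_csv("Implicit Hate/data.csv")

# Stage 1 ground truth should use exactly the three labels above.
print(df["ground_truth"].value_counts())
assert set(df["ground_truth"]) <= {"explicit_hate", "implicit_hate", "not_hate"}
```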

GWSD (Global Warming Stance Dataset)

  • Text content: News spans (opinion spans extracted from global warming news articles published between Jan. 1, 2000 and April 12, 2020 by various U.S. news sources)
  • Source: Luo et al. (2020)
  • Dataset: GitHub
  • Annotation goal: Classify stance toward the statement: “Climate change is a serious concern”
    Labels: agree, neutral, or disagree
  • Ground truth: Provided

Bureaucracies

  • Text content: Crisis communication texts (bureaucratic cables and memos from international crises)
  • Source: Schub (2022)
  • Dataset: Data
  • Annotation goal:
    • Info Type Task: Classify whether the text conveys political or military information relevant to Cold War crisis decision-making.
    • Certainty Task: Determine whether the adviser expresses the information with certainty or uncertainty.
  • Ground truth: Available only for the Info Type Task

Yelp Reviews

  • Text content: Yelp restaurant reviews
  • Dataset: Data
  • Annotation goal:
    • Sentiment Task: Classify the overall evaluation expressed in a Yelp restaurant review.
  • Ground truth: Provided