---
task_categories:
  - text-classification
language:
  - en
pretty_name: Offensive Language Identification Dataset
configs:
  - config_name: default
    data_files: '*.tsv'
    sep: "\t"
size_categories:
  - 10K<n<100K
---

# Dataset Card for OLID

The Offensive Language Identification Dataset (OLID) contains 14,100 tweets from Twitter, annotated with three hierarchical subcategories via crowdsourcing. It was released together with the paper *Predicting the Type and Target of Offensive Posts in Social Media*.

Previous datasets mainly focused on detecting specific types of offensive messages (hate speech, cyberbullying, etc.) but did not consider offensive language as a whole. This dataset is annotated with a hierarchical scheme of up to 3 labels, corresponding to offensive language detection (OFF/NOT), automatic categorization of offense types (TIN/UNT), and offense target identification (IND/GRP/OTH), described below.
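The dataset is distributed as tab-separated files (see the `configs` entry in the metadata above). A minimal stdlib sketch of parsing that layout; the column names (`id`, `tweet`, `subtask_a`, `subtask_b`, `subtask_c`) are assumptions about the released files, not confirmed by this card:

```python
import csv
import io

# Hypothetical two-row sample in the assumed TSV layout; "NULL" marks
# a level that does not apply to the post.
sample = (
    "id\ttweet\tsubtask_a\tsubtask_b\tsubtask_c\n"
    "0\t@USER this is fine\tNOT\tNULL\tNULL\n"
    "1\t@USER some offensive text\tOFF\tTIN\tIND\n"
)

# csv.DictReader with delimiter="\t" mirrors the `sep: "\t"` config.
rows = list(csv.DictReader(io.StringIO(sample), delimiter="\t"))
```

Each row then exposes the three sub-task labels by name, e.g. `rows[1]["subtask_c"]`.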

## Dataset Details

"The gold labels were assigned taking the agreement of three annotators into consideration. No correction has been carried out on the crowdsourcing annotations. Twitter user mentions were substituted by @USER and URLs have been substituted by URL.

OLID is annotated using a hierarchical annotation. Each instance contains up to 3 labels each corresponding to one of the following levels:

- Level (or sub-task) A: Offensive language identification;
- Level (or sub-task) B: Automatic categorization of offense types;
- Level (or sub-task) C: Offense target identification." (Source)

### Tasks and Labels (Source)

#### (A) Level A: Offensive language identification

- (NOT) Not Offensive - This post does not contain offense or profanity.
- (OFF) Offensive - This post contains offensive language or a targeted (veiled or direct) offense.

In our annotation, we label a post as offensive (OFF) if it contains any form of non-acceptable language (profanity) or a targeted offense, which can be veiled or direct.

#### (B) Level B: Automatic categorization of offense types

- (TIN) Targeted Insult and Threats - A post containing an insult or threat to an individual, a group, or others (see categories in sub-task C).
- (UNT) Untargeted - A post containing non-targeted profanity and swearing.

Posts containing general profanity are not targeted, but they contain non-acceptable language.

#### (C) Level C: Offense target identification

- (IND) Individual - The target of the offensive post is an individual: a famous person, a named individual, or an unnamed person interacting in the conversation.
- (GRP) Group - The target of the offensive post is a group of people considered as a unity due to the same ethnicity, gender or sexual orientation, political affiliation, religious belief, or something else.
- (OTH) Other - The target of the offensive post does not belong to any of the previous two categories (e.g., an organization, a situation, an event, or an issue).
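The hierarchy above means level B applies only to posts labeled OFF, and level C only to posts labeled TIN. A minimal sketch of a consistency check for one row's labels, assuming the released files mark inapplicable levels with `NULL` (an assumption about the encoding, not stated in this card):

```python
def valid_olid_labels(a, b, c):
    """Return True if (a, b, c) respects the OLID annotation hierarchy."""
    if a == "NOT":                       # not offensive: levels B and C do not apply
        return b == "NULL" and c == "NULL"
    if a == "OFF":
        if b == "UNT":                   # untargeted profanity: level C does not apply
            return c == "NULL"
        if b == "TIN":                   # targeted: level C must name the target
            return c in {"IND", "GRP", "OTH"}
    return False
```

For example, `("OFF", "TIN", "GRP")` is valid, while `("NOT", "TIN", "IND")` violates the hierarchy.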

### Dataset Description

- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** English
- **License:** [More Information Needed]

### Dataset Sources [optional]

- **Paper:** *Predicting the Type and Target of Offensive Posts in Social Media*

## Uses

### Direct Use

[More Information Needed]

### Out-of-Scope Use

[More Information Needed]

## Dataset Structure

[More Information Needed]

## Dataset Creation

### Curation Rationale

The goal of this dataset was to support detecting offensive language as a whole, rather than only specific subtypes such as hate speech or cyberbullying.

[More Information Needed]

### Source Data

The data originates from Twitter.

#### Data Collection and Processing

The authors retrieved the samples "from Twitter using its API and searching for keywords and constructions that are often included in offensive messages, such as ‘she is’ or ‘to:BreitBartNews’" (Source).

They used the following keywords (except for the first three rows):

| Keyword           | Offensive % |
|-------------------|-------------|
| medical marijuana | 0.0         |
| they are          | 5.9         |
| to:NewYorker      | 8.3         |
| you are           | 21.0        |
| she is            | 26.6        |
| to:BreitBartNews  | 31.6        |
| he is             | 32.4        |
| gun control       | 34.7        |
| -filter:safe      | 58.9        |
| conservatives     | 23.2        |
| antifa            | 26.7        |
| MAGA              | 27.7        |
| liberals          | 38.0        |
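The "Offensive %" column is simply the share of sampled tweets per keyword that were labeled offensive. A small illustrative recomputation; the function name and input shape are my own, not from the paper:

```python
from collections import defaultdict

def offensive_percentage(samples):
    """samples: iterable of (keyword, label) pairs, label in {"OFF", "NOT"}.

    Returns keyword -> percentage of OFF samples, rounded to one decimal.
    """
    total = defaultdict(int)
    off = defaultdict(int)
    for keyword, label in samples:
        total[keyword] += 1
        off[keyword] += (label == "OFF")
    return {k: round(100 * off[k] / total[k], 1) for k in total}
```

With two tweets for "MAGA" of which one is offensive, this yields 50.0 for that keyword.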

[More Information Needed]

#### Who are the source data producers?

[More Information Needed]

### Annotations [optional]

Extensive information on this can be found in the Data Collection section of the original paper.

#### Annotation process

The annotation was carried out via crowdsourcing; the gold label was derived by taking the annotations of three different annotators into consideration.
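The exact aggregation rule is not published in this card; a majority vote over three annotations is one plausible scheme, sketched here purely as an illustration, not as the authors' procedure:

```python
from collections import Counter

def gold_label(annotations):
    """Return the label at least two of three annotators agree on, else None.

    Illustrative majority vote; the authors' actual aggregation may differ.
    """
    (label, count), = Counter(annotations).most_common(1)
    return label if count >= 2 else None
```

Instances with no majority (three distinct labels) would need adjudication or removal under this scheme.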

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

#### Personal and Sensitive Information

Usernames have been replaced by "@USER" and URLs by "URL". [More Information Needed]
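The substitution described above (user mentions become @USER, URLs become URL) can be approximated with two regexes; the authors' actual patterns are not published, so treat these as assumptions:

```python
import re

# Approximate patterns: a mention is "@" followed by word characters,
# a URL starts with http:// or https://.
MENTION = re.compile(r"@\w+")
URL = re.compile(r"https?://\S+")

def anonymize(tweet):
    """Replace user mentions with @USER and URLs with URL."""
    return URL.sub("URL", MENTION.sub("@USER", tweet))
```

For example, `anonymize("@bob check https://example.com/x")` yields `"@USER check URL"`.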

## Bias, Risks, and Limitations

[More Information Needed]

### Recommendations

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

## Citation [optional]

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

[More Information Needed]