---
license: cc-by-4.0
---

# OLID-BR

Offensive Language Identification Dataset for Brazilian Portuguese (OLID-BR) is a dataset with multi-task annotations for the detection of offensive language.

The current version (v1.0) contains **7,943** (extendable to 13,538) comments from different sources, including social media (YouTube and Twitter) and related datasets.

OLID-BR contains a collection of annotated sentences in Brazilian Portuguese using an annotation model that encompasses the following levels:

- [Offensive content detection](#offensive-content-detection): detect offensive content in sentences and categorize it.
- [Offense target identification](#offense-target-identification): detect whether an offensive sentence targets a person or a group of people.
- [Offensive spans identification](#offensive-spans-identification): detect the curse words (toxic spans) in sentences.

![OLID-BR taxonomy](https://dougtrajano.github.io/olid-br/images/olid-br-taxonomy.png)

## Categorization

### Offensive Content Detection

This level is used to detect offensive content in the sentence.

**Is this text offensive?**

We use the [Perspective API](https://www.perspectiveapi.com/) (a collaborative research effort by Jigsaw and Google's Counter Abuse Technology team) to detect whether the sentence contains offensive content, with double-checking by our [qualified annotators](annotation/index.en.md#who-are-qualified-annotators).

- `OFF` Offensive: inappropriate language, insults, or threats.
- `NOT` Not offensive: no offense or profanity.

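If you want to reproduce a similar pre-annotation step, the snippet below is a minimal sketch of scoring a Portuguese comment with the Perspective API through Google's API client library. The attribute choice, threshold, and placeholder API key are illustrative assumptions, not the exact pipeline used to build OLID-BR; the final `OFF`/`NOT` labels always come from the annotators' review.

```python
from googleapiclient import discovery

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # hypothetical placeholder

# Build a client for the Comment Analyzer (Perspective) API.
client = discovery.build(
    "commentanalyzer",
    "v1alpha1",
    developerKey=API_KEY,
    discoveryServiceUrl="https://commentanalyzer.googleapis.com/$discovery/rest?version=v1alpha1",
    static_discovery=False,
)

request = {
    "comment": {"text": "Exemplo de comentário em português."},
    "languages": ["pt"],
    "requestedAttributes": {"TOXICITY": {}},
}

response = client.comments().analyze(body=request).execute()
score = response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Illustrative threshold only; in OLID-BR the pre-annotation is double-checked
# by qualified annotators before a label is assigned.
print("OFF" if score >= 0.5 else "NOT", round(score, 3))
```
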
**Which kind of offense does it contain?**

The following labels were tagged by our annotators:

`Health`, `Ideology`, `Insult`, `LGBTQphobia`, `Other-Lifestyle`, `Physical Aspects`, `Profanity/Obscene`, `Racism`, `Religious Intolerance`, `Sexism`, and `Xenophobia`.

See the [Glossary](glossary.en.md) for further information.

### Offense Target Identification

This level is used to detect whether an offensive sentence is targeted at a person or a group of people.

**Is the offensive text targeted?**

- `TIN` Targeted Insult: targeted insult or threat towards an individual, a group, or others.
- `UNT` Untargeted: non-targeted profanity and swearing.

**What is the target of the offense?**

- `IND` The offense targets an individual, a case often described as "cyberbullying".
- `GRP` The offense targets a group of people based on ethnicity, gender, sexual orientation, religious belief, or other shared characteristics.
- `OTH` The target can belong to other categories, such as an organization, an event, an issue, etc.

### Offensive Spans Identification

As toxic spans, we define the sequence of words that contributes to the text's toxicity.

For example, let's consider the following text:

> "USER `Canalha` URL"

The toxic spans are:

```python
[5, 6, 7, 8, 9, 10, 11, 12, 13]
```

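To make the span representation concrete, here is a minimal sketch of recovering the flagged substring, under the assumption that the listed integers are zero-based character offsets into the raw annotated comment. The raw text behind the example above may differ slightly from its display form, so the text and offsets below are illustrative.

```python
# Minimal sketch: recover the toxic substring from character-offset spans.
# The example text and offsets are illustrative, assuming spans are
# zero-based character indices into the raw comment.
text = "USER Canalha URL"
toxic_spans = [5, 6, 7, 8, 9, 10, 11]  # offsets covering "Canalha" in this string

toxic_substring = "".join(text[i] for i in toxic_spans if i < len(text))
print(toxic_substring)  # -> Canalha
```
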
## Dataset Structure

### Data Instances

Each instance is a social media comment with a corresponding ID and annotations for all the tasks described below.

### Data Fields

The simplified configuration includes:

- `id` (string): Unique identifier of the instance.
- `text` (string): The text of the instance.
- `is_offensive` (string): Whether the text is offensive (`OFF`) or not (`NOT`).
- `is_targeted` (string): Whether the text is targeted (`TIN`) or untargeted (`UNT`).
- `targeted_type` (string): Type of the target (individual `IND`, group `GRP`, or other `OTH`). Only available if the text is targeted (`TIN`).
- `toxic_spans` (string): List of toxic spans.
- `health` (boolean): Whether the text contains hate speech based on health conditions such as disability, disease, etc.
- `ideology` (boolean): Whether the text contains hate speech based on a person's ideas or beliefs.
- `insult` (boolean): Whether the text contains insulting, inflammatory, or provocative content.
- `lgbtqphobia` (boolean): Whether the text contains harmful content related to gender identity or sexual orientation.
- `other_lifestyle` (boolean): Whether the text contains hate speech related to life habits (e.g. veganism, vegetarianism, etc.).
- `physical_aspects` (boolean): Whether the text contains hate speech related to physical appearance.
- `profanity_obscene` (boolean): Whether the text contains profanity or obscene content.
- `racism` (boolean): Whether the text contains prejudiced thoughts or discriminatory actions based on differences in race/ethnicity.
- `religious_intolerance` (boolean): Whether the text contains religious intolerance.
- `sexism` (boolean): Whether the text contains discriminatory content based on differences in sex/gender (e.g. sexism, misogyny, etc.).
- `xenophobia` (boolean): Whether the text contains hate speech against foreigners.

See the [**Get Started**](get-started.en.md) page for more information.

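As a quick way to explore these fields, the snippet below is a minimal sketch using the Hugging Face `datasets` library. The repository id `dougtrajano/olid-br` and the `train` split name are assumptions based on this repository; refer to the Get Started page for the exact loading instructions.

```python
from datasets import load_dataset

# Assumed repository id and split name; adjust them to match the hosted dataset.
dataset = load_dataset("dougtrajano/olid-br", split="train")

print(dataset.features)  # schema with the fields listed above

example = dataset[0]
print(example["text"], example["is_offensive"], example["is_targeted"])

# Example: keep only targeted offensive comments.
targeted_offensive = dataset.filter(
    lambda row: row["is_offensive"] == "OFF" and row["is_targeted"] == "TIN"
)
print(len(targeted_offensive))
```

The same filtering pattern applies to the boolean category flags (e.g. `racism`, `sexism`) when building task-specific subsets.
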
## Considerations for Using the Data

### Social Impact of Dataset

Toxicity detection is a worthwhile problem that can help ensure a safer online environment for everyone.

However, toxicity detection algorithms have focused on English and do not consider the specificities of other languages.

This is a problem because the toxicity of a comment can differ across languages.

Additionally, most toxicity detection algorithms focus on the binary classification of a comment as toxic or not toxic.

Therefore, we believe that the OLID-BR dataset can help to improve the performance of toxicity detection algorithms in Brazilian Portuguese.

### Discussion of Biases

We are aware that the dataset contains biases and is not representative of global diversity.

We are also aware that the language used in the dataset may not represent the language used in all contexts.

Potential biases in the data include: inherent biases in social media and its user base, the offensive/vulgar word lists used for data filtering, and inherent or unconscious bias in the assessment of offensive identity labels.

All of these likely affect labeling, precision, and recall for a trained model.

## Citation

Pending