marmarg2 committed · Commit f828179 · 1 Parent(s): 7e047d8

Update README.md

Files changed (1): README.md +72 -1

README.md CHANGED
@@ -6,4 +6,75 @@ language:
  size_categories:
  - n<1K
  pretty_name: Detecting toxic and healthy adolescent relationships
- ---
+ ---
+ # Dataset Card for toxic-teenage-relationships
+
+ ## Dataset Description
+
+ - **Homepage:**
+ - **Repository:**
+ - **Paper:**
+ - **Leaderboard:**
+ - **Point of Contact:** mmartinevqh@alumnos.unex.es
+
+ ### Dataset Summary
+ This dataset consists of prototype phrases collected by Spanish adolescents (4 girls and 4 boys) aged 15-19 with previous training on toxic relationships. For 2 weeks, the group analyzed phrases that had occurred in their environment or that they produced themselves, classified each one as toxic or healthy, and collected them through a form.
+
+ ### Supported Tasks and Leaderboards
+
+ This dataset supports the text-classification task.
+
+ ### Languages
+
+ The sentences are in Spanish.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A data point consists of a comment and the label associated with it: {'comment_text': 'Sample comment text', 'toxic': 0}
+
+ ### Data Fields
+
+ - `comment_text`: the text of the comment
+ - `toxic`: 0 (non-toxic) or 1 (toxic), classifying the comment
+
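The two fields above can be checked with a minimal schema sketch; the `validate_record` helper is hypothetical, not part of the dataset's tooling.

```python
# Hypothetical schema check for records in this card's documented format
# ({'comment_text': str, 'toxic': 0 or 1}); not part of the dataset itself.
def validate_record(record):
    """Return True if the record matches the documented schema."""
    return (
        isinstance(record.get("comment_text"), str)
        and record.get("toxic") in (0, 1)
    )

print(validate_record({"comment_text": "Sample comment text", "toxic": 0}))  # -> True
print(validate_record({"comment_text": "Sample comment text", "toxic": 2}))  # -> False
```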
+ ### Data Splits
+
+ The data is split into training and test sets.
+
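The card does not state the split sizes, so the sketch below shows a generic shuffled 80/20 split over records in the documented format; the sample records, fraction, and seed are illustrative assumptions, not the dataset's actual splits.

```python
import random

# Illustrative records in the documented format; the real dataset's
# Spanish phrases are not reproduced here.
records = [{"comment_text": f"ejemplo {i}", "toxic": i % 2} for i in range(10)]

def train_test_split(rows, test_fraction=0.2, seed=0):
    """Shuffle deterministically and split rows into (train, test)."""
    rng = random.Random(seed)
    shuffled = rows[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]

train, test = train_test_split(records)
print(len(train), len(test))  # -> 8 2
```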
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ The dataset was created to help in efforts to identify and curb instances of toxicity between teenagers.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ This dataset consists of prototype phrases collected with the help of my group of students (4 girls and 4 boys) aged 15-19 with previous training on toxic relationships. For 2 weeks, this group of teenagers analyzed phrases that had occurred in their environment (social media, direct communication) or that they themselves produced, classified them as toxic or healthy, and collected them through a form.
+ Afterwards, the examples given by each student were discussed and evaluated by the others through peer evaluation. The classification was also ratified by two specialists in the field.
+
+ ### Personal and Sensitive Information
+
+ No personal or sensitive information has been stored in this dataset.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ If words associated with swearing, insults, or profanity appear in a comment, it is likely to be classified as toxic regardless of the author's tone or intention (e.g., humorous or self-critical). This could introduce some bias against already vulnerable minority groups.
+
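A toy lexicon baseline makes this failure mode concrete: any comment containing a flagged word is labeled toxic, whatever the tone. The word list and helper below are hypothetical, not the dataset's labeling procedure.

```python
# Hypothetical flagged-word list; NOT derived from the dataset.
FLAGGED = {"idiota", "tonto"}

def naive_classify(comment_text):
    """Label a comment 1 (toxic) if any flagged word appears, else 0 (healthy)."""
    words = (w.strip(",.!?¡¿") for w in comment_text.lower().split())
    return 1 if any(w in FLAGGED for w in words) else 0

# A humorous, self-critical comment is still flagged as toxic:
print(naive_classify("soy un tonto, jaja"))  # -> 1
print(naive_classify("eres libre de salir con quien quieras"))  # -> 0
```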
+
+ ### Licensing Information
+
+ Creative Commons Attribution-NonCommercial-ShareAlike (CC-BY-NC-SA)