---
task_categories:
- text-classification
language:
- es
size_categories:
- n<1K
pretty_name: Detecting toxic and healthy adolescent relationships
---
# Dataset Card for toxic-teenage-relationships

## Dataset Description

- **Homepage:** 
- **Repository:** 
- **Paper:** 
- **Leaderboard:** 
- **Point of Contact:** mmartinevqh@alumnos.unex.es 

### Dataset Summary
This dataset consists of phrases collected by Spanish adolescents (4 girls and 4 boys) aged 15-19 with prior training on toxic relationships. Over two weeks, the group analysed phrases that had occurred in their environment or that they produced themselves, classified each one as toxic or healthy, and collected them through a form.

### Supported Tasks and Leaderboards

This dataset supports text classification (`text-classification`): predicting whether a Spanish sentence reflects a toxic or a healthy relationship.

### Languages

The sentences are in Spanish.

## Dataset Structure

### Data Instances

A data point consists of a comment and the label associated with it: `{'label': 0, 'text': 'Sample comment text'}`

### Data Fields

- `label`: a value of 0 (non-toxic) or 1 (toxic) classifying the comment
- `text`: the text of the comment
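A minimal sketch of working with instances in this schema; the helper function and label-name mapping below are illustrative conventions, not part of the dataset itself:

```python
# Map the card's label values to human-readable names.
LABEL_NAMES = {0: "non-toxic", 1: "toxic"}

# An instance shaped like the example in the card.
example = {"label": 0, "text": "Sample comment text"}

def describe(instance):
    """Render a {'label', 'text'} instance as a readable string."""
    return f'{LABEL_NAMES[instance["label"]]}: {instance["text"]}'

print(describe(example))  # non-toxic: Sample comment text
```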


### Data Splits

The data is split into a training and testing set.
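If the examples are handled as a flat list, a reproducible train/test split can be sketched in plain Python; the 80/20 ratio, seed, and sample data below are illustrative assumptions, not values from the card:

```python
import random

def train_test_split(examples, test_fraction=0.2, seed=42):
    """Shuffle and split a list of {'label', 'text'} instances."""
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

# Toy placeholder data in the card's schema.
data = [{"label": i % 2, "text": f"frase {i}"} for i in range(10)]
train, test = train_test_split(data)
print(len(train), len(test))  # 8 2
```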

## Dataset Creation


### Curation Rationale

The dataset was created to help in efforts to identify and curb instances of toxicity between teenagers.

### Source Data

#### Initial Data Collection and Normalization

The phrases were collected by the author with the help of a group of students (4 girls and 4 boys) aged 15-19 with prior training on toxic relationships. For two weeks, this group of teenagers analysed phrases that had occurred in their environment (social media, direct communication) or that they themselves produced, classifying each one as toxic or healthy and collecting them through a form.
Afterwards, the examples given by each student were discussed and evaluated by the others, using peer evaluation. The classification was also ratified by two specialists in the field.



### Personal and Sensitive Information

No personal or sensitive information is included in this dataset.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

If words associated with swearing, insults, or profanity appear in a comment, it is likely to be classified as toxic regardless of the author's tone or intention (e.g. humorous or self-deprecating). This could introduce bias against already vulnerable minority groups.


### Licensing Information

Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA)