---
license: mit
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: text
    dtype: string
  - name: label
    dtype: int64
  - name: label_text
    dtype: string
  splits:
  - name: train
    num_bytes: 12150228.415975923
    num_examples: 74159
  - name: test
    num_bytes: 1350043.584024078
    num_examples: 8240
  download_size: 8392302
  dataset_size: 13500272.0
language:
- en
size_categories:
- 10K<n<100K
---

# Dataset Card for the Edited Dynamically Generated Hate Speech Dataset

This dataset is an edited version of the data from [this paper](https://arxiv.org/abs/2012.15761), which I have uploaded to Hugging Face [here](https://huggingface.co/datasets/LennardZuendorf/Dynamically-Generated-Hate-Speech-Dataset/edit/main/README.md). It is not my original work; I only edited it.
The data is used in the similarly named Interpretor model.
## Dataset Description

- **Homepage:** [ignitr.tech](https://ignitr.tech/interpretor)
- **Repository:** [GitHub Monorepo](https://github.com/LennardZuendorf/interpretor)
- **Author:** Lennard Zündorf

### Original Dataset Description

- **Source Homepage:** [GitHub](https://github.com/bvidgen/Dynamically-Generated-Hate-Speech-Dataset)
- **Source Contact:** [bertievidgen@gmail.com](mailto:bertievidgen@gmail.com)
- **Original Source:** [Dynamically-Generated-Hate-Speech-Dataset](https://github.com/bvidgen/Dynamically-Generated-Hate-Speech-Dataset)
- **Original Author List:** Bertie Vidgen (The Alan Turing Institute), Tristan Thrush (Facebook AI Research), Zeerak Waseem (University of Sheffield) and Douwe Kiela (Facebook AI Research).

**Refer to the Hugging Face or GitHub repo for more information.**

### Dataset Summary
This dataset contains dynamically generated hate speech, already split into training (90%) and testing (10%) sets. It is intended for classification tasks, such as the Interpretor model mentioned above.

### Languages
The only language represented is English.

## Dataset Structure

### Data Instances

Each entry (train and test) has the following form; the values shown are illustrative placeholders:

```
{
  'id': ...,
  'text': '...',
  'label': ...,
  'label_text': '...'
}
```
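As a quick sanity check, an entry can be handled as a plain Python dict. The field names below match the schema above; the example values and the integer-to-string label mapping are my own assumptions and should be verified against the data:

```python
# A single illustrative entry; the values are invented, only the field
# names and types follow the dataset schema (id, text, label, label_text).
example = {
    "id": 0,
    "text": "an example sentence",
    "label": 1,
    "label_text": "hate",
}

# `label` and `label_text` describe the same class, so a simple
# consistency check looks like this (mapping assumed, verify it yourself):
label_names = {0: "nothate", 1: "hate"}
assert label_names[example["label"]] == example["label_text"]
```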


### Data Fields

- `id` (int64): unique integer identifier of the example.
- `text` (string): the text to be classified (model input).
- `label` (int64): the integer class label (model target).
- `label_text` (string): the human-readable form of `label` (in the original dataset, `hate` or `nothate`).

### Data Splits

The dataset has two splits, `train` and `test`, created with a random 90/10 split using Hugging Face `datasets`.

|          |  train |  test |
|----------|-------:|------:|
| Examples | 74,159 | 8,240 |


## Additional Information

### Licensing Information

- The original repository does not provide a license, but the data is free to use with proper citation of the original paper in the Proceedings of ACL 2021, available on [arXiv](https://arxiv.org/abs/2012.15761).
- This edited dataset can be used under the MIT license, with proper citation of both the original source and this repository.
- If possible, I suggest taking the data from the original source and doing your own editing.

### Citation Information

Please cite this repository and the original authors when using the dataset.

### Contributions

I removed some data fields and created a new 90/10 train/test split using Hugging Face `datasets`.
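The split itself was done with Hugging Face `datasets`; a minimal standard-library sketch of an equivalent shuffled 90/10 split (the function name and seed are my own choices, not part of the original pipeline) looks like this:

```python
import random

def split_90_10(examples, seed=42):
    """Shuffle a list of examples and split it into 90% train / 10% test."""
    rng = random.Random(seed)
    shuffled = examples[:]        # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * 0.9)
    return shuffled[:cut], shuffled[cut:]
```

Applied to all 82,399 examples, this yields 74,159 train and 8,240 test rows, matching the split sizes in the metadata above.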