LennardZuendorf committed
Commit 0e5d8a3 (parent: 19d644e)

Update README.md

Files changed (1): README.md +22 -38
README.md CHANGED
@@ -18,75 +18,59 @@ dataset_info:
  num_bytes: 1350043.584024078
  num_examples: 8240
  download_size: 8392302
- dataset_size: 13500272.0
+ dataset_size: 13500272
  language:
  - en
  size_categories:
  - 10K<n<100K
+ tags:
+ - not-for-all-audiences
+ - legal
  ---

  # Dataset Card for Dataset Name

- This is an edit of original work from [this paper](https://arxiv.org/abs/2012.15761), which I have uploaded to Huggingface [here](https://huggingface.co/datasets/LennardZuendorf/Dynamically-Generated-Hate-Speech-Dataset/edit/main/README.md). It is not my original work, I just edited it.
- Data is used in the similarly named Interpretor-Model.
+ This is an edit of original work by Bertie Vidgen, Tristan Thrush, Zeerak Waseem, and Douwe Kiela, which I have uploaded to Huggingface [here](https://huggingface.co/datasets/LennardZuendorf/Dynamically-Generated-Hate-Speech-Dataset/edit/main/README.md). It is not my original work; I only edited it.
+ The data is used in the similarly named Interpretor model.
+
  ## Dataset Description

- - **Homepage:** [ignitr.tech](ignitr.tech/interpretor)
+ - **Homepage:** [zuendorf.me](https://www.zuendorf.me)
  - **Repository:** [GitHub Monorepo](https://github.com/LennardZuendorf/interpretor)
  - **Author:** Lennard Zündorf

  ### Original Dataset Description

- - **Source Homepage:** [GitHub](https://github.com/bvidgen/Dynamically-Generated-Hate-Speech-Dataset)
- - **Source Contact:** [bertievidgen@gmail.com](mailto:bertievidgen@gmail.com)
+ - **Original Source Contact:** [bertievidgen@gmail.com](mailto:bertievidgen@gmail.com)
  - **Original Source:** [Dynamically-Generated-Hate-Speech-Dataset](https://github.com/bvidgen/Dynamically-Generated-Hate-Speech-Dataset)
  - **Original Author List:** Bertie Vidgen (The Alan Turing Institute), Tristan Thrush (Facebook AI Research), Zeerak Waseem (University of Sheffield) and Douwe Kiela (Facebook AI Research).

  **Refer to the Huggingface or GitHub Repo for more information**

  ### Dataset Summary
- This Dataset contains dynamically generated hate-speech, already split into training (90%) and testing (10%). I inteded it to be used for classifcation tasks like [this]() model.
+ This dataset contains dynamically generated hate speech, processed for use in classification tasks with models such as BERT.

- ### Languages
- The only represented language is english.

- ## Dataset Structure

- ### Data Instances

- Each entry looks like this (train and test).

- ```
- {
- 'id': ...,
- 'text': ,
- ''
- }
- ```

- Provide any additional information that is not covered in the other sections about the data here. In particular describe any relationships between data points and if these relationships are made explicit.
+ ### Edit Summary

+ - I have edited the dataset to use it for training the similarly named [Interpretor classifier]()
+ - see the data/label fields below and the original dataset [here](https://huggingface.co/datasets/LennardZuendorf/Dynamically-Generated-Hate-Speech-Dataset/edit/main/README.md)
+ - edits mostly consist of removing information not needed for a simple binary classification task and adding a numeric binary label

+ ## Dataset Structure

+ ### Split

+ - The dataset is split into train (90%) and test (10%)
+ - train: ~74k entries
+ - test: ~8k entries

  ### Data Fields

- List and describe the fields present in the dataset. Mention their data type, and whether they are used as input or output in any of the tasks the dataset currently supports. If the data has span indices, describe their attributes, such as whether they are at the character level or word level, whether they are contiguous or not, etc. If the datasets contains example IDs, state whether they have an inherent meaning, such as a mapping to other datasets or pointing to relationships between data points.
-
- - `example_field`: description of `example_field`
-
- Note that the descriptions can be initialized with the **Show Markdown Data Fields** output of the [Datasets Tagging app](https://huggingface.co/spaces/huggingface/datasets-tagging), you will then only need to refine the generated descriptions.
-
- ### Data Splits
-
- Describe and name the splits in the dataset if there are more than one.
-
- Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g. if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here.
-
- Provide the sizes of each split. As appropriate, provide any descriptive statistics for the features, such as average length. For example:
-
- | | train | validation | test |
- |-------------------------|------:|-----------:|-----:|
- | Input Sentences | | | |
- | Average Sentence Length | | | |
+ | id | text | label | label_text |
+ | --- | --- | --- | --- |
+ | numeric id | text of the comment | binary label, 0 = not hate, 1 = hate | label in text form |

  ## Additional Information
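The new card fixes a concrete schema (`id`, `text`, `label`, `label_text`) and a 90/10 train/test split, so a short loading sketch may help. This uses the standard `datasets` API; the diff never names the repo id of the edited dataset, so the id below is a placeholder:

```python
from datasets import load_dataset

# Placeholder repo id -- the diff does not state where the edited dataset lives.
ds = load_dataset("LennardZuendorf/<edited-dataset>")

# The card documents a 90/10 split: ~74k train rows, ~8k test rows.
print(ds["train"].num_rows, ds["test"].num_rows)

# Each row should carry the four documented fields.
example = ds["train"][0]
print(example["id"], example["label"], example["label_text"])
print(example["text"][:80])
```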
 
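The Edit Summary amounts to two mechanical steps: derive a numeric binary label, and drop the annotation columns a binary classifier does not need. A rough sketch of that transformation, starting from the re-upload linked in the card; the string labels `hate`/`nothate` and the extra annotation columns are assumptions from the original Vidgen et al. dataset and should be checked against the actual files:

```python
from datasets import load_dataset

src = load_dataset("LennardZuendorf/Dynamically-Generated-Hate-Speech-Dataset")

def add_binary_label(example):
    # Assumed original labels: the strings "hate" / "nothate".
    example["label_text"] = example["label"]
    example["label"] = 1 if example["label"] == "hate" else 0
    return example

cleaned = src.map(add_binary_label)

# Keep only the four fields documented in the Data Fields table.
# Split names in the source may differ; "train" is an assumption here.
keep = {"id", "text", "label", "label_text"}
drop = [c for c in cleaned["train"].column_names if c not in keep]
cleaned = cleaned.remove_columns(drop)
```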
 
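Finally, since the summary names BERT-style classifiers as the intended use, a minimal fine-tuning sketch with the standard `transformers` Trainer API; `ds` is the DatasetDict from the loading sketch above, and the checkpoint and hyperparameters are illustrative:

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=2,  # binary task: 0 = not hate, 1 = hate
)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

encoded = ds.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="interpretor-bert", num_train_epochs=1),
    train_dataset=encoded["train"],
    eval_dataset=encoded["test"],
    tokenizer=tokenizer,  # enables dynamic padding of batches
)
trainer.train()
```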