johann-petrak committed
Commit bf68174
1 Parent(s): b9c7312

Initial commit

Files changed (3)
  1. README.md +95 -0
  2. datasheet.md +42 -0
  3. load_dataset.py +3 -0
README.md ADDED
@@ -0,0 +1,95 @@
+ ---
+ license: cc-by-nc-sa-4.0
+ task_categories:
+ - text-classification
+ language:
+ - de
+ pretty_name: GerMS-AT
+ size_categories:
+ - 1K<n<10K
+ ---
+
+ # German Misogyny/Sexism - Austria (GerMS-AT) Dataset
+
+ ## Summary
+
+ This dataset contains user comments from an Austrian online newspaper.
+ Each comment has been annotated by 2 or more of 8 annotators
+ as to how strongly sexism/misogyny is present in the comment.
+
+ For each comment, the annotator code and the assigned label are given for all
+ annotators who annotated that comment. Labels represent the severity of any
+ sexism/misogyny present in the comment, from 0 (none),
+ 1 (mild), 2 (present), 3 (strong) to 4 (severe).
+
+ The dataset currently contains 7995 comments.
+
+ A unique property of this corpus is that only a small portion of the sexist/misogynist remarks
+ use strong language, curse words or otherwise blatantly offensive terms; a large number
+ of comments contain more subtle, indirect or at times ambiguous forms of sexism/misogyny.
+
+ ## Data Structure
+
+ All comments are in a single JSONL file, one comment per line, with the following properties
+ (a parsing sketch follows the list):
+
+ * JSONL file: each line contains the JSON representation of a map
+ * Each map contains the information for one comment
+ * The map contains the following fields:
+   * `text`: the text of the comment. The text may contain umlauts or other special characters and may contain arbitrary whitespace, newline or carriage-return characters
+   * `annotations`: a list of maps, each containing the fields "user" (the code of the annotator who provided the label) and "label" (the label assigned, see below)
+   * `round`: comments were annotated in rounds of 100; this gives the round identifier as a string containing a two-digit round number, e.g. "00" or "13"
+   * `source`: the code which identifies how comments that are likely negative or positive examples were selected for the annotation round
+
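+ For example, a single line can be parsed with Python's standard `json` module. A minimal
+ sketch (the record shown is a made-up illustration of the field layout, and the mean
+ severity is just one possible way to aggregate the labels, not an official scoring scheme):
+
+ ```python
+ import json
+
+ # A hypothetical record with the fields described above.
+ line = '{"text": "...", "annotations": [{"user": "A2f", "label": 0}, {"user": "A5f", "label": 2}], "round": "00", "source": "forum1"}'
+ comment = json.loads(line)
+
+ # Collect the labels assigned by the individual annotators ...
+ labels = [a["label"] for a in comment["annotations"]]
+ # ... and aggregate them, e.g. into a mean severity score.
+ print(comment["source"], labels, sum(labels) / len(labels))
+ ```
+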
+ ### Annotator codes
+
+ The following table shows the possible annotator codes and the number of comments annotated by each of them:
+
+ | Annotator code | Annotations |
+ | -- | --: |
+ | A1m | 1298 |
+ | A2f | 7995 |
+ | A3m | 1699 |
+ | A4m | 1898 |
+ | A5f | 2097 |
+ | A7f | 1698 |
+ | A8f | 2498 |
+ | A9f | 3897 |
+
+ The suffix of the annotator code identifies the self-declared gender (f = female, m = male) of the annotator.
+
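+ Counts like these can be recomputed directly from the data. A minimal sketch, assuming
+ `comments` is a list of parsed JSONL records as in the example above:
+
+ ```python
+ from collections import Counter
+
+ # Count how many comments each annotator has labelled.
+ per_annotator = Counter(
+     ann["user"] for comment in comments for ann in comment["annotations"]
+ )
+ for code, count in per_annotator.most_common():
+     print(code, count)
+ ```
+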
+ ### Comment source identifiers
+
+ The following comment source identifiers occur in the `source` field, each for the given number of comments:
+
+ | Comment source | Number of comments |
+ | --- | ---: |
+ | forum1-sexist | 1400 |
+ | meld02/meld02neg | 1000 |
+ | meld02/neg01 | 999 |
+ | meld04/meld04neg | 899 |
+ | forum2 | 800 |
+ | forum1 | 799 |
+ | meld02CLpos/meld02CLneg | 700 |
+ | meld01/meld01neg | 698 |
+ | meld03/meld03neg | 500 |
+ | meld01/neg01 | 200 |
+
+
+ ## Language
+
+ The comments are from a German-language (mostly Austrian variant) website, but may contain English terms that are
+ commonly used by German speakers, quotes of English text, or other non-German parts.
+
+ ## Anonymization
+
+ No metadata about the comments is provided; in particular, the date when a comment was written is deliberately withheld.
+ The comment texts have been scanned automatically and manually for any occurrence of information about the username or
+ real name of a person. Any such occurrence has been replaced with the placeholder `{USER}`. In ambiguous cases where it was not
+ clear if a name refers to a user or e.g. somebody mentioned in the newspaper article, the name was replaced by the placeholder as well.
+ All mentions of web addresses / URLs were replaced with the placeholder `{URL}`.
+
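+ To get a feel for how often these placeholders occur, they can simply be searched for in the
+ comment texts. A minimal sketch, again assuming `comments` is a list of parsed records:
+
+ ```python
+ # Count the comments containing each anonymization placeholder.
+ n_user = sum("{USER}" in c["text"] for c in comments)
+ n_url = sum("{URL}" in c["text"] for c in comments)
+ print(f"{n_user} comments contain {{USER}}, {n_url} contain {{URL}}")
+ ```
+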
+ ## Datasheet
+
+ See the detailed [datasheet](./datasheet.md).
+
+ ## Papers
+
+ TBD
datasheet.md ADDED
@@ -0,0 +1,42 @@
+ # GerMS-AT Dataset Datasheet
+
+ This file contains information about the GerMS-AT Dataset, structured according to
+ ["T. Gebru et al. (2021): Datasheets for Datasets"](https://arxiv.org/abs/1803.09010).
+
+ If there is a need for additional information or clarification, please feel free to contact any
+ of the maintainers of this repository.
+
+ ### Motivation
+
+ * Purpose of dataset creation:
+ * Dataset creators:
+ * Funding of dataset creation:
+
+ ### Composition
+
+ * Instance representation:
+ * Number of instances:
+ * Completeness/sampling:
+ * Data per instance:
+ * Label/target per instance:
+ * Missing per-instance information:
+ * Relationships between instances:
+ * Recommended data splits:
+ * Errors, sources of noise, redundancies:
+ * Self-contained:
+ * Presence of confidential information:
+ * Presence of offensive or otherwise problematic data:
+ * Identifiability of subpopulations:
+ * Identifiability of individuals:
+ * Presence of sensitive information:
+
+ ### Collection Process
+
+ * Data associated with each instance:
+ * Data collection procedure:
+ * Sampling strategy:
+ * People involved in the data collection process:
+ * Collection period:
+ * Ethical review processes:
+ * Direct/indirect data collection:
+
load_dataset.py ADDED
@@ -0,0 +1,3 @@
+ from datasets import load_dataset
+
+ # Load the GerMS-AT dataset from the Hugging Face Hub.
+ dataset = load_dataset("ofai/GerMS-AT")
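
Once loaded, the returned object can be inspected directly. A minimal sketch (the "train" split name is an assumption about how the single JSONL file is exposed by `load_dataset`):

```python
from datasets import load_dataset

dataset = load_dataset("ofai/GerMS-AT")
print(dataset)                      # available splits and features
print(dataset["train"][0]["text"])  # first comment text, assuming a "train" split
```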