Dataset: GermEval 2018 Shared Task on the Identification of Offensive Language
Languages: German
License: Creative Commons Attribution 4.0 International (CC BY 4.0)

# coding=utf-8
# Copyright 2023 HuggingFace Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""GermEval 2018 Shared Task on the Identification of Offensive Language"""


import json

import datasets


_CITATION = """\
@incollection{WiegandSiegelRuppenhofer2019,
  author    = {Michael Wiegand and Melanie Siegel and Josef Ruppenhofer},
  title     = {Overview of the GermEval 2018 Shared Task on the Identification of Offensive Language},
  series = {Proceedings of GermEval 2018, 14th Conference on Natural Language Processing (KONVENS 2018), Vienna, Austria – September 21, 2018},
  editor    = {Josef Ruppenhofer and Melanie Siegel and Michael Wiegand},
  publisher = {Austrian Academy of Sciences},
  address   = {Vienna, Austria},
  isbn      = {978-3-7001-8435-5},
  url       = {https://nbn-resolving.org/urn:nbn:de:bsz:mh39-84935},
  pages     = {1 -- 10},
  year      = {2019},
  abstract  = {We present the pilot edition of the GermEval Shared Task on the Identification of Offensive Language. This shared task deals with the classification of German tweets from Twitter. It comprises two tasks, a coarse-grained binary classification task and a fine-grained multi-class classification task. The shared task had 20 participants submitting 51 runs for the coarse-grained task and 25 runs for the fine-grained task. Since this is a pilot task, we describe the process of extracting the raw-data for the data collection and the annotation schema. We evaluate the results of the systems submitted to the shared task. The shared task homepage can be found at https://projects.cai.fbi.h-da.de/iggsa/},
  language  = {en}
}
"""

_LICENSE = """\
By exercising the Licensed Rights (defined below), You accept and agree to be bound by the terms and conditions of this Creative Commons Attribution 4.0 International Public License ("Public License"). To the extent this Public License may be interpreted as a contract, You are granted the Licensed Rights in consideration of Your acceptance of these terms and conditions, and the Licensor grants You such rights in consideration of benefits the Licensor receives from making the Licensed Material available under these terms and conditions.
"""

_DESCRIPTION = """\
# Task Description

Participants were allowed to participate in one or
both tasks and submit at most three runs per task.

## Task 1: Coarse-grained Binary Classification

Task 1 was to decide whether a tweet includes some
form of offensive language or not. The tweets had
to be classified into the two classes OFFENSE and
OTHER. The OFFENSE category covered abusive
language, insults, as well as merely profane statements.

## Task 2: Fine-grained 4-way Classification

The second task involved four categories: a non-offensive OTHER class and three sub-categories of the OFFENSE class
from Task 1. In the case of PROFANITY, profane words are used, but the tweet is not intended to insult anyone. This
typically concerns the use of swearwords (Scheiße, Fuck etc.) and cursing (Zur Hölle! Verdammt! etc.), which is often
found in youth language. Swearwords and cursing may, but need not, co-occur with insults or abusive speech. Profane
language may in fact be used in tweets with positive sentiment to express emphasis. Whenever profane words are not
directed towards a specific person or group of persons and there are no separate cues of INSULT or ABUSE, tweets are
labeled as simple cases of PROFANITY.

In the case of INSULT, unlike PROFANITY, the tweet clearly intends to offend someone. INSULT is the ascription of
negatively evaluated qualities or deficiencies, or the labeling of persons as unworthy (in some sense) or unvalued.
Insults convey disrespect and contempt. Whether an utterance is an insult usually depends on the community in which it
is made, on the social context (ongoing activity etc.) in which it is made, and on the linguistic means that are used
(which have to be conventional means whose assessment as insulting is intersubjectively reasonably stable).

Finally, in the case of ABUSE, the tweet does not just insult a person but represents the stronger form of abusive
language. We define abuse as a special type of degradation, which consists in ascribing to a person a social identity
that is judged negatively by a (perceived) majority of society. The identity in question is seen as shameful,
unworthy, morally objectionable or marginal. In contrast to insults, instances of abusive language require that the
target is seen as a representative of a group and is ascribed negative qualities that are taken to be universal,
omnipresent and unchangeable characteristics of that group. (This part of the definition largely coincides with what
is referred to as abusive speech in other research.) Aside from cases where people are degraded based on their
membership in some group, we also classify it as abusive language when dehumanization is directed even at a single
individual (e.g. describing a person as scum or vermin).
"""

_VERSION = "1.0.0"
_HOMEPAGE_URL = "https://fz.h-da.de/iggsa"
_DOWNLOAD_URL = "https://raw.githubusercontent.com/uds-lsv/GermEval-2018-Data/master/germeval2018.{split}.txt"
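# The split files are expected to be plain text with one tweet per line, tab-separated into three
# columns: the tweet text, the coarse-grained label (OFFENSE or OTHER) and the fine-grained label
# (OTHER, PROFANITY, INSULT or ABUSE); this is the format parsed in _generate_examples below.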


class GermEval2018Config(datasets.BuilderConfig):
    """BuilderConfig for GermEval 2014."""

    def __init__(self, **kwargs):
        """BuilderConfig for GermEval 2018.
        Args:
          **kwargs: keyword arguments forwarded to super.
        """
        super(GermEval2018Config, self).__init__(**kwargs)

class GermEval2018(datasets.GeneratorBasedBuilder):
    """GermEval 2018 Shared Task dataset on the Identification of Offensive Language."""

    BUILDER_CONFIGS = [
        GermEval2018Config(
            name="germeval2018", version=datasets.Version("1.0.0"), description="GermEval 2018 Shared Task dataset on the Identification of Offensive Language"
        ),
    ]

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "text": datasets.Value("string"),
                    "coarse-grained": datasets.Value("string"),
                    "fine-grained": datasets.Value("string"),
                }
            ),
            supervised_keys=None,
            license=_LICENSE,
            homepage=_HOMEPAGE_URL,
            citation=_CITATION,
        )
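        # Note: both label columns are exposed as plain strings here. Since the label
        # inventories are fixed (OFFENSE/OTHER for the coarse-grained task and
        # OTHER/PROFANITY/INSULT/ABUSE for the fine-grained task), one could
        # alternatively declare them as datasets.ClassLabel features, e.g.
        # datasets.ClassLabel(names=["OTHER", "OFFENSE"]), to obtain integer-encoded
        # labels; this is only a possible alternative, not what this script does.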

    def _split_generators(self, dl_manager):
        train_urls = [_DOWNLOAD_URL.format(split="training")]
        test_urls = [_DOWNLOAD_URL.format(split="test")]

        train_paths = dl_manager.download_and_extract(train_urls)
        test_paths = dl_manager.download_and_extract(test_urls)

        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"file_paths": train_paths}),
            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"file_paths": test_paths}),
        ]

    def _generate_examples(self, file_paths):
        row_count = 0
        for file_path in file_paths:
            with open(file_path, "r", encoding="utf-8") as f:
                for line in f:
                    line = line.strip()
                    # Skip empty lines, e.g. a trailing newline at the end of the file.
                    if not line:
                        continue
                    # Each line is tab-separated: tweet text, coarse-grained label, fine-grained label.
                    line_splitted = line.split("\t")
                    yield row_count, {
                        "text": line_splitted[0],
                        "coarse-grained": line_splitted[1],
                        "fine-grained": line_splitted[2],
                    }
                    row_count += 1
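

# A minimal usage sketch (an assumption, not part of the original loading script):
# if this file is saved locally, its path can be passed to `datasets.load_dataset`
# as a local loading script. Recent versions of `datasets` may require
# `trust_remote_code=True` or may no longer support loading scripts at all.
if __name__ == "__main__":
    from datasets import load_dataset

    # "germeval2018" is the config name declared in BUILDER_CONFIGS above.
    dataset = load_dataset(__file__, "germeval2018")
    print(dataset)              # DatasetDict with "train" and "test" splits
    print(dataset["train"][0])  # {"text": ..., "coarse-grained": ..., "fine-grained": ...}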