shamikbose89 committed
Commit afd6453
1 Parent(s): 1401d89

Upload README.md

---
annotations_creators:
- expert-generated
- crowdsourced
language:
- nl
language_creators:
- machine-generated
license:
- cc-by-2.0
multilinguality:
- monolingual
pretty_name: Contentious Contexts Corpus
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- newspapers
- historic
- dutch
- problematic
- ConConCor
task_categories:
- text-classification
task_ids:
- sentiment-scoring
- multi-label-classification
---
# Dataset Card for contentious_contexts

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** [ConConCor](https://github.com/cultural-ai/ConConCor)
- **Repository:** [ConConCor](https://github.com/cultural-ai/ConConCor)
- **Paper:** [N/A]
- **Leaderboard:** [N/A]
- **Point of Contact:** [Jacco van Ossenbruggen](https://github.com/jrvosse)

### Dataset Summary

This dataset contains extracts from historical Dutch newspapers that contain potentially contentious keywords (according to present-day sensibilities).
Each instance carries multiple annotations, giving the option to quantify agreement scores for the annotations. The dataset can be used to track how words and their meanings have changed over time.

### Supported Tasks and Leaderboards

- `text-classification`: This dataset can be used to track how the meanings of words in different contexts have changed and become contentious over time.

### Languages

The text in the dataset is in Dutch. The annotator responses are available in both English and Dutch. Suggestions, where present, are only in Dutch. The associated BCP-47 code is `nl`.

## Dataset Structure

### Data Instances

```
{
  'extract_id': 'H97',
  'text': 'en waardoor het eerste doel wordt voorbijgestreefd om voor den 5D5c5Y 5d-5@5j5g5d5e5Z5V5V5c een speciale eigen werkingssfeer te scheppen.Intusschen is het',
  'target': '5D 5c5Y5d-5@5j5g5d5e5Z5V5V5c',
  'annotator_responses_english': [
    {'id': 'unknown_2a', 'response': 'Not contentious'},
    {'id': 'unknown_2b', 'response': 'Contentious according to current standards'},
    {'id': 'unknown_2c', 'response': "I don't know"},
    {'id': 'unknown_2d', 'response': 'Contentious according to current standards'},
    {'id': 'unknown_2e', 'response': 'Not contentious'},
    {'id': 'unknown_2f', 'response': "I don't know"},
    {'id': 'unknown_2g', 'response': 'Not contentious'}],
  'annotator_responses_dutch': [
    {'id': 'unknown_2a', 'response': 'Niet omstreden'},
    {'id': 'unknown_2b', 'response': 'Omstreden naar huidige maatstaven'},
    {'id': 'unknown_2c', 'response': 'Weet ik niet'},
    {'id': 'unknown_2d', 'response': 'Omstreden naar huidige maatstaven'},
    {'id': 'unknown_2e', 'response': 'Niet omstreden'},
    {'id': 'unknown_2f', 'response': 'Weet ik niet'},
    {'id': 'unknown_2g', 'response': 'Niet omstreden'}],
  'annotator_suggestions': [
    {'id': 'unknown_2a', 'suggestion': ''},
    {'id': 'unknown_2b', 'suggestion': 'ander ras nodig'},
    {'id': 'unknown_2c', 'suggestion': 'personen van ander ras'},
    {'id': 'unknown_2d', 'suggestion': ''},
    {'id': 'unknown_2e', 'suggestion': ''},
    {'id': 'unknown_2f', 'suggestion': ''},
    {'id': 'unknown_2g', 'suggestion': 'ras'}]
}
```
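
The summary notes that the multiple annotations per instance make it possible to quantify agreement. A minimal sketch of one such score in plain Python (majority label plus the fraction of annotators who chose it, computed on the example instance above; the helper name is illustrative, not part of the dataset):

```python
from collections import Counter

def majority_agreement(instance):
    """Return the most common English response label and the fraction
    of annotators who chose it."""
    labels = [r['response'] for r in instance['annotator_responses_english']]
    label, count = Counter(labels).most_common(1)[0]
    return label, count / len(labels)

# Responses from the example instance above:
# 3x 'Not contentious', 2x 'Contentious according to current standards',
# 2x "I don't know".
example = {
    'annotator_responses_english': [
        {'id': 'unknown_2a', 'response': 'Not contentious'},
        {'id': 'unknown_2b', 'response': 'Contentious according to current standards'},
        {'id': 'unknown_2c', 'response': "I don't know"},
        {'id': 'unknown_2d', 'response': 'Contentious according to current standards'},
        {'id': 'unknown_2e', 'response': 'Not contentious'},
        {'id': 'unknown_2f', 'response': "I don't know"},
        {'id': 'unknown_2g', 'response': 'Not contentious'},
    ]
}

label, score = majority_agreement(example)  # 'Not contentious', 3/7
```

With only seven annotators per extract, a simple percent-agreement score like this is easy to interpret; chance-corrected measures such as Fleiss' kappa would need the full response matrix.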

### Data Fields

|Field|Description|
|---|---|
|extract_id|Unique identifier|
|text|Text of the extract|
|target|Target phrase or word|
|annotator_responses_english|Responses (translated to English)|
|annotator_responses_dutch|Responses in Dutch|
|annotator_suggestions|Suggestions, if present|

### Data Splits

The dataset has a single split, `train`, with 2720 instances.

## Dataset Creation

### Curation Rationale

> Cultural heritage institutions recognise the problem of language use in their collections. The cultural objects in archives, libraries, and museums contain words and phrases that are inappropriate in modern society but were broadly used in the past. Such words can be offensive and discriminatory. In our work, we use the term contentious to refer to all (potentially) inappropriate or otherwise sensitive words, for example, words suggestive of some (implicit or explicit) bias towards or against something. The National Archives of the Netherlands stated that they "explore the possibility of explaining language that was acceptable and common in the past and providing it with contemporary alternatives", meanwhile "keeping the original descriptions [with contentious words], because they give an idea of the time in which they were made or included in the collection". There is a page on the institution's website where people can report "offensive language".

### Source Data

#### Initial Data Collection and Normalization

> The queries were run on OCRd versions of the Europeana Newspaper collection, as provided by the KB National Library of the Netherlands. We limited our pool to text categorised as article, thus excluding other types of texts such as advertisements and family notices. We then only focused our sample on the 6 decades between 1890-01-01 and 1941-12-31, as this is the period available in the Europeana newspaper corpus. The dataset represents a stratified sample over target word, decade, and newspaper issue distribution metadata. For the final set of extracts for annotation, we gave extracts sampling weights proportional to their actual probabilities, as estimated from the initial set of extracts via trigram frequencies, rather than sampling uniformly.

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

> The annotation process included 3 stages: pilot annotation, expert annotation, and crowdsourced annotation on the "Prolific" platform. All stages required the participation of Dutch speakers. The pilot stage was intended for testing the annotation layout, the clarity of the instructions, the number of sentences provided as context, the survey questions, and the difficulty of the task in general. The Dutch-speaking members of the Cultural AI Lab were asked to test the annotation process and give their feedback anonymously using Google Sheets. Six volunteers contributed to the pilot stage, each annotating the same 40 samples, with a context of either 3 or 5 sentences surrounding the term. An individual annotation sheet had a table layout with 4 options to choose from for every sample:
> - 'Omstreden' (Contentious)
> - 'Niet omstreden' (Not contentious)
> - 'Weet ik niet' (I don't know)
> - 'Onleesbare OCR' (Illegible OCR)
>
> 2 open fields:
> - 'Andere omstreden termen in de context' (Other contentious terms in the context)
> - 'Notities' (Notes)
>
> and the instructions in the header. The rows were the samples with the highlighted words, the tickboxes for every option, and 2 empty cells for the open questions. The obligatory part of the annotation was to select one of the 4 options for every sample. Finding other contentious terms in the given sample, leaving notes, and answering 4 additional open questions at the end of the task were optional. Based on the received feedback and the answers to the open questions in the pilot study, the following decisions were made regarding the next, expert annotation stage:
> - The annotation layout was built in Google Forms as a questionnaire instead of the table layout in Google Sheets, to make data collection and analysis faster as the number of participants increased;
> - The context window of 5 sentences per sample was found optimal;
> - The number of samples per annotator was increased to 50;
> - The option 'Omstreden' (Contentious) was changed to 'Omstreden naar huidige maatstaven' ('Contentious according to current standards') to clarify that annotators should judge the contentiousness of a word's use in context from today's perspective;
> - The annotation instructions were edited to clarify 2 points: (1) that annotators, while judging contentiousness, should take into account not only the bolded word but also the surrounding context, and (2) that if a word seems even slightly contentious to an annotator, they should choose the option 'Omstreden naar huidige maatstaven' (Contentious according to current standards);
> - The non-required 'Notities' (Notes) field for every sample was removed, as there was an open question at the end of the annotation where participants could leave comments;
> - Another open question was added at the end of the annotation asking how much time it took to complete.

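
The response options above are stored in parallel Dutch and English fields in the released data. A small sketch of the Dutch-to-English label mapping and a per-instance consistency check (plain Python; the labels come from the option list above, the helper name is illustrative, and whether 'Onleesbare OCR' occurs in the released responses is an assumption):

```python
# Dutch response labels and their English translations, per the option list above.
NL_TO_EN = {
    'Omstreden naar huidige maatstaven': 'Contentious according to current standards',
    'Niet omstreden': 'Not contentious',
    'Weet ik niet': "I don't know",
    'Onleesbare OCR': 'Illegible OCR',  # assumption: may also appear in responses
}

def responses_consistent(instance):
    """Check that the Dutch and English response lists of one instance
    agree annotator-by-annotator under the mapping above."""
    dutch = instance['annotator_responses_dutch']
    english = instance['annotator_responses_english']
    return len(dutch) == len(english) and all(
        d['id'] == e['id'] and NL_TO_EN.get(d['response']) == e['response']
        for d, e in zip(dutch, english)
    )
```

A check like this is a cheap sanity test before aggregating labels across the two language fields.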
#### Who are the annotators?

Volunteers and expert annotators.

### Personal and Sensitive Information

[N/A]

## Considerations for Using the Data

### Social Impact of Dataset

This dataset can be used to see how words change in meaning over time.

### Discussion of Biases

> Due to the nature of the project, some examples used in this documentation may be shocking or offensive. They are provided only as an illustration or explanation of the resulting dataset and do not reflect the opinions of the project team or their organisations.

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[Cultural AI](https://github.com/cultural-ai)

### Licensing Information

CC BY 2.0

### Citation Information

@misc{ContentiousContextsCorpus2021,
  author = {Cultural AI},
  title = {Contentious Contexts Corpus},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/cultural-ai/ConConCor}},
}