---
annotations_creators:
- expert-generated
language:
- en
- fr
- es
language_creators:
- expert-generated
license:
- apache-2.0
multilinguality:
- multilingual
pretty_name: HumSet
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- humanitarian
- research
- analytical-framework
- multilabel
- humset
- humbert
task_categories:
- text-classification
- text-retrieval
- token-classification
task_ids:
- multi-label-classification
splits:
- name: train
  num_examples: 117435
- name: validation
  num_examples: 16039
- name: test
  num_examples: 15147
---

# Dataset Card for HumSet

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [http://blog.thedeep.io/humset/](http://blog.thedeep.io/humset/)
- **Repository:** [https://github.com/the-deep/humset](https://github.com/the-deep/humset)
- **Paper:** [EMNLP Findings 2022](https://preview.aclanthology.org/emnlp-22-ingestion/2022.findings-emnlp.321/)
- **Leaderboard:**
- **Point of Contact:** [the DEEP NLP team](mailto:nlp@thedeep.io)

### Dataset Summary

HumSet is a novel and rich multilingual dataset of humanitarian response documents annotated by experts in the humanitarian response community. HumSet is curated by humanitarian analysts and covers various disasters around the globe that occurred from 2018 to 2021 across 46 humanitarian response projects. The dataset consists of approximately 17K annotated documents in three languages (English, French, and Spanish), originally taken from publicly available resources. For each document, analysts have identified informative snippets (entries) with respect to common humanitarian frameworks and assigned one or more classes to each entry. See our paper for details.

### Supported Tasks and Leaderboards

This dataset is intended primarily for multi-label text classification.

### Languages

The dataset is in English, French, and Spanish.

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

- **entry_id**: unique identification number for a given entry. (int64)
- **lead_id**: unique identification number for the document to which the corresponding entry belongs. (int64)
- **project_id**: unique identification number for the project in which the corresponding entry was annotated. (int64)
- **sectors**, **pillars_1d**, **pillars_2d**, **subpillars_1d**, **subpillars_2d**: labels assigned to the corresponding entry. Since this is a multi-label dataset (each entry may have several annotations belonging to the same category), they are reported as arrays of strings. For a detailed description of these categories, see the [paper](https://arxiv.org/abs/2210.04573). (list)
- **lang**: language of the entry. (str)
- **n_tokens**: number of tokens (tokenized using the NLTK v3.7 library). (int64)
- **project_title**: the name of the project where the corresponding annotation was created. (str)
- **created_at**: date and time of creation of the annotation in standard ISO 8601 format. (str)
- **document**: URL of the source document of the excerpt. (str)
- **excerpt**: excerpt text. (str)

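Because the label fields are arrays of strings, they are typically converted to fixed-length indicator vectors before training a multi-label classifier. A minimal sketch in plain Python; the `entry` record and the small `SECTORS` list below are illustrative examples, not real dataset values or the full taxonomy:

```python
# Illustrative subset of sector labels (not the full HumSet taxonomy).
SECTORS = ["Agriculture", "Education", "Health", "Protection", "Shelter"]

def encode_labels(labels, vocabulary):
    """Return a 0/1 indicator vector over `vocabulary` for the given labels."""
    present = set(labels)
    return [1 if name in present else 0 for name in vocabulary]

# Hypothetical entry, shaped like a HumSet record.
entry = {
    "entry_id": 1,
    "sectors": ["Health", "Protection"],
    "lang": "en",
    "excerpt": "Clinics in the region report shortages of medical staff.",
}

vector = encode_labels(entry["sectors"], SECTORS)
print(vector)  # [0, 0, 1, 1, 0]
```

The same encoding applies to the pillar and subpillar fields, each with its own label vocabulary.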
### Data Splits

The dataset includes train/validation/test splits with 117,435, 16,039, and 15,147 examples, respectively.

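The split sizes above correspond to roughly an 80/10/10 partition, which can be checked with a few lines of plain Python (numbers taken from the split table in the front matter):

```python
# Sanity-check the published split sizes and their proportions.
splits = {"train": 117435, "validation": 16039, "test": 15147}
total = sum(splits.values())
fractions = {name: round(n / total, 3) for name, n in splits.items()}
print(total)      # 148621
print(fractions)  # {'train': 0.79, 'validation': 0.108, 'test': 0.102}
```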
## Dataset Creation

The collection originated from a multi-organizational platform called <em>the Data Entry and Exploration Platform (DEEP)</em>, developed and maintained by Data Friendly Space (DFS). The platform facilitates classifying primarily qualitative information with respect to analysis frameworks and allows for collaborative classification and annotation of secondary data.

### Curation Rationale

[More Information Needed]

### Source Data

Documents are selected from different sources, ranging from official reports by humanitarian organizations to international and national media articles. See the paper for more information.

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

HumSet is curated by humanitarian analysts and covers various disasters around the globe that occurred from 2018 to 2021 across 46 humanitarian response projects. The dataset consists of approximately 17K annotated documents in three languages (English, French, and Spanish), originally taken from publicly available resources. For each document, analysts have identified informative snippets (entries, or <em>excerpt</em> in the released dataset) with respect to common <em>humanitarian frameworks</em> and assigned one or more classes to each entry.

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

The NLP team at [Data Friendly Space](https://datafriendlyspace.org/).

### Licensing Information

The GitHub repository that houses this dataset is licensed under the Apache License 2.0.

### Citation Information

```
@misc{https://doi.org/10.48550/arxiv.2210.04573,
  doi = {10.48550/ARXIV.2210.04573},
  url = {https://arxiv.org/abs/2210.04573},
  author = {Fekih, Selim and Tamagnone, Nicolò and Minixhofer, Benjamin and Shrestha, Ranjan and Contla, Ximena and Oglethorpe, Ewan and Rekabsaz, Navid},
  keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {HumSet: Dataset of Multilingual Information Extraction and Classification for Humanitarian Crisis Response},
  publisher = {arXiv},
  year = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```

### Contributions

Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.