---
annotations_creators:
- expert-generated
language:
- en
- fr
- es
language_creators:
- expert-generated
license:
- apache-2.0
multilinguality:
- multilingual
pretty_name: HumSet
size_categories:
- 100K<n<1M
source_datasets:
- original
tags:
- humanitarian
- research
- analytical-framework
- multilabel
- humset
- humbert
task_categories:
- text-classification
- text-retrieval
- token-classification
task_ids:
- multi-label-classification
splits:
  - name: train
    num_examples: 117435
  - name: validation
    num_examples: 16039
  - name: test
    num_examples: 15147
---

# Dataset Card for HumSet

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** [http://blog.thedeep.io/humset/](http://blog.thedeep.io/humset/)
- **Repository:** [https://github.com/the-deep/humset](https://github.com/the-deep/humset)
- **Paper:** [EMNLP Findings 2022](https://preview.aclanthology.org/emnlp-22-ingestion/2022.findings-emnlp.321/)
- **Leaderboard:**
- **Point of Contact:**[the DEEP NLP team](mailto:nlp@thedeep.io)

### Dataset Summary

HumSet is a novel and rich multilingual dataset of humanitarian response documents annotated by experts in the humanitarian response community. HumSet is curated by humanitarian analysts and covers various disasters around the globe that occurred from 2018 to 2021 across 46 humanitarian response projects. The dataset consists of approximately 17K annotated documents in three languages (English, French, and Spanish), originally taken from publicly available resources. For each document, analysts have identified informative snippets (entries) with respect to common humanitarian frameworks and assigned one or more classes to each entry. See our paper for details.

### Supported Tasks and Leaderboards

This dataset is intended for multi-label text classification.

### Languages

The dataset is in English, French, and Spanish.

## Dataset Structure


### Data Instances

[More Information Needed]

### Data Fields

<div class="alert bg-success text-dark" cellspacing="0" style="width:100%">
  <table id="leaderboard_head_dctr" class="table table-bordered" cellspacing="0">
    <thead>
      <tr><th>entry_id</th><th>lead_id</th><th>project_id</th><th>sectors</th><th>pillars_1d</th><th>pillars_2d</th><th>subpillars_1d</th><th>subpillars_2d</th><th>lang</th><th>n_tokens</th><th>project_title</th><th>created_at</th><th>document</th><th>excerpt</th></tr>
    </thead>
  </table>
</div>

- **entry_id**: unique identification number for a given entry. (int64)
- **lead_id**: unique identification number for the document to which the corresponding entry belongs. (int64)
- **sectors**, **pillars_1d**, **pillars_2d**, **subpillars_1d**, **subpillars_2d**: labels assigned to the corresponding entry. Since this is a multi-label dataset (each entry may have several annotations belonging to the same category), they are reported as arrays of strings. See the paper for a detailed description of these categories. (list)
- **lang**: language. (str)
- **n_tokens**: number of tokens (tokenized using NLTK v3.7 library). (int64)
- **project_title**: the name of the project where the corresponding annotation was created. (str)
- **created_at**: date and time of creation of the annotation in standard ISO 8601 format. (str)
- **document**: document URL source of the excerpt. (str)
- **excerpt**: excerpt text. (str)
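Since the label fields (`sectors`, `pillars_1d`, etc.) are lists of strings, they typically need to be binarized before training a multi-label classifier. A minimal sketch of this step, using illustrative label names rather than the full HumSet taxonomy:

```python
# Example records mimicking HumSet's multi-label fields.
# The excerpts and sector labels below are illustrative only.
records = [
    {"excerpt": "Flooding displaced thousands of families.",
     "sectors": ["Shelter", "Protection"]},
    {"excerpt": "Clinics report rising malnutrition rates.",
     "sectors": ["Health", "Nutrition"]},
]

# Build a fixed label vocabulary from the data (in practice,
# derive it from the full training split).
labels = sorted({lab for r in records for lab in r["sectors"]})

def multi_hot(assigned, vocabulary):
    """Convert a list of label strings into a 0/1 vector."""
    assigned = set(assigned)
    return [1 if lab in assigned else 0 for lab in vocabulary]

vectors = [multi_hot(r["sectors"], labels) for r in records]
print(labels)   # ['Health', 'Nutrition', 'Protection', 'Shelter']
print(vectors)  # [[0, 0, 1, 1], [1, 1, 0, 0]]
```

The same encoding applies to any of the five label fields; a real pipeline would use a library utility such as scikit-learn's `MultiLabelBinarizer` instead of the hand-rolled helper.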

### Data Splits

The dataset provides train/validation/test splits with 117,435, 16,039, and 15,147 examples, respectively.
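For orientation, the stated counts work out to roughly a 79/11/10 split over the 148,621 total entries:

```python
# Split sizes as stated in this dataset card.
splits = {"train": 117435, "validation": 16039, "test": 15147}
total = sum(splits.values())  # 148621

for name, n in splits.items():
    print(f"{name}: {n} ({100 * n / total:.1f}%)")
# train: 117435 (79.0%)
# validation: 16039 (10.8%)
# test: 15147 (10.2%)
```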

## Dataset Creation

The collection originated from a multi-organizational platform called <em>the Data Entry and Exploration Platform (DEEP)</em> developed and maintained by Data Friendly Space (DFS). The platform facilitates classifying primarily qualitative information with respect to analysis frameworks and allows for collaborative classification and annotation of secondary data. 

### Curation Rationale

[More Information Needed]

### Source Data

Documents are selected from different sources, ranging from official reports by humanitarian organizations to international and national media articles. See the paper for more information.

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]


#### Annotation process

HumSet is curated by humanitarian analysts and covers various disasters around the globe that occurred from 2018 to 2021 across 46 humanitarian response projects. The dataset consists of approximately 17K annotated documents in three languages (English, French, and Spanish), originally taken from publicly available resources. For each document, analysts have identified informative snippets (entries, or `excerpt` in the imported dataset) with respect to common <em>humanitarian frameworks</em> and assigned one or more classes to each entry.

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

NLP team at [Data Friendly Space](https://datafriendlyspace.org/)

### Licensing Information

This dataset is released under the Apache License 2.0, as stated in the GitHub repository that houses it.

### Citation Information

```
@misc{https://doi.org/10.48550/arxiv.2210.04573,
  doi = {10.48550/ARXIV.2210.04573},
  url = {https://arxiv.org/abs/2210.04573},
  author = {Fekih, Selim and Tamagnone, Nicolò and Minixhofer, Benjamin and Shrestha, Ranjan and Contla, Ximena and Oglethorpe, Ewan and Rekabsaz, Navid},
  keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {HumSet: Dataset of Multilingual Information Extraction and Classification for Humanitarian Crisis Response},
  publisher = {arXiv},
  year = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```