ruslan committed on
Commit
4f12f10
1 Parent(s): 8bc12db

Fill-in the template of the dataset card

Files changed (1)
  1. README.md +67 -22
README.md CHANGED
@@ -66,92 +66,137 @@ task_ids:
  ### Dataset Summary

- [More Information Needed]

  ### Supported Tasks and Leaderboards

- [More Information Needed]

  ### Languages

- [More Information Needed]

  ## Dataset Structure

  ### Data Instances

- [More Information Needed]

  ### Data Fields

- [More Information Needed]

  ### Data Splits

- [More Information Needed]

  ## Dataset Creation

  ### Curation Rationale

- [More Information Needed]

  ### Source Data

  #### Initial Data Collection and Normalization

- [More Information Needed]

  #### Who are the source language producers?

- [More Information Needed]

  ### Annotations

  #### Annotation process

- [More Information Needed]

  #### Who are the annotators?

- [More Information Needed]

  ### Personal and Sensitive Information

- [More Information Needed]

  ## Considerations for Using the Data

  ### Social Impact of Dataset

- [More Information Needed]

  ### Discussion of Biases

- [More Information Needed]

  ### Other Known Limitations

- [More Information Needed]

  ## Additional Information

  ### Dataset Curators

- [More Information Needed]

  ### Licensing Information

- [More Information Needed]

  ### Citation Information

- [More Information Needed]

  ### Contributions

- Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
- ---
- license: apache-2.0
- ---
 
  ### Dataset Summary

+ *BioLeaflets* is a biomedical dataset for data-to-text generation: a corpus of 1,336 package leaflets of medicines authorised in Europe, obtained by scraping the European Medicines Agency (EMA) website.
+ Package leaflets are included in the packaging of medicinal products and contain information to help patients use the product safely and appropriately.
+ The dataset covers the large majority (~90%) of medicinal products authorised through the centralised procedure in Europe as of January 2021.
+ For more detailed information, please see the paper in the [ACL Anthology](https://aclanthology.org/2021.inlg-1.40/).
 
  ### Supported Tasks and Leaderboards

+ BioLeaflets proposes a **conditional generation** (data-to-text) task in the biomedical domain: given an ordered set of entities as source, the goal is to produce a multi-sentence section.
+ Successful generation thus requires the model to learn the specific syntax, terminology, and writing style of the corpus. Alternatively, the dataset can be used for a **named-entity recognition** task: given text, detect medical entities.
+ The dataset also provides an extensive description of medicinal products and thus supports a plain **language modeling** task focused on biomedical data.
 
  ### Languages

+ Monolingual: English (`en`).
 
  ## Dataset Structure

  ### Data Instances

+ Each instance (leaflet) has a unique ID, a URL, a product name, and textual information clearly describing the medicine.
+ The content of the sections is not standardized (there is no template), yet it is still well structured.
+ Each document contains six sections:
+ 1) What is the product and what is it used for
+ 2) What you need to know before you take the product
+ 3) Product usage instructions
+ 4) Possible side effects
+ 5) Product storage conditions
+ 6) Other information
+
+ Every section is represented as a dictionary with `Title`, `Section_Content`, and `Entity_Recognition` as keys.
+ The content of each section is lower-cased and tokenized by treating all special characters as separate tokens.
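The lower-casing and tokenization step described above can be sketched roughly as follows; this is a minimal approximation, since the actual preprocessing code is not part of this card:

```python
import re

def tokenize(text):
    # Lower-case, then split every special (non-alphanumeric) character
    # into its own token, as described in the card. Details are assumed.
    text = text.lower()
    text = re.sub(r"([^\w\s])", r" \1 ", text)
    return text.split()

tokens = tokenize("Take 2 tablets (twice daily).")
# tokens == ['take', '2', 'tablets', '(', 'twice', 'daily', ')', '.']
```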
 
  ### Data Fields

+ - `ID`: a string, the unique ID assigned to a leaflet
+ - `URL`: a string, the link to the article on the European Medicines Agency (EMA) website
+ - `Product Name`: a string, the name of the medicine
+ - `Full Content`: a string, the full content of the article available at the URL
+ - `Section 1`: a dictionary with the content of section 1 and its associated medical entities
+ - `Section 2`: a dictionary with the content of section 2 and its associated medical entities
+ - `Section 3`: a dictionary with the content of section 3 and its associated medical entities
+ - `Section 4`: a dictionary with the content of section 4 and its associated medical entities
+ - `Section 5`: a dictionary with the content of section 5 and its associated medical entities
+ - `Section 6`: a dictionary with the content of section 6 and its associated medical entities
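For illustration, a single instance with the fields above might look like the following in Python. All values here are hypothetical, and the exact key spellings should be checked against the released data:

```python
# A hypothetical BioLeaflets instance, mirroring the schema described above.
sample = {
    "ID": "0001",                   # hypothetical ID
    "URL": "https://www.ema.europa.eu/en/medicines/human/EPAR/example",  # hypothetical
    "Product Name": "examplumab",   # hypothetical medicine name
    "Full Content": "...",
    "Section 1": {
        "Title": "1 . what examplumab is and what it is used for",
        "Section_Content": "examplumab is a medicine used to treat ...",
        "Entity_Recognition": ["examplumab", "1"],
    },
    # "Section 2" ... "Section 6" follow the same layout.
}

# Section parts are accessed by key:
entities = sample["Section 1"]["Entity_Recognition"]
```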
 
  ### Data Splits

+ We randomly split the dataset into training (80%), development (10%), and test (10%) sets; duplicates are removed.
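A minimal sketch of such an 80/10/10 split with duplicate removal; the concrete procedure, ordering, and seed below are assumptions, not taken from the card:

```python
import random

def split_dataset(documents, seed=42):
    # Drop duplicate leaflets (here: by ID), shuffle, then slice into
    # 80% train / 10% development / 10% test.
    unique = list({doc["ID"]: doc for doc in documents}.values())
    random.Random(seed).shuffle(unique)
    n = len(unique)
    n_train, n_dev = int(0.8 * n), int(0.1 * n)
    return (unique[:n_train],
            unique[n_train:n_train + n_dev],
            unique[n_train + n_dev:])

train, dev, test_set = split_dataset([{"ID": str(i)} for i in range(100)])
# len(train), len(dev), len(test_set) == 80, 10, 10
```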
 
  ## Dataset Creation

  ### Curation Rationale

+ BioLeaflets was introduced as a new biomedical dataset that can serve as a benchmark for biomedical text generation models.
+ It proposes a conditional generation task: given an ordered set of entities as source, the goal is to produce a multi-sentence section.
 
  ### Source Data

  #### Initial Data Collection and Normalization

+ The dataset was obtained by scraping the European Medicines Agency (EMA) website.
+ Each leaflet is associated with a URL pointing to its article on the EMA website.
 
  #### Who are the source language producers?

+ Labeling experts with domain knowledge produced the factual information.
 
  ### Annotations

  #### Annotation process

+ To create the required input for data-to-text generation, we augment each document by leveraging named-entity recognition (NER).
+ We combine two NER frameworks: Amazon Comprehend Medical (commercial) and Stanford Stanza (open source).
+ Additionally, we treat all digits as entities and add the medicine name as the first entity.
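The entity-building step above can be sketched as follows; `ner_a` and `ner_b` are hypothetical stand-ins for the outputs of the two NER frameworks, and the merging strategy is an assumption:

```python
def build_source_entities(medicine_name, tokens, ner_a, ner_b):
    # Medicine name first, then entities from both NER systems, then all
    # digit tokens; first occurrence wins, so the ordering is stable.
    entities, seen = [], set()
    for ent in [medicine_name, *ner_a, *ner_b,
                *[t for t in tokens if t.isdigit()]]:
        if ent not in seen:
            seen.add(ent)
            entities.append(ent)
    return entities

ents = build_source_entities(
    "examplumab",
    "take 2 tablets every 8 hours".split(),
    ["tablets"],
    ["hours"],
)
# ents == ['examplumab', 'tablets', 'hours', '2', '8']
```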
 
  #### Who are the annotators?

+ Machine-generated: an ensemble of state-of-the-art named-entity recognition (NER) models.
 
  ### Personal and Sensitive Information

+ Not present: the dataset contains no personal or sensitive information.
 
  ## Considerations for Using the Data

  ### Social Impact of Dataset

+ The purpose of this dataset is to help develop models that can automatically generate long paragraphs of text, and to facilitate the development of NLP models in the biomedical domain.
+ The main challenges this dataset poses for D2T generation are multi-sentence, multi-section target text, a small sample size, and specialized medical vocabulary and syntax.
 
  ### Discussion of Biases

+ Package leaflets are published for medicinal products approved in the European Union (EU).
+ They are included in the packaging of medicinal products and contain information to help patients use the product safely and appropriately.
+ The dataset represents factual information produced by labeling experts and validated before publication, so no particular biases are expected in the dataset.
+ Package leaflets are required to be written in a way that is clear and understandable.
 
  ### Other Known Limitations

+ [N/A]
 
  ## Additional Information

  ### Dataset Curators

+ The data was originally collected by Ruslan Yermakov<sup>*</sup>, Nicholas Drago, and Angelo Ziletti at Bayer AG (Decision Science & Advanced Analytics unit). The code is publicly available on [GitHub](https://github.com/bayer-science-for-a-better-life/data2text-bioleaflets).
+
+ <sup>*</sup> Work done during an internship.
 
  ### Licensing Information

+ The *BioLeaflets* dataset is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0).
 
  ### Citation Information

+ @inproceedings{yermakov-etal-2021-biomedical,
+     title = "Biomedical Data-to-Text Generation via Fine-Tuning Transformers",
+     author = "Yermakov, Ruslan and
+       Drago, Nicholas and
+       Ziletti, Angelo",
+     booktitle = "Proceedings of the 14th International Conference on Natural Language Generation",
+     month = aug,
+     year = "2021",
+     address = "Aberdeen, Scotland, UK",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2021.inlg-1.40",
+     pages = "364--370",
+     abstract = "Data-to-text (D2T) generation in the biomedical domain is a promising - yet mostly unexplored - field of research. Here, we apply neural models for D2T generation to a real-world dataset consisting of package leaflets of European medicines. We show that fine-tuned transformers are able to generate realistic, multi-sentence text from data in the biomedical domain, yet have important limitations. We also release a new dataset (BioLeaflets) for benchmarking D2T generation models in the biomedical domain.",
+ }
 
  ### Contributions

+ Thanks to [@wingedRuslan](https://github.com/wingedRuslan) for adding this dataset.