---
YAML tags:
- copy-paste the tags obtained with the tagging app: https://github.com/huggingface/datasets-tagging
---

# Dataset Card for [Dataset Name]

## Table of Contents
- [Dataset Card for [Dataset Name]](#dataset-card-for-dataset-name)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Source Data](#source-data)
      - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
      - [Who are the source language producers?](#who-are-the-source-language-producers)
    - [Annotations](#annotations)
      - [Annotation process](#annotation-process)
      - [Who are the annotators?](#who-are-the-annotators)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Discussion of Biases](#discussion-of-biases)
    - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

This dataset is based on abstracts, complemented with figure legends, from the open-access section of Europe PubMed Central and is intended for training language models in the domain of biology. It can be used for random masked language modeling or for masked language modeling restricted to specific parts of speech. More details on the generation and use of the dataset are available at [source-data/soda-roberta](https://github.com/source-data/soda-roberta).

### Supported Tasks and Leaderboards

- `MLM`: masked language modeling
- `DET`: part-of-speech masked language modeling, with determiners (`DET`) tagged
- `SMALL`: part-of-speech masked language modeling, with "small" words (`DET`, `CCONJ`, `SCONJ`, `ADP`, `PRON`) tagged
- `VERB`: part-of-speech masked language modeling, with verbs (`VERB`) tagged

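The part-of-speech tasks use the tags stored alongside the tokens to decide which positions to mask. The snippet below is a minimal, illustrative sketch of that idea, not the exact soda-roberta collator; it assumes the field names shown under [Data Instances](#data-instances) and masks only the positions tagged `VERB`.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

def mask_by_pos(example, pos_to_mask=("VERB",)):
    """Replace tokens whose part-of-speech tag is in `pos_to_mask` with <mask>."""
    input_ids = list(example["input_ids"])
    labels = [-100] * len(input_ids)  # positions set to -100 are ignored by the MLM loss
    for i, (tag, special) in enumerate(zip(example["label_ids"],
                                           example["special_tokens_mask"])):
        if special == 0 and tag in pos_to_mask:
            labels[i] = input_ids[i]                # the model must recover this token
            input_ids[i] = tokenizer.mask_token_id  # hide it behind <mask>
    return {"input_ids": input_ids, "labels": labels}
```

For the `SMALL` setting, the same logic would apply with `pos_to_mask=("DET", "CCONJ", "SCONJ", "ADP", "PRON")`.
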
### Languages

English

## Dataset Structure

### Data Instances

```json
{
    "input_ids": [
        0, 2444, 6997, 46162, 7744, 35, 20632, 20862, 3457, 36, 500, 23858, 29, 43, 32, 3919, 716, 15, 49, 4476, 4, 1398, 6, 52, 1118, 5, 20862, 819, 9, 430, 23305, 248, 23858, 29, 4, 256, 40086, 104, 35, 1927, 1069, 459, 1484, 58, 4776, 13, 23305, 634, 16706, 493, 2529, 8954, 14475, 73, 34263, 6, 4213, 718, 833, 12, 24291, 4473, 22500, 14475, 73, 510, 705, 73, 34263, 6, 5143, 4313, 2529, 8954, 14475, 73, 34263, 6, 8, 5143, 4313, 2529, 8954, 14475, 248, 23858, 29, 23, 4448, 225, 4722, 2392, 11, 9341, 261, 4, 49043, 35, 96, 746, 6, 5962, 9, 38415, 4776, 408, 36, 3897, 4, 398, 8871, 56, 23305, 4, 20, 15608, 21, 8061, 6164, 207, 13, 70, 248, 23858, 29, 6, 150, 5, 42561, 21, 8061, 5663, 207, 13, 80, 3457, 4, 509, 1296, 5129, 21567, 3457, 36, 398, 23528, 8748, 22065, 11654, 35, 7253, 15, 49, 4476, 6, 70, 3457, 4682, 65, 189, 28, 5131, 13, 23305, 9726, 4, 2
    ],
    "label_ids": [
        "X", "NOUN", "NOUN", "NOUN", "NOUN", "PUNCT", "ADJ", "ADJ", "NOUN", "PUNCT", "PROPN", "PROPN", "PROPN", "PUNCT", "AUX", "VERB", "VERB", "ADP", "DET", "NOUN", "PUNCT", "ADV", "PUNCT", "PRON", "VERB", "DET", "ADJ", "NOUN", "ADP", "ADJ", "NOUN", "NOUN", "NOUN", "NOUN", "PUNCT", "ADJ", "ADJ", "ADJ", "PUNCT", "NOUN", "NOUN", "NOUN", "NOUN", "AUX", "VERB", "ADP", "NOUN", "VERB", "PROPN", "PROPN", "PROPN", "PROPN", "PROPN", "SYM", "PROPN", "PUNCT", "PROPN", "PROPN", "PROPN", "PUNCT", "PROPN", "PROPN", "PROPN", "PROPN", "SYM", "PROPN", "PROPN", "SYM", "PROPN", "PUNCT", "PROPN", "PROPN", "PROPN", "PROPN", "PROPN", "SYM", "PROPN", "PUNCT", "CCONJ", "ADJ", "PROPN", "PROPN", "PROPN", "PROPN", "NOUN", "NOUN", "NOUN", "ADP", "PROPN", "PROPN", "PROPN", "PROPN", "ADP", "PROPN", "PROPN", "PUNCT", "PROPN", "PUNCT", "ADP", "NOUN", "PUNCT", "NUM", "ADP", "NUM", "VERB", "NOUN", "PUNCT", "NUM", "NUM", "NUM", "NOUN", "AUX", "NOUN", "PUNCT", "DET", "NOUN", "AUX", "X", "NUM", "NOUN", "ADP", "DET", "NOUN", "NOUN", "NOUN", "PUNCT", "SCONJ", "DET", "NOUN", "AUX", "X", "NUM", "NOUN", "ADP", "NUM", "NOUN", "PUNCT", "NUM", "NOUN", "VERB", "ADJ", "NOUN", "PUNCT", "NUM", "NOUN", "NOUN", "NOUN", "NOUN", "PUNCT", "VERB", "ADP", "DET", "NOUN", "PUNCT", "DET", "NOUN", "SCONJ", "PRON", "VERB", "AUX", "VERB", "ADP", "NOUN", "NOUN", "PUNCT", "X"
    ],
    "special_tokens_mask": [
        1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1
    ]
}
```

### Data Fields

- `input_ids`: token ids in the `roberta-base` vocabulary.
- `label_ids`: part-of-speech tags obtained with spaCy.
- `special_tokens_mask`: flag marking special tokens (`1` for special tokens such as `<s>` and `</s>`, `0` otherwise).

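As a quick sanity check, the `input_ids` can be decoded with the `roberta-base` tokenizer and lined up with the part-of-speech labels. The minimal sketch below reuses the first few values of the instance shown above.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# first few values of the instance shown under "Data Instances"
example = {
    "input_ids": [0, 2444, 6997, 46162, 7744, 35],
    "label_ids": ["X", "NOUN", "NOUN", "NOUN", "NOUN", "PUNCT"],
    "special_tokens_mask": [1, 0, 0, 0, 0, 0],
}

tokens = tokenizer.convert_ids_to_tokens(example["input_ids"])
for token, tag, special in zip(tokens, example["label_ids"], example["special_tokens_mask"]):
    suffix = "  (special token)" if special else ""
    print(f"{token:>12}  {tag}{suffix}")
```
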
### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

The dataset was assembled to train models in the field of cell and molecular biology. To expand the size of the dataset and to include many examples with highly technical language, abstracts were complemented with figure legends.

### Source Data

#### Initial Data Collection and Normalization

The XML content of the papers was downloaded in January 2021 from the open-access section of [Europe PMC](https://europepmc.org/downloads/openaccess). Figure legends and abstracts were extracted from the JATS XML, tokenized with the `roberta-base` tokenizer and part-of-speech tagged with spaCy's `en_core_web_sm` model (https://spacy.io).

More details are available at [source-data/soda-roberta](https://github.com/source-data/soda-roberta).

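The snippet below is a rough sketch of this step, not the exact soda-roberta pipeline: a piece of text is tokenized with the `roberta-base` tokenizer and part-of-speech tagged with spaCy's `en_core_web_sm` model, and the word-level tags are propagated to the sub-word tokens. The example sentence and the character-offset alignment heuristic are illustrative assumptions.

```python
import spacy
from transformers import AutoTokenizer

nlp = spacy.load("en_core_web_sm")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

text = "Memory CD8 T cells are maintained by homeostatic proliferation."  # illustrative sentence

doc = nlp(text)  # word-level part-of-speech tags
enc = tokenizer(text, return_offsets_mapping=True, return_special_tokens_mask=True)

# propagate each word's POS tag to the sub-word tokens that cover its characters
label_ids = []
for (start, end), special in zip(enc["offset_mapping"], enc["special_tokens_mask"]):
    if special:
        label_ids.append("X")  # special tokens such as <s> and </s>
        continue
    covering = [t.pos_ for t in doc if t.idx <= start < t.idx + len(t.text)]
    label_ids.append(covering[0] if covering else "X")

record = {
    "input_ids": enc["input_ids"],
    "label_ids": label_ids,
    "special_tokens_mask": enc["special_tokens_mask"],
}
```
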
#### Who are the source language producers?

Expert scientists.

### Annotations

#### Annotation process

Part-of-speech tags were assigned automatically.

#### Who are the annotators?

spaCy's `en_core_web_sm` model (https://spacy.io) was used for part-of-speech tagging.

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

Thomas Lemberger

### Licensing Information

CC-BY 4.0

### Citation Information

[More Information Needed]

### Contributions

Thanks to [@tlemberger](https://github.com/tlemberger) for adding this dataset.