system HF staff committed on
Commit
6a4d4b9
0 Parent(s):

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

Files changed (5)
  1. .gitattributes +27 -0
  2. README.md +198 -0
  3. dataset_infos.json +0 -0
  4. dummy/2021/1.0.0/dummy_data.zip +3 -0
  5. pubmed.py +398 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,198 @@
+ ---
+ annotations_creators:
+ - crowdsourced
+ language_creators:
+ - crowdsourced
+ languages:
+ - en
+ licenses:
+ - other-nlm-license
+ multilinguality:
+ - monolingual
+ size_categories:
+ - n<1K
+ source_datasets:
+ - original
+ task_categories:
+ - conditional-text-generation
+ - sequence-modeling
+ - text-classification
+ - text-scoring
+ task_ids:
+ - language-modeling
+ - other-structured-to-text
+ - text-scoring-other-citation-estimation
+ - topic-classification
+ ---
+
+ # Dataset Card for PubMed
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** https://www.nlm.nih.gov/databases/download/pubmed_medline.html
+ - **Documentation:** https://www.nlm.nih.gov/databases/download/pubmed_medline_documentation.html
+ - **Repository:**
+ - **Paper:**
+ - **Leaderboard:**
+ - **Point of Contact:**
+
+ ### Dataset Summary
+
+ NLM produces a baseline set of MEDLINE/PubMed citation records in XML format for download on an annual basis. The annual baseline is released in December of each year. Each day, NLM produces update files that include new, revised and deleted citations. See the NLM documentation page for more information.
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ - English
+
+ ## Dataset Structure
+
+ Bear in mind that the data comes from XML files with various tags that are hard to reflect in a concise JSON format. Typed fields and lists do not map naturally onto XML documents, so this library had to make some choices about the data. "Journal" info was dropped altogether, as it would have led to many fields being empty all the time.
+
+ The hierarchy is also a bit unnatural, but the choice was made to stay as close as possible to the original data, so that future NLM releases that change the schema are easier to absorb.
+
+ "Author" has been kept and contains "ForeName", "LastName", "Initials", and "CollectiveName". (All of the fields are always present, but only some of them are filled.)
+
+ ### Data Instances
+
+ ```json
+ {
+     "MedlineCitation": {
+         "PMID": 0,
+         "DateCompleted": {"Year": 0, "Month": 0, "Day": 0},
+         "NumberOfReferences": 0,
+         "DateRevised": {"Year": 0, "Month": 0, "Day": 0},
+         "Article": {
+             "Abstract": {"AbstractText": "Some abstract (can be missing)"},
+             "ArticleTitle": "Article title",
+             "AuthorList": {"Author": [
+                 {"LastName": "Doe", "ForeName": "John", "Initials": "JD", "CollectiveName": ""},
+                 {"CollectiveName": "The Manhattan Project", "LastName": "", "ForeName": "", "Initials": ""}
+             ]},
+             "Language": "en",
+             "GrantList": {
+                 "Grant": []
+             },
+             "PublicationTypeList": {"PublicationType": []}
+         },
+         "MedlineJournalInfo": {"Country": "France"},
+         "ChemicalList": {"Chemical": [{
+             "RegistryNumber": "XX",
+             "NameOfSubstance": "Methanol"
+         }]},
+         "CitationSubset": "AIM",
+         "MeshHeadingList": {
+             "MeshHeading": []
+         }
+     },
+     "PubmedData": {
+         "ArticleIdList": {"ArticleId": "10.1002/bjs.1800650203"},
+         "PublicationStatus": "ppublish",
+         "History": {"PubMedPubDate": [{"Year": 0, "Month": 0, "Day": 0}]},
+         "ReferenceList": [{"Citation": "Somejournal", "CitationId": 1}]
+     }
+ }
+ ```
+
+ ### Data Fields
+
+ The main fields that will probably interest people are:
+
+ - "MedlineCitation" > "Article" > "AuthorList" > "Author"
+ - "MedlineCitation" > "Article" > "Abstract" > "AbstractText"
+ - "MedlineCitation" > "Article" > "ArticleTitle"
+ - "MedlineCitation" > "ChemicalList" > "Chemical"
+ - "MedlineCitation" > "NumberOfReferences"
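For illustration, these field paths can be walked as plain nested dicts. The sketch below uses a small hand-written record shaped like the instance above (not real data):

```python
# Hypothetical record shaped like the example instance above (not real data).
record = {
    "MedlineCitation": {
        "Article": {
            "ArticleTitle": "Article title",
            "Abstract": {"AbstractText": "Some abstract"},
            "AuthorList": {"Author": [
                {"LastName": "Doe", "ForeName": "John", "Initials": "JD", "CollectiveName": ""},
                {"CollectiveName": "The Manhattan Project", "LastName": "", "ForeName": "", "Initials": ""},
            ]},
        },
        "ChemicalList": {"Chemical": [{"RegistryNumber": "XX", "NameOfSubstance": "Methanol"}]},
        "NumberOfReferences": 0,
    },
}

# Each "a" > "b" > "c" path in the list above is a chain of dict lookups.
title = record["MedlineCitation"]["Article"]["ArticleTitle"]
abstract = record["MedlineCitation"]["Article"]["Abstract"]["AbstractText"]
authors = record["MedlineCitation"]["Article"]["AuthorList"]["Author"]
# An author is either a person (ForeName/LastName filled) or a collective.
names = [a["CollectiveName"] or f'{a["ForeName"]} {a["LastName"]}' for a in authors]
print(names)
# ['John Doe', 'The Manhattan Project']
```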
+
+ ### Data Splits
+
+ There are no splits in this dataset. It is given as is.
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ https://www.nlm.nih.gov/databases/download/pubmed_medline_faq.html
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ https://www.nlm.nih.gov/databases/download/terms_and_conditions.html
+
+ ### Citation Information
+
+ [More Information Needed]
dataset_infos.json ADDED
The diff for this file is too large to render. See raw diff
 
dummy/2021/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e0ef6fe2745e3d8469ea99bdb1eecf97d87d75c93055c051c80912c6ab51885e
+ size 1920
pubmed.py ADDED
@@ -0,0 +1,398 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """MEDLINE/PubMed citation records, as distributed by NLM."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import copy
+ import logging
+ import xml.etree.ElementTree as etree
+
+ import datasets
+
+
+ logger = logging.getLogger(__name__)
+
+
+ # Find for instance the citation on arxiv or on the dataset repo/website
+ _CITATION = """\
+ """
+
+ # You can copy an official description
+ _DESCRIPTION = """\
+ NLM produces a baseline set of MEDLINE/PubMed citation records in XML format for download on an annual basis. The annual baseline is released in December of each year. Each day, NLM produces update files that include new, revised and deleted citations. See our documentation page for more information.
+ """
+
+ _HOMEPAGE = "https://www.nlm.nih.gov/databases/download/pubmed_medline.html"
+
+ _LICENSE = ""
+
+ # The HuggingFace datasets library doesn't host the datasets but only points to the original files.
+ # This can be an arbitrary nested dict/list of URLs (see below in the `_split_generators` method).
+ _URLs = [f"ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed21n{i:04d}.xml.gz" for i in range(1, 1063)]
+
+
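For reference, the comprehension above expands to 1,062 gzipped baseline shard URLs; the `{i:04d}` format spec zero-pads the shard index to four digits. A quick check of the pattern:

```python
# Mirrors the _URLs definition above: one URL per baseline shard.
urls = [f"ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed21n{i:04d}.xml.gz" for i in range(1, 1063)]
print(len(urls))   # 1062
print(urls[0])     # ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed21n0001.xml.gz
print(urls[-1])    # ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed21n1062.xml.gz
```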
+ # Copyright Ferry Boender, released under the MIT license.
+ # Modified by @Narsil to handle more oddities
+ def deepupdate(target, src):
+     """Deep update target dict with src.
+
+     For each k,v in src: if k doesn't exist in target, it is deep copied from
+     src to target. Otherwise, if v is a list, target[k] is extended with
+     src[k]. If v is a set, target[k] is updated with v. If v is a dict,
+     recursively deep-update it.
+
+     Examples:
+     >>> t = {'name': 'Ferry', 'hobbies': ['programming', 'sci-fi']}
+     >>> deepupdate(t, {'hobbies': ['gaming']})
+     >>> print(t)
+     {'name': 'Ferry', 'hobbies': ['programming', 'sci-fi', 'gaming']}
+     """
+     for k, v in src.items():
+         if k in target and isinstance(target[k], int) and isinstance(v, str):
+             try:
+                 v = int(v)
+             except Exception:
+                 pass
+         if k in target and type(target[k]) != type(v):
+             logger.warning(f"Ignoring field {k}: it's a {type(v)} and we expect a {type(target[k])}")
+             continue
+
+         if type(v) == list:
+             if k not in target:
+                 target[k] = copy.deepcopy(v)
+             elif isinstance(target[k], list):
+                 target[k].extend(v)
+             elif isinstance(target[k], str):
+                 # Very special case to handle `AbstractText`, which sometimes
+                 # ends up being a list.
+                 new_v = " ".join(el for el in v if isinstance(el, str))
+                 target[k] = new_v
+             else:
+                 logger.warning(f"Ignoring field {k}: it's a {type(v)} and we expect a {type(target[k])}")
+         elif type(v) == dict:
+             if k not in target:
+                 target[k] = copy.deepcopy(v)
+             elif isinstance(target[k], dict):
+                 deepupdate(target[k], v)
+             else:
+                 logger.warning(f"Ignoring field {k}: it's a {type(v)} and we expect a {type(target[k])}")
+         elif type(v) == set:
+             if k not in target:
+                 target[k] = v.copy()
+             elif isinstance(target[k], set):
+                 target[k].update(v.copy())
+             else:
+                 logger.warning(f"Ignoring field {k}: it's a {type(v)} and we expect a {type(target[k])}")
+         else:
+             if k in target and isinstance(target[k], (list, tuple, dict)):
+                 logger.warning(f"Ignoring field {k}: it's a {type(v)} and we expect a {type(target[k])}")
+                 continue
+
+             target[k] = copy.copy(v)
+
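To illustrate the merge semantics without the type-checking noise, here is a condensed, hypothetical `deep_merge` helper with the same core behavior (lists extend, dicts recurse, scalars overwrite). It is not part of the loader itself, just a sketch of what `deepupdate` does to a default article skeleton:

```python
import copy

def deep_merge(target, src):
    """Simplified sketch of deepupdate: lists extend, dicts recurse,
    everything else is (shallow-)copied over."""
    for k, v in src.items():
        if isinstance(v, list):
            target.setdefault(k, []).extend(copy.deepcopy(v))
        elif isinstance(v, dict):
            deep_merge(target.setdefault(k, {}), v)
        else:
            target[k] = copy.copy(v)

t = {"name": "Ferry", "hobbies": ["programming", "sci-fi"]}
deep_merge(t, {"hobbies": ["gaming"], "name": "Boender"})
print(t)
# {'name': 'Boender', 'hobbies': ['programming', 'sci-fi', 'gaming']}
```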
+
+
+ def default_date():
+     return {"Year": 0, "Month": 0, "Day": 0}
+
+
+ def default_inline_article():
+     return {
+         # 'Journal': Journal,
+         "Abstract": {"AbstractText": ""},
+         "ArticleTitle": "",
+         # 'Pagination': {'MedlinePgn': datasets.Value('string')},
+         "AuthorList": {"Author": []},
+         "Language": "",
+         "GrantList": {
+             "Grant": [],
+         },
+         "PublicationTypeList": {"PublicationType": []},
+     }
+
+
+ def default_article():
+     return {
+         "MedlineCitation": {
+             "PMID": 0,
+             "DateCompleted": default_date(),
+             "NumberOfReferences": 0,
+             "DateRevised": default_date(),
+             "Article": default_inline_article(),
+             "MedlineJournalInfo": {"Country": ""},
+             "ChemicalList": {"Chemical": []},
+             "CitationSubset": "",
+             "MeshHeadingList": {"MeshHeading": []},
+         },
+         "PubmedData": {
+             "ArticleIdList": [{"ArticleId": []}],
+             "PublicationStatus": "",
+             "History": {"PubMedPubDate": []},
+             "ReferenceList": [],
+         },
+     }
+
+
+ class Pubmed(datasets.GeneratorBasedBuilder):
+     """Pubmed citation records."""
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="2021", description="The 2021 annual record", version=datasets.Version("1.0.0")),
+     ]
+
+     # Filled automatically from the features
+     SIMPLE_KEYS = {"PubmedArticleSet"}
+     LIST_KEYS = {"PubmedArticle"}
+     IGNORE_KEYS = set()
+
+     def fill_keys_from_features(self, features):
+         if isinstance(features, dict):
+             for key, value in features.items():
+                 if isinstance(value, datasets.Sequence):
+                     self.LIST_KEYS.add(key)
+                     self.fill_keys_from_features(value.feature)
+                 else:
+                     self.SIMPLE_KEYS.add(key)
+                     self.fill_keys_from_features(value)
+
+     def xml_to_dictionnary(self, parentElement):
+         data = {}
+         if parentElement.tag in {"AbstractText", "ArticleTitle"}:
+             # Very special case: these tags may contain HTML, leading to a
+             # very odd structure, so keep their inner markup as a raw string.
+             tag = parentElement.tag
+             string = etree.tostring(parentElement).decode("utf-8").strip()
+             inner_string = string[len(f"<{tag}>") : -len(f"</{tag}>")]
+             return {parentElement.tag: inner_string}
+
+         for child in list(parentElement):
+             child.text = child.text if (child.text is not None) else " "
+             key = child.tag
+             if len(child) == 0:
+                 value = child.text.strip()
+             else:
+                 value = self.xml_to_dictionnary(child)
+                 if isinstance(value, dict) and set(value.keys()) == {key}:
+                     value = value[key]
+
+             if key in data:
+                 old_value = data[key]
+                 if isinstance(old_value, dict):
+                     data[key] = [old_value, value]
+                 elif isinstance(old_value, list):
+                     data[key].append(value)
+             elif key in self.LIST_KEYS:
+                 data[key] = [value]
+             elif key in self.SIMPLE_KEYS:
+                 data[key] = value
+             elif key in self.IGNORE_KEYS:
+                 continue
+             else:
+                 logger.info(f"Ignoring key {key} from {parentElement.tag}")
+                 self.IGNORE_KEYS.add(key)
+
+         # Filling defaults
+         if parentElement.tag == "MeshHeading" and "QualifierName" not in data:
+             data["QualifierName"] = ""
+         elif parentElement.tag == "Author":
+             if "ForeName" not in data:
+                 data["ForeName"] = ""
+             if "Initials" not in data:
+                 data["Initials"] = ""
+             if "LastName" not in data:
+                 data["LastName"] = ""
+             if "CollectiveName" not in data:
+                 data["CollectiveName"] = ""
+         elif parentElement.tag == "JournalIssue":
+             if "Volume" not in data:
+                 data["Volume"] = ""
+             if "Issue" not in data:
+                 data["Issue"] = ""
+         elif parentElement.tag == "Grant" and "GrantID" not in data:
+             data["GrantID"] = ""
+
+         return {parentElement.tag: data}
+
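The conversion above can be hard to picture from the code alone. A stripped-down, hypothetical sketch of the recursion (no LIST_KEYS/SIMPLE_KEYS bookkeeping, no HTML special case, no default filling) behaves like this:

```python
import xml.etree.ElementTree as etree

def xml_to_dict(element):
    # Simplified sketch: a leaf node becomes {tag: text}; a node with
    # children becomes {tag: {child_tag: ...}} (repeated tags not handled).
    if len(element) == 0:
        return {element.tag: (element.text or "").strip()}
    data = {}
    for child in element:
        data.update(xml_to_dict(child))
    return {element.tag: data}

root = etree.fromstring("<Author><LastName>Doe</LastName><ForeName>John</ForeName></Author>")
print(xml_to_dict(root))
# {'Author': {'LastName': 'Doe', 'ForeName': 'John'}}
```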
+     def _info(self):
+         Date = {
+             "Year": datasets.Value("int32"),
+             "Month": datasets.Value("int32"),
+             "Day": datasets.Value("int32"),
+         }
+
+         MeshHeading = {"DescriptorName": datasets.Value("string"), "QualifierName": datasets.Value("string")}
+
+         MedlineJournalInfo = {
+             "Country": datasets.Value("string"),
+             # Too inconsistent
+             # 'MedlineTA': datasets.Value('string'),
+             # 'NlmUniqueID': datasets.Value('string'),
+             # 'ISSNLinking': datasets.Value('string'),
+         }
+         Chemical = {
+             "RegistryNumber": datasets.Value("string"),
+             "NameOfSubstance": datasets.Value("string"),
+         }
+         # Too inconsistent in the data to be used
+         # Journal = {
+         #     'ISSN': datasets.Value('string'),
+         #     'JournalIssue': {
+         #         'Volume': datasets.Value('string'),
+         #         'Issue': datasets.Value('string'),
+         #     },
+         #     # 'PubDate': Date,
+         #     'Title': datasets.Value('string'),
+         #     'ISOAbbreviation': datasets.Value('string')
+         # }
+         Author = {
+             "LastName": datasets.Value("string"),
+             "ForeName": datasets.Value("string"),
+             "Initials": datasets.Value("string"),
+             "CollectiveName": datasets.Value("string"),
+         }
+         Reference = {
+             "Citation": datasets.Value("string"),
+             "CitationId": datasets.Value("int32"),
+         }
+         Grant = {
+             "GrantID": datasets.Value("string"),
+             "Agency": datasets.Value("string"),
+             "Country": datasets.Value("string"),
+         }
+         Article = {
+             # 'Journal': Journal,
+             "Abstract": {"AbstractText": datasets.Value("string")},
+             "ArticleTitle": datasets.Value("string"),
+             # Too inconsistent
+             # 'Pagination': {'MedlinePgn': datasets.Value('string')},
+             "AuthorList": {"Author": datasets.Sequence(Author)},
+             "Language": datasets.Value("string"),
+             "GrantList": {
+                 "Grant": datasets.Sequence(Grant),
+             },
+             "PublicationTypeList": {"PublicationType": datasets.Sequence(datasets.Value("string"))},
+         }
+         features = datasets.Features(
+             {
+                 "MedlineCitation": {
+                     "PMID": datasets.Value("int32"),
+                     "DateCompleted": Date,
+                     "NumberOfReferences": datasets.Value("int32"),
+                     "DateRevised": Date,
+                     "Article": Article,
+                     "MedlineJournalInfo": MedlineJournalInfo,
+                     "ChemicalList": {"Chemical": datasets.Sequence(Chemical)},
+                     "CitationSubset": datasets.Value("string"),
+                     "MeshHeadingList": {
+                         "MeshHeading": datasets.Sequence(MeshHeading),
+                     },
+                 },
+                 "PubmedData": {
+                     "ArticleIdList": datasets.Sequence({"ArticleId": datasets.Sequence(datasets.Value("string"))}),
+                     "PublicationStatus": datasets.Value("string"),
+                     "History": {"PubMedPubDate": datasets.Sequence(Date)},
+                     "ReferenceList": datasets.Sequence(Reference),
+                 },
+             }
+         )
+         self.fill_keys_from_features(features)
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types.
+             features=features,
+             # If there's a common (input, target) tuple from the features,
+             # specify them here. They'll be used if as_supervised=True in
+             # builder.as_dataset.
+             supervised_keys=None,
+             # Homepage of the dataset for documentation
+             homepage=_HOMEPAGE,
+             # License for the dataset if available
+             license=_LICENSE,
+             # Citation for the dataset
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         dl_dir = dl_manager.download_and_extract(_URLs)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 # These kwargs will be passed to _generate_examples
+                 gen_kwargs={"filenames": dl_dir},
+             ),
+         ]
+
+     def update_citation(self, article):
+         """
+         `ArticleId` and `ArticleIdList` are already-used field names, so we rewrite
+         and flatten those as {Citation, CitationId}.
+         """
+         citations = []
+         try:
+             list_ = article["PubmedData"]["ReferenceList"]
+         except Exception:
+             return
+
+         for ref in list_:
+             if "Reference" not in ref:
+                 continue
+             for reference in ref["Reference"]:
+                 if "Citation" not in reference:
+                     continue
+                 citation = reference["Citation"]
+                 if "ArticleIdList" not in reference:
+                     continue
+                 for r in reference["ArticleIdList"]:
+                     if "ArticleId" not in r:
+                         continue
+                     for rr in r["ArticleId"]:
+                         try:
+                             citations.append({"Citation": citation, "CitationId": int(rr)})
+                         except Exception:
+                             continue
+         article["PubmedData"]["ReferenceList"] = citations
+
+     def _generate_examples(self, filenames):
+         """Yields examples."""
+         id_ = 0
+         for filename in filenames:
+             try:
+                 tree = etree.parse(filename)
+                 root = tree.getroot()
+                 xmldict = self.xml_to_dictionnary(root)
+             except etree.ParseError:
+                 logger.warning(f"Ignoring file {filename}, it is malformed")
+                 continue
+
+             for article in xmldict["PubmedArticleSet"]["PubmedArticle"]:
+                 self.update_citation(article)
+                 new_article = default_article()
+
+                 try:
+                     deepupdate(new_article, article)
+                 except Exception:
+                     logger.warning(f"Ignoring article {article}, it is malformed")
+                     continue
+
+                 try:
+                     _ = self.info.features.encode_example(new_article)
+                 except Exception as e:
+                     logger.warning(f"Ignoring example because of {e}")
+                     continue
+                 yield id_, new_article
+                 id_ += 1