albertvillanova (HF staff) committed
Commit 8263182
1 Parent(s): 066c52d

Update 2024 annual baseline data URLs (#11)


- Update dataset card (9f8c0745a7a7e92d61fbe17ca8cd31519526f32d)
- Update URLs for 2024 baseline data files (161dafd06569c6ab8f07952c11b266ef65bc7cb8)
- Uncompress data files on the fly (3f9242821851476bb4998915b9e783cdcaae26b6)
- Update metadata in README (4b5ea5ef0f9075a5af64115704f1f9d807e18901)

Files changed (2)
  1. README.md +45 -10
  2. pubmed.py +27 -28
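
For context, this is how the updated 2024 configuration would typically be consumed through the `datasets` library. This is a minimal usage sketch added for illustration, not part of the commit; the repository id `ncbi/pubmed` and the need for `trust_remote_code=True` are assumptions about how this script-based dataset is hosted and loaded.

```python
# Minimal sketch (assumptions: repo id "ncbi/pubmed"; script-based loaders
# need trust_remote_code=True in recent versions of `datasets`).
from datasets import load_dataset

# Downloads the full 2024 annual baseline (~45 GB compressed), so the first run is long.
pubmed = load_dataset("ncbi/pubmed", "2024", split="train", trust_remote_code=True)

# Field names below are assumed from the PubMed XML schema (MedlineCitation/PMID).
record = pubmed[0]
print(record["MedlineCitation"]["PMID"])
```
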
README.md CHANGED
@@ -27,7 +27,7 @@ pretty_name: PubMed
 tags:
 - citation-estimation
 dataset_info:
-- config_name: '2023'
+- config_name: '2024'
   features:
   - name: MedlineCitation
     struct:
@@ -135,10 +135,10 @@ dataset_info:
       dtype: int32
   splits:
   - name: train
-    num_bytes: 52199025303
-    num_examples: 34960700
-  download_size: 41168762331
-  dataset_size: 52199025303
+    num_bytes: 54723097181
+    num_examples: 36555430
+  download_size: 45202943276
+  dataset_size: 54723097181
 ---
 
 # Dataset Card for PubMed
@@ -174,11 +174,18 @@ dataset_info:
 - **Repository:**
 - **Paper:**
 - **Leaderboard:**
-- **Point of Contact:**
+- **Point of Contact:** [National Center for Biotechnology Information](mailto:info@ncbi.nlm.nih.gov)
 
 ### Dataset Summary
 
-NLM produces a baseline set of MEDLINE/PubMed citation records in XML format for download on an annual basis. The annual baseline is released in December of each year. Each day, NLM produces update files that include new, revised and deleted citations. See our documentation page for more information.
+PubMed comprises more than 36 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full-text content from PubMed Central and publisher web sites.
+
+NLM produces a baseline set of PubMed citation records in XML format for download on an annual basis. The annual baseline is released in December of each year.
+- Last Updated December 15, 2023
+
+Each day, NLM produces update files that include new, revised, and deleted citations.
+
+Source: https://ftp.ncbi.nlm.nih.gov/pubmed/README.txt
 
 ### Supported Tasks and Leaderboards
 
@@ -260,13 +267,15 @@ There are no splits in this dataset. It is given as is.
 
 ### Curation Rationale
 
-[More Information Needed]
+The use of "Medline" in an element name does not mean the record represents a citation from a MEDLINE-selected journal. When the NLM DTDs and XML elements were first created, MEDLINE records were the only data exported. Now NLM exports citations other than MEDLINE records. To minimize unnecessary disruption to users of the data, NLM has retained the original element names (e.g., MedlineCitation, MedlineJournalInfo, MedlineTA).
+
+Policies affecting data creation have evolved over the years. Some PubMed records are added or revised well after the cited article was first published. In these cases, on occasion an element that had not yet been created when the article was published may appear on the record. For example, the Abstract element was not created until 1975, but some records published before 1975 but added to PubMed after 1975 contain <Abstract>. It is also possible that an element may be treated differently from the way it would have been treated had the record been created or maintained near the time the article was published. For example, the number of <Author> occurrences can diverge from the policies stated in the NLM author indexing policy (https://pubmed.ncbi.nlm.nih.gov/help/#author-indexing-policy). Lastly, as of October 2016, the publisher of the original article has the capability to edit the PubMed record’s citation data, with the exception of MeSH data, using the PubMed Data Management system. PubMed record data for older citations, therefore, may contain data for elements that didn’t exist when the citation was created.
 
 ### Source Data
 
 #### Initial Data Collection and Normalization
 
-[https://www.nlm.nih.gov/databases/download/pubmed_medline_faq.html]()
+[More Information Needed]
 
 #### Who are the source language producers?
 
@@ -308,7 +317,33 @@ There are no splits in this dataset. It is given as is.
 
 ### Licensing Information
 
-[https://www.nlm.nih.gov/databases/download/terms_and_conditions.html]()
+[National Library of Medicine Terms and Conditions](https://www.nlm.nih.gov/databases/download/terms_and_conditions.html)
+
+Downloading PubMed data from the National Library of Medicine FTP servers indicates your acceptance of the following Terms and Conditions. No charges, usage fees or royalties are paid to NLM for these data.
+
+#### PubMed Specific Terms:
+
+NLM freely provides PubMed data. Please note some abstracts may be protected by copyright.
+
+#### General Terms and Conditions
+
+Users of the data agree to:
+- acknowledge NLM as the source of the data in a clear and conspicuous manner,
+- NOT use the PubMed wordmark or the PubMed logo in association or in connection with user's or any other party's product or service.
+- NOT adopt, use, or seek to register any mark or trade name confusingly similar to or suggestive of the PubMed wordmark or PubMed logo
+- NOT to indicate or imply that NLM/NIH/HHS has endorsed its products/services/applications.
+
+Users who republish or redistribute the data (services, products or raw data) agree to:
+- maintain the most current version of all distributed data, or
+- make known in a clear and conspicuous manner that the products/services/applications do not reflect the most current/accurate data available from NLM.
+
+These data are produced with a reasonable standard of care, but NLM makes no warranties express or implied, including no warranty of merchantability or fitness for particular purpose, regarding the accuracy or completeness of the data. Users agree to hold NLM and the U.S. Government harmless from any liability resulting from errors in the data. NLM disclaims any liability for any consequences due to use, misuse, or interpretation of information contained or not contained in the data.
+
+NLM does not provide legal advice regarding copyright, fair use, or other aspects of intellectual property rights. See the NLM Copyright page: https://www.nlm.nih.gov/web_policies.html#copyright
+
+NLM reserves the right to change the type and format of its machine-readable data. NLM will take reasonable steps to inform users of any changes to the format of the data before the data are distributed via the announcement section or subscription to email and RSS updates.
+
+The PubMed wordmark and the PubMed logo are registered trademarks of the U.S. Department of Health and Human Services (HHS). Unauthorized use of these marks is strictly prohibited.
 
 ### Citation Information
 
pubmed.py CHANGED
@@ -16,6 +16,7 @@
 
 
 import copy
+import gzip
 import xml.etree.ElementTree as ET
 
 import datasets
@@ -36,10 +37,7 @@ _HOMEPAGE = "https://www.nlm.nih.gov/databases/download/pubmed_medline.html"
 
 _LICENSE = ""
 
-# The HuggingFace dataset library don't host the datasets but only point to the original files
-# This can be an arbitrary nested dict/list of URLs (see below in `_split_generators` method)
-# Note these URLs here are used by MockDownloadManager.create_dummy_data_list
-_URLs = [f"https://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed23n{i:04d}.xml.gz" for i in range(1, 1167)]
+_URLs = [f"https://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed24n{i:04d}.xml.gz" for i in range(1, 1220)]
 
 
 # Copyright Ferry Boender, released under the MIT license.
@@ -146,7 +144,7 @@ class Pubmed(datasets.GeneratorBasedBuilder):
     """Pubmed citations records"""
 
     BUILDER_CONFIGS = [
-        datasets.BuilderConfig(name="2023", description="The 2023 annual record", version=datasets.Version("3.0.0")),
+        datasets.BuilderConfig(name="2024", description="The 2024 annual record", version=datasets.Version("4.0.0")),
     ]
 
     # FILLED automatically from features
@@ -315,7 +313,7 @@ class Pubmed(datasets.GeneratorBasedBuilder):
 
     def _split_generators(self, dl_manager):
         """Returns SplitGenerators."""
-        dl_dir = dl_manager.download_and_extract(_URLs)
+        dl_dir = dl_manager.download(_URLs)
         return [
             datasets.SplitGenerator(
                 name=datasets.Split.TRAIN,
@@ -358,28 +356,29 @@
         """Yields examples."""
        id_ = 0
         for filename in filenames:
-            try:
-                tree = ET.parse(filename)
-                root = tree.getroot()
-                xmldict = self.xml_to_dictionnary(root)
-            except ET.ParseError:
-                logger.warning(f"Ignoring file {filename}, it is malformed")
-                continue
-
-            for article in xmldict["PubmedArticleSet"]["PubmedArticle"]:
-                self.update_citation(article)
-                new_article = default_article()
-
+            with gzip.open(filename) as f:
                 try:
-                    deepupdate(new_article, article)
-                except Exception:
-                    logger.warning(f"Ignoring article {article}, it is malformed")
+                    tree = ET.parse(f)
+                    root = tree.getroot()
+                    xmldict = self.xml_to_dictionnary(root)
+                except ET.ParseError:
+                    logger.warning(f"Ignoring file {filename}, it is malformed")
                     continue
 
-            try:
-                _ = self.info.features.encode_example(new_article)
-            except Exception as e:
-                logger.warning(f"Ignore example because {e}")
-                continue
-            yield id_, new_article
-            id_ += 1
+                for article in xmldict["PubmedArticleSet"]["PubmedArticle"]:
+                    self.update_citation(article)
+                    new_article = default_article()
+
+                    try:
+                        deepupdate(new_article, article)
+                    except Exception:
+                        logger.warning(f"Ignoring article {article}, it is malformed")
+                        continue
+
+                    try:
+                        _ = self.info.features.encode_example(new_article)
+                    except Exception as e:
+                        logger.warning(f"Ignore example because {e}")
+                        continue
+                    yield id_, new_article
+                    id_ += 1
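
The core of the "uncompress data files on the fly" change is that `_split_generators` now keeps the downloaded `.xml.gz` shards compressed (`dl_manager.download` instead of `download_and_extract`) and `_generate_examples` decompresses each one lazily with `gzip.open` while parsing. The sketch below shows that pattern standalone, outside the `datasets` builder; the helper name and the local file path are hypothetical.

```python
import gzip
import xml.etree.ElementTree as ET


def count_articles(path: str) -> int:
    """Parse a gzipped PubMed baseline shard without writing an uncompressed copy to disk."""
    with gzip.open(path) as f:      # decompress on the fly while reading
        tree = ET.parse(f)          # ElementTree accepts any readable file object
        root = tree.getroot()       # <PubmedArticleSet> root element
    return len(root.findall("PubmedArticle"))


# Hypothetical local copy of one shard of the 2024 baseline:
# print(count_articles("pubmed24n0001.xml.gz"))
```
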