soldni committed on
Commit
b0b0ae5
•
1 Parent(s): 9b13897

updated readme

Browse files
This view is limited to 50 files because it contains too many changes. See raw diff
Files changed (50)
  1. README.md +52 -5
  2. pes2o.py +99 -0
  3. v1/{s2ag.train.0.gz → train/s2ag/00000.json.gz} +0 -0
  4. v1/{s2ag.train.1.gz → train/s2ag/00001.json.gz} +0 -0
  5. v1/{s2ag.train.2.gz → train/s2ag/00002.json.gz} +0 -0
  6. v1/{s2ag.train.3.gz → train/s2ag/00003.json.gz} +0 -0
  7. v1/{s2ag.train.4.gz → train/s2ag/00004.json.gz} +0 -0
  8. v1/{s2ag.train.5.gz → train/s2ag/00005.json.gz} +0 -0
  9. v1/{s2ag.train.6.gz → train/s2ag/00006.json.gz} +0 -0
  10. v1/{s2ag.train.7.gz → train/s2ag/00007.json.gz} +0 -0
  11. v1/{s2ag.train.8.gz → train/s2ag/00008.json.gz} +0 -0
  12. v1/{s2ag.train.9.gz → train/s2ag/00009.json.gz} +0 -0
  13. v1/{s2orc.train.0.gz → train/s2orc/00000.json.gz} +0 -0
  14. v1/{s2orc.train.1.gz → train/s2orc/00001.json.gz} +0 -0
  15. v1/{s2orc.train.2.gz → train/s2orc/00002.json.gz} +0 -0
  16. v1/{s2orc.train.3.gz → train/s2orc/00003.json.gz} +0 -0
  17. v1/{s2orc.train.4.gz → train/s2orc/00004.json.gz} +0 -0
  18. v1/{s2orc.train.5.gz → train/s2orc/00005.json.gz} +0 -0
  19. v1/{s2orc.train.6.gz → train/s2orc/00006.json.gz} +0 -0
  20. v1/{s2orc.train.7.gz → train/s2orc/00007.json.gz} +0 -0
  21. v1/{s2orc.train.8.gz → train/s2orc/00008.json.gz} +0 -0
  22. v1/{s2orc.train.9.gz → train/s2orc/00009.json.gz} +0 -0
  23. v1/{s2ag.valid.0.gz → validation/s2ag/00000.json.gz} +0 -0
  24. v1/{s2orc.valid.0.gz → validation/s2orc/00000.json.gz} +0 -0
  25. v2/s2ag.valid.2.gz +0 -3
  26. v2/s2ag.valid.3.gz +0 -3
  27. v2/s2ag.valid.4.gz +0 -3
  28. v2/s2ag.valid.5.gz +0 -3
  29. v2/s2ag.valid.6.gz +0 -3
  30. v2/s2ag.valid.7.gz +0 -3
  31. v2/s2ag.valid.8.gz +0 -3
  32. v2/s2ag.valid.9.gz +0 -3
  33. v2/s2orc.valid.0.gz +0 -3
  34. v2/s2orc.valid.1.gz +0 -3
  35. v2/s2orc.valid.2.gz +0 -3
  36. v2/s2orc.valid.3.gz +0 -3
  37. v2/s2orc.valid.4.gz +0 -3
  38. v2/s2orc.valid.5.gz +0 -3
  39. v2/s2orc.valid.6.gz +0 -3
  40. v2/s2orc.valid.7.gz +0 -3
  41. v2/s2orc.valid.8.gz +0 -3
  42. v2/s2orc.valid.9.gz +0 -3
  43. v2/{s2ag.train.0.gz → train/s2ag/00000.json.gz} +0 -0
  44. v2/{s2ag.train.1.gz → train/s2ag/00001.json.gz} +0 -0
  45. v2/{s2ag.train.2.gz → train/s2ag/00002.json.gz} +0 -0
  46. v2/{s2ag.train.3.gz → train/s2ag/00003.json.gz} +0 -0
  47. v2/{s2ag.train.4.gz → train/s2ag/00004.json.gz} +0 -0
  48. v2/{s2ag.train.5.gz → train/s2ag/00005.json.gz} +0 -0
  49. v2/{s2ag.train.6.gz → train/s2ag/00006.json.gz} +0 -0
  50. v2/{s2ag.train.7.gz → train/s2ag/00007.json.gz} +0 -0
README.md CHANGED
@@ -34,14 +34,24 @@ size_categories:

The PES2O dataset is a collection of ~40M creative commons licensed academic papers,
cleaned, filtered, and formatted for pre-training of language models. It is derived from
- the [Semantic Scholar Open Research Corpus][2]([Lo et al, 2020][1]), or S2ORC.
+ the [Semantic Scholar Open Research Corpus][2] ([Lo et al., 2020][1]), or S2ORC.

We release multiple versions of PES2O, each with a different processing pipeline and
knowledge cutoff date. We recommend using the latest version available.

## Document Format

- TODO
+ Each document in the dataset is a dictionary with the following fields:
+
+ - `added`: Date the document was added to the corpus.
+ - `created`: Best-guess date for when the document was first published. Some have resolution down to the day; others, only down to the year.
+ - `id`: Semantic Scholar Corpus ID of the document; it can be used with the [Semantic Scholar API](https://api.semanticscholar.org/) to retrieve metadata about the document (e.g., fields of study, authors).
+ - `source`: Collection from which the document was sourced. At the moment, two are supported:
+   - `s2orc`: collection of full-text papers
+   - `s2ag`: collection of titles and abstracts
+ - `text`: Text of the document. Paragraphs are separated by two newlines (`\n\n`).
+ - `version`: Version of PES2O.
+

## PES2O V1
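To make the schema above concrete, here is a minimal sketch of peeking at one record with the `datasets` library; the record values shown in the comments are illustrative, not actual corpus content:

```python
from datasets import load_dataset

# Stream the dataset so no full download is needed; "v2" is the latest
# variant, while "v1" selects the release described below.
pes2o = load_dataset("allenai/pes2o", "v2", split="train", streaming=True)

doc = next(iter(pes2o))
# `doc` is a flat dict shaped like (values illustrative):
# {"added": "2023-01-03", "created": "2022-11-15", "id": "251715241",
#  "source": "s2orc", "text": "Introduction\n\n...", "version": "v2"}
print(doc["source"], doc["created"], doc["text"][:80])
```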
 
 
@@ -49,11 +59,49 @@ TODO

- *Knowledge cutoff*: 2023-01-03
- *Number of documents*: 67.56M
- - *Number of whitespace-separated tokens**: 47,3M
+ - *Number of whitespace-separated tokens*: 47.37B

### Processing

- TODO
+ Processing differs slightly depending on whether a document is derived from the full-text corpus (`s2orc`) or from the title-and-abstract corpus (`s2ag`).
+
+ #### S2ORC-derived documents
+
+ Unfiltered, S2ORC contains 11.3M papers and 46.9B whitespace-separated tokens as of 2023-01-03. To derive PES2O v1, we impose the following constraints:
+
+ - The paper must have a title and abstract.
+ - From each paper, we use [Grobid](https://github.com/kermitt2/grobid) to extract section headers and paragraphs; figures, tables, references, and any other non-textual content are removed. Titles and abstracts are also available, but they come from the Semantic Scholar metadata (obtained through the APIs), not from Grobid.
+ - The paper must be in English.
+   - To determine the language of each document, we use the [pycld3](https://github.com/bsolomon1124/pycld3) library.
+   - We run pycld3 on the first 2000 characters of each paragraph in the paper.
+   - The language of the paper is the most common language of its paragraphs (see the sketch after this list).
+ - The paper must have at least 500 whitespace-separated words.
+ - The paper must have been published after 1969; papers published before this date are often obtained through OCR and contain unrecoverable errors.
+ - The paper must have at least 5 paragraphs.
+ - All sections with an average log word probability of less than `-20` are removed (also illustrated in the sketch after this list).
+   - To calculate the average log word probability, we use word frequencies extracted from the [1T Web Ngram corpus](https://catalog.ldc.upenn.edu/LDC2006T13); specifically, we use the list [created by Rachel Tatman](https://www.kaggle.com/datasets/rtatman/english-word-frequency). A copy is hosted [here](https://ai2-s2-research-public.s3-us-west-2.amazonaws.com/lucas/google-1T-unigram/unigram_freq.csv).
+ - The most frequent word in the paper must consist of alphabetic characters only, and it must appear in less than 7.5% of the document.
+   - Words are obtained by splitting the text on whitespace.
+
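A minimal sketch of the language vote and the section-level filter described in the list above, assuming `pycld3` is installed and the `unigram_freq.csv` frequency list linked above has been downloaded locally; the out-of-vocabulary floor is an assumption, not a documented choice:

```python
import csv
import math
from collections import Counter

import cld3  # provided by the pycld3 package

# Word counts from the Tatman list (columns: word,count); local path assumed.
with open("unigram_freq.csv", newline="") as f:
    counts = {row["word"]: int(row["count"]) for row in csv.DictReader(f)}
total = sum(counts.values())


def paper_language(paragraphs):
    """Majority vote over per-paragraph predictions (first 2000 chars each)."""
    votes = Counter()
    for paragraph in paragraphs:
        prediction = cld3.get_language(paragraph[:2000])
        if prediction is not None:
            votes[prediction.language] += 1
    return votes.most_common(1)[0][0] if votes else "und"


def avg_log_word_prob(section, oov_prob=1e-10):
    """Mean log unigram probability of the whitespace-separated words.

    The OOV floor `oov_prob` is an assumption; V1 drops sections below -20.
    """
    words = section.lower().split()
    if not words:
        return float("-inf")
    log_probs = [
        math.log(counts[w] / total) if w in counts else math.log(oov_prob)
        for w in words
    ]
    return sum(log_probs) / len(log_probs)


# Example: drop sections that do not read like natural English text.
# kept = [s for s in sections if avg_log_word_prob(s) >= -20]
```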
+ The train set contains papers published before 2022-12-01;
+ the validation set includes documents published after 2022-12-01 and until 2023-01-03.
+
+ #### S2AG-derived documents
+
+ The S2AG corpus contains titles and abstracts of papers in Semantic Scholar.
+ Unfiltered, the corpus contains 91.1M papers and 15.5B whitespace-separated tokens as of 2023-01-03. To derive PES2O v1, we impose the following constraints (a combined sketch follows the list):
+
+ - The abstract must be in English.
+   - To determine the language, we once again use pycld3.
+ - The title must be in English, or have an average unigram log probability greater than -20.
+ - The abstract must have an average unigram log probability higher than -20.
+ - The abstract must have at least 50 words.
+ - The abstract must have no more than 1000 words.
+ - The most frequent word in the union of title and abstract must be a 2+ character alpha word, or it can be `a` followed by a 2+ character alpha word.
+ - The paper must have been published after 1969.
+
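Putting the abstract-level rules together, a hedged sketch that reuses `paper_language` and `avg_log_word_prob` from the earlier sketch (the frequent-word and publication-year checks are omitted for brevity):

```python
def keep_s2ag(title, abstract):
    """Sketch of the V1 S2AG constraints listed above; not the production code."""
    if paper_language([abstract]) != "en":
        return False  # the abstract must be in English
    if paper_language([title]) != "en" and avg_log_word_prob(title) <= -20:
        return False  # the title must be English or sufficiently probable
    if avg_log_word_prob(abstract) <= -20:
        return False  # the abstract must score above the unigram threshold
    n_words = len(abstract.split())
    return 50 <= n_words <= 1000  # length bounds on the abstract
```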
+ #### Statistics

| Dataset | Split | # Documents | # Words |
|:-------:|:-------:|:-----------:|:--------------:|
 
@@ -85,4 +133,3 @@ TODO

[1]: https://aclanthology.org/2020.acl-main.447/
[2]: https://github.com/allenai/s2orc
-
pes2o.py ADDED
@@ -0,0 +1,99 @@
import gzip
import json

import datasets


logger = datasets.logging.get_logger(__name__)


_URL = "https://huggingface.co/datasets/allenai/pes2o"

_VARIANTS = ["v1", "v2"]

# Number of shards per subset, for each split of each variant.
_N_SHARDS_PER_SPLIT = {
    "v1": {"train": {"s2orc": 10, "s2ag": 10}, "validation": {"s2orc": 1, "s2ag": 1}},
    "v2": {"train": {"s2orc": 10, "s2ag": 10}, "validation": {"s2orc": 1, "s2ag": 1}},
}

# Shard paths follow the repository layout: {name}/{split}/{subset}/{shard:05d}.json.gz
_DATA_URL = (
    "https://huggingface.co/datasets/allenai/pes2o/resolve/main/"
    "{name}/{split}/{subset}/{shard:05d}.json.gz"
)

_DESCRIPTION = (
    "The PES2O dataset is a collection of ~40M creative commons licensed academic "
    "papers, cleaned, filtered, and formatted for pre-training of language models. "
    "It is derived from the Semantic Scholar Open Research Corpus (Lo et al., 2020), "
    "or S2ORC."
)

_CITATION = ""


class pes2o(datasets.GeneratorBasedBuilder):
    """Pretraining Efficiently on S2ORC!"""

    BUILDER_CONFIGS = [datasets.BuilderConfig(name) for name in _VARIANTS]

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "added": datasets.Value("string"),
                    "created": datasets.Value("string"),
                    "id": datasets.Value("string"),
                    "source": datasets.Value("string"),
                    "text": datasets.Value("string"),
                    "version": datasets.Value("string"),
                }
            ),
            supervised_keys=None,
            homepage=_URL,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        data_urls = {}
        for split in ["train", "validation"]:
            n_shards_per_subset = _N_SHARDS_PER_SPLIT[self.config.name][split]
            data_urls[split] = [
                _DATA_URL.format(
                    name=self.config.name,
                    split=split,
                    subset=subset,
                    shard=shard,  # fills the {shard:05d} field of _DATA_URL
                )
                for subset, n_shards in n_shards_per_subset.items()
                for shard in range(n_shards)
            ]
        train_downloaded_files = dl_manager.download(data_urls["train"])
        validation_downloaded_files = dl_manager.download(data_urls["validation"])
        return [
            datasets.SplitGenerator(
                name=str(datasets.Split.TRAIN),
                gen_kwargs={"filepaths": train_downloaded_files},
            ),
            datasets.SplitGenerator(
                name=str(datasets.Split.VALIDATION),
                gen_kwargs={"filepaths": validation_downloaded_files},
            ),
        ]

    def _generate_examples(self, filepaths):
        """This function returns the examples in the raw (text) form by
        iterating on all the files."""
        id_ = 0
        for filepath in filepaths:
            logger.info("generating examples from = %s", filepath)
            with gzip.open(open(filepath, "rb"), "rt", encoding="utf-8") as f:
                for line in f:
                    if line:
                        example = json.loads(line)
                        yield id_, example
                        id_ += 1
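For reference, a short sketch (not part of the committed script) that prints the shard URLs the builder requests for the `v1` validation split; the output matches the renamed file layout in this commit:

```python
# Expected output:
# https://huggingface.co/datasets/allenai/pes2o/resolve/main/v1/validation/s2orc/00000.json.gz
# https://huggingface.co/datasets/allenai/pes2o/resolve/main/v1/validation/s2ag/00000.json.gz
for subset, n_shards in _N_SHARDS_PER_SPLIT["v1"]["validation"].items():
    for shard in range(n_shards):
        print(_DATA_URL.format(name="v1", split="validation", subset=subset, shard=shard))
```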
v1/{s2ag.train.0.gz → train/s2ag/00000.json.gz} RENAMED
File without changes
v1/{s2ag.train.1.gz → train/s2ag/00001.json.gz} RENAMED
File without changes
v1/{s2ag.train.2.gz → train/s2ag/00002.json.gz} RENAMED
File without changes
v1/{s2ag.train.3.gz → train/s2ag/00003.json.gz} RENAMED
File without changes
v1/{s2ag.train.4.gz → train/s2ag/00004.json.gz} RENAMED
File without changes
v1/{s2ag.train.5.gz → train/s2ag/00005.json.gz} RENAMED
File without changes
v1/{s2ag.train.6.gz → train/s2ag/00006.json.gz} RENAMED
File without changes
v1/{s2ag.train.7.gz → train/s2ag/00007.json.gz} RENAMED
File without changes
v1/{s2ag.train.8.gz → train/s2ag/00008.json.gz} RENAMED
File without changes
v1/{s2ag.train.9.gz → train/s2ag/00009.json.gz} RENAMED
File without changes
v1/{s2orc.train.0.gz → train/s2orc/00000.json.gz} RENAMED
File without changes
v1/{s2orc.train.1.gz → train/s2orc/00001.json.gz} RENAMED
File without changes
v1/{s2orc.train.2.gz → train/s2orc/00002.json.gz} RENAMED
File without changes
v1/{s2orc.train.3.gz → train/s2orc/00003.json.gz} RENAMED
File without changes
v1/{s2orc.train.4.gz → train/s2orc/00004.json.gz} RENAMED
File without changes
v1/{s2orc.train.5.gz → train/s2orc/00005.json.gz} RENAMED
File without changes
v1/{s2orc.train.6.gz → train/s2orc/00006.json.gz} RENAMED
File without changes
v1/{s2orc.train.7.gz → train/s2orc/00007.json.gz} RENAMED
File without changes
v1/{s2orc.train.8.gz → train/s2orc/00008.json.gz} RENAMED
File without changes
v1/{s2orc.train.9.gz → train/s2orc/00009.json.gz} RENAMED
File without changes
v1/{s2ag.valid.0.gz → validation/s2ag/00000.json.gz} RENAMED
File without changes
v1/{s2orc.valid.0.gz → validation/s2orc/00000.json.gz} RENAMED
File without changes
v2/s2ag.valid.2.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:880aa83b0da706316866b3706a8a1d0cdb63f282b1397acb412e3f0129850245
- size 6247186
v2/s2ag.valid.3.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:af030a42a393dd7115e0d0c3cfb84e863f38d308e388fbcea1a64643f9af39b0
- size 6208595
v2/s2ag.valid.4.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:c813ac8c9f91d8276dbfc2826c42b52b0192a3555402dffdeceaa88493d1a236
- size 6196981
v2/s2ag.valid.5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:fc2ee035c0d80d8c4fd485e9669c4c6091c0636e14eb01cdc725f536a2fd3b8e
- size 6190459
v2/s2ag.valid.6.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:454edae748353e4bde1f6abe89d4f7636c1a74608b139d1cdab8b05c281a090c
- size 6269943
v2/s2ag.valid.7.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:3cce628a835d4116bd6d9dc521a8ece476fa0b806b7a12ee99b3c379a698e435
- size 6105991
v2/s2ag.valid.8.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:ea61dceefbe1204b991e2b0042f61d6022de4256a76caeb962955f1198fb30cd
- size 6303064
v2/s2ag.valid.9.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:c2558b817be309bab741a331f935471afa87ae485952387a34a0b60aaa320386
- size 6199081
v2/s2orc.valid.0.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:bb0bd1a9c19b9921457bc7ba1e7ef6f5cdff5527315d451c6649568bb57fb93c
- size 49765244
v2/s2orc.valid.1.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:09c880e5b193fc05258a524e3aba5a0f7c3eea7935d11c2f75a23ec7cbc3f0d3
- size 47886027
v2/s2orc.valid.2.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:5ac0ce32a1202816fe7da581396bb21fa2ccb457db3b5134065bd34b25747f0e
- size 50496634
v2/s2orc.valid.3.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:a5bd3d044a9407cbc2c19ce650e36b4aac8bb350422b15962b58d31e2405bc8d
- size 48589847
v2/s2orc.valid.4.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:3cbba95ad61581e55cab617f7f8b41477062397260f2de59c2d23921774035fd
- size 49718320
v2/s2orc.valid.5.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:eb2390ee466be665496df4d6bba7aab859d3cc1c5405e00ab2575f3164a1f551
- size 48574182
v2/s2orc.valid.6.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:a00a5b6df69130cb45bdadddb8ff242098578438da532d866424bf7a700e1e3a
- size 49826311
v2/s2orc.valid.7.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:803e6eb2fff87ad333e7dcbb9d1d1e1d9863f840aeefd0ffc16252f6e953332a
- size 49543400
v2/s2orc.valid.8.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:0e82785d1649fcff70100a8e48a2c1500d95ac9d04a46ff197382b6b7227cc3e
- size 49562841
v2/s2orc.valid.9.gz DELETED
@@ -1,3 +0,0 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:a56cb6a5edf51422f5e002cd1611afe1f5cffa1823e38be9994e444c52ad316a
- size 49207053
v2/{s2ag.train.0.gz → train/s2ag/00000.json.gz} RENAMED
File without changes
v2/{s2ag.train.1.gz → train/s2ag/00001.json.gz} RENAMED
File without changes
v2/{s2ag.train.2.gz → train/s2ag/00002.json.gz} RENAMED
File without changes
v2/{s2ag.train.3.gz → train/s2ag/00003.json.gz} RENAMED
File without changes
v2/{s2ag.train.4.gz → train/s2ag/00004.json.gz} RENAMED
File without changes
v2/{s2ag.train.5.gz → train/s2ag/00005.json.gz} RENAMED
File without changes
v2/{s2ag.train.6.gz → train/s2ag/00006.json.gz} RENAMED
File without changes
v2/{s2ag.train.7.gz → train/s2ag/00007.json.gz} RENAMED
File without changes