system HF staff committed on
Commit
fc9972e
0 Parent(s):

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

Files changed (5)
  1. .gitattributes +27 -0
  2. README.md +183 -0
  3. arxiv_dataset.py +130 -0
  4. dataset_infos.json +1 -0
  5. dummy/1.1.0/dummy_data.zip +3 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,183 @@
+ ---
+ annotations_creators:
+ - no-annotation
+ language_creators:
+ - expert-generated
+ languages:
+ - en
+ licenses:
+ - cc0-1-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - n>1M
+ source_datasets:
+ - original
+ task_categories:
+ - conditional-text-generation
+ - text-retrieval
+ task_ids:
+ - document-retrieval
+ - entity-linking-retrieval
+ - explanation-generation
+ - fact-checking-retrieval
+ - machine-translation
+ - summarization
+ - text-simplification
+ ---
+
+ # Dataset Card for arXiv Dataset
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [Kaggle arXiv Dataset Homepage](https://www.kaggle.com/Cornell-University/arxiv)
+ - **Repository:**
+ - **Paper:** [On the Use of ArXiv as a Dataset](https://arxiv.org/abs/1905.00075)
+ - **Leaderboard:**
+ - **Point of Contact:** [Matt Bierbaum](mailto:matt.bierbaum@gmail.com)
+
+ ### Dataset Summary
+
+ A dataset of 1.7 million arXiv articles for applications like trend analysis, paper recommender engines, category prediction, co-citation networks, knowledge graph construction, and semantic search interfaces.
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ The only language supported is English.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ This dataset is a mirror of the original arXiv data. Because the full dataset is rather large (1.1 TB and growing), it provides only the metadata file, in JSON format with one record per line. An example record is given below; a short loading sketch follows it.
+
+ ```
+ {'id': '0704.0002',
+ 'submitter': 'Louis Theran',
+ 'authors': 'Ileana Streinu and Louis Theran',
+ 'title': 'Sparsity-certifying Graph Decompositions',
+ 'comments': 'To appear in Graphs and Combinatorics',
+ 'journal-ref': None,
+ 'doi': None,
+ 'report-no': None,
+ 'categories': 'math.CO cs.CG',
+ 'license': 'http://arxiv.org/licenses/nonexclusive-distrib/1.0/',
+ 'abstract': ' We describe a new algorithm, the $(k,\\ell)$-pebble game with colors, and use\nit obtain a characterization of the family of $(k,\\ell)$-sparse graphs and\nalgorithmic solutions to a family of problems concerning tree decompositions of\ngraphs. Special instances of sparse graphs appear in rigidity theory and have\nreceived increased attention in recent years. In particular, our colored\npebbles generalize and strengthen the previous results of Lee and Streinu and\ngive a new proof of the Tutte-Nash-Williams characterization of arboricity. We\nalso present a new decomposition that certifies sparsity based on the\n$(k,\\ell)$-pebble game with colors. Our work also exposes connections between\npebble game algorithms and previous sparse graph algorithms by Gabow, Gabow and\nWestermann and Hendrickson.\n',
+ 'update_date': '2008-12-13'}
+ ```
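+
+ A minimal loading sketch with the `datasets` library, assuming the metadata snapshot has been downloaded from Kaggle and extracted into `~/manual_data` as described in the loading script's manual download instructions:
+
+ ```python
+ import datasets
+
+ # The snapshot must be downloaded manually from Kaggle; data_dir points at the
+ # folder containing arxiv-metadata-oai-snapshot.json (here assumed to be ~/manual_data).
+ dataset = datasets.load_dataset("arxiv_dataset", data_dir="~/manual_data", split="train")
+
+ print(dataset[0]["title"])  # inspect the first metadata record
+ ```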
+
+ ### Data Fields
+
+ - `id`: ArXiv ID (can be used to access the paper)
+ - `submitter`: Who submitted the paper
+ - `authors`: Authors of the paper
+ - `title`: Title of the paper
+ - `comments`: Additional info, such as number of pages and figures
+ - `journal-ref`: Information about the journal the paper was published in
+ - `doi`: [Digital Object Identifier](https://www.doi.org)
+ - `report-no`: Report Number
+ - `abstract`: The abstract of the paper
+ - `categories`: Categories / tags in the ArXiv system
+ - `license`: License of the paper
+ - `update_date`: Date the metadata record was last updated
+
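+ Because the snapshot is a JSON Lines file (one JSON object per line), these fields can also be read without the `datasets` library; a minimal sketch, assuming a local copy of `arxiv-metadata-oai-snapshot.json` in the working directory:
+
+ ```python
+ import json
+
+ # Stream the metadata snapshot and print a few of the fields listed above.
+ with open("arxiv-metadata-oai-snapshot.json", encoding="utf8") as f:
+     for i, line in enumerate(f):
+         record = json.loads(line)
+         print(record["id"], record["categories"], record["title"])
+         if i == 2:  # stop after the first three records
+             break
+ ```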
+
+ ### Data Splits
+
+ The data is not split; all examples are provided in a single `train` split (1,796,911 examples at the time of this release, per `dataset_infos.json`).
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ For nearly 30 years, arXiv has served the public and research communities by providing open access to scholarly articles, from the vast branches of physics to the many subdisciplines of computer science, as well as everything in between, including math, statistics, electrical engineering, quantitative biology, and economics. This rich corpus of information offers significant, but sometimes overwhelming, depth. In these times of unique global challenges, efficient extraction of insights from data is essential. To help make arXiv more accessible, a free, open pipeline to the machine-readable arXiv dataset is provided on Kaggle: a repository of 1.7 million articles with relevant features such as article titles, authors, categories, abstracts, full-text PDFs, and more. The goal is to empower new use cases and richer machine learning techniques that combine multi-modal features, towards applications like trend analysis, paper recommender engines, category prediction, co-citation networks, knowledge graph construction, and semantic search interfaces.
+
+ ### Source Data
+
+ This data is based on arXiv papers.
+ [More Information Needed]
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ This dataset contains no annotations.
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ The original data is maintained by [arXiv](https://arxiv.org/).
+
+ ### Licensing Information
+
+ The data is released under the [Creative Commons CC0 1.0 Universal Public Domain Dedication](https://creativecommons.org/publicdomain/zero/1.0/).
+
+ ### Citation Information
+
+ ```
+ @misc{clement2019arxiv,
+     title={On the Use of ArXiv as a Dataset},
+     author={Colin B. Clement and Matthew Bierbaum and Kevin P. O'Keeffe and Alexander A. Alemi},
+     year={2019},
+     eprint={1905.00075},
+     archivePrefix={arXiv},
+     primaryClass={cs.IR}
+ }
+ ```
arxiv_dataset.py ADDED
@@ -0,0 +1,130 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ """arXiv Dataset."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import json
+ import os
+
+ import datasets
+
+
+ _CITATION = """\
+ @misc{clement2019arxiv,
+     title={On the Use of ArXiv as a Dataset},
+     author={Colin B. Clement and Matthew Bierbaum and Kevin P. O'Keeffe and Alexander A. Alemi},
+     year={2019},
+     eprint={1905.00075},
+     archivePrefix={arXiv},
+     primaryClass={cs.IR}
+ }
+ """
+
+ _DESCRIPTION = """\
+ A dataset of 1.7 million arXiv articles for applications like trend analysis, paper recommender engines, category prediction, co-citation networks, knowledge graph construction and semantic search interfaces.
+ """
+
+ _HOMEPAGE = "https://www.kaggle.com/Cornell-University/arxiv"
+ _LICENSE = "https://creativecommons.org/publicdomain/zero/1.0/"
+
+ # Keys of the fields in the metadata JSON records.
+ _ID = "id"
+ _SUBMITTER = "submitter"
+ _AUTHORS = "authors"
+ _TITLE = "title"
+ _COMMENTS = "comments"
+ _JOURNAL_REF = "journal-ref"
+ _DOI = "doi"
+ _REPORT_NO = "report-no"
+ _CATEGORIES = "categories"
+ _LICENSE = "license"  # NOTE: reassigned here as a field key, shadowing the license URL defined above
+ _ABSTRACT = "abstract"
+ _UPDATE_DATE = "update_date"
+
+ _FILENAME = "arxiv-metadata-oai-snapshot.json"
+
+
+ class ArxivDataset(datasets.GeneratorBasedBuilder):
+     """arXiv Dataset: arXiv dataset and metadata of 1.7M+ scholarly papers across STEM."""
+
+     VERSION = datasets.Version("1.1.0")
+
+     @property
+     def manual_download_instructions(self):
+         return """\
+     You need to go to https://www.kaggle.com/Cornell-University/arxiv
+     and manually download the dataset. Once the download is complete,
+     a zip file named archive.zip will appear in your Downloads folder,
+     or whichever folder your browser chooses to save files to. Extract that archive
+     and you will get an arxiv-metadata-oai-snapshot.json file.
+     You can then move that file under <path/to/folder>.
+     The <path/to/folder> can e.g. be "~/manual_data".
+     arxiv_dataset can then be loaded using the following command `datasets.load_dataset("arxiv_dataset", data_dir="<path/to/folder>")`.
+     """
+
+     def _info(self):
+         feature_names = [
+             _ID,
+             _SUBMITTER,
+             _AUTHORS,
+             _TITLE,
+             _COMMENTS,
+             _JOURNAL_REF,
+             _DOI,
+             _REPORT_NO,
+             _CATEGORIES,
+             _LICENSE,
+             _ABSTRACT,
+             _UPDATE_DATE,
+         ]
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features({k: datasets.Value("string") for k in feature_names}),
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         path_to_manual_file = os.path.join(os.path.abspath(os.path.expanduser(dl_manager.manual_dir)), _FILENAME)
+         if not os.path.exists(path_to_manual_file):
+             raise FileNotFoundError(
+                 "{} does not exist. Make sure you insert a manual dir via `datasets.load_dataset('arxiv_dataset', data_dir=...)` that includes a file named {}. Manual download instructions: {}".format(
+                     path_to_manual_file, _FILENAME, self.manual_download_instructions
+                 )
+             )
+         return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"path": path_to_manual_file})]
+
+     def _generate_examples(self, path=None, title_set=None):
+         """Yields examples, reading one JSON record per line from the metadata snapshot."""
+         with open(path, encoding="utf8") as f:
+             for i, entry in enumerate(f):
+                 data = dict(json.loads(entry))
+                 yield i, {
+                     _ID: data["id"],
+                     _SUBMITTER: data["submitter"],
+                     _AUTHORS: data["authors"],
+                     _TITLE: data["title"],
+                     _COMMENTS: data["comments"],
+                     _JOURNAL_REF: data["journal-ref"],
+                     _DOI: data["doi"],
+                     _REPORT_NO: data["report-no"],
+                     _CATEGORIES: data["categories"],
+                     _LICENSE: data["license"],
+                     _ABSTRACT: data["abstract"],
+                     _UPDATE_DATE: data["update_date"],
+                 }
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"default": {"description": "A dataset of 1.7 million arXiv articles for applications like trend analysis, paper recommender engines, category prediction, co-citation networks, knowledge graph construction and semantic search interfaces.\n", "citation": "@misc{clement2019arxiv,\n title={On the Use of ArXiv as a Dataset},\n author={Colin B. Clement and Matthew Bierbaum and Kevin P. O'Keeffe and Alexander A. Alemi},\n year={2019},\n eprint={1905.00075},\n archivePrefix={arXiv},\n primaryClass={cs.IR}\n}\n", "homepage": "https://www.kaggle.com/Cornell-University/arxiv", "license": "", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "submitter": {"dtype": "string", "id": null, "_type": "Value"}, "authors": {"dtype": "string", "id": null, "_type": "Value"}, "title": {"dtype": "string", "id": null, "_type": "Value"}, "comments": {"dtype": "string", "id": null, "_type": "Value"}, "journal-ref": {"dtype": "string", "id": null, "_type": "Value"}, "doi": {"dtype": "string", "id": null, "_type": "Value"}, "report-no": {"dtype": "string", "id": null, "_type": "Value"}, "categories": {"dtype": "string", "id": null, "_type": "Value"}, "license": {"dtype": "string", "id": null, "_type": "Value"}, "abstract": {"dtype": "string", "id": null, "_type": "Value"}, "update_date": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "arxiv_dataset", "config_name": "default", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 2246545603, "num_examples": 1796911, "dataset_name": "arxiv_dataset"}}, "download_checksums": {}, "download_size": 0, "post_processing_size": null, "dataset_size": 2246545603, "size_in_bytes": 2246545603}}
dummy/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d5264fc659e1e86c90bb64abc5e091a7ed442a2a6589e9fba49b3907ffc3b4b6
+ size 2682