christopherastone committed
Commit a569c18
1 Parent(s): 785eaab

Update README.md

Files changed (1):
  1. README.md +43 -10
README.md CHANGED
@@ -42,17 +42,25 @@ We hope it can serve as an aid in the development of language-based proof assist
 
 ## Dataset Structure
 
- There are two versions of the data: `proofs` divides up the data proof-by-proof, and `sentences` further divides up the same data sentence-by-sentence.
 
- * The data in `proofs` consists of a `fileID` that specifies the paper where the proof was extracted, and the `proof` as a string.
 
- * The data in `sentences` consists of a `fileID` that specifies the paper where the sentence occurred, and the `sentence` as a string.
 
- ## Dataset Size
 
- * `proofs` is 3197091800 bytes and has 3681901 examples.
 
- * `sentences` is 3736579062 bytes and has 38899130 examples.
 
 ## Dataset Statistics
 
@@ -62,16 +70,41 @@ There are two versions of the data: `proofs` divides up the data proof-by-proof,
 
 ## Dataset Usage
 
- Data can be downloaded as TSV files. Accessing the data programmatically from Python is also possible using the `Datasets` library. For example, to print the first 10 proofs:
 
 ```python
 from datasets import load_dataset
- dataset = load_dataset('proofcheck/prooflang', 'proofs', split='train', streaming=`True`)
 for d in dataset.take(10):
-     print(d['fileID'], d['proof'])
 ```
 
- To look at individual sentences from the proofs, replace `'proofs'` and `d['proof']` by `'sentences'` and `d['sentence']`.
 
 ### Data Splits
 
 
 ## Dataset Structure
 
+ There are multiple TSV versions of the data. Primarily, `proofs` divides up the data proof-by-proof, and `sentences` further divides up the same data sentence-by-sentence.
+ The `raw` dataset is a less-cleaned-up version of `proofs`. More usefully, the `tags` dataset gives arXiv subject tags for each paper ID found in the other data files.
 
+ * The data in `proofs` (and `raw`) consists of a `paper` ID (identifying where the proof was extracted from), and the `proof` as a string.
 
+ * The data in `sentences` consists of a `paper` ID, and the `sentence` as a string.
 
+ * The data in `tags` consists of a `paper` ID, and the arXiv subject tags for that paper as a single comma-separated string.
+
+ Further metadata about papers can be queried from arXiv.org using the paper ID.
 
+ In particular, each paper `<id>` in the dataset can be accessed online at the URL `https://arxiv.org/abs/<id>`.
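As an illustrative sketch (not part of the dataset files), a paper's metadata such as its title can be looked up through arXiv's public query API using the same ID; the endpoint and Atom response handling below describe arXiv.org rather than this dataset, and the example ID is hypothetical:

```python
# Sketch: fetch the title of one paper from arXiv's public API, given an ID
# taken from the `paper` column. The API endpoint and Atom XML structure are
# assumptions about arXiv.org, not part of this dataset; the ID is a placeholder.
import urllib.request
import xml.etree.ElementTree as ET

paper_id = '2101.00001'  # hypothetical example ID
url = f'https://export.arxiv.org/api/query?id_list={paper_id}'
with urllib.request.urlopen(url) as response:
    feed = ET.fromstring(response.read())

ns = {'atom': 'http://www.w3.org/2005/Atom'}
for entry in feed.findall('atom:entry', ns):
    print(entry.findtext('atom:title', namespaces=ns))
```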
 
+ ## Dataset Size
+
+ * `proofs` is 3,094,779,182 bytes (unzipped) and has 3,681,893 examples.
+ * `sentences` is 3,545,309,822 bytes (unzipped) and has 38,899,132 examples.
+ * `tags` is 7,967,839 bytes (unzipped) and has 328,642 rows.
+ * `raw` is 3,178,997,379 bytes (unzipped) and has 3,681,903 examples.
 
 ## Dataset Statistics
 
 
 ## Dataset Usage
 
+ Data can be downloaded as (zipped) TSV files.
+
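For the downloaded files, one option is to read a TSV directly with pandas; a minimal sketch, where the local filename `proofs.tsv` and the presence of a header row are assumptions about the unzipped download:

```python
# Sketch: read a downloaded-and-unzipped TSV with pandas.
# The filename 'proofs.tsv' and the header row are assumptions about the
# local file, not guaranteed by this dataset card.
import pandas as pd

proofs = pd.read_csv('proofs.tsv', sep='\t')
print(proofs.head())
```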
+ Accessing the data programmatically from Python is also possible using the `Datasets` library.
+ For example, to print the first 10 proofs:
 
 ```python
 from datasets import load_dataset
+ dataset = load_dataset('proofcheck/prooflang', 'proofs', split='train', streaming=True)
+ for d in dataset.take(10):
+     print(d['paper'], d['proof'])
+ ```
+
+ To look at individual sentences from the proofs,
+
+ ```python
+ dataset = load_dataset('proofcheck/prooflang', 'sentences', split='train', streaming=True)
 for d in dataset.take(10):
+     print(d['paper'], d['sentence'])
 ```
 
+ To get a comma-separated list of arXiv subject tags for each paper,
+
+ ```python
+ from datasets import load_dataset
+ dataset = load_dataset('proofcheck/prooflang', 'tags', split='train', streaming=True)
+ for d in dataset.take(10):
+     print(d['paper'], d['tags'])
+ ```
+
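Since each `tags` value is a single comma-separated string, it can be split into a list of individual subject tags; a small self-contained sketch:

```python
# Sketch: split the comma-separated `tags` string into a list of subject tags.
from datasets import load_dataset

dataset = load_dataset('proofcheck/prooflang', 'tags', split='train', streaming=True)
for d in dataset.take(10):
    subjects = d['tags'].split(',')  # e.g. 'math.CO,math.PR' -> ['math.CO', 'math.PR']
    print(d['paper'], subjects)
```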
+ Finally, to look at a version of the proofs with less aggressive cleanup (straight from the LaTeX extraction),
+
+ ```python
+ dataset = load_dataset('proofcheck/prooflang', 'raw', split='train', streaming=True)
+ for d in dataset.take(10):
+     print(d['paper'], d['proof'])
+ ```
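Because `tags` and `proofs` share the `paper` ID, the two can be combined, for example to keep only proofs from papers carrying a particular subject tag. A rough sketch, where the tag `math.LO` and the exact formatting of the tags string are assumptions:

```python
# Sketch: use the `tags` configuration to keep only proofs from papers with a
# particular arXiv subject tag. 'math.LO' is only an illustrative choice, and
# tag values are stripped in case the comma-separated string contains spaces.
from datasets import load_dataset

tags = load_dataset('proofcheck/prooflang', 'tags', split='train')
logic_papers = {
    d['paper']
    for d in tags
    if 'math.LO' in {t.strip() for t in d['tags'].split(',')}
}

proofs = load_dataset('proofcheck/prooflang', 'proofs', split='train', streaming=True)
for d in proofs.filter(lambda d: d['paper'] in logic_papers).take(10):
    print(d['paper'], d['proof'])
```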
 
  ### Data Splits