neurallambda committed
Commit cb652ee
Parent(s): 01f7271
Upload README.md with huggingface_hub

README.md CHANGED
@@ -1,107 +1,3 @@
----
-configs:
-- config_name: default
-  data_files:
-  - split: train_2
-    path: data/train_2-*
-  - split: test_2
-    path: data/test_2-*
-  - split: train_4
-    path: data/train_4-*
-  - split: test_4
-    path: data/test_4-*
-  - split: train_6
-    path: data/train_6-*
-  - split: test_6
-    path: data/test_6-*
-  - split: train_8
-    path: data/train_8-*
-  - split: test_8
-    path: data/test_8-*
-  - split: train_10
-    path: data/train_10-*
-  - split: test_10
-    path: data/test_10-*
-  - split: train_20
-    path: data/train_20-*
-  - split: test_20
-    path: data/test_20-*
-  - split: train_30
-    path: data/train_30-*
-  - split: test_30
-    path: data/test_30-*
-  - split: train_50
-    path: data/train_50-*
-  - split: test_50
-    path: data/test_50-*
-  - split: train_100
-    path: data/train_100-*
-  - split: test_100
-    path: data/test_100-*
-dataset_info:
-  features:
-  - name: input
-    sequence: string
-  - name: output
-    dtype: string
-  splits:
-  - name: train_2
-    num_bytes: 443070
-    num_examples: 8000
-  - name: test_2
-    num_bytes: 110938
-    num_examples: 2000
-  - name: train_4
-    num_bytes: 677672
-    num_examples: 8000
-  - name: test_4
-    num_bytes: 169725
-    num_examples: 2000
-  - name: train_6
-    num_bytes: 929545
-    num_examples: 8000
-  - name: test_6
-    num_bytes: 232342
-    num_examples: 2000
-  - name: train_8
-    num_bytes: 1177415
-    num_examples: 8000
-  - name: test_8
-    num_bytes: 293720
-    num_examples: 2000
-  - name: train_10
-    num_bytes: 1427596
-    num_examples: 8000
-  - name: test_10
-    num_bytes: 356409
-    num_examples: 2000
-  - name: train_20
-    num_bytes: 2810319
-    num_examples: 8000
-  - name: test_20
-    num_bytes: 701379
-    num_examples: 2000
-  - name: train_30
-    num_bytes: 4194268
-    num_examples: 8000
-  - name: test_30
-    num_bytes: 1048790
-    num_examples: 2000
-  - name: train_50
-    num_bytes: 6958734
-    num_examples: 8000
-  - name: test_50
-    num_bytes: 1738125
-    num_examples: 2000
-  - name: train_100
-    num_bytes: 13862843
-    num_examples: 8000
-  - name: test_100
-    num_bytes: 3462772
-    num_examples: 2000
-  download_size: 11904704
-  dataset_size: 40595662
----
 
 # Arithmetic Puzzles Dataset
 
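The YAML block removed above also documents the record schema: `input` is a sequence of strings and `output` is a single string. A minimal sketch (not from the README itself, assuming the `neurallambda/arithmetic_dataset` repo id used later in this diff and the `train_2` split from the config) of what one loaded record looks like under that schema:

```python
from datasets import load_dataset

# Sketch only: load the smallest split and inspect one record.
# The split name "train_2" comes from the config shown above.
ds = load_dataset("neurallambda/arithmetic_dataset", split="train_2")

example = ds[0]
print(example["input"])   # list of strings, per the declared features
print(example["output"])  # a single string, per the declared features
```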
@@ -113,10 +9,10 @@ Outputs are filtered to be between [-100, 100], and self-reference/looped depend
 
 Splits are named like:
 
-- `train_N`
-- `test_N`
+- `train_N` 8k total examples of puzzles with N variables
+- `test_N` 2k more examples with N variables
 
-
+Train/test leakage is prevented: all training examples are filtered out of the test set.
 
 Conceptually the data looks like this:
 
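A small spot-check sketch for the no-leakage claim added above (an illustration, not the dataset's own validation code); it assumes an `input` sequence uniquely identifies a puzzle:

```python
from datasets import load_dataset

# Spot-check one N: no test puzzle should share its input with a training puzzle.
train = load_dataset("neurallambda/arithmetic_dataset", split="train_10")
test = load_dataset("neurallambda/arithmetic_dataset", split="test_10")

train_keys = {"\n".join(ex["input"]) for ex in train}
leaked = sum("\n".join(ex["input"]) in train_keys for ex in test)
print(leaked)  # expected: 0
```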
@@ -148,11 +44,11 @@ In actuality it looks like this:
 from datasets import load_dataset
 
 # Load the entire dataset
-dataset = load_dataset("neurallambda/
+dataset = load_dataset("neurallambda/arithmetic_dataset")
 
 # Load specific splits
-train_small = load_dataset("neurallambda/
-test_small = load_dataset("neurallambda/
+train_small = load_dataset("neurallambda/arithmetic_dataset", split="train_10")
+test_small = load_dataset("neurallambda/arithmetic_dataset", split="test_10")
 ```
 
 ### Preparing Inputs