---
configs:
- config_name: default
  data_files:
  - split: train_2
    path: data/train_2-*
  - split: test_2
    path: data/test_2-*
  - split: train_4
    path: data/train_4-*
  - split: test_4
    path: data/test_4-*
  - split: train_6
    path: data/train_6-*
  - split: test_6
    path: data/test_6-*
  - split: train_8
    path: data/train_8-*
  - split: test_8
    path: data/test_8-*
  - split: train_10
    path: data/train_10-*
  - split: test_10
    path: data/test_10-*
  - split: train_20
    path: data/train_20-*
  - split: test_20
    path: data/test_20-*
  - split: train_30
    path: data/train_30-*
  - split: test_30
    path: data/test_30-*
  - split: train_50
    path: data/train_50-*
  - split: test_50
    path: data/test_50-*
  - split: train_100
    path: data/train_100-*
  - split: test_100
    path: data/test_100-*
dataset_info:
  features:
  - name: input
    sequence: string
  - name: output
    dtype: string
  splits:
  - name: train_2
    num_bytes: 443138
    num_examples: 8000
  - name: test_2
    num_bytes: 110870
    num_examples: 2000
  - name: train_4
    num_bytes: 678096
    num_examples: 8000
  - name: test_4
    num_bytes: 169301
    num_examples: 2000
  - name: train_6
    num_bytes: 929295
    num_examples: 8000
  - name: test_6
    num_bytes: 232592
    num_examples: 2000
  - name: train_8
    num_bytes: 1176497
    num_examples: 8000
  - name: test_8
    num_bytes: 294638
    num_examples: 2000
  - name: train_10
    num_bytes: 1427621
    num_examples: 8000
  - name: test_10
    num_bytes: 356384
    num_examples: 2000
  - name: train_20
    num_bytes: 2809482
    num_examples: 8000
  - name: test_20
    num_bytes: 702216
    num_examples: 2000
  - name: train_30
    num_bytes: 4194340
    num_examples: 8000
  - name: test_30
    num_bytes: 1048718
    num_examples: 2000
  - name: train_50
    num_bytes: 6957957
    num_examples: 8000
  - name: test_50
    num_bytes: 1738902
    num_examples: 2000
  - name: train_100
    num_bytes: 13859042
    num_examples: 8000
  - name: test_100
    num_bytes: 3466573
    num_examples: 2000
  download_size: 11905468
  dataset_size: 40595662
---

# Arithmetic Puzzles Dataset

A collection of arithmetic puzzles that make heavy use of variable assignment. Current LLMs struggle with variable indirection and multi-hop reasoning, so this dataset should be a tough test for them.

Each input is a list of strings representing variable assignments (e.g. `c=a+b`) followed by a `solve(...)=` query, and the output is the integer answer.

Outputs are filtered to the range [-100, 100], and self-referencing or cyclic variable dependencies are forbidden.
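The generation procedure isn't published here, but as a rough illustration (my own sketch, not the dataset's actual generator), the acyclicity constraint can be satisfied by construction if each new variable may only reference earlier ones:

```python
import random

def gen_puzzle(n_vars, seed=0):
    """Sketch of one way to build an acyclic puzzle (not the dataset's
    actual generator). Each variable may only reference earlier ones,
    which rules out self-reference and cycles by construction.
    A real generator would also filter final answers to [-100, 100]."""
    rng = random.Random(seed)
    lines = []
    for i in range(n_vars):
        if i == 0 or rng.random() < 0.5:
            expr = str(rng.randint(-9, 9))  # plain integer literal
        else:
            op = rng.choice(["+", "-", "*"])
            a = f"var_{rng.randrange(i)}"   # earlier variable only
            b = rng.choice([f"var_{rng.randrange(i)}", str(rng.randint(-9, 9))])
            expr = f"{a} {op} {b}"
        lines.append(f"var_{i}={expr}")
    lines.append(f"solve(var_{rng.randrange(n_vars)})=")
    return lines
```
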

Splits are named `train_N`/`test_N`, where `N` is the maximum number of variables per puzzle (`N ∈ {2, 4, 6, 8, 10, 20, 30, 50, 100}`). Each size has 8,000 training examples and 2,000 test examples.
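Per the `configs` list in the YAML header above, the full set of split names can be generated programmatically:

```python
# Maximum variable counts covered by the dataset (from the YAML header).
sizes = [2, 4, 6, 8, 10, 20, 30, 50, 100]

# Split names follow the pattern train_{n} / test_{n}.
splits = [f"{kind}_{n}" for n in sizes for kind in ("train", "test")]
print(splits[:4])  # ['train_2', 'test_2', 'train_4', 'test_4']
```
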

Conceptually the data looks like this:

```
Input:

  a=1
  b=2
  c=a+b
  solve(c)=

Output:
  3
```

The actual records look like this:

```python
{
    "input": ['var_0=1', 'var_1=2', 'var_2=var_0+var_1', 'solve(var_2)='],
    "output": 3
}
```
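Since records are plain assignment strings, a small reference solver is easy to write. The sketch below is my own illustration (not part of the dataset tooling), assuming expressions contain only integer literals, variable names, and the `+`, `-`, `*` operators seen in the examples:

```python
import re

def solve_puzzle(lines):
    """Evaluate a puzzle given as a list of assignment strings ending
    in a `solve(name)=` query. Dependencies are acyclic per the dataset
    description, so plain recursion terminates."""
    env = {}
    target = None
    for line in lines:
        m = re.match(r"solve\((\w+)\)=", line)
        if m:
            target = m.group(1)
        else:
            name, expr = line.split("=", 1)
            env[name] = expr

    def value(name):
        # Substitute earlier variables (parenthesized, in case they are
        # negative), then evaluate the remaining arithmetic.
        def sub(m):
            tok = m.group(0)
            return f"({value(tok)})" if tok in env else tok
        resolved = re.sub(r"[A-Za-z_]\w*", sub, env[name])
        # Only digits, whitespace, parens, and + - * remain at this point.
        return eval(resolved, {"__builtins__": {}})

    return value(target)
```
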


### Loading the Dataset

```python
from datasets import load_dataset

# Load the entire dataset
dataset = load_dataset("neurallambda/arithmetic_dataset")

# Load specific splits (one per maximum variable count)
train_10 = load_dataset("neurallambda/arithmetic_dataset", split="train_10")
test_10 = load_dataset("neurallambda/arithmetic_dataset", split="test_10")
train_100 = load_dataset("neurallambda/arithmetic_dataset", split="train_100")
test_100 = load_dataset("neurallambda/arithmetic_dataset", split="test_100")
```

### Preparing Inputs

To prepare the inputs as concatenated strings, you can do this:

```python
def prepare_input(example):
    return {
        "input_text": "\n".join(example["input"]),
        "output": example["output"],
    }

# Apply the preparation to a specific split
train_10 = load_dataset("neurallambda/arithmetic_dataset", split="train_10")
train_10_prepared = train_10.map(prepare_input)

# Example of using the prepared dataset
for example in train_10_prepared.select(range(5)):  # Show the first 5 examples
    print("Input:", example["input_text"])
    print("Output:", example["output"])
    print()
```

This will produce output similar to:

```
Input: var_0=5
var_1=2
var_2=-2 + -8
var_3=3
var_4=4
var_5=var_2
var_6=var_3 * 10
var_7=var_2 - var_0
var_8=var_1
var_9=-2 - 9
solve(var_3)=
Output: 3
```