arjunguha committed
Commit 7240386
1 Parent(s): ed0f519

Upload builder.ipynb

Files changed (1)
  1. builder.ipynb +352 -0
builder.ipynb ADDED
@@ -0,0 +1,352 @@
+ {
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "This is Arjun's attempt at building a long-content benchmark based on HumanEvalPlus,\n",
+ "as imagined by Leandro."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 88,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import datasets\n",
+ "import random\n",
+ "import bounded_subprocess\n",
+ "import tempfile\n",
+ "from pathlib import Path"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Let's start by thanking Loubna for uploading this to the Hub."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 41,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "Found cached dataset parquet (/home/arjun/.cache/huggingface/datasets/loubnabnl___parquet/loubnabnl--humaneval_plus-d3a2da5c53783cd1/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec)\n"
+ ]
+ }
+ ],
+ "source": [
+ "humanevalplus = datasets.load_dataset(\"loubnabnl/humaneval_plus\", split=\"train\")"
+ ]
+ },
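+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Before processing anything, it may help to peek at one raw item (a quick sketch; the\n",
+ "variable name `example` and the 300-character slice are just for illustration). The\n",
+ "fields used below are `prompt`, `test`, `entry_point`, and `canonical_solution`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Peek at one raw item; the field names match those used in the cells below.\n",
+ "example = humanevalplus[0]\n",
+ "print(list(example.keys()))\n",
+ "print(example[\"test\"][:300])"
+ ]
+ },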
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The tests in HumanEvalPlus are written in the same style as HumanEval:\n",
+ "\n",
+ "```\n",
+ "def check(candidate):\n",
+ "    assert candidate(x) == y\n",
+ "    ...\n",
+ "```\n",
+ "\n",
+ "The code below extracts the assertions, unindents them, and renames `candidate`\n",
+ "to the name of the function being tested. Not all lines are simple assertions,\n",
+ "so we skip the ones that are not. There is a possibility of error: an assertion\n",
+ "may span several lines. But that is fairly unlikely, and the models we are testing\n",
+ "shouldn't fall apart on a little noise like that.\n",
+ "\n",
+ "Finally, we strip out the docstring from the prompt.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 71,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "Loading cached processed dataset at /home/arjun/.cache/huggingface/datasets/loubnabnl___parquet/loubnabnl--humaneval_plus-d3a2da5c53783cd1/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec/cache-25a2ca89ddab56cb.arrow\n"
+ ]
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "2bdeec04efc94b4bb45601a963eff47e",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ "Filter: 0%| | 0/164 [00:00<?, ? examples/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "def extract_and_unindent(s, entrypoint):\n",
+ "    idx = s.find(\"def check(candidate):\")\n",
+ "    if idx == -1:\n",
+ "        return None\n",
+ "    extracted = s[idx+len(\"def check(candidate):\"):]\n",
+ "    lines = extracted.split(\"\\n\")\n",
+ "    tests = [ ]\n",
+ "    for line in lines:\n",
+ "        if line == \"\":\n",
+ "            continue\n",
+ "        if not line.startswith(\"    assert\"):\n",
+ "            continue\n",
+ "        tests.append(line.strip().replace(\"candidate(\", entrypoint + \"(\"))\n",
+ "    return tests\n",
+ "\n",
+ "def clean_item(item):\n",
+ "    tests = extract_and_unindent(item[\"test\"], item[\"entry_point\"])\n",
+ "    prompt = item[\"prompt\"][:item[\"prompt\"].find(\"\\n    \")]\n",
+ "    return {\n",
+ "        \"tests\": tests,\n",
+ "        \"prompt\": prompt,\n",
+ "        \"canonical\": item[\"canonical_solution\"].strip()\n",
+ "    }\n",
+ "\n",
+ "processed_humaneval_plus = humanevalplus.map(clean_item).filter(lambda item: len(item[\"tests\"]) > 0)"
+ ]
+ },
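+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A quick sanity check, sketched here: applying `extract_and_unindent` to a small,\n",
+ "hand-written `check` function should drop the non-assert lines and rename `candidate`.\n",
+ "The example input and the `add_one` name below are made up for illustration."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Hand-written example input; not taken from HumanEvalPlus.\n",
+ "example_check = '''def check(candidate):\n",
+ "    assert candidate(1) == 2\n",
+ "    print('not an assertion')\n",
+ "    assert candidate(3) == 4\n",
+ "'''\n",
+ "\n",
+ "# Expected: ['assert add_one(1) == 2', 'assert add_one(3) == 4']\n",
+ "extract_and_unindent(example_check, 'add_one')"
+ ]
+ },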
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Given `processed_humaneval_plus`, we turn each item into a benchmark:\n",
+ "\n",
+ "- `prompt` contains several assertions, including distractors, in random order, and\n",
+ "  concludes with a function signature such as `def f(x):`.\n",
+ "- `size` is the length of the prompt in characters.\n",
+ "- `target_tests` is the subset of the assertions that test `f`.\n",
+ "- `canonical_prompt` is the prompt without the distractors and assertions.\n",
+ "- `canonical_solution` is a canonical solution that should pass the tests."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 101,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def build_benchmark(ds, other_indices, target_index):\n",
+ "    canonical_prompt = ds[target_index][\"prompt\"]\n",
+ "    canonical_solution = ds[target_index][\"canonical\"]\n",
+ "\n",
+ "    tests = []\n",
+ "    tests.extend(ds[target_index][\"tests\"])\n",
+ "    for ix in other_indices:\n",
+ "        tests.extend(ds[ix][\"tests\"])\n",
+ "    random.shuffle(tests)\n",
+ "    prompt = \"\\n\".join(tests)\n",
+ "    prompt = prompt + \"\\n\\n\" + canonical_prompt\n",
+ "    target_tests = \"\\n\".join(ds[target_index][\"tests\"])\n",
+ "    return {\n",
+ "        \"prompt\": prompt,\n",
+ "        \"target_tests\": target_tests,\n",
+ "        \"canonical_prompt\": canonical_prompt,\n",
+ "        \"canonical_solution\": \"\\n    \" + canonical_solution,\n",
+ "        \"size\": len(prompt)\n",
+ "    }\n",
+ "\n",
+ "def random_benchmark(ds, size: int):\n",
+ "    assert size > 0\n",
+ "    indices = random.sample(range(len(ds)), size)\n",
+ "    return build_benchmark(ds, indices[1:], indices[0])\n",
+ "\n",
+ "def validate_benchmark(item):\n",
+ "    program = item[\"canonical_prompt\"] + item[\"canonical_solution\"] + \"\\n\\n\" + item[\"target_tests\"]\n",
+ "    with tempfile.NamedTemporaryFile(suffix=\".py\", delete=True) as f:\n",
+ "        Path(f.name).write_text(program)\n",
+ "        r = bounded_subprocess.run([\"python3\", f.name])\n",
+ "        return r.exit_code == 0"
+ ]
+ },
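+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A small smoke test, sketched here: generate a single benchmark with one distractor\n",
+ "problem and confirm that `validate_benchmark` accepts it, i.e. the canonical solution\n",
+ "passes its own target tests. The variable name `smoke` is just for illustration."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# One benchmark item built from two problems: one target plus one distractor.\n",
+ "smoke = random_benchmark(processed_humaneval_plus, 2)\n",
+ "\n",
+ "# The canonical solution should pass its own target tests, so this should print True.\n",
+ "print(smoke[\"size\"], validate_benchmark(smoke))"
+ ]
+ },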
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "This is a decent way to prompt an instruction-tuned model, but we aren't going to do it right now."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 83,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "These are several assertions:\n",
+ "\n",
+ "```\n",
+ "assert words_string(\"One,, two, three, four, five, six,\") == [\"One\", \"two\", \"three\", \"four\", \"five\", \"six\"]\n",
+ "assert words_string(\"One, two, three, four, five, six\") == [\"One\", \"two\", \"three\", \"four\", \"five\", \"six\"]\n",
+ "assert separate_paren_groups('() (()) ((())) (((())))') == [\n",
+ "assert separate_paren_groups('(()(())((())))') == [\n",
+ "assert True, \"This prints if this assert fails 2 (also good for debugging!)\"\n",
+ "assert words_string(\"\") == []\n",
+ "assert words_string(\"Hi, my name\") == [\"Hi\", \"my\", \"name\"]\n",
+ "assert words_string(\"Hi, my name is John\") == [\"Hi\", \"my\", \"name\", \"is\", \"John\"]\n",
+ "assert True, \"This prints if this assert fails 1 (good for debugging!)\"\n",
+ "assert separate_paren_groups('(()()) ((())) () ((())()())') == [\n",
+ "assert words_string(\"ahmed , gamal\") == [\"ahmed\", \"gamal\"]\n",
+ "assert separate_paren_groups('( ) (( )) (( )( ))') == ['()', '(())', '(()())']\n",
+ "```\n",
+ "\n",
+ "Complete the following function so that the assertions pass:\n",
+ "\n",
+ "```\n",
+ "def words_string(s):\n",
+ "```\n"
+ ]
+ }
+ ],
+ "source": [
+ "b = random_benchmark(processed_humaneval_plus, 2)\n",
+ "b_assertions = b[\"prompt\"].split(\"\\n\")\n",
+ "b_signature = b_assertions[-1]\n",
+ "b_assertions = \"\\n\".join(b_assertions[:-1]).rstrip()\n",
+ "print(f\"These are several assertions:\\n\\n```\\n{b_assertions}\\n```\\n\\nComplete the following function so that the assertions pass:\\n\\n```\\n{b_signature}\\n```\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Now we build a benchmark with items of varying size."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 102,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Failed to generate benchmark of size 40\n"
+ ]
+ }
+ ],
+ "source": [
+ "items = [ ]\n",
+ "for size in [10, 20, 40, 80, 160]:\n",
+ "    for i in range(5):\n",
+ "        b = random_benchmark(processed_humaneval_plus, size)\n",
+ "        if validate_benchmark(b):\n",
+ "            items.append(b)\n",
+ "        else:\n",
+ "            print(f\"Failed to generate benchmark of size {size}\")"
+ ]
+ },
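+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A rough summary of what survived validation, sketched here: the number of items and\n",
+ "the range of prompt lengths in characters (the `sizes` variable is just for illustration)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Count the surviving items and report the spread of prompt lengths in characters.\n",
+ "sizes = sorted(item[\"size\"] for item in items)\n",
+ "print(len(sizes), \"items; prompt lengths from\", sizes[0], \"to\", sizes[-1], \"characters\")"
+ ]
+ },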
+ {
+ "cell_type": "code",
+ "execution_count": 107,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "a5a96a8665534bb48490901996856341",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ "Pushing dataset shards to the dataset hub: 0%| | 0/1 [00:00<?, ?it/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "68a8489fca5a4f3e81d1a832080b157e",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ "Downloading metadata: 0%| | 0.00/517 [00:00<?, ?B/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "Updating downloaded metadata with the new split.\n"
+ ]
+ }
+ ],
+ "source": [
+ "longtest_benchmark = datasets.Dataset.from_list(items)\n",
+ "longtest_benchmark.push_to_hub(\"nuprl-staging/longtest_benchmark\", private=False)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "How long is the longest benchmark item (in characters, not tokens)?"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 108,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "64230"
+ ]
+ },
+ "execution_count": 108,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "max(longtest_benchmark[\"size\"])"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "venv",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.6"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+ }