euclaise committed on
Commit 0bf7deb
1 Parent(s): 922d4a9

Create notebook.ipynb

Files changed (1)
  1. notebook.ipynb +352 -0
notebook.ipynb ADDED
@@ -0,0 +1,352 @@
+ {
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "cbe0f126",
+ "metadata": {
+ "jupyter": {
+ "source_hidden": false
+ }
+ },
+ "source": [
+ "# Introducing Genstruct\n",
+ "Generating high-quality synthetic instruction data is an important challenge. Standard approaches rely heavily on in-context learning and prompting of large language models to generate instruction pairs. This approach is limited in quality and diversity, and the generated pairs lack explicit reasoning.\n",
+ "\n",
+ "Two previous methods aimed to improve upon this naive prompting approach:\n",
+ "- Retrieval-augmented generation (RAG) pipelines convert passages from sources like Wikipedia into instructional pairs.\n",
+ "- [Ada-Instruct](https://arxiv.org/abs/2310.04484) instead trains a custom model to generate instructions, rather than relying on prompting. This improves quality and diversity compared to prompting alone. Further, the authors of the Ada-Instruct paper found that training could be performed with as few as 10 examples.\n",
+ "\n",
+ "Genstruct is a new method that combines and extends these previous approaches. Like Ada-Instruct, it is a custom-trained model rather than relying on prompting. However, Ada-Instruct relies heavily on ungrounded generation, which can lead to hallucinations. To mitigate this, Genstruct generates instructions based upon a user-provided context, like RAG methods.\n",
+ "\n",
+ "Additionally, Genstruct goes beyond prior work by focusing on the generation of complex questions and multi-step reasoning for each generated instruction pair, rather than just direct questions and responses."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "bf417800",
+ "metadata": {
+ "jupyter": {
+ "source_hidden": false
+ }
+ },
+ "source": [
+ "## Generating instruction pairs\n",
+ "Genstruct is trained based on Mistral. Specifically, it is trained over the [MetaMath-Mistral-7B](meta-math/MetaMath-Mistral-7B) model, in order to improve reasoning on math-heavy topics.\n",
+ "\n",
+ "Like any other Mistral model, it can be imported from the Hugging Face Hub as follows:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "7492d81a",
+ "metadata": {
+ "collapsed": false,
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ }
+ },
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "/home/user/.conda/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
+ " from .autonotebook import tqdm as notebook_tqdm\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "The `load_in_4bit` and `load_in_8bit` arguments are deprecated and will be removed in the future versions. Please, pass a `BitsAndBytesConfig` object in `quantization_config` argument instead.\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "\r",
+ "Loading checkpoint shards: 0%| | 0/3 [00:00<?, ?it/s]"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "\r",
+ "Loading checkpoint shards: 33%|███▎ | 1/3 [00:01<00:03, 1.75s/it]"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "\r",
+ "Loading checkpoint shards: 67%|██████▋ | 2/3 [00:03<00:01, 1.72s/it]"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "\r",
+ "Loading checkpoint shards: 100%|██████████| 3/3 [00:04<00:00, 1.64s/it]"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "\r",
+ "Loading checkpoint shards: 100%|██████████| 3/3 [00:04<00:00, 1.66s/it]"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "from transformers import AutoModelForCausalLM, AutoTokenizer\n",
+ "\n",
+ "MODEL_NAME = 'NousResearch/Genstruct-7B'\n",
+ "\n",
+ "model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map='cuda', load_in_8bit=True)\n",
+ "tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)"
+ ]
+ },
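+ {
+ "cell_type": "markdown",
+ "id": "quantcfg-md",
+ "metadata": {},
+ "source": [
+ "As the deprecation warning above notes, `load_in_8bit` is being replaced by `quantization_config`. A minimal sketch of the equivalent load (assuming `bitsandbytes` is installed; this is an alternative to the cell above, not an additional step):"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "quantcfg-code",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from transformers import BitsAndBytesConfig\n",
+ "\n",
+ "# Equivalent 8-bit load without the deprecated load_in_8bit argument\n",
+ "quant_config = BitsAndBytesConfig(load_in_8bit=True)\n",
+ "model = AutoModelForCausalLM.from_pretrained(\n",
+ "    MODEL_NAME, device_map='cuda', quantization_config=quant_config\n",
+ ")"
+ ]
+ },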
+ {
+ "cell_type": "markdown",
+ "id": "34f73db8",
+ "metadata": {
+ "jupyter": {
+ "source_hidden": false
+ }
+ },
+ "source": [
+ "Genstruct works by generating instructions and answers from a user-provided context and title. It utilizes a custom prompt format, as in the following example:\n",
+ "```\n",
+ "[[[Title]]] p-value\n",
+ "[[[Content]]] The p-value is used in the context of null hypothesis testing in order to quantify the statistical significance of a result, the result being the observed value of the chosen statistic T {\\displaystyle T}.[note 2] The lower the p-value is, the lower the probability of getting that result if the null hypothesis were true. A result is said to be statistically significant if it allows us to reject the null hypothesis. All other things being equal, smaller p-values are taken as stronger evidence against the null hypothesis.\n",
+ "\n",
+ "The following is an interaction between a user and an AI assistant that is related to the above text.\n",
+ "\n",
+ "[[[User]]]\n",
+ "```\n",
+ "\n",
+ "The model then completes from `[[[User]]]`, generating an instruction and a response.\n",
+ "\n",
+ "To simplify its use, the Genstruct tokenizer includes a 'chat template'. It accepts a list containing a single dict, with 'title' and 'content' keys holding the title and content of the context to generate from:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "2617d9f5",
+ "metadata": {
+ "collapsed": false,
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ }
+ },
+ "outputs": [],
+ "source": [
+ "msg = [{\n",
+ "    'title': 'p-value',\n",
+ "    'content': \"The p-value is used in the context of null hypothesis testing in order to quantify the statistical significance of a result, the result being the observed value of the chosen statistic T {\\displaystyle T}.[note 2] The lower the p-value is, the lower the probability of getting that result if the null hypothesis were true. A result is said to be statistically significant if it allows us to reject the null hypothesis. All other things being equal, smaller p-values are taken as stronger evidence against the null hypothesis.\"\n",
+ "}]\n",
+ "inputs = tokenizer.apply_chat_template(msg, return_tensors='pt').cuda()"
+ ]
+ },
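+ {
+ "cell_type": "markdown",
+ "id": "template-check-md",
+ "metadata": {},
+ "source": [
+ "To sanity-check the template, you can render it to text instead of token ids; the result should match the `[[[Title]]]`/`[[[Content]]]` format shown above:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "template-check-code",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Render the chat template to a string (rather than token ids) for inspection\n",
+ "print(tokenizer.apply_chat_template(msg, tokenize=False))"
+ ]
+ },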
+ {
+ "cell_type": "markdown",
+ "id": "997b8d92",
+ "metadata": {
+ "jupyter": {
+ "source_hidden": false
+ }
+ },
+ "source": [
+ "Generation can then be performed with `model.generate()`, as follows (or with vLLM, or whatever other pipeline you prefer):"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "1429b6bc",
+ "metadata": {
+ "collapsed": false,
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ }
+ },
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[[[Title]]] p-value\n",
+ "[[[Content]]] The p-value is used in the context of null hypothesis testing in order to quantify the statistical significance of a result, the result being the observed value of the chosen statistic T {\\displaystyle T}.[note 2] The lower the p-value is, the lower the probability of getting that result if the null hypothesis were true. A result is said to be statistically significant if it allows us to reject the null hypothesis. All other things being equal, smaller p-values are taken as stronger evidence against the null hypothesis.\n",
+ "\n",
+ "The following is an interaction between a user and an AI assistant that is related to the above text.\n",
+ "\n",
+ "[[[User]]] The share prices of two rival companies, A and B, have been monitored for many years, allowing a large number of data points for rigorous statistical analysis. This year's summer, which is known to affect share prices, had two distinct sub-periods, A and B, which were roughly equal in length. The company 'A's share price, during sub-period A, was found to be 2.35, using a test statistic T. The same statistic, for sub-period B, was 1.45.\n",
+ "Which company, A or B, had a smaller p-value?\n",
+ "[[[Assistant]]] In the context of statistical analysis, the p-value is a key component of null hypothesis testing. It signifies the probability of obtaining results equal to or more extreme than the observed value of the statistic, under the assumption that the null hypothesis is true. In other words, the lower the p-value is, the less likely the result is if the null hypothesis were true.\n",
+ "\n",
+ "In this case, we are comparing the share prices of companies A and B over two distinct sub-periods, A and B, during which the summer had a notable impact on share prices. Using a test statistic T, we found that for sub-period A, the value was 2.35, and for sub-period B, it was 1.45.\n",
+ "\n",
+ "When we calculate the p-value for these results, assuming the null hypothesis is true, if we were to get a result as extreme as 2.35 (or more extreme), the probability of that occurring is lower for company A than it is for company B and the statistic 1.45. This means that, all other things being equal, the evidence provided by the data is stronger against the null hypothesis for company A than it is for company B.\n",
+ "\n",
+ "Therefore, company A would have a smaller p-value than company B, which means that, based on the data, we would have a lower probability of getting the observed result of 2.35 for company A if the null hypothesis were true. Consequently, the result for company A is a stronger indicator that it's time to reject the null hypothesis.\n",
+ "\n",
+ "So, the company with the smaller p-value is A.\n"
+ ]
+ }
+ ],
+ "source": [
+ "gen = tokenizer.decode(model.generate(inputs, max_new_tokens=512)[0]).split(tokenizer.eos_token)[0]\n",
+ "print(gen)"
+ ]
+ },
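+ {
+ "cell_type": "markdown",
+ "id": "attnmask-md",
+ "metadata": {},
+ "source": [
+ "The warnings above are harmless for a single unpadded sequence, but they can be silenced by passing the attention mask and a pad token explicitly. A minimal sketch of the same generation with those arguments added:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "attnmask-code",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import torch\n",
+ "\n",
+ "# With a single unpadded sequence the mask is all ones; passing it (plus a\n",
+ "# pad token id) avoids the warnings without changing the generation\n",
+ "out = model.generate(\n",
+ "    inputs,\n",
+ "    attention_mask=torch.ones_like(inputs),\n",
+ "    pad_token_id=tokenizer.eos_token_id,\n",
+ "    max_new_tokens=512\n",
+ ")\n",
+ "print(tokenizer.decode(out[0]).split(tokenizer.eos_token)[0])"
+ ]
+ },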
+ {
+ "cell_type": "markdown",
+ "id": "0848af10",
+ "metadata": {
+ "jupyter": {
+ "source_hidden": false
+ }
+ },
+ "source": [
+ "Note that the model is optimized for single-paragraph extracts from Wikipedia articles. You may have varying luck with other input types.\n",
+ "\n",
+ "## Filtering outputs using a reward model\n",
+ "The model may occasionally generate incorrect or improperly formatted output; the likelihood of this can be reduced with clever sampling methods, such as rejection sampling using a reward model, or even simple regex filtering (a regex sketch follows the example below).\n",
+ "\n",
+ "For instance, we might consider `OpenAssistant/reward-model-deberta-v3-large-v2` as a reward model, and perform best-of-n sampling:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "a93868ac",
+ "metadata": {
+ "collapsed": false,
+ "jupyter": {
+ "outputs_hidden": false,
+ "source_hidden": false
+ }
+ },
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.\n"
+ ]
+ },
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[[[Title]]] p-value\n",
+ "[[[Content]]] The p-value is used in the context of null hypothesis testing in order to quantify the statistical significance of a result, the result being the observed value of the chosen statistic T {\\displaystyle T}.[note 2] The lower the p-value is, the lower the probability of getting that result if the null hypothesis were true. A result is said to be statistically significant if it allows us to reject the null hypothesis. All other things being equal, smaller p-values are taken as stronger evidence against the null hypothesis.\n",
+ "\n",
+ "The following is an interaction between a user and an AI assistant that is related to the above text.\n",
+ "\n",
+ "[[[User]]] Two medical procedures were compared by flipping 2 coins, procedure A assumed to be better and so it was labeled head, while procedure B was labeled as tail for a flip. The coins where then flipped 25 times, with the following results:[{'Tails', 12}, {'Heads', 13}]\n",
+ "\n",
+ "Which procedure had better results with statistical significance?\n",
+ "[[[Assistant]]] The statistical significance of the outcomes between the two procedures can be assessed using the p-value, which represents the probability of obtaining results as extreme as, or more extreme than, those observed, if the null hypothesis is true.\n",
+ "\n",
+ "In this case, let's assume that the null hypothesis would suggest that there is no difference between the two procedures, so each one should result in heads or tails with approximately equal probability (assuming fair coins).\n",
+ "\n",
+ "To calculate the p-value, we can use the statistic T, which in this context could be any relevant statistic calculated from the data, such as the difference in the number of flips resulting in heads or tails. We want to find the p-value corresponding to the observed value of T when the data is Tails = 12, Heads\n"
+ ]
+ }
+ ],
+ "source": [
+ "import torch\n",
+ "from transformers import AutoModelForSequenceClassification\n",
+ "\n",
+ "N = 4\n",
+ "\n",
+ "rm_tokenizer = AutoTokenizer.from_pretrained('OpenAssistant/reward-model-deberta-v3-large-v2')\n",
+ "rm_model = AutoModelForSequenceClassification.from_pretrained('OpenAssistant/reward-model-deberta-v3-large-v2', torch_dtype=torch.bfloat16)\n",
+ "\n",
+ "def extract_pair(resp):\n",
+ "    # Drop the prompt up to the context, then split the remainder into the\n",
+ "    # generated user instruction and assistant response; fall back to empty\n",
+ "    # strings when the expected markers are missing\n",
+ "    try:\n",
+ "        resp = resp.split('[[[Content]]]', 1)[1]\n",
+ "        inst, resp = resp.split('[[[User]]]', 1)[1].split('[[[Assistant]]]', 1)\n",
+ "        return inst.strip(), resp.strip()\n",
+ "    except (IndexError, ValueError):\n",
+ "        return '', ''\n",
+ "\n",
+ "def score(resp):\n",
+ "    # Score the (instruction, response) pair with the reward model\n",
+ "    inst, resp = extract_pair(resp.split(tokenizer.eos_token)[0])\n",
+ "\n",
+ "    with torch.no_grad():\n",
+ "        inputs = rm_tokenizer(inst, resp, return_tensors='pt')\n",
+ "        score = float(rm_model(**inputs).logits[0].cpu())\n",
+ "    return score\n",
+ "\n",
+ "gens = tokenizer.batch_decode(model.generate(inputs, max_new_tokens=256, num_return_sequences=N, do_sample=True))\n",
+ "print(max(gens, key=score))"
+ ]
+ }
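+ ,
+ {
+ "cell_type": "markdown",
+ "id": "regex-filter-md",
+ "metadata": {},
+ "source": [
+ "For completeness, here is the simple regex filtering mentioned above - a minimal sketch (the pattern and helper are illustrative, not part of Genstruct itself) that keeps only generations containing exactly one well-formed user/assistant pair:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "regex-filter-code",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import re\n",
+ "\n",
+ "# Matches a [[[User]]] turn followed by a non-empty [[[Assistant]]] turn\n",
+ "PAIR_RE = re.compile(r'\\\\[\\\\[\\\\[User\\\\]\\\\]\\\\].+?\\\\[\\\\[\\\\[Assistant\\\\]\\\\]\\\\].+', re.DOTALL)\n",
+ "\n",
+ "def well_formed(g):\n",
+ "    body = g.split(tokenizer.eos_token)[0]\n",
+ "    # Require each marker to appear exactly once, in order\n",
+ "    return (bool(PAIR_RE.search(body))\n",
+ "            and body.count('[[[User]]]') == 1\n",
+ "            and body.count('[[[Assistant]]]') == 1)\n",
+ "\n",
+ "filtered = [g for g in gens if well_formed(g)]\n",
+ "print(f'{len(filtered)}/{len(gens)} generations passed the format filter')"
+ ]
+ }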
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.0"
+ },
+ "widgets": {
+ "application/vnd.jupyter.widget-state+json": {
+ "state": {},
+ "version_major": 2,
+ "version_minor": 0
+ }
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+ }