awacke1 committed
Commit f39338a • 1 Parent(s): 3049e24

Upload 4 files

Transcript-AndrejKarpathyStateofGPT.txt ADDED
@@ -0,0 +1,812 @@
0:00
[MUSIC]
0:07
ANNOUNCER: Please welcome AI researcher and founding member of OpenAI, Andrej Karpathy.
0:21
ANDREJ KARPATHY: Hi, everyone. I'm happy to be here to tell you about the state of GPT, and more generally about
0:28
the rapidly growing ecosystem of large language models. I would like to partition the talk into two parts.
0:35
In the first part, I would like to tell you about how we train GPT assistants, and then in the second part,
0:40
we're going to take a look at how we can use these assistants effectively for your applications.
0:46
First, let's take a look at the emerging recipe for how to train these assistants. Keep in mind that this is all very new and still rapidly evolving,
0:53
but so far, the recipe looks something like this. Now, this is a complicated slide, so I'm going to go through it piece by
GPT Assistant training pipeline
0:59
piece, but roughly speaking, we have four major stages: pretraining,
1:04
supervised finetuning, reward modeling, and reinforcement learning, and they follow each other serially.
1:09
Now, in each stage, we have a dataset that powers that stage. We have an algorithm that, for our purposes, will be
1:17
an objective for training the neural network, and then we have a resulting model,
1:23
and then there are some notes on the bottom. The first stage we're going to start with is the pretraining stage. Now, this stage is special in this diagram,
1:31
and this diagram is not to scale, because this stage is where basically all of the computational work happens. This is 99 percent of the training
1:38
compute time, and also FLOPs. This is where we are dealing with
1:44
Internet-scale datasets, with thousands of GPUs in a supercomputer, and potentially months of training.
1:51
The other three stages are finetuning stages that are much more along the lines of a small number of GPUs and hours or days.
1:59
Let's take a look at the pretraining stage, which produces a base model. First, we are going to gather a large amount of data.
Data collection
2:07
Here's an example of what we call a data mixture. It comes from the paper released by
2:13
Meta, where they released the LLaMA base model. You can see roughly the datasets that
2:18
enter into these collections. We have CommonCrawl, which is a web scrape, C4, which is also CommonCrawl,
2:25
and then some high-quality datasets as well: for example, GitHub, Wikipedia, Books, arXiv, StackExchange, and so on.
2:31
These are all mixed up together, and then they are sampled according to some given proportions,
2:36
and that forms the training set for the GPT. Now, before we can actually train on this data,
2:43
we need to go through one more preprocessing step, and that is tokenization. This is basically a translation of
2:48
the raw text that we scrape from the Internet into sequences of integers, because
2:53
that's the native representation over which GPTs function. Now, this is a lossless translation
3:00
between pieces of text and tokens/integers, and there are a number of algorithms for this stage.
3:05
Typically, for example, you could use something like byte pair encoding, which iteratively merges text chunks
3:11
and groups them into tokens. Here, I'm showing some example chunks of these tokens,
3:16
and then this is the raw integer sequence that will actually feed into a transformer.
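As a concrete illustration of this text-to-integers round trip (not from the talk itself; a minimal sketch assuming the openly available tiktoken library):

```python
# Minimal sketch of BPE tokenization using OpenAI's tiktoken library.
# "gpt2" is the public GPT-2 BPE vocabulary (50,257 tokens).
import tiktoken

enc = tiktoken.get_encoding("gpt2")
tokens = enc.encode("Hello, world!")           # text -> list of integers
print(tokens)                                  # e.g. [15496, 11, 995, 0]
assert enc.decode(tokens) == "Hello, world!"   # lossless round trip
```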
2 example models
3:23
Now, here I'm showing two examples of the hyperparameters that govern this stage.
3:28
For GPT-4, we did not release too much information about how it was trained and so on, so I'm using GPT-3's numbers,
3:33
but GPT-3 is of course a little bit old by now, about three years old. LLaMA, though, is a fairly recent model from Meta.
3:40
These are roughly the orders of magnitude that we're dealing with when we're doing pretraining. The vocabulary size is usually a couple of tens of thousands of tokens.
3:48
The context length is usually something like 2,000, 4,000, or nowadays even 100,000,
3:53
and this governs the maximum number of integers that the GPT will look at when it's trying to
3:58
predict the next integer in a sequence. You can see that the number of parameters is, say,
4:04
65 billion for LLaMA. Now, even though LLaMA has only 65B parameters compared to GPT-3's 175 billion parameters,
4:11
LLaMA is a significantly more powerful model, and intuitively, that's because the model is trained for significantly longer:
4:17
in this case, 1.4 trillion tokens instead of 300 billion tokens. You shouldn't judge the power of a model just by
4:23
the number of parameters that it contains. Below, I'm showing some tables of rough hyperparameters that typically
4:31
go into specifying the transformer neural network: the number of heads, the dimension size, the number of layers,
4:36
and so on, and on the bottom I'm showing some training hyperparameters. For example, to train the 65B model,
4:44
Meta used 2,000 GPUs, roughly 21 days of training, and roughly several million dollars.
4:52
That's the rough order of magnitude that you should have in mind for the pretraining stage.
4:57
Now, when we're actually pretraining, what happens? Roughly speaking, we are going to take our tokens,
5:03
and we're going to lay them out into data batches. We have these arrays that will feed into the transformer,
5:09
and these arrays are B by T: B, the batch size, with independent examples stacked up in rows, and
5:16
T, the maximum context length. In my picture, the context length is only 10, but in practice this could be 2,000, 4,000, etc.
5:23
These are extremely long rows. What we do is we take these documents, and we pack them into rows,
5:28
and we delimit them with these special end-of-text tokens, basically telling the transformer where a new document begins.
5:35
Here, I have a few examples of documents, and then I've stretched them out into this input.
5:41
Now, we're going to feed all of these numbers into a transformer. Let me just focus on a single particular cell,
5:49
but the same thing will happen at every cell in this diagram. Let's look at the green cell. The green cell is going to take
5:56
a look at all of the tokens before it, so all of the tokens in yellow, and we're going to feed that entire context
6:03
into the transformer neural network, and the transformer is going to try to predict the next token in
6:08
the sequence, in this case in red. Now, I unfortunately don't have too much time to go into the full details of this
6:14
neural network architecture; for our purposes, it's just a large blob of neural net stuff, and it's typically got several
6:20
tens of billions of parameters or something like that. Of course, as I tune these parameters, you're getting slightly different predicted distributions
6:26
at every single one of these cells. For example, if our vocabulary size is 50,257 tokens,
6:34
then we're going to have that many numbers, because we need to specify a probability distribution for what comes next.
6:40
Basically, we have a probability for whatever may follow. Now, in this specific example, for this specific cell,
6:45
513 will come next, and so we can use this as a source of supervision to update our transformer's weights.
6:51
We're applying this basically on every single cell in parallel, and we keep swapping batches, and we're trying to get the transformer to make
6:58
the correct predictions over what token comes next in a sequence.
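As a rough sketch of this objective (my own illustration, not code from the talk, assuming PyTorch), next-token prediction is just cross-entropy between the predicted distribution at each cell and the actual next token:

```python
# Sketch of the next-token (language modeling) loss, assuming PyTorch.
# A tiny embedding+linear stack stands in for the real transformer.
import torch
import torch.nn as nn
import torch.nn.functional as F

B, T, vocab = 4, 10, 50257
model = nn.Sequential(nn.Embedding(vocab, 64), nn.Linear(64, vocab))

tokens = torch.randint(0, vocab, (B, T + 1))       # a batch of token rows
inputs, targets = tokens[:, :-1], tokens[:, 1:]    # predict each next token

logits = model(inputs)                             # (B, T, vocab)
loss = F.cross_entropy(logits.reshape(-1, vocab), targets.reshape(-1))
loss.backward()                                    # supervision at every cell
```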
7:03
Let me show you more concretely what this looks like when you train one of these models. This example actually comes from The New York Times, where they trained a small GPT on Shakespeare.
7:11
Here's a small snippet of Shakespeare, and they trained their GPT on it. Now, in the beginning, at initialization,
7:17
the GPT starts with completely random weights, so you're getting completely random outputs as well. But over time, as you train the GPT longer and longer,
7:26
you are getting more and more coherent and consistent samples from the model,
7:31
and the way you sample from it, of course, is you predict what comes next, you sample from that distribution, and
7:36
you keep feeding that back into the process, so you can basically sample large sequences.
7:42
By the end, you see that the transformer has learned about words, and where to put spaces, and where to put commas, and so on.
7:48
We're making more and more consistent predictions over time. These are the plots that you are looking at when you're doing model pretraining.
7:54
Effectively, we're looking at the loss function over time as you train, and low loss means that our transformer
8:00
is giving a higher probability to the next correct integer in the sequence.
8:06
What are we going to do with this model once we've trained it after a month? Well, the first thing that we noticed, we the field,
Base models learn powerful, general representations
8:14
is that these models, basically in the process of language modeling, learn very powerful general representations,
8:21
and it's possible to very efficiently finetune them for any arbitrary downstream task you might be interested in.
8:26
As an example, if you're interested in sentiment classification, the approach used to be that you collect a bunch of positives
8:33
and negatives, and then you train some NLP model for that, but the new approach is:
8:38
ignore sentiment classification, go off and do large language model pretraining,
8:43
train a large transformer, and then you may only have a few examples, and you can very efficiently finetune
8:48
your model for that task. This works very well in practice. The reason for this is that basically
8:55
the transformer is forced to multitask across a huge number of tasks in the language modeling task,
9:00
because in terms of predicting the next token, it's forced to understand a lot about the structure of the text and all the different concepts therein.
9:09
That was GPT-1. Now, around the time of GPT-2, people noticed that, actually, even better than finetuning,
9:15
you can prompt these models very effectively. These are language models, and they want to complete documents, so
9:20
you can actually trick them into performing tasks by arranging these fake documents.
9:25
In this example, for example, we have some passage, and then we do QA, QA, QA.
9:31
This is called a few-shot prompt. Then we do Q, and as the transformer tries to complete the document, it's actually answering our question.
9:37
This is an example of prompt engineering a base model: making it believe that it's imitating a document, and getting it to perform a task.
9:45
This kicked off, I think, the era of, I would say, prompting over finetuning, and seeing that this
9:50
can work extremely well on a lot of problems, even without training any neural networks, finetuning, or so on.
9:56
Now, since then, we've seen an entire evolutionary tree of base models that everyone has trained.
10:02
Not all of these models are available. For example, the GPT-4 base model was never released.
10:08
The GPT-4 model that you might be interacting with over the API is not a base model; it's an assistant model, and we're going to cover how to get those in a bit.
10:15
The GPT-3 base model is available via the API under the name davinci, and the GPT-2 base model
10:21
is available even as weights on our GitHub repo. But currently the best available base model
10:27
is probably the LLaMA series from Meta, although it is not commercially licensed.
10:32
Now, one thing to point out is that base models are not assistants. They don't want to answer your questions;
10:41
they want to complete documents. If you tell them to write a poem about bread and cheese,
10:46
they will answer questions with more questions; they're completing what they think is a document.
10:51
However, you can prompt base models in a specific way that is more likely to work.
10:57
As an example, here's a poem about bread and cheese, and in that case it will autocomplete correctly. You can even trick base models into being assistants.
11:06
The way you would do this is you would create a specific few-shot prompt that makes it look like there's some document with the human and assistant
11:13
exchanging information. Then at the bottom, you put your query at the end, and the base model
11:21
will condition itself into being a helpful assistant and answer,
11:26
but this is not very reliable and doesn't work super well in practice, although it can be done.
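A minimal sketch of what such a few-shot "fake document" prompt might look like (my own made-up example, not the one from the slide):

```python
# A made-up few-shot prompt that frames a base model as an assistant.
# The base model continues the "document", so it tends to answer in role.
prompt = """The following is a conversation between a human and a helpful assistant.

Human: What is the capital of France?
Assistant: The capital of France is Paris.

Human: How many legs does a spider have?
Assistant: A spider has eight legs.

Human: Why is the sky blue?
Assistant:"""
# Feeding `prompt` to a base model and sampling a completion will usually
# produce an answer, though not as reliably as a real assistant model.
```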
11:32
Instead, we have a different path to make actual GPT assistants, not just base model document completers. That takes us into supervised finetuning.
11:39
In the supervised finetuning stage, we are going to collect small but high-quality datasets. In this case,
11:45
we're going to ask human contractors to gather data of the form: prompt, and ideal response.
11:52
We're going to collect lots of these, typically tens of thousands or something like that. Then we're going to still do language
11:58
modeling on this data. Nothing changed algorithmically; we're just swapping out the training set. It used to be Internet documents,
12:04
which is high quantity but low quality, and we swap it out for this Q&A prompt-response data,
12:11
which is low quantity but high quality. We still do language modeling, and then after training,
12:16
we get an SFT model. You can actually deploy these models, and they are actual assistants, and they work to some extent.
12:22
Let me show you what an example demonstration might look like. Here's something that a human contractor might come up with.
12:28
Here's some random prompt: "Can you write a short introduction about the relevance of the term monopsony?" or something like that.
12:34
Then the contractor also writes out an ideal response. When they write out these responses, they are following extensive labeling
12:40
documentation, and they are being asked to be helpful, truthful, and harmless.
12:45
You probably can't read these labeling instructions here; neither can I, but they're long, and this is just people
12:52
following instructions and trying to complete these prompts. That's what the dataset looks like. You can train these models. This works to some extent.
12:59
Now, you can actually continue the pipeline from here on and go into RLHF,
13:05
reinforcement learning from human feedback, which consists of both reward modeling and reinforcement learning.
13:10
Let me cover that, and then I'll come back to why you may want to go through the extra steps, and how that compares to SFT models.
13:16
In the reward modeling step, what we're going to do is shift our data collection to be of the form of comparisons.
13:23
Here's an example of what our dataset will look like. I have the same identical prompt on the top,
RM Dataset
13:28
which is asking the assistant to write a program or a function that checks if a given string is a palindrome.
13:35
Then what we do is we take the SFT model, which we've already trained, and we create multiple completions.
13:41
In this case, we have three completions that the model has created, and then we ask people to rank these completions.
13:47
If you stare at this for a while — and by the way, these are very difficult things to do, comparing some of these completions;
13:52
this can take people even hours for a single prompt-completion pair —
13:57
let's say we decided that one of these is much better than the others, and so on. We rank them.
14:03
Then we can follow that with something that looks very much like a binary classification on all the possible pairs between these completions.
RM Training
14:10
What we do now is we lay out our prompt in rows, and the prompt is identical across all three rows here.
14:16
It's all the same prompt, but the completion varies. The yellow tokens are coming from the SFT model.
14:21
Then what we do is we append another special reward readout token at the end, and we basically only
14:28
supervise the transformer at this single green token. The transformer will predict some reward
14:34
for how good that completion is for that prompt, and basically it makes
14:39
a guess about the quality of each completion. Then, once it makes a guess for every one of them,
14:44
we also have the ground truth, which is telling us the ranking of them. We can actually enforce that some of
14:50
these numbers should be much higher than others, and so on. We formulate this into a loss function, and we train our model to make reward predictions
14:56
that are consistent with the ground truth coming from the comparisons from all these contractors. That's how we train our reward model.
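A common way to formulate that comparison loss (a sketch of the standard pairwise approach, assuming PyTorch; the talk doesn't spell out the exact formula) is to push the reward of the preferred completion above the reward of the rejected one:

```python
# Pairwise reward-model loss (Bradley-Terry style), assuming PyTorch.
# r_chosen / r_rejected are scalar rewards read out at the special token
# for the human-preferred and less-preferred completions of each prompt.
import torch
import torch.nn.functional as F

r_chosen = torch.tensor([1.3, 0.2], requires_grad=True)
r_rejected = torch.tensor([-0.5, -1.2], requires_grad=True)

# Maximize P(chosen beats rejected) = sigmoid(r_chosen - r_rejected).
loss = -F.logsigmoid(r_chosen - r_rejected).mean()
loss.backward()
```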
+ That allows us to score how good a completion is for a prompt. Once we have a reward model,
297
+ 15:09
298
+ we can't deploy this because this is not very useful as an assistant by itself, but it's very useful for the reinforcement
299
+ 15:15
300
+ learning stage that follows now. Because we have a reward model, we can score the quality of any arbitrary completion for any given prompt.
301
+ 15:22
302
+ What we do during reinforcement learning is we basically get, again, a large collection of prompts and now we do
303
+ 15:28
304
+ reinforcement learning with respect to the reward model. Here's what that looks like. We take a single prompt,
305
+ 15:34
306
+ we lay it out in rows, and now we use basically the model we'd like to train which
307
+ 15:39
308
+ was initialized at SFT model to create some completions in yellow, and then we append the reward token again
309
+ 15:45
310
+ and we read off the reward according to the reward model, which is now kept fixed. It doesn't change any more. Now the reward model
311
+ 15:53
312
+ tells us the quality of every single completion for all these prompts and so what we can do is we can now just basically apply the same
313
+ 15:59
314
+ language modeling loss function, but we're currently training on the yellow tokens, and we are weighing
315
+ 16:06
316
+ the language modeling objective by the rewards indicated by the reward model. As an example, in the first row,
317
+ 16:13
318
+ the reward model said that this is a fairly high-scoring completion and so all the tokens that we
319
+ 16:18
320
+ happen to sample on the first row are going to get reinforced and they're going to get higher probabilities for the future.
321
+ 16:25
322
+ Conversely, on the second row, the reward model really did not like this completion, -1.2. Therefore, every single token that we sampled in
323
+ 16:32
324
+ that second row is going to get a slightly higher probability for the future. We do this over and over on many prompts on many batches and basically,
325
+ 16:39
326
+ we get a policy that creates yellow tokens here. It's basically all the completions here will
327
+ 16:46
328
+ score high according to the reward model that we trained in the previous stage.
329
+ 16:51
330
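A very rough sketch of that reward-weighted objective (my own simplification, assuming PyTorch; production RLHF typically uses PPO with extra terms such as a KL penalty to the SFT model, which the talk doesn't go into):

```python
# Reward-weighted language modeling on sampled completions: a simplification
# of RLHF (real pipelines use PPO with a KL penalty to the SFT model).
import torch

# Placeholder per-token log-probabilities of the sampled (yellow) tokens
# under the policy, and one scalar reward per completion row.
logprobs = torch.randn(3, 8, requires_grad=True)
rewards = torch.tensor([1.0, -1.2, 0.2])

# REINFORCE-style: upweight tokens from high-reward rows, downweight low.
loss = -(rewards[:, None] * logprobs).mean()
loss.backward()
```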
+ That's what the RLHF pipeline is. Then at the end, you get a model that you could deploy.
331
+ 16:58
332
+ As an example, ChatGPT is an RLHF model, but some other models that you might come across for example,
333
+ 17:05
334
+ Vicuna-13B, and so on, these are SFT models. We have base models, SFT models, and RLHF models.
335
+ 17:12
336
+ That's the state of things there. Now why would you want to do RLHF? One answer that's not
337
+ 17:19
338
+ that exciting is that it works better. This comes from the instruct GPT paper. According to these experiments a while ago now,
339
+ 17:25
340
+ these PPO models are RLHF. We see that they are basically preferred in a lot
341
+ 17:30
342
+ of comparisons when we give them to humans. Humans prefer basically tokens
343
+ 17:36
344
+ that come from RLHF models compared to SFT models, compared to base model that is prompted to be an assistant. It just works better.
345
+ 17:43
346
+ But you might ask why does it work better? I don't think that there's a single amazing answer
347
+ 17:49
348
+ that the community has really agreed on, but I will offer one reason potentially.
349
+ 17:55
350
+ It has to do with the asymmetry between how easy computationally it is to compare versus generate.
351
+ 18:02
352
+ Let's take an example of generating a haiku. Suppose I ask a model to write a haiku about paper clips.
353
+ 18:07
354
+ If you're a contractor trying to train data, then imagine being a contractor collecting basically data for the SFT stage,
355
+ 18:14
356
+ how are you supposed to create a nice haiku for a paper clip? You might not be very good at that, but if I give you a few examples of
357
+ 18:20
358
+ haikus you might be able to appreciate some of these haikus a lot more than others. Judging which one of these is good is a much easier task.
359
+ 18:27
360
+ Basically, this asymmetry makes it so that comparisons are a better way to potentially leverage
361
+ 18:33
362
+ yourself as a human and your judgment to create a slightly better model. Now, RLHF models are not
363
+ 18:40
364
+ strictly an improvement on the base models in some cases. In particular, we'd notice for example that they lose some entropy.
365
+ 18:46
366
+ That means that they give more peaky results. They can output samples
367
+ Mode collapse
368
+ 18:54
369
+ with lower variation than the base model. The base model has lots of entropy and will give lots of diverse outputs.
370
+ 19:00
371
+ For example, one place where I still prefer to use a base model is in the setup
372
+ 19:06
373
+ where you basically have n things and you want to generate more things like it.
374
+ 19:13
375
+ Here is an example that I just cooked up. I want to generate cool Pokemon names.
376
+ 19:18
377
+ I gave it seven Pokemon names and I asked the base model to complete the document and it gave me a lot more Pokemon names.
378
+ 19:24
379
+ These are fictitious. I tried to look them up. I don't believe they're actual Pokemons. This is the task that I think the base model would be
380
+ 19:31
381
+ good at because it still has lots of entropy. It'll give you lots of diverse cool more things that look like whatever you give it before.
382
+ 19:41
383
+ Having said all that, these are the assistant models that are probably available to you at this point.
384
+ 19:47
385
+ There was a team at Berkeley that ranked a lot of the available assistant models and give them basically Elo ratings.
386
+ 19:53
387
+ Currently, some of the best models, of course, are GPT-4, by far, I would say, followed by Claude, GPT-3.5, and then a number of models,
388
+ 20:00
389
+ some of these might be available as weights, like Vicuna, Koala, etc. The first three rows here are
390
+ 20:07
391
+ all RLHF models and all of the other models to my knowledge, are SFT models, I believe.
392
+ 20:15
393
+ That's how we train these models on the high level. Now I'm going to switch gears and let's look at how we can
394
+ 20:22
395
+ best apply the GPT assistant model to your problems. Now, I would like to work
396
+ 20:27
397
+ in setting of a concrete example. Let's work with a concrete example here.
398
+ 20:32
399
+ Let's say that you are working on an article or a blog post, and you're going to write this sentence at the end.
400
+ 20:38
401
+ "California's population is 53 times that of Alaska." So for some reason, you want to compare the populations of these two states.
402
+ 20:44
403
+ Think about the rich internal monologue and tool use and how much work actually goes computationally in
404
+ 20:50
405
+ your brain to generate this one final sentence. Here's maybe what that could look like in your brain.
406
+ 20:55
407
+ For this next step, let me blog on my blog, let me compare these two populations.
408
+ 21:01
409
+ First I'm going to obviously need to get both of these populations. Now, I know that I probably
410
+ 21:06
411
+ don't know these populations off the top of my head so I'm aware of what I know or don't know of my self-knowledge.
412
+ 21:12
413
+ I go, I do some tool use and I go to Wikipedia and I look up California's population and Alaska's population.
414
+ 21:19
415
+ Now, I know that I should divide the two, but again, I know that dividing 39.2 by 0.74 is very unlikely to succeed.
416
+ 21:26
417
+ That's not the thing that I can do in my head and so therefore, I'm going to rely on the calculator so I'm going to use a calculator,
418
+ 21:33
419
+ punch it in and see that the output is roughly 53. Then maybe I do some reflection and sanity checks in
420
+ 21:40
421
+ my brain so does 53 makes sense? Well, that's quite a large fraction, but then California is the most
422
+ 21:45
423
+ populous state, so maybe that looks okay. Then I have all the information I might need, and now I get to the creative portion of writing.
424
+ 21:52
425
+ I might start to write something like "California has 53x times greater" and then I think to myself,
426
+ 21:58
427
+ that's actually like really awkward phrasing so let me actually delete that and let me try again.
428
+ 22:03
429
+ As I'm writing, I have this separate process, almost inspecting what I'm writing and judging whether it looks good
430
+ 22:09
431
+ or not and then maybe I delete and maybe I reframe it, and then maybe I'm happy with what comes out.
432
+ 22:15
433
+ Basically long story short, a ton happens under the hood in terms of your internal monologue when you create sentences like this.
434
+ 22:21
435
+ But what does a sentence like this look like when we are training a GPT on it? From GPT's perspective, this
436
+ 22:28
437
+ is just a sequence of tokens. GPT, when it's reading or generating these tokens,
438
+ 22:34
439
+ it just goes chunk, chunk, chunk, chunk and each chunk is roughly the same amount of computational work for each token.
440
+ 22:40
441
+ These transformers are not very shallow networks they have about 80 layers of reasoning,
442
+ 22:45
443
+ but 80 is still not like too much. This transformer is going to do its best to imitate,
444
+ 22:51
445
+ but of course, the process here looks very different from the process that you took. In particular, in our final artifacts
446
+ 22:59
447
+ in the data sets that we create, and then eventually feed to LLMs, all that internal dialogue was completely stripped and unlike you,
448
+ 23:07
449
+ the GPT will look at every single token and spend the same amount of compute on every one of them. So, you can't expect it
450
+ 23:13
451
+ to do too much work per token and also in particular,
452
+ 23:21
453
+ basically these transformers are just like token simulators, they don't know what they don't know.
454
+ 23:26
455
+ They just imitate the next token. They don't know what they're good at or not good at. They just tried their best to imitate the next token.
456
+ 23:32
457
+ They don't reflect in the loop. They don't sanity check anything. They don't correct their mistakes along the way.
458
+ 23:37
459
+ By default, they just are sample token sequences. They don't have separate inner monologue streams
460
+ 23:43
461
+ in their head right? They're evaluating what's happening. Now, they do have some cognitive advantages,
462
+ 23:48
463
+ I would say and that is that they do actually have a very large fact-based knowledge across a vast number of areas because they have,
464
+ 23:55
465
+ say, several, 10 billion parameters. That's a lot of storage for a lot of facts. They also, I think have
466
+ 24:02
467
+ a relatively large and perfect working memory. Whatever fits into the context window
468
+ 24:07
469
+ is immediately available to the transformer through its internal self attention mechanism and so it's perfect memory,
470
+ 24:14
471
+ but it's got a finite size, but the transformer has a very direct access to it and so it can a losslessly remember anything that
472
+ 24:22
473
+ is inside its context window. This is how I would compare those two and the reason I bring all of this up is because I
474
+ 24:27
475
+ think to a large extent, prompting is just making up for this cognitive difference between
476
+ 24:34
477
+ these two architectures like our brains here and LLM brains.
478
+ 24:39
479
+ You can look at it that way almost. Here's one thing that people found for example works pretty well in practice.
480
+ 24:45
481
+ Especially if your tasks require reasoning, you can't expect the transformer to do too much reasoning per token.
482
+ 24:52
483
+ You have to really spread out the reasoning across more and more tokens. For example, you can't give a transformer
484
+ 24:57
485
+ a very complicated question and expect it to get the answer in a single token. There's just not enough time for it. "These transformers need tokens to
486
+ 25:04
487
+ think," I like to say sometimes. This is some of the things that work well, you may for example have a few-shot prompt that
488
+ 25:10
489
+ shows the transformer that it should show its work when it's answering question and if you give a few examples,
490
+ 25:17
491
+ the transformer will imitate that template and it will just end up working out better in terms of its evaluation.
492
+ 25:24
493
+ Additionally, you can elicit this behavior from the transformer by saying, let things step-by-step.
494
+ 25:29
495
+ Because this conditions the transformer into showing its work and because
496
+ 25:34
497
+ it snaps into a mode of showing its work, is going to do less computational work per token.
498
+ 25:40
499
+ It's more likely to succeed as a result because it's making slower reasoning over time.
500
+ 25:46
501
+ Here's another example, this one is called self-consistency. We saw that we had the ability
502
+ Ensemble multiple attempts
503
+ 25:51
504
+ to start writing and then if it didn't work out, I can try again and I can try multiple times
505
+ 25:56
506
+ and maybe select the one that worked best. In these approaches,
507
+ 26:02
508
+ you may sample not just once, but you may sample multiple times and then have some process for finding
509
+ 26:07
510
+ the ones that are good and then keeping just those samples or doing a majority vote or something like that. Basically these transformers in the process as
511
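A minimal sketch of self-consistency by majority vote (my own illustration; `generate_answer` is a hypothetical function wrapping whatever sampling API you use):

```python
# Self-consistency sketch: sample several answers, keep the majority vote.
# `generate_answer(prompt)` is a hypothetical wrapper around an LLM sampling
# call (temperature > 0, so repeated calls return different completions).
from collections import Counter

def self_consistent_answer(prompt: str, n_samples: int = 10) -> str:
    answers = [generate_answer(prompt) for _ in range(n_samples)]
    # The most common final answer across samples wins the vote.
    return Counter(answers).most_common(1)[0][0]
```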
26:14
Basically, these transformers, in the process of predicting the next token, can get unlucky, just like you, and sample a not-very-good
26:19
token, and then go down a blind alley in terms of reasoning. Unlike you, they cannot recover from that.
26:27
They are stuck with every single token they've sampled, and so they will continue the sequence, even if they know that this sequence is not going to work out.
26:34
So give them the ability to look back and inspect, or to basically sample around it.
26:40
Here's one technique. It turns out that LLMs actually know when they've screwed up.
Ask for reflection
26:47
As an example, say you ask the model to generate a poem that does not
26:52
rhyme, and it might give you a poem, but it actually rhymes. It turns out that, especially for the bigger models like GPT-4,
26:58
you can just ask it, "Did you meet the assignment?" and GPT-4 actually knows very well that it did not meet the assignment.
27:04
It just got unlucky in its sampling. It will tell you, "No, I didn't actually meet the assignment here. Let me try again."
27:10
But without you prompting it, it doesn't know to revisit, and so on.
27:17
You have to make up for that in your prompts. You have to get it to check; if you don't ask it to check,
27:23
it's not going to check by itself. It's just a token simulator.
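A sketch of this ask-for-reflection loop (my own illustration; `ask_llm` is a hypothetical single-turn chat call):

```python
# Reflection-loop sketch: generate, ask the model to self-check, retry.
# `ask_llm(prompt)` is a hypothetical wrapper returning the model's reply.
def generate_with_reflection(task: str, max_tries: int = 3) -> str:
    answer = ask_llm(task)
    for _ in range(max_tries):
        verdict = ask_llm(
            f"Task: {task}\nAnswer: {answer}\n"
            "Did this answer meet the assignment? Reply YES or NO."
        )
        if verdict.strip().upper().startswith("YES"):
            break  # the model judges its own output acceptable
        answer = ask_llm(f"{task}\nYour previous attempt failed. Try again.")
    return answer
```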
27:28
I think, more generally, a lot of these techniques fall into the bucket of what I would call recreating our System 2.
27:34
You might be familiar with System 1 and System 2 thinking in humans. System 1 is the fast, automatic process, and I
27:40
think it corresponds to an LLM just sampling tokens. System 2 is the slower, deliberate
27:46
planning part of your brain. This is a paper actually from
27:51
just last week, because this space is evolving pretty quickly; it's called Tree of Thought.
27:56
The authors of this paper propose maintaining multiple completions for any given prompt,
28:02
and then scoring them along the way and keeping the ones that are going well, if that makes sense.
28:08
A lot of people are really playing around with prompt engineering
28:13
to basically bring back some of these abilities that we have in our brains for LLMs.
28:19
Now, one thing I would like to note here is that this is not just a prompt. These are actually prompts that are
28:25
used together with some Python glue code, because you have to maintain multiple prompts, and you also have to do
28:30
some tree search algorithm here to figure out which prompts to expand, etc. It's a symbiosis of Python glue code and
28:38
individual prompts that are called in a while loop, or in a bigger algorithm.
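To make "Python glue code plus prompts" concrete, here is a toy sketch of that kind of search loop (my own illustration, loosely in the spirit of Tree of Thought, not the paper's actual code; `propose_step` and `score_state` are hypothetical LLM-backed helpers):

```python
# Toy beam search over partial "thoughts", in the spirit of Tree of Thought.
# `propose_step(state)` and `score_state(state)` are hypothetical helpers
# that each call an LLM: one to extend a partial solution, one to rate it.
def tree_of_thought(question: str, beam_width: int = 3, depth: int = 4):
    beam = [question]                  # start from the bare question
    for _ in range(depth):
        candidates = []
        for state in beam:
            # Branch: sample a few possible next reasoning steps.
            candidates += [state + "\n" + propose_step(state) for _ in range(3)]
        # Prune: keep only the highest-scoring partial solutions.
        beam = sorted(candidates, key=score_state, reverse=True)[:beam_width]
    return beam[0]
```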
28:43
I also think there's a really cool parallel here to AlphaGo. AlphaGo has a policy for placing the next stone when it plays Go,
28:48
and that policy was trained originally by imitating humans. But in addition to this policy,
28:54
it also does Monte Carlo Tree Search. Basically, it will play out a number of possibilities in its head, evaluate all of
29:00
them, and only keep the ones that work well. I think this is an equivalent of AlphaGo, but for text, if that makes sense.
29:08
Just like Tree of Thought, I think, more generally, people are starting to explore
29:13
more general techniques of not just simple question-answer prompts, but something that looks a lot more like
29:19
Python glue code stringing together many prompts. On the right, I have an example from this paper called ReAct, where they
29:25
structure the answer to a prompt as a sequence of thought, action, observation,
29:32
thought, action, observation; it's a full rollout, a thinking process, to answer the query.
29:38
In these actions, the model is also allowed to use tools. On the left, I have an example of AutoGPT.
29:45
Now, AutoGPT, by the way, is a project that I think got a lot of hype recently,
29:51
but I still find it inspirationally interesting. It's a project that allows an LLM to keep
29:58
a task list and continue to recursively break down tasks. I don't think this currently works very well, and I would
30:04
not advise people to use it in practical applications. I just think it's something to take inspiration
30:09
from in terms of where this is going over time. That's like giving our model System 2 thinking.
30:16
The next thing I find interesting is the following, I would say almost psychological, quirk of LLMs:
30:23
LLMs don't want to succeed; they want to imitate. You want to succeed, and you should ask for it.
30:31
What I mean by that is, when transformers are trained, they have training sets, and there can be
30:38
an entire spectrum of performance qualities in their training data. For example, there could be some prompt
30:43
for some physics question or something like that, and there could be a student's solution that is completely wrong, but there can also be an expert
30:49
answer that is extremely right. Transformers can't tell the difference — or rather,
30:54
they know about low-quality solutions and high-quality solutions, but by default, they want to imitate all of
30:59
it, because they're just trained on language modeling. At test time, you actually have to ask for good performance.
31:06
In this example, in this paper, they tried various prompts. "Let's think step by step" was very powerful,
31:13
because it spread out the reasoning over many tokens. But what worked even better is, "Let's work this out in a step-by-step way
31:19
to be sure we have the right answer." It's like conditioning on getting the right answer, and this actually makes the transformer work
31:25
better, because the transformer doesn't have to hedge its probability mass on low-quality solutions,
31:31
as ridiculous as that sounds. Basically, feel free to ask for a strong solution.
31:37
Say something like, "You are a leading expert on this topic," "Pretend you have IQ 120," etc. But don't try to ask for too much IQ, because if
31:44
you ask for IQ 400, you might be out of the data distribution, or even worse, you could be in the data distribution for
31:51
something like sci-fi material, and it will start to take on some sci-fi or roleplaying behavior or something like that.
31:56
You have to find the right amount of IQ. I think there's some U-shaped curve there.
32:02
Next up, as we saw, when we are trying to solve problems, we know what we are good at and what we're not good at,
32:09
and we lean on tools computationally. You want to do the same, potentially, with your LLMs.
Tool use / Plugins
32:15
In particular, we may want to give them calculators, code interpreters,
32:20
the ability to do search, and so on, and there are a lot of techniques for doing that.
32:27
One thing to keep in mind, again, is that these transformers by default may not know what they don't know.
32:32
You may even want to tell the transformer in the prompt: "You are not very good at mental arithmetic. Whenever you need to do very large number addition,
32:40
multiplication, or whatever, use this calculator instead. Here's how you use the calculator: you use this token combination," etc.
32:46
You have to actually spell it out, because the model by default doesn't know what it's good at or not good at, necessarily, just like you and I might not.
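A toy sketch of this kind of tool dispatch (my own illustration, not from the talk; the `CALC(...)` marker is a made-up convention, and `ask_llm` is a hypothetical LLM call):

```python
# Toy calculator tool-use: tell the model to emit a made-up CALC(...) marker,
# then have glue code evaluate it. `ask_llm` is a hypothetical LLM call.
import re

SYSTEM = ("You are not good at mental arithmetic. When you need to compute "
          "something, write CALC(<expression>) and nothing else.")

def answer_with_calculator(question: str) -> str:
    reply = ask_llm(SYSTEM + "\n\n" + question)
    match = re.search(r"CALC\((.+?)\)", reply)
    if match:
        # Toy evaluator only; a real system would use a safe math parser.
        result = eval(match.group(1), {"__builtins__": {}})
        reply = ask_llm(f"{question}\nThe calculator returned {result}. Answer:")
    return reply
```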
32:54
Next up, I think something that is very interesting is that we went from a world that was retrieval only, and then
33:02
the pendulum swung all the way to the other extreme, where it's memory only in LLMs. But actually, there's this entire space in between of
33:08
retrieval-augmented models, and this works extremely well in practice. As I mentioned, the context window of
33:14
a transformer is its working memory. If you can load the working memory with any information that is relevant to the task,
33:21
the model will work extremely well, because it can immediately access all that memory. I think a lot of people are really interested
33:28
in basically retrieval-augmented generation. On the bottom, I have an example of LlamaIndex, which is
33:35
one data connector to lots of different types of data. You can index all
33:41
of that data and make it accessible to LLMs. The emerging recipe there is: you take relevant documents,
33:47
you split them up into chunks, you embed all of them, and you basically get embedding vectors that represent that data.
33:53
You store that in a vector store, and then at test time you make some kind of a query to your vector store, and you fetch chunks that
34:00
might be relevant to your task, you stuff them into the prompt, and then you generate. This can work quite well in practice.
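Here is a bare-bones sketch of that recipe (my own illustration using numpy, assuming a list of strings called `documents` and a hypothetical `embed` function standing in for any embedding model):

```python
# Bare-bones retrieval-augmented generation: embed chunks, fetch by cosine
# similarity, stuff into the prompt. `embed(text)` is a hypothetical function
# returning a numpy vector, and `documents` is an assumed list of strings.
import numpy as np

chunks = [doc[i:i + 500] for doc in documents for i in range(0, len(doc), 500)]
index = np.stack([embed(c) for c in chunks])           # (num_chunks, dim)

def retrieve(query: str, k: int = 3) -> list[str]:
    q = embed(query)
    sims = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
    return [chunks[i] for i in np.argsort(-sims)[:k]]  # top-k by cosine sim

prompt = "Context:\n" + "\n---\n".join(retrieve("my question")) + "\n\nQ: my question"
```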
34:06
This is, I think, similar to when you and I solve problems. You can do everything from your memory, and
34:11
transformers have a very large and extensive memory, but it also really helps to reference primary documents.
34:17
Whenever you find yourself going back to a textbook to find something, or going back to the documentation of a library to look something up,
34:25
transformers definitely want to do that too. You have some memory of how
34:30
the documentation of a library works, but it's much better to look it up. The same applies here.
34:35
Next, I wanted to briefly talk about constrained prompting. I also find this very interesting.
34:41
This is basically a set of techniques for forcing a certain template in the outputs of LLMs.
34:50
Guidance is one example, from Microsoft, actually. Here we are enforcing that the output from the LLM will be JSON.
34:57
This will actually guarantee that the output will take on this form, because they go in and they mess with the probabilities of
35:03
all the different tokens that come out of the transformer, and they clamp those tokens. The transformer is then only filling in the blanks here,
35:09
and you can enforce additional restrictions on what could go into those blanks. This might be really helpful, and I think
35:15
this constrained sampling is also extremely interesting.
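As a sketch of the underlying idea (my own illustration, not Guidance's actual API): at each decoding step, you mask the logits so only tokens consistent with the template can be sampled:

```python
# Sketch of constrained decoding: mask logits so only tokens allowed by a
# template can be sampled. `allowed_token_ids(prefix)` is a hypothetical
# function encoding the template (e.g. valid JSON continuations).
import torch

def constrained_step(logits: torch.Tensor, prefix: str) -> int:
    mask = torch.full_like(logits, float("-inf"))
    allowed = allowed_token_ids(prefix)            # ids legal at this position
    mask[allowed] = 0.0
    probs = torch.softmax(logits + mask, dim=-1)   # forbidden tokens get p=0
    return int(torch.multinomial(probs, 1))        # sample only allowed tokens
```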
35:20
I also want to say a few words about finetuning. It is the case that you can get really far with prompt engineering, but it's also possible to
35:27
think about finetuning your models. Now, finetuning models means that you are actually going to change the weights of the model.
35:33
It is becoming a lot more accessible to do this in practice, because of a number of techniques that have been
35:39
developed, and have libraries, very recently. For example, parameter-efficient finetuning techniques like LoRA
35:46
make sure that you're only training small, sparse pieces of your model. Most of the model is kept clamped at
35:53
the base model, and some pieces of it are allowed to change. This still works pretty well empirically, and makes
35:58
it much cheaper to tune only small pieces of your model. It also means that, because most of your model is clamped,
36:05
you can use very low precision inference for computing those parts, because they are not going to be updated by
36:10
gradient descent, and so that makes everything a lot more efficient as well.
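A minimal sketch of the LoRA idea (my own illustration, assuming PyTorch, not the actual LoRA library code): freeze a pretrained weight matrix and learn a small low-rank update on top of it:

```python
# Minimal LoRA-style layer sketch, assuming PyTorch: the pretrained linear
# layer is frozen; only the low-rank factors A and B (rank r) get gradients.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)              # clamp the base model
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        # Frozen path plus trainable low-rank update (B @ A).
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```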
36:17
In addition, we have a number of open-source, high-quality base models. Currently, as I mentioned, I think LLaMA is quite nice, although it is not commercially licensed, I believe, right now.
36:23
Some things to keep in mind: basically, finetuning is a lot more technically involved.
36:29
It requires a lot more, I think, technical expertise to do right. It requires human data contractors for
36:34
datasets, and/or synthetic data pipelines that can be pretty complicated. This will definitely slow down
36:40
your iteration cycle by a lot. I would say that, at a high level, SFT is achievable, because you're just continuing
36:47
the language modeling task; it's relatively straightforward. But RLHF, I would say, is very much research territory
36:53
and is much harder to get to work, so I would probably not advise that someone just tries to roll their own RLHF implementation.
37:00
These things are pretty unstable, very difficult to train, not something that is, I think, very beginner-friendly right now,
37:06
and it's also potentially likely to still change pretty rapidly.
37:11
So I think these are my default recommendations right now. I would break up your task into two major parts.
Default recommendations
37:18
Number 1, achieve your top performance, and Number 2, optimize your cost, in that order.
37:23
Number 1: the best performance will currently come from the GPT-4 model. It is the most capable of all, by far.
37:29
Use prompts that are very detailed, with lots of task content, relevant information, and instructions.
37:36
Think along the lines of, what would you tell a task contractor if they couldn't email you back? But also keep in mind that a task contractor is a
37:43
human: they have an inner monologue, they're very clever, etc. LLMs do not possess those qualities,
37:48
so make sure to think through the psychology of the LLM, almost, and cater your prompts to that.
37:54
Retrieve and add any relevant context and information to these prompts. Basically, refer to a lot of
38:01
the prompt engineering techniques. Some of them I've highlighted in the slides above, but this is a very large space, and I would
38:07
just advise you to look for prompt engineering techniques online. There's a lot to cover there.
38:13
Experiment with few-shot examples. What this refers to is: you don't just want to tell, you want to show, whenever possible.
38:19
So give it examples of everything that helps it really understand what you mean, if you can.
38:25
Experiment with tools and plugins to offload tasks that are difficult for LLMs natively,
38:30
and then think about not just a single prompt and answer; think about potential chains and reflection, and how you glue
38:36
them together, and how you could potentially make multiple samples, and so on. Finally, if you think you've squeezed
38:42
out prompt engineering, which I think you should stick with for a while, look at potentially
38:48
finetuning a model to your application, but expect this to be a lot slower and more involved. Then
38:54
there's an expert, fragile research zone here, and I would say that is RLHF, which currently does work a bit
39:00
better than SFT, if you can get it to work. But again, this is pretty involved, I would say. And to optimize your costs,
39:06
try to explore lower-capacity models, or shorter prompts, and so on.
39:12
I also wanted to say a few words about the use cases for which I think LLMs are currently well suited.
39:18
In particular, note that there are a large number of limitations to LLMs today, so I would keep that
39:24
definitely in mind for all of your applications. This, by the way, could be an entire talk, so I don't have time to cover it in full detail.
39:30
Models may be biased, they may fabricate or hallucinate information, they may have reasoning errors, they may struggle in entire classes of applications,
39:38
they have knowledge cutoffs, so they might not know any information after, say, September 2021.
39:43
They are susceptible to a large range of attacks, which are coming out on Twitter daily,
39:48
including prompt injection, jailbreak attacks, data poisoning attacks, and so on. So my recommendation right now is:
39:54
use LLMs in low-stakes applications. Combine them always with human oversight.
40:00
Use them as a source of inspiration and suggestions; think copilots, instead of completely autonomous agents
40:05
that are just performing a task somewhere. It's just not clear that the models are there right now.
40:11
So I wanted to close by saying that GPT-4 is an amazing artifact. I'm very thankful that it exists, and it's beautiful.
40:18
It has a ton of knowledge across so many areas. It can do math, code, and so on. In addition, there's this
40:24
thriving ecosystem of everything else that is being built and incorporated into the ecosystem. Some of these things I've talked about,
40:31
and all of this power is accessible at your fingertips. Here's everything that's needed in terms of
40:37
code to ask GPT-4 a question, to prompt it, and get a response. In this case, I said:
40:44
can you say something to inspire the audience of Microsoft Build 2023?
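For reference, a call like the one shown on the slide would look roughly like this (my reconstruction using the 2023-era openai Python package; the exact code on the slide may differ):

```python
# Rough reconstruction of the slide's snippet, using the 2023-era
# openai Python package (the exact code on the slide may differ).
import openai

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "Can you say something to inspire the audience "
                   "of Microsoft Build 2023?",
    }],
)
print(response["choices"][0]["message"]["content"])
```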
40:50
And I just punched this into Python, and verbatim, GPT-4 said the following. By the way, I did not know that they
40:55
used this trick in the keynote, so I thought I was being clever, but it is really good at this.
41:02
It says: "Ladies and gentlemen, innovators and trailblazers of Microsoft Build 2023. Welcome to the gathering of brilliant
41:08
minds like no other. You are the architects of the future, the visionaries molding the digital realm
41:13
in which humanity thrives. Embrace the limitless possibilities of technologies and let your ideas soar as high as your imagination.
41:20
Together, let's create a more connected, remarkable, and inclusive world for generations to come. Get ready to unleash your creativity,
41:27
canvas the unknown, and turn dreams into reality. Your journey begins today!"
TwoTranscriptQuotesFromIlyaSutskever.md ADDED
@@ -0,0 +1,71 @@
https://www.youtube.com/watch?v=9EN_HoEk3KY&t=172s

1:42
...program that does very, very well on your data, then you will achieve the best
1:48
generalization possible. With a little bit of modification, you can turn it into a precise theorem,
1:54
and on a very intuitive level, it's easy to see why it should be the case: if you
2:01
have some data and you're able to find a shorter program which generates this
2:06
data, then you've essentially extracted all conceivable regularity from
2:11
this data into your program, and then you can use these objects to make the best predictions possible. If you have
2:19
data which is so complex that there is no way to express it as a shorter program,
2:25
then it means that your data is totally random; there is no way to extract any regularity from it whatsoever. Now, there
2:32
is a little-known mathematical theory behind this, and the proofs of these statements are actually not even that hard,
2:38
but the one minor, slight disappointment is that it's actually not possible, at
2:44
least given today's tools and understanding, to find the best short program that...

https://youtu.be/9EN_HoEk3KY?t=442

...to talk a little bit about reinforcement learning. Reinforcement learning is a framework for evaluating
6:53
agents in their ability to achieve goals in complicated stochastic environments.
6:58
You've got an agent which is plugged into an environment, as shown in the figure right here, and for any given
7:06
agent, you can simply run it many times and compute its average reward. Now, the
7:13
thing that's interesting about the reinforcement learning framework is that there exist interesting, useful
7:20
reinforcement learning algorithms. The framework existed for a long time; it
7:25
became interesting once we realized that good algorithms exist. Now, these are not perfect algorithms, but they
7:31
are good enough to do interesting things, and the mathematical
7:37
problem is one where you need to maximize the expected reward. Now, one
7:44
important way in which the reinforcement learning framework is not quite complete is that it assumes that the reward is
7:50
given by the environment. You see this picture: the agent sends an action, while
7:56
the environment sends it an observation and a reward; both the observation and the reward go backwards. That's what the environment
8:01
communicates back. The way in which this is not the case in the real world is that we figure out
8:11
what the reward is from the observation. We reward ourselves; we are not told. The
8:16
environment doesn't say, "Hey, here's some negative reward." It's our interpretation of our senses that lets us determine what
8:23
the reward is. And there is only one real, true reward in life, and this is
8:28
existence or nonexistence, and everything else is a corollary of that. So, well, what
8:35
should our agent be? You already know the answer: it should be a neural network, because whenever you want to do
8:41
something dense, it's going to be a neural network, and you want the agent to map observations to actions, so you let
8:47
it be parametrized with a neural net, and you apply a learning algorithm. So I want to explain to you how reinforcement
8:53
learning works. This is model-free reinforcement learning, the reinforcement learning that has actually been used in practice everywhere, but it's...
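To make the "run it many times and compute its average reward" framing concrete, here is a tiny evaluation-loop sketch (my own illustration with hypothetical `env`/`agent` objects following a simple reset/step/act interface, not tied to any specific RL library):

```python
# Sketch of evaluating an agent by its average reward over many episodes.
# `env` and `agent` are hypothetical objects with a simple reset/step/act
# interface; nothing here is tied to a specific RL library.
def average_reward(env, agent, episodes: int = 100) -> float:
    total = 0.0
    for _ in range(episodes):
        obs, done = env.reset(), False
        while not done:
            action = agent.act(obs)              # map observation -> action
            obs, reward, done = env.step(action)
            total += reward
    return total / episodes
```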
app.py ADDED
@@ -0,0 +1,205 @@
+ import streamlit as st
+ import re
+ import nltk
+ from nltk.corpus import stopwords
+ from nltk import FreqDist
+ from graphviz import Digraph
+
+ # Corpora needed by nltk.word_tokenize and the stopword filter below.
+ nltk.download('punkt')
+ nltk.download('stopwords')
+
+ def remove_timestamps(text):
+     # Drop transcript timestamps such as "7:25" that sit on their own line.
+     return re.sub(r'\d{1,2}:\d{2}\n', '', text)
+
+ def process_text(text):
+     # Render the de-timestamped transcript as a Markdown outline,
+     # alternating bold lines and emoji bullet lines.
+     lines = text.split("\n")
+     processed_lines = []
+
+     for line in lines:
+         if line:
+             processed_lines.append(line)
+
+     outline = ""
+     for i, line in enumerate(processed_lines):
+         if i % 2 == 0:
+             outline += f"**{line}**\n"
+         else:
+             outline += f"- {line} 😄\n"
+
+     return outline
+
+ def create_jsonl_list(text):
+     # One {"text": ...} record per non-empty line, ready to dump as JSONL.
+     lines = text.split("\n")
+     jsonl_list = []
+
+     for line in lines:
+         if line:
+             jsonl_list.append({"text": line})
+
+     return jsonl_list
+
+ def unit_test(input_text):
+     # Smoke test: show the input with timestamps removed and its JSONL form.
+     st.write("Test Text without Timestamps:")
+     test_text_without_timestamps = remove_timestamps(input_text)
+     st.write(test_text_without_timestamps)
+
+     st.write("Test JSONL List:")
+     test_jsonl_list = create_jsonl_list(test_text_without_timestamps)
+     st.write(test_jsonl_list)
+
+ def extract_high_information_words(text, top_n=10):
+     # Keep alphabetic, non-stopword tokens; return the top_n most frequent.
+     words = nltk.word_tokenize(text)
+     words = [word.lower() for word in words if word.isalpha()]
+
+     stop_words = set(stopwords.words('english'))
+     filtered_words = [word for word in words if word not in stop_words]
+
+     freq_dist = FreqDist(filtered_words)
+     high_information_words = [word for word, _ in freq_dist.most_common(top_n)]
+
+     return high_information_words
+
+ def create_relationship_graph(words):
+     # Chain the words into a directed graph: node i-1 -> node i.
+     graph = Digraph()
+
+     for index, word in enumerate(words):
+         graph.node(str(index), word)
+         if index > 0:
+             graph.edge(str(index - 1), str(index), label=str(index))
+
+     return graph
+
+ def display_relationship_graph(words):
+     graph = create_relationship_graph(words)
+     st.graphviz_chart(graph)
+
+ text_input = st.text_area("Enter text:", value="", height=300)
+ text_without_timestamps = remove_timestamps(text_input)
+
+ st.markdown("**Text without Timestamps:**")
+ st.write(text_without_timestamps)
+
+ processed_text = process_text(text_without_timestamps)
+ st.markdown("**Markdown Outline with Emojis:**")
+ st.markdown(processed_text)
+
+ unit_test_text = '''
+ 1:42
+ program the does very very well on your data then you will achieve the best
+ 1:48
+ generalization possible with a little bit of modification you can turn it into a precise theorem
+ 1:54
+ and on a very intuitive level it's easy to see what it should be the case if you
+ 2:01
+ have some data and you're able to find a shorter program which generates this
+ 2:06
+ data then you've essentially extracted all the all conceivable regularity from
+ 2:11
+ this data into your program and then you can use these objects to make the best predictions possible like if if you have
+ 2:19
+ data which is so complex but there is no way to express it as a shorter program
+ 2:25
+ then it means that your data is totally random there is no way to extract any regularity from it whatsoever now there
+ 2:32
+ is little known mathematical theory behind this and the proofs of these statements actually not even that hard
+ 2:38
+ but the one minor slight disappointment is that it's actually not possible at
+ 2:44
+ least given today's tools and understanding to find the best short program that explains or generates or
+ 2:52
+ solves your problem given your data this problem is computationally intractable
+ '''
+
+ unit_test(unit_test_text)
+
+ unit_test_text_2 = '''
+ 5
+ to talk a little bit about reinforcement learning so reinforcement learning is a framework it's a framework of evaluating
+ 6:53
+ agents in their ability to achieve goals and complicated stochastic environments
+ 6:58
+ you've got an agent which is plugged into an environment as shown in the figure right here and for any given
+ 7:06
+ agent you can simply run it many times and compute its average reward now the
+ 7:13
+ thing that's interesting about the reinforcement learning framework is that there exist interesting useful
+ 7:20
+ reinforcement learning algorithms the framework existed for a long time it
+ 7:25
+ became interesting once we realized that good algorithms exist now these are there are perfect algorithms but they
+ 7:31
+ are good enough todo interesting things and all you want the mathematical
+ 7:37
+ problem is one where you need to maximize the expected reward now one
+ 7:44
+ important way in which the reinforcement learning framework is not quite complete is that it assumes that the reward is
+ 7:50
+ given by the environment you see this picture the agent sends an action while
+ 7:56
+ the reward sends it an observation in a both the observation and the reward backwards that's what the environment
+ 8:01
+ communicates back the way in which this is not the case in the real world is that we figure out
+ 8:11
+ what the reward is from the observation we reward ourselves we are not told
+ 8:16
+ environment doesn't say hey here's some negative reward it's our interpretation over census that lets us determine what
+ 8:23
+ the reward is and there is only one real true reward in life and this is
+ 8:28
+ existence or nonexistence and everything else is a corollary of that so well what
+ 8:35
+ should our agent be you already know the answer should be a neural network because whenever you want to do
+ 8:41
+ something dense it's going to be a neural network and you want the agent to map observations to actions so you let
+ 8:47
+ it be parametrized with a neural net and you apply learning algorithm so I want to explain to you how reinforcement
+ 8:53
+ learning works this is model free reinforcement learning the reinforcement learning has actually been used in practice everywhere but it's
+ '''
+
+ unit_test(unit_test_text_2)
+
+ unit_test_text_3 = '''
+ ort try something new add
+ 9:17
+ randomness directions and compare the result to your expectation if the result
+ 9:25
+ surprises you if you find that the results exceeded your expectation then
+ 9:31
+ change your parameters to take those actions in the future that's it this is
+ 9:36
+ the fool idea of reinforcement learning try it out see if you like it and if you do do more of that in the future and
+ 9:44
+ that's it that's literally it this is the core idea now it turns out it's not
+ 9:49
+ difficult to formalize mathematically but this is really what's going on if in a neural network
+
+ '''
+
+ unit_test(unit_test_text_3)
+
+ # Derived analysis on the second sample: its most frequent content words
+ # and a simple graph linking them in rank order.
+ sample_without_timestamps = remove_timestamps(unit_test_text_2)
+ top_words = extract_high_information_words(sample_without_timestamps, 10)
+ st.markdown("**Top 10 High Information Words:**")
+ st.write(top_words)
+
+ st.markdown("**Relationship Graph:**")
+ display_relationship_graph(top_words)
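For reference, the app follows the standard Streamlit pattern, so it can be exercised locally (assuming a Python environment with the dependencies below installed) by running: streamlit run app.py. On a Hugging Face Space the Streamlit runtime is supplied by the Space itself, which is presumably why streamlit is not pinned in requirements.txt.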
requirements.txt ADDED
@@ -0,0 +1,2 @@
+ nltk
+ graphviz