{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Transformer Theoretical Model\n",
    "\n",
    "This notebook stores a bunch of analysis about a Transformer, e.g. estimates the number of FLOPs, parameters, peak memory footprint, checkpoint size, etc."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "from collections import OrderedDict"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "# config_args = {\n",
    "#     'gpt2':         dict(n_layer=12, n_head=12, n_embd=768),  # 124M params\n",
    "#     'gpt2-medium':  dict(n_layer=24, n_head=16, n_embd=1024), # 350M params\n",
    "#     'gpt2-large':   dict(n_layer=36, n_head=20, n_embd=1280), # 774M params\n",
    "#     'gpt2-xl':      dict(n_layer=48, n_head=25, n_embd=1600), # 1558M params\n",
    "# }[model_type]\n",
    "\n",
    "block_size = 1024\n",
    "vocab_size = 50257\n",
    "n_layer = 12\n",
    "n_head = 12\n",
    "n_embd = 768\n",
    "bias = False\n",
    "assert not bias, \"this notebook assumes bias=False just for simplicity\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "we see: 124337664, expected: 124337664, match: True\n",
      "name                 params     ratio (%) \n",
      "emebedding/position      786432     0.6325\n",
      "embedding/token        38597376    31.0424\n",
      "embedding              39383808    31.6749\n",
      "attention/ln                768     0.0006\n",
      "attention/kqv           1769472     1.4231\n",
      "attention/proj           589824     0.4744\n",
      "attention               2360064     1.8981\n",
      "mlp/ln                      768     0.0006\n",
      "mlp/ffw                 2359296     1.8975\n",
      "mlp/proj                2359296     1.8975\n",
      "mlp                     4719360     3.7956\n",
      "block                   7079424     5.6937\n",
      "transformer            84953088    68.3245\n",
      "ln_f                        768     0.0006\n",
      "dense                         0     0.0000\n",
      "total                 124337664   100.0000\n"
     ]
    }
   ],
   "source": [
    "def params():\n",
    "    \"\"\" estimates the number of parameters in the model\"\"\"\n",
    "    out = OrderedDict()\n",
    "\n",
    "    # token and position embeddings\n",
    "    out['emebedding/position'] = n_embd * block_size\n",
    "    out['embedding/token'] = n_embd * vocab_size\n",
    "    out['embedding'] = out['emebedding/position'] + out['embedding/token']\n",
    "\n",
    "    # attention blocks\n",
    "    out['attention/ln'] = n_embd # note, bias=False in our LN\n",
    "    out['attention/kqv'] = n_embd * 3*n_embd\n",
    "    out['attention/proj'] = n_embd**2\n",
    "    out['attention'] = out['attention/ln'] + out['attention/kqv'] + out['attention/proj']\n",
    "\n",
    "    # MLP blocks\n",
    "    ffw_size = 4*n_embd # feed forward size\n",
    "    out['mlp/ln'] = n_embd\n",
    "    out['mlp/ffw'] = n_embd * ffw_size\n",
    "    out['mlp/proj'] = ffw_size * n_embd\n",
    "    out['mlp'] = out['mlp/ln'] + out['mlp/ffw'] + out['mlp/proj']\n",
    "    \n",
    "    # the transformer and the rest of it\n",
    "    out['block'] = out['attention'] + out['mlp']\n",
    "    out['transformer'] = n_layer * out['block']\n",
    "    out['ln_f'] = n_embd # final layernorm\n",
    "    out['dense'] = 0 # 0 because of parameter sharing. This layer uses the weights from the embedding layer\n",
    "\n",
    "    # total\n",
    "    out['total'] = out['embedding'] + out['transformer'] + out['ln_f'] + out['dense']\n",
    "\n",
    "    return out\n",
    "\n",
    "# compare our param count to that reported by PyTorch\n",
    "p = params()\n",
    "params_total = p['total']\n",
    "print(f\"we see: {params_total}, expected: {124337664}, match: {params_total == 124337664}\")\n",
    "# create a header\n",
    "print(f\"{'name':20s} {'params':10s} {'ratio (%)':10s}\")\n",
    "for k,v in p.items():\n",
    "    print(f\"{k:20s} {v:10d} {v/params_total*100:10.4f}\")\n",
    "    "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "est checkpoint size: 1.49 GB\n",
      "measured with wc -c ckpt.pt: 1542470366\n",
      "fluff ratio: 103.38%\n"
     ]
    }
   ],
   "source": [
    "# we can now calculate the size of each checkpoint\n",
    "# params are stored in fp32, and the AdamW optimizer has 2 additional buffers per param for statistics\n",
    "params_bytes = params_total*4\n",
    "params_and_buffers_bytes = params_bytes + 2*params_bytes\n",
    "print(f\"est checkpoint size: {params_and_buffers_bytes/1e9:.2f} GB\")\n",
    "measured_bytes = 1542470366 # from wc -c ckpt.pt\n",
    "print(f\"measured with wc -c ckpt.pt: {measured_bytes}\")\n",
    "print(f\"fluff ratio: {measured_bytes/params_and_buffers_bytes*100:.2f}%\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can also estimate the ratio of our GPU memory that will be taken up just by the weights and the buffers inside the AdamW optimizer"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "memory ratio taken up just for parameters: 3.73%\n"
     ]
    }
   ],
   "source": [
    "gpu_memory = 40e9 # 40 GB A100 GPU, roughly\n",
    "print(f\"memory ratio taken up just for parameters: {params_and_buffers_bytes / gpu_memory * 100:.2f}%\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "i.e. not that much of the memory for this tiny model, most of the memory is activations (forward and backward). This of course changes dramatically for larger and larger models."
   ]
  },
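  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a very rough illustration of that claim (a sketch assuming fp32 activations, no activation recomputation, and the micro batch size of 20 used further below), just the (B, nh, T, T) attention score matrices across all layers already dwarf the parameters and optimizer buffers:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# rough sketch, not exact bookkeeping: the attention score matrices alone,\n",
    "# one (B, nh, T, T) tensor per layer, kept around for the backward pass\n",
    "micro_batch_size = 20 # assumption: the micro batch size of the training run below\n",
    "scores_bytes = n_layer * micro_batch_size * n_head * block_size**2 * 4 # fp32\n",
    "print(f\"attention score activations alone: {scores_bytes/1e9:.2f} GB\") # ~12 GB vs ~1.5 GB for params + AdamW buffers"
   ]
  },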
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's estimate FLOPs for a single forward pass."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "name                 flops          ratio (%) \n",
      "attention/kqv            3623878656     1.2426\n",
      "attention/scores         1610612736     0.5522\n",
      "attention/reduce         1610612736     0.5522\n",
      "attention/proj           1207959552     0.4142\n",
      "attention                8053063680     2.7612\n",
      "mlp/ffw1                 4831838208     1.6567\n",
      "mlp/ffw2                 4831838208     1.6567\n",
      "mlp                      9663676416     3.3135\n",
      "block                   17716740096     6.0747\n",
      "transformer            212600881152    72.8963\n",
      "dense                   79047426048    27.1037\n",
      "forward_total          291648307200   100.0000\n",
      "backward_total         583296614400   200.0000\n",
      "total                  874944921600   300.0000\n"
     ]
    }
   ],
   "source": [
    "def flops():\n",
    "    # we only count Weight FLOPs, all other layers (LayerNorm, Softmax, etc) are effectively irrelevant\n",
    "    # we count actual FLOPs, not MACs. Hence 2* all over the place\n",
    "    # basically for any matrix multiply A (BxC) @ B (CxD) -> (BxD) flops are 2*B*C*D\n",
    "\n",
    "    out = OrderedDict()\n",
    "    head_size = n_embd // n_head\n",
    "\n",
    "    # attention blocks\n",
    "    # 1) the projection to key, query, values\n",
    "    out['attention/kqv'] = 2 * block_size * (n_embd * 3*n_embd)\n",
    "    # 2) calculating the attention scores\n",
    "    out['attention/scores'] = 2 * block_size * block_size * n_embd\n",
    "    # 3) the reduction of the values (B, nh, T, T) x (B, nh, T, hs) -> (B, nh, T, hs)\n",
    "    out['attention/reduce'] = 2 * n_head * (block_size * block_size * head_size)\n",
    "    # 4) the final linear projection\n",
    "    out['attention/proj'] = 2 * block_size * (n_embd * n_embd)\n",
    "    out['attention'] = sum(out['attention/'+k] for k in ['kqv', 'scores', 'reduce', 'proj'])\n",
    "\n",
    "    # MLP blocks\n",
    "    ffw_size = 4*n_embd # feed forward size\n",
    "    out['mlp/ffw1'] = 2 * block_size * (n_embd * ffw_size)\n",
    "    out['mlp/ffw2'] = 2 * block_size * (ffw_size * n_embd)\n",
    "    out['mlp'] = out['mlp/ffw1'] + out['mlp/ffw2']\n",
    "\n",
    "    # the transformer and the rest of it\n",
    "    out['block'] = out['attention'] + out['mlp']\n",
    "    out['transformer'] = n_layer * out['block']\n",
    "    out['dense'] = 2 * block_size * (n_embd * vocab_size)\n",
    "\n",
    "    # forward,backward,total\n",
    "    out['forward_total'] = out['transformer'] + out['dense']\n",
    "    out['backward_total'] = 2 * out['forward_total'] # use common estimate of bwd = 2*fwd\n",
    "    out['total'] = out['forward_total'] + out['backward_total']\n",
    "\n",
    "    return out\n",
    "    \n",
    "# compare our param count to that reported by PyTorch\n",
    "f = flops()\n",
    "flops_total = f['forward_total']\n",
    "print(f\"{'name':20s} {'flops':14s} {'ratio (%)':10s}\")\n",
    "for k,v in f.items():\n",
    "    print(f\"{k:20s} {v:14d} {v/flops_total*100:10.4f}\")\n",
    "    "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "palm_flops: 875062886400, flops: 874944921600, ratio: 1.0001\n"
     ]
    }
   ],
   "source": [
    "# now here is an estimate copy pasted from the PaLM paper\n",
    "# this formula is often used to calculate MFU (model flops utilization)\n",
    "def palm_flops():\n",
    "    \"\"\"estimate of the model flops following PaLM paper formula\"\"\"\n",
    "    # non-embedding model parameters. note that we do not subtract the\n",
    "    # embedding/token params because those are tied and get used in the last layer.\n",
    "    N = params()['total'] - params()['emebedding/position']\n",
    "    L, H, Q, T = n_layer, n_head, n_embd//n_head, block_size\n",
    "    mf_per_token = 6*N + 12*L*H*Q*T\n",
    "    mf = mf_per_token * block_size\n",
    "    return mf\n",
    "\n",
    "print(f\"palm_flops: {palm_flops():d}, flops: {flops()['total']:d}, ratio: {palm_flops()/flops()['total']:.4f}\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Ok they are quite similar, giving some confidence that my math in flops() function was ~ok. Now, A100 is cited at 312TFLOPS bfloat16 on tensor cores. So what is our model flops utilization (MFU)? I trained the model above with a batch_size of 20 and grad_accum of 5, which runs in about 755ms on a single A100 GPU. We get:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "fraction of A100 used: 37.14%\n"
     ]
    }
   ],
   "source": [
    "# here is what we currently roughly measure\n",
    "batch_size = 20 * 5 # 5 is grad_accum, so total batch size is 100\n",
    "measured_time = 0.755 # in seconds per iteration\n",
    "measured_throughput = batch_size / measured_time\n",
    "flops_achieved = f['total'] * measured_throughput\n",
    "\n",
    "# A100 is cited to be 312 TFLOPS of bloat16 running on tensor cores\n",
    "a100_flops_promised = 312e12\n",
    "\n",
    "# the fraction of the A100 that we are using:\n",
    "print(f\"fraction of A100 used: {flops_achieved / a100_flops_promised * 100:.2f}%\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, we'd prefer to be somewhere around 50%+, and not just for a single GPU but for an entire DDP run. So we still have some work to do, but at least we're within a factor of ~2X of what is achievable with this GPU."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "time needed to train the model: 3.46 days\n"
     ]
    }
   ],
   "source": [
    "# Finally let's check out the 6ND approximation as total cost of training in FLOPs\n",
    "model_size = params()['total'] # this is number of parameters, N\n",
    "tokens_num = 300e9 # 300B tokens, this is dataset size in tokens, D\n",
    "a100_flops = 312e12 # 312 TFLOPS\n",
    "assumed_mfu = 0.3 # assume this model flops utilization (take the current 37% from above and add some DDP overhead)\n",
    "flops_throughput = a100_flops * 8 * assumed_mfu # assume an 8XA100 node at 30% utilization\n",
    "flops_needed = 6 * model_size * tokens_num # 6ND\n",
    "time_needed_s = flops_needed / flops_throughput # in seconds\n",
    "print(f\"time needed to train the model: {time_needed_s/3600/24:.2f} days\")"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This is not a bad estimate at all. I trained this model and it converged in roughly 4 days. Btw as a good reference for where 6ND comes from and some intuition around it I recommend [Dzmitry's post](https://medium.com/@dzmitrybahdanau/the-flops-calculus-of-language-model-training-3b19c1f025e4)."
   ]
  },
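  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check of that intuition: the forward pass costs roughly 2N FLOPs per token (every parameter participates in one multiply-accumulate), and the backward pass roughly twice that, for ~6N FLOPs per token in total. We can compare 6N against the exact per-token count from flops() above; the gap is mostly the attention score FLOPs that the parameter-only 6N rule ignores."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# rough sanity check of the 6N rule: ~2N flops/token forward, ~4N backward\n",
    "N = params()['total']\n",
    "approx_per_token = 6 * N\n",
    "exact_per_token = flops()['total'] / block_size\n",
    "print(f\"6N per token: {approx_per_token:.3e}, exact per token: {exact_per_token:.3e}, ratio: {approx_per_token/exact_per_token:.4f}\")\n",
    "# the ~13% shortfall is mostly the 12*L*H*Q*T attention term from palm_flops above"
   ]
  },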
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now, FLOPs are just one constraint, the other that we have to keep a close track of is the memory bandwidth. TODO estimate LOAD/STORE costs of our model later."
   ]
  }
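  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A hedged first cut, not the full LOAD/STORE analysis: assume ~1.5 TB/s of HBM bandwidth on the 40GB A100 (approximate spec sheet number) and count only the weight traffic. Streaming the fp32 weights once is tiny next to the measured ~755ms per iteration, so a proper estimate would be dominated by activation traffic."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# hedged sketch: lower bound from weight traffic alone, ignoring activations\n",
    "a100_bandwidth = 1.5e12 # ~1.5 TB/s HBM bandwidth of a 40GB A100, roughly\n",
    "weights_stream_time_s = params_bytes / a100_bandwidth # read the fp32 weights once\n",
    "print(f\"time to stream the weights once: {weights_stream_time_s*1000:.2f} ms\")"
   ]
  }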
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "pytorch2",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.8"
  },
  "orig_nbformat": 4,
  "vscode": {
   "interpreter": {
    "hash": "7f5833218766b48e6e35e4452ee875aac0e2188d05bbe5298f2c62b79f08b222"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}